Most companies that license Agentforce haven't deployed it. The reason is almost never the tool. Here is what's actually blocking it.

Salesforce's own data puts Agentforce adoption well below 30% among licensed customers. Many of the teams that paid for it (often as part of an enterprise contract renewal) haven't turned it on. The common explanation is that the tool is complex or that the use cases aren't clear. That's not wrong. But it's not the real reason.

The real reason is that Agentforce requires clean inputs to function. It can't qualify leads it can't read. It can't route contacts whose firmographic fields are empty. It can't personalize outreach from a record where half the fields are blank, wrong, or 18 months stale. The tool is ready. The data isn't.

What Agentforce Actually Needs

Agentforce agents operate on CRM data. They read field values, evaluate criteria, take actions, and log results. Every part of that chain depends on data quality.

Before any agent goes live, you need:

  • Defined qualification criteria at the field level. The agent needs explicit rules: what makes a lead qualified, in terms of specific field values. "Good fit" is not a rule. "Company size 50-500 AND industry in [Technology, SaaS, Professional Services] AND lead source not [Event]" is a rule. (See the sketch after this list for what that rule looks like in executable form.)
  • Populated fields. If the fields in your qualification logic are empty on 60% of records, the agent will score 60% of your leads as incomplete and either skip them or default to a fallback. You will route bad leads to sales and wonder why adoption is low.
  • Consistent data formats. Inconsistent picklist values break rule-based logic. If your industry field has "Technology," "Tech," "tech," and "Technology/Software" as values, your agent will treat them as four different industries.
  • Activity logging. Agents that factor in engagement history need that history logged in the CRM. If your reps are working from email and not logging to Salesforce, the agent has no behavioral signal to evaluate.
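
Written as code, the qualification rule from the first bullet (plus the normalization fix from the third) is only a few lines. This is a minimal Python sketch of the logic, not Agentforce configuration; the field names, the alias map, and the thresholds are illustrative assumptions you'd replace with your own:

```python
# Illustrative field-level qualification logic. Field names
# (employee_count, industry, lead_source) and the rule itself
# are assumptions, not a fixed schema.

# Map the picklist variants you actually find to one canonical value.
INDUSTRY_ALIASES = {
    "technology": "Technology",
    "tech": "Technology",
    "technology/software": "Technology",
    "saas": "SaaS",
}

QUALIFIED_INDUSTRIES = {"Technology", "SaaS", "Professional Services"}
EXCLUDED_SOURCES = {"Event"}
REQUIRED_FIELDS = ("employee_count", "industry", "lead_source")


def normalize_industry(raw):
    """Collapse picklist variants to one canonical industry value."""
    if raw is None:
        return None
    return INDUSTRY_ALIASES.get(raw.strip().lower(), raw.strip())


def qualify(lead: dict) -> str:
    # Missing inputs get flagged explicitly rather than silently
    # falling through to a default.
    missing = [f for f in REQUIRED_FIELDS if not lead.get(f)]
    if missing:
        return f"incomplete: {', '.join(missing)}"

    fits = (
        50 <= lead["employee_count"] <= 500
        and normalize_industry(lead["industry"]) in QUALIFIED_INDUSTRIES
        and lead["lead_source"] not in EXCLUDED_SOURCES
    )
    return "qualified" if fits else "disqualified"
```

The explicit "incomplete" path is the point: if the rule can't be evaluated from the record, the agent should say so, not guess.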

The Three Blockers We Find in Every Deployment

Across deployments we've run, the blockers are almost always the same three things.

1. The data foundation isn't ready. Fields are empty, inconsistent, or stale. The agent has nothing to work with. This is the most common blocker and the most fixable, but it requires addressing the underlying data quality process, not just cleaning a batch of records.

2. Qualification criteria were never defined in operational terms. Every team has a sense of what a good lead looks like. Almost no team has translated that into explicit, field-level rules that a system can execute. "We want mid-market SaaS" is not a definition. The exercise of writing explicit rules usually surfaces disagreements between marketing and sales that nobody knew existed.

3. The deployment skipped the shadow phase. The right way to deploy an AI agent is: shadow mode first (the agent runs but doesn't act; you review its outputs), then assisted mode (the agent makes recommendations and a human approves), then autonomous mode. Most failed deployments skipped shadow mode because it felt like wasted time. It isn't. Shadow mode is how you catch the cases where the agent would have made a bad call before it makes one at scale.

How to Tell If You're Actually Ready

A quick test: pull 50 recent leads. For each one, ask whether an automated system could determine whether it should be qualified and routed, using only the data in the CRM record. If the answer is yes for 40+ of those 50 leads, your data foundation is probably sufficient to start. If the answer is yes for fewer than 30, the foundation work has to come first.

Most teams that run this test find the number is closer to 20.
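
If you'd rather run the test mechanically than eyeball 50 records, it reduces to one question per lead: is every field your qualification rule reads actually populated? A rough Python sketch, assuming the leads are exported to CSV from a Salesforce report; the filename and field names are placeholders:

```python
import csv

# Fields your qualification rule reads. Placeholder names; substitute
# the API names of the fields in your own rule.
REQUIRED_FIELDS = ("Industry", "NumberOfEmployees", "LeadSource")


def decidable(lead: dict) -> bool:
    """True if every field the rule needs has a non-empty value."""
    return all((lead.get(f) or "").strip() for f in REQUIRED_FIELDS)


with open("recent_leads.csv", newline="") as f:
    leads = list(csv.DictReader(f))

ready = sum(decidable(lead) for lead in leads)
print(f"{ready} of {len(leads)} leads are decidable from CRM data alone")
# 40+ of 50: probably ready to start. Under 30: foundation work first.
```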

The Staged Deployment

Companies that get Agentforce working follow a consistent pattern:

  1. Fix the data foundation. Standardize the fields the agent will use. Enforce required field rules. Run a deduplication pass. Get activity logging working.
  2. Define qualification rules in writing. Get marketing and sales in a room. Write out explicit criteria. Resolve the disagreements. Document the edge cases.
  3. Deploy in shadow mode for 4-6 weeks. Review agent outputs daily for the first two weeks, then weekly. Note every case where the agent's recommendation would have been wrong (a sketch of this review step follows the list).
  4. Adjust rules, then move to assisted mode. Sales sees agent recommendations on every lead before taking action. Another 4 weeks.
  5. Move to autonomous mode for the defined use cases. Keep shadow mode running for any new use cases.
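
Step 3 only works if you measure agreement rather than skim outputs. A minimal sketch of the shadow-mode review, assuming you log each lead's agent recommendation next to the decision a rep actually made; the log shape here is an illustrative assumption, not a format Agentforce produces:

```python
# Each entry: (lead_id, agent_recommendation, rep_decision).
# Illustrative shape only; pull the equivalent from wherever you
# record agent outputs during shadow mode.
shadow_log = [
    ("00Q001", "qualified", "qualified"),
    ("00Q002", "disqualified", "qualified"),
    ("00Q003", "incomplete", "disqualified"),
]

disagreements = [
    (lead_id, agent, rep)
    for lead_id, agent, rep in shadow_log
    if agent != rep
]

agreement = 1 - len(disagreements) / len(shadow_log)
print(f"Agreement: {agreement:.0%} over {len(shadow_log)} leads")
for lead_id, agent, rep in disagreements:
    print(f"  {lead_id}: agent said {agent}, rep did {rep}")
```

Every disagreement is either a rule to fix or an undocumented criterion to write down. You want to find both in week three of shadow mode, not after the agent is routing leads on its own.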

This process takes 12-16 weeks from data foundation work to full autonomy. Companies that try to compress it to 4 weeks hit the same failures: bad recommendations, rep distrust, low adoption.

One RevOps team we worked with had been trying to deploy Agentforce for 7 months before we engaged. The issue was a data foundation with 40% empty industry fields and no defined qualification criteria. We fixed the foundation in 8 weeks. They went live in shadow mode at week 10 and moved to autonomous mode at week 18. MQL-to-SQL conversion went from 11% to 27% in the first full quarter. (We cover the full AI lead qualification deployment process in a separate post.)

Take the AI Readiness Scorecard to see where your foundation stands across four dimensions before you attempt a deployment. Or book a call if you want a frank conversation about whether you're 8 weeks away or 6 months away.