
Why measuring usage instead of impact is setting organizations up to fail…
Last year, Accenture trained roughly 550,000 employees in generative AI. Now, promotions for some senior leaders hinge on putting AI tools into practice. On the surface, this makes sense. AI is reshaping industries; leaders should be fluent, and organizations need to adopt it at scale. But the move reveals a corporate reflex that has derailed nearly every major technology wave before this one.
When you tie promotions to tool usage, you don’t get transformation. You get performance of transformation. And in the long run, that gap is more dangerous than not adopting AI at all.
Organizations Get What They Measure
This is one of the oldest management principles, and it applies here with uncomfortable precision. If you measure AI logins, usage frequency, and tool engagement, you will get more logins, more usage, and more engagement. What you won’t necessarily get are better decisions, faster execution, higher margins, or genuine innovation.
When promotions depend on visible AI usage, people adapt rationally. They optimize for what’s scored. Workflows get AI injected into them not because it adds value, but because it creates a visible signal. Outputs get routed through AI tools unnecessarily. Leaders learn to narrate their AI use rather than deepen it. The organization becomes full of people who appear AI-forward without actually becoming AI-leveraged. These are very different things.
This isn’t cynicism about human nature; it’s basic incentive design. The problem isn’t the people. It’s the metric.
The Ghost of Innovation Quotas Past
We have been here before. Companies have tried to mandate innovation through idea submission counts, digital transformation dashboards, and “innovation hours” tracked in project management tools. The result was almost always the same: a lot of visible activity and very little actual change.
The reason is structural. Mandates are good at producing compliance. They are poor at producing the mindset shifts that make new capabilities actually stick. Innovation, like genuine AI adoption, requires experimentation, which means tolerating failure, exploring edge cases, and making connections that aren’t obvious from the start. None of those behaviors flourish when someone’s promotion is riding on maintaining the right optics.
You can mandate that people show up. You cannot mandate that they think differently when they get there.
What AI Theater Actually Looks Like
Consider what a senior leader at a large consulting firm might reasonably do if their next promotion depends on demonstrating AI adoption. They start routing client deliverables through AI summarization tools they don’t really need. They reference AI-generated first drafts in internal updates even when they would have written something better from scratch. They attend every AI working group on the calendar, not to learn, but to be seen.
None of this is dishonest; it’s adaptive. But it produces something that looks like transformation on a dashboard while the actual thinking, decision-making, and workflow architecture stays exactly the same. The organization gets AI-compliant leaders instead of AI-leveraged ones. And when the next wave of capability arrives, they’ll be just as unprepared as they were before, because the muscle they needed to build, the habit of genuine experimentation, was never actually developed.
Measure Impact, Not Activity
The fix isn’t complicated in principle, though it requires more rigor in practice. Instead of asking whether someone used an AI tool, ask what changed because of it. Did AI reduce cycle time on client deliverables? Did it improve the quality of strategic analysis? Did it allow a team to take on work they previously couldn’t? Did it create a new service offering or revenue stream?
These questions are harder to score on a quarterly review, but they’re the only ones that actually measure what organizations claim to want. Usage is an input. Impact is the output. Tying incentives to inputs while calling it transformation is a category error.
The deeper shift is from measuring AI as a tool to measuring AI as a capability. A leader who used AI fifty times last quarter but produced nothing materially better is not an AI-leveraged leader. A leader who used AI three times but redesigned a core workflow from the ground up is. The incentive structure should be able to tell the difference.
The Better Model: Structured Experimentation Tied to Outcomes
Requiring AI fluency for leadership isn’t the mistake. In fact, it’s necessary. The mistake is treating fluency as a destination rather than a starting point, and mistaking activity for evidence of it.
A more effective model has three layers working simultaneously.
- Baseline literacy: everyone gets trained, not just in how to use specific tools, but in how to think about what AI makes possible.
- Protected experimentation: defined time and resources where leaders can try things without penalty, with the explicit expectation that most experiments will fail and that failure is the point.
- Outcome accountability: leaders are evaluated not on whether they used AI, but on whether their team’s results reflect the leverage AI can provide.
The critical distinction is between structured experimentation and undirected tinkering. Experimentation without accountability tends to produce enthusiasm and very little change. People play with the tools, learn something interesting, and return to their existing routines because there’s no structural reason to integrate what they discovered. The goal is to combine genuine freedom to explore with real accountability for what that exploration produces.
The Question Nobody Is Asking
Most AI adoption programs are designed around a question that sounds strategic but is actually tactical: “How do we get our people using AI tools?” The question that actually drives transformation is different:
“If AI were native to this process from the beginning, how would we rebuild it?”
That’s a first-principles question. It doesn’t ask how to retrofit AI into existing workflows; it asks what the workflow would look like if you were designing it today with AI as a core assumption. The answer almost always involves redesigning the architecture of work, not just adding a new tool to an existing sequence.
You cannot force that question with a usage metric. You can only create the conditions that make people want to ask it themselves: safety, time, leadership modeling, and real accountability for outcomes.
The Companies That Win Won’t Have the Most Logins
The organizations that come out ahead won’t be the ones with the highest AI adoption rates on a dashboard. They’ll be the ones where AI changed how work actually gets done, where decisions are faster and better, where teams can do things they couldn’t before, where the architecture of operations reflects what the technology makes possible.
Getting there requires holding two things at once: high expectations for what AI can produce, and genuine patience for the experimentation that makes real adoption possible. Tying promotions to usage achieves neither. It creates pressure without direction, visibility without change.
The companies that win will be the ones that design their systems around outcomes. And that requires something a KPI dashboard cannot measure: the organizational courage to experiment before you’re certain, and to reward impact over optics.
