
I work with executives and founders on AI transformation. I sit in the boardrooms. I see the decks. I hear the talking points. And I’ve noticed a pattern: the more senior the room, the more comfortable everyone is with comfortable lies.
Not malicious lies. Comfortable ones. The kind you tell yourself when the alternative, actually confronting what’s happening, feels too disruptive, too uncertain, too threatening to the org chart you’ve spent years building.
I’m done being diplomatic about it. Here are eight things most executives know, at some level, to be true, and almost none of them are saying out loud.
1. Your “AI strategy” isn’t a strategy. It’s a budget line.
Ask most executive teams to walk you through their AI strategy, and what you get is a list: a vendor here, a pilot there, a task force that meets monthly, a slide in the board deck showing investment figures.
That’s not a strategy. That’s a collection of bets placed by different people, in different directions, with no shared logic connecting them.
A real AI strategy starts with a clear answer to two questions:
- What organizational problem are we solving?
- How will we know if we’ve solved it?
Most companies don’t have answers to either. They have activity. Activity is not strategy. It’s the appearance of strategy, which is worse, because it buys you time without buying you progress.
2. Transforming a broken business with AI is like putting rocket boosters on a PT Cruiser.
AI amplifies what’s already there. That’s not a metaphor; it’s a mechanical reality.
If your operations are inefficient, AI makes them faster and more expensively inefficient. If your data is fragmented, AI surfaces that fragmentation at scale. If your culture resists accountability, AI gives people better tools to avoid it.
The executives I see betting on AI as a turnaround mechanism are making a category error. AI is an accelerant. It multiplies the underlying business. If the underlying business has structural problems, and most do, you don’t fix those with a model. You fix them with the hard, unglamorous work that was always required: clear ownership, clean data, honest process design.
AI rewards the organized. It punishes the chaotic faster.
3. 90% of AI transformation has nothing to do with AI.
The failure mode I see most often isn’t technical. It’s organizational.
Companies invest in AI tooling to avoid the conversation they should be having about people, process, and data. The technology becomes a displacement activity, something to point to while the real problems stay unaddressed.
Your AI initiative will fail if your data is siloed and nobody owns fixing it. It will fail if the people closest to the work aren’t involved in designing how AI fits into it. It will fail if you treat it as an IT project instead of an operational transformation.
The technology is the easy part. It has never been easier to access capable AI. The hard part is the same hard part it has always been: getting humans to change how they work. No model solves that.
4. You are lying to your employees if you say AI won’t cause long-term job loss.
I understand why executives say it. It’s destabilizing to say otherwise. People get anxious. Productivity dips. HR sends a memo.
But if you are standing in front of your organization saying AI won’t change headcount long-term, you are not managing uncertainty; you are manufacturing false certainty for your own comfort.
You don’t know the timeline. Nobody does. You don’t know exactly which roles, which functions, which layers of the organization will compress. But if you have spent any real time with these tools, and you should have, you know that the trajectory is not pointing toward more headcount for the same work.
The honest version of this conversation is harder. It also builds more trust than the comfortable version. People can handle uncertainty. What erodes trust is finding out later that leadership knew the direction and chose to obscure it.
Tell people the truth: the nature of work is changing, we’re navigating that together, and the people who invest in staying close to these tools will be better positioned than those who don’t. That’s honest. That’s also useful.
5. Your top performers are going to leave if you over-govern AI.
Enterprise AI governance is not optional. You need it for compliance, for risk management, for data security. I’m not arguing against it.
I’m arguing that most organizations are implementing governance as if the primary risk is AI doing too much, when the more immediate risk is talented people leaving to work somewhere AI is allowed to do more.
Your best people are experimenters by nature. They are already using frontier tools on their own time. They are already frustrated by the gap between what they can do at home and what they’re allowed to do at work. When that gap becomes a wall, they don’t file a complaint. They update their LinkedIn.
Governance without enablement is just restriction. The organizations that get this right will treat governance and capability as a deliberate tradeoff to be managed, not a binary choice. The ones that don’t will find out the hard way that compliance frameworks don’t retain talent.
6. You don’t have a moat anymore. And you know it.
I ask this question in almost every executive conversation: What is your defensible advantage in a post-AI world?
Most people pause longer than they should.
The honest ones will tell you privately that they’re not sure. The dangerous ones will reach for the old answers (brand, relationships, institutional knowledge) without examining whether those answers still hold when AI commoditizes information asymmetry, compresses execution timelines, and lowers the barrier to entry in almost every market.
Brand still matters. Relationships still matter. But they are not moats on their own anymore. The actual moats in the AI era are narrower and harder to build: proprietary data that nobody else has, workflows so deeply embedded in operations that they’re genuinely hard to replicate, and institutional knowledge that lives in systems and processes rather than people’s heads.
If you can’t articulate what that is for your business, not in general terms but specifically, then you don’t have a moat. You have a head start. Those are different things, and the clock on a head start is running faster than it ever has.
7. The least sexy AI initiative is probably the right one.
Stop chasing custom models. Stop commissioning transformation roadmaps that take six months to produce and another six to get approved.
The highest-ROI AI initiative for most companies right now is also the most boring one: get an enterprise LLM subscription in front of your entire organization, start with engineering and operations, and let people learn by actually using the tools.
Broad enablement before grand strategy is the right sequence. It builds AI literacy across the org. It surfaces the use cases you wouldn’t have thought to put in a roadmap. It creates internal advocates who understand the tools because they’ve used them, not because they sat through a presentation about them.
This isn’t complicated. It’s not a differentiator. It’s table stakes, and most companies haven’t done it yet because it doesn’t feel significant enough to announce.
Do it anyway.
8. The people setting your AI strategy are often the least qualified to do it.
This is the one nobody says out loud in the room where it matters.
In most organizations right now, AI fluency and seniority are inversely correlated. The executives designing an AI strategy are frequently the ones with the least hands-on experience with the tools. The people with the most direct, practical understanding of what AI can and can’t do are sitting three levels below the decisions being made about it.
This isn’t a character flaw. It’s a structural problem produced by how organizations work: the people who rise to the top did so in a pre-AI environment, using skills that are genuinely different from the ones required to evaluate AI capability.
The fix isn’t to replace your leadership team. It’s to deliberately close the gap, through direct exposure, through including practitioners in strategy conversations, through being honest that expertise in this domain doesn’t follow the org chart.
The executives I respect most right now are the ones who will sit down with a 28-year-old engineer and ask to be shown what they’ve built. That’s not a weakness. That’s the correct response to a moment where the knowledge gradient runs in an unusual direction.
The real question
Every item on this list points to the same underlying issue: most organizations are treating AI as something that happens to the business rather than something that requires the business to change.
The technology is not the obstacle. It never was.
The obstacle is the same one it’s always been: finding the willingness to look clearly at what’s actually true, to have the uncomfortable conversations, and to act before the moment of obvious necessity.
The executives who are doing this well aren’t waiting for certainty. They’re building the capacity to move without it.
That gap, between the ones who are moving and the ones who are waiting, is widening faster than most people realize.