Category Archives: entrepreneurship

Will Job Displacement Take Decades?

As of this moment, AI is not responsible for mass job displacement. It is partly responsible for hiring freezes and headcount reductions. Those are two different things, but people feel both. A recent report by Anthropic on the labor-market impact of AI featured an image that generated significant buzz.

It shows the most exposed occupations and their theoretical vs observed AI coverage.

The image also shows that AI is far from reaching its theoretical capabilities.


Still, the question is worth asking: Will job displacement take decades?

The honest answer is: probably yes. But “slow” isn’t the same as “safe,” and that distinction matters more than most business leaders realize.

The Hidden Cost of Best Practices: Why Playing It Safe Is the Riskiest Move You Can Make

Every industry has a playbook. Proven methods, validated approaches, and accumulated wisdom passed down through conferences, business schools, and consulting decks. We call them best practices, and for good reason. They work. They reduce risk. They help organizations avoid costly mistakes.

There’s just one problem: they also guarantee you’ll never pull ahead.


When “Proven” Becomes a Trap

Best practices are backward-looking by design. Distilled from what already worked, optimized for reliability, built to produce predictable outcomes. That’s valuable when you’re managing operational risk. It’s a strategic liability when you’re trying to compete.

Here’s what nobody says out loud: if your competitors have access to the same best practices, and they do, following those practices doesn’t give you an edge. It gives you parity. You execute well, stay in the game, and converge toward the same outcomes as every other serious player in your industry.

The practices designed to protect your business have quietly become its ceiling.

This is how entire industries sleepwalk into commoditization. Every player runs the same playbook, makes the same moves, then wonders why differentiation feels impossible. The answer isn’t that differentiation has become harder. It’s that everyone is optimized for the same thing. When you copy the industry’s logic, you inherit the industry’s limits.

Ray Davis, president and CEO of Umpqua Holdings, decided not to do that. He even has a memorable name for his competitors’ best practices: CRAP, as in competitors’ rules and practices.

He understood that best practices are borrowed limits. Umpqua rewrote the rules of banking and defied commoditization.

First Principles Thinking: A Different Question

First principles thinking starts from a completely different place. Instead of asking “what works?” it asks “what’s actually true?”

It means stripping a problem to its foundations: discarding inherited assumptions, ignoring how things are supposed to work, and rebuilding your reasoning from scratch. It’s the difference between asking “how do leading companies approach customer retention?” and asking “why do customers actually leave, and what would we build if we had zero preconceptions about how retention works?”

The first question lands you in industry benchmarks and incremental improvements. You get a refined version of what already exists. The second opens solution spaces your competitors aren’t looking at, because they’re all reasoning from the same inherited playbook, asking the same first question, and arriving at the same range of answers.

This is where genuine competitive asymmetry lives. Not in executing the playbook better than everyone else. In questioning whether the playbook is even solving the right problem.

The Convergence Trap

Watch what happens in a mature industry as best practices spread. Early adopters develop them and gain a real advantage. Then consultants package them. Business schools teach them. LinkedIn spreads them. Within a few years, every serious competitor has absorbed them, and the advantage has evaporated. The practice becomes table stakes.

This cycle now plays out faster than ever. Information moves at speed. Methodologies get documented, shared, and replicated in months, not years. What gave your competitor an edge eighteen months ago is probably being implemented across your industry today.

The half-life of advantage derived from best practices keeps shrinking. And yet, most organizations respond by doubling down, seeking newer, better practices to adopt rather than questioning the fundamental logic of their strategy. They’re trying to solve a differentiation problem with a tool that produces convergence. It doesn’t work. It can’t work.

First principles thinking short-circuits this cycle. When you reason from foundational truths rather than inherited methods, you arrive at answers that aren’t in the shared playbook, because you didn’t use the shared playbook to find them. That’s not just differentiation. It’s a structural advantage that’s genuinely difficult to copy, because competitors can’t replicate your conclusion without first replicating your entire reasoning process. By the time they get there, you’ve moved again.

A Diagnostic You Can Run Right Now

Here’s how to tell which mode your organization is actually operating in. When a significant strategic decision lands on the table, notice the first question that surfaces in the room.

If it’s “what do leading companies do in this situation?” you’re in best practices mode. You’ve already accepted someone else’s frame of what’s possible. You’re searching for the most refined version of an existing answer, which means the best outcome available to you is a slightly better version of what already exists.

If it’s “what’s actually true about this situation, and what would we build from scratch if we ignored everything that came before?” you’re in first principles mode. The possibility space is still open.

Be honest about which question your team reaches for first. Most organizations, under pressure, default to the first. It feels safer. It’s defensible. You can point to precedent. But that reflex, reaching for the playbook under pressure, is precisely how organizations cap their own upside without ever making an explicit decision to do so.

The Strategic Choice You’re Actually Making

Competitive strategy is about being different in ways that matter. That definition has no room for “we follow the same practices as everyone else, just more rigorously.”

The organizations that consistently create distance from their competitors share one trait: they’re willing to question assumptions their industry treats as settled. Not to be contrarian. Not to take risks for its own sake. But because that’s where the leverage actually is, in the gap between what the industry assumes is fixed and what’s actually true.

Best practices will tell you how to run the race everyone else is running. First principles thinking asks whether you should be running a different race entirely.

That question is uncomfortable. It means discarding the safety net of precedent and sitting with genuine uncertainty. It means your strategy can’t be defended by pointing at what competitors do. But it’s the only question that opens outcomes your competitors can’t predict, model, or replicate in time to matter.

Play the existing game well enough, and you’ll survive. Question the game itself, and you create the kind of distance that doesn’t close.

Most organizations never make an explicit decision to cap their upside. They just keep asking the wrong first question and call it due diligence.

AI Didn’t Make These Skills Important. It Exposed Who Skipped Them

The economy spent 30 years rewarding the wrong skills. AI just sent the invoice. Suddenly, everyone is talking about taste, critical thinking, creativity, and adaptability. Not because they’re new. Because their absence is now visible.


The Illusion of Skill

For a long time, you could build a business on execution. Follow the process. Produce the volume. Repeat what worked last quarter. The economy rewarded output, but producing it was hard.

Now it isn’t.

AI writes faster than your team. Analyzes faster. Drafts faster. Generates more variations in an hour than your best person could in a month. Output is no longer the constraint. Judgment is.

What Happens When You Skip Judgment

Imagine a mid-sized marketing agency (a composite of clients I’ve seen more than once) that gets excited about AI-generated content. They 10x their output overnight: blog posts, social copy, email sequences, proposals; volume they never could have produced before.

Six months later, they’ve lost two anchor clients. The feedback: “Your content all sounds the same. It doesn’t feel like you anymore.”

They weren’t technically using AI wrong. They were using it without taste.

They became what I call a slop cannon, flooding the market with content that is technically competent but entirely forgettable; average ideas, average voice, average decisions, at scale.

AI didn’t cause this. The absence of judgment did. AI just made it faster and more visible.

The Four Skills That Actually Matter

1. Taste: The Multiplier of Abundance

When AI can generate 100 options in seconds, the differentiator isn’t generation. It’s selection. Taste is the ability to recognize what’s excellent, and to reject everything that isn’t, even when it’s good enough.

Without taste, AI turns you into the agency above. With taste, AI becomes a force multiplier. The difference isn’t the tool. It’s the eye and mind behind it.

As an executive, your taste now sets the ceiling for everything your team produces with AI. Raise the standard, or the slop multiplies.

2. Critical Thinking: The Filter in an Age of Plausible Nonsense

AI doesn’t just produce answers. It produces convincing answers. Sometimes they’re incomplete. Sometimes biased. Sometimes wrong in ways that only become obvious six months later, in a flawed strategy, a missed risk, a contract with a subtle error.

The leaders who accept AI outputs uncritically will make faster, worse decisions. The ones who interrogate, challenge, and refine outputs will make faster, better ones.

Critical thinking isn’t about knowing more. It’s about thinking better than the tool you’re using.

3. Creativity: Beyond Pattern Completion

AI is extraordinary at pattern completion. It learns from what already exists and produces sophisticated variations. What it cannot do is decide that the pattern itself is wrong.

Creativity is pattern disruption, seeing what isn’t there yet, recombining ideas in ways that open genuinely new possibilities. The companies that win won’t be the ones using AI the most. They’ll be the ones imagining what AI makes possible that wasn’t before.

That vision has to come from someone willing to think beyond precedent.

4. Adaptability: Updating Your Model of Reality

What worked three years ago may not work today. What works today may not work in 2027.

The executives who struggle aren’t the ones who lack intelligence. They’re the ones who are emotionally attached to the strategies that built their previous success. Adaptability is the willingness to let go of certainty, to experiment quickly, to update beliefs when evidence changes, and to rebuild around what’s now possible rather than to defend what used to work.

The rigid get disrupted. The adaptive set the new terms.

The Real Shift

AI is compressing the distance between average and excellent. Production is automated. Information is abundant. Execution is commoditized. What remains genuinely scarce and therefore genuinely valuable is judgment.

Taste. Critical thinking. Creativity. Adaptability.

These aren’t AI-era skills. They’re human-era skills. They’ve always been the foundation of durable competitive advantage. AI raised the stakes. The leaders who treated them as optional now face that bill.

The New Advantage

The future doesn’t belong to those who use AI. It belongs to those who combine AI with refined judgment, who generate faster and make better choices, who move quickly and think clearly.

AI didn’t create the need for these skills. It exposed who developed them, and who never did.

The gap is widening. Which side are you on?

AI Is an Engine for Value Creation. Most Companies Are Using It as a Band-Aid

Here’s the premise: Value Creation > Cost Reduction

The promise of AI is that you will be able to do more with less. And that “less” means fewer people. This is true, but it’s not the full story. Most companies obsess over cost reduction. Fewer employees. Fewer tools. Fewer expenses. They think efficiency is the path to winning.

Six Ways Organizations Disguise Avoidance as AI Strategy

The gap between what AI can do and what most companies are doing has nothing to do with tools, budgets, or readiness. It has everything to do with courage.


Most companies are not failing at AI. They are succeeding at avoidance and calling it strategy.

The evidence isn’t subtle. AI can now write code, analyze contracts, predict demand, run customer support, generate campaigns, and compress weeks of analysis into hours. The tools exist. The case studies exist. The ROI exists. And yet, most organizations are stuck: in workshops that lead to pilots, in pilots that lead to reports, and in reports that lead to more workshops.

This is not an information problem. Every executive reading this already knows AI is important. They have read the articles, attended the conferences, and sat through the vendor demos.

The real problem is that knowing something is important is not the same as being willing to change because of it.

“AI adoption is not stalling because organizations lack capability. It is stalling because they lack the courage to stop protecting how work currently happens.”

Here is what that actually looks like in practice: six ways organizations disguise avoidance as diligence.

01 — The Literacy Excuse “We Don’t Understand It Yet.”

This is the polite version of delay. Leaders frame their hesitation as a knowledge gap, as if a complete understanding of AI were a prerequisite for acting on it. It never was. You did not wait to fully understand the internet before building a website. You did not master cloud infrastructure before migrating to it.

The organizations winning with AI right now do not have more information. They have more tolerance for learning while doing.

What’s Actually Happening: Teams are waiting for certainty before they experiment. Training is scheduled as a future event rather than treated as the experiment itself.

What Moves the Needle: Build role-specific AI literacy through real work, not seminars. The person who learns fastest is the person who starts first.

02 — The ROI Trap “Show Me the Payback First.”

ROI frameworks were built for predictable investments. AI is not a predictable investment; it is a capability multiplier whose value compounds over time, and faster for those who start earlier.

Demanding proof before experimentation is not financial discipline. It is a way of making inaction feel responsible.

The companies that will dominate their categories in five years are not the ones who waited for ironclad case studies. They are the ones building proprietary data loops right now, while competitors debate spreadsheets.

What’s Actually Happening: Organizations are applying capital-allocation logic to competitive positioning decisions. These are not the same thing.

What Moves the Needle: Run 30–60 day pilots that measure speed, quality, and decision velocity, not just cost. AI ROI shows up first in things that don’t fit neatly on a spreadsheet.

03 — The Tool Avalanche “Buying Tools Instead of Redesigning Work.”

There are now hundreds of AI tools, and organizations are drowning in them. Most companies respond to this by buying more of them, adding them to existing workflows, and waiting for the transformation to occur.

It never does. Adding AI to a broken process does not fix the process. It accelerates it.

Stop asking, “Which tool should we use?” Start asking, “Which decision or task should no longer exist?”

AI-native companies do not start with tools. They start with a first-principles question: if we were building this operation from scratch today, with AI available from day one, what would it look like? The answer is almost never “same as now, but with a chatbot.”

04 — The Real Resistance “It Is Not About the Technology.”

When someone says “I’m not sure AI is ready,” they usually mean “I’m not sure I am ready.” The resistance is not technical. It is personal, about status, identity, and the discomfort of being a beginner again.

Middle managers resist because AI exposes the layers of process around which they built their authority. Senior leaders resist because admitting uncertainty conflicts with the image of competence they are paid to project. Teams resist because they fear being seen as replaceable.

None of this is shameful. All of it is human. But mistaking human discomfort for strategic caution is how organizations lose their window.

What’s Actually Happening: Fear of irrelevance is being laundered as risk management. The conversation stays technical to avoid becoming personal.

What Moves the Needle: Name the real fear openly. Position AI as capacity expansion, not replacement. Start with assistive use cases before autonomous ones. Make it safe for beginners.

05 — The Legacy Lock “Attaching Jet Engines to Bicycles.”

You cannot bolt AI onto legacy operations and expect transformation. The workflow structures, approval layers, reporting chains, and information flows that most organizations run on were designed for a world where intelligence was expensive and human attention was the bottleneck.

AI does not fix that. It reveals how outdated it is, loudly, immediately, and expensively.

Reinvention requires a different kind of discipline: the willingness to ask whether entire categories of work should exist at all. That question makes people uncomfortable. It should. That discomfort is the feeling of actual transformation, not just transformation theater.

06 — The Ownership Void “When It Is Everyone’s Job, It Is Nobody’s Job.”

AI sits awkwardly between IT, operations, innovation, and strategy, making it a shared responsibility no one actually owns. The result is an endless loop of pilots that generate reports that recommend more pilots.

Organizations do not fail at AI because they lack talent or budget. They fail because they lack someone with the mandate and authority to make uncomfortable decisions and see them through inevitable friction.

→ Assign a single accountable AI owner with real authority, not just a title

→ Build a small, cross-functional task force with a mandate to remove friction

→ Measure them on outcomes, not on activity or compliance

→ Give them permission to kill legacy processes, not just manage them

AI adoption dies in committees. Every month without an owner is a month of compounding competitive disadvantage, running silently in the background while you debate governance structures.

The Companies That Win Will Not Be the Most Technical.

They will be the ones who moved before they felt ready. Who experimented before the ROI was guaranteed. Who redesigned how work happens instead of protecting what already exists.

AI is no longer a technology problem. The technology works. It works remarkably well, right now, for organizations willing to build their strategy around it rather than tack it on.

What remains is the harder work: the cultural change, the organizational courage, and the willingness to make decisions in the face of uncertainty rather than use uncertainty as an excuse not to decide.

The adoption gap is real. And every day it stays open, it widens because AI does not wait, and your competitors who are already experimenting are compounding the advantages you have not yet started building.

The question was never whether AI works. The question is whether you are willing to change before you are forced to.


Stop Waiting. Start Somewhere.

The organizations transforming right now did not start with a perfect strategy. They started with a real experiment and iterated from there. The only thing standing between where you are and where you need to be is the decision to begin.

AI Isn’t Leveling the Playing Field; It’s Tilting It

Everyone says AI is the great equalizer. That it gives everyone the same shot. They’re wrong. I’ve spent eight years building AI companies and the last two helping businesses implement AI. Here’s what I’m actually seeing: AI is creating the biggest capability gap in modern business. And most companies are on the wrong side of it.

What Have You Failed at This Week?


Nobody likes to fail. Yet most people, and most companies, claim they value learning. There’s your problem right there. Real learning comes from trying things with high uncertainty. And high uncertainty means frequent failure. You can’t have one without the other.