The illusion of expertise in the age of AI

Aug 6, 2025 | Practice, Systems
AI is changing how we work... and how we think. In the rush to automate, too many teams are skipping the questions that matter, trusting tools they don’t fully understand, and calling it innovation. This is a reflection on the mindset, risks, and responsibilities we can’t afford to ignore.

What happens when the urgency to adopt AI overtakes the clarity on why you’re using it?

Used well, AI is a genuine advantage. We all know this.

It handles the mundane, speeds up exploration, and creates space for more creative work. It’s transforming how work gets done across every function.

I use AI daily and couldn’t imagine going back. The possibilities for what comes next genuinely excite me.

Over the past year, I’ve watched companies race to implement AI tools. Some driven by FOMO (fear of missing out and falling behind). Others by pressure to look progressive. Many just swept up in the hype.

Very few with any clear strategy.

The patterns are consistent: tools get implemented, velocity increases, and problems pile up quietly in the background. The damage is done before anyone notices.

This is about the operational chaos of teams shipping work they can’t properly evaluate. The gap between speed and understanding. And the false confidence AI output often creates.

I’ll be exploring each of these challenges in depth over the coming weeks, starting with the hidden costs most leaders miss until it’s too late.

But first, let’s talk about the trap many teams have already fallen into.

The mindset trap

We’ve entered a phase where the urgency to use AI is overtaking the clarity of why. I’ve seen it on client calls. “We want to use AI to…” and then comes the scramble to find the purpose after the tool has already been chosen.

It’s the new “we need an app to…”.

A hammer looking for a nail.

Sure, you could say it’s just a phrasing thing. But language shapes mindset, and mindset shapes decisions. When you start from solution instead of problem, you skip the most important questions:

  • What are we trying to fix or achieve, and why?
  • Who is it for?
  • What would better actually look like?

It’s about being precise rather than precious. Especially when the stakes are high, the timelines are tight, and the temptation to ship fast is stronger than ever before.

The patience to be intentional is often mistaken for dragging your feet. But moving fast in the wrong direction isn’t progress.

Where it’s actually failing

Beyond the headline risks of hallucinations and deepfakes, there’s a pervasive danger: operations.

People are outsourcing thinking to tools they don’t fully understand. They’re shipping strategy, design, and code without the depth to know if it’s any good.

Without judgement, critique, or experience in the loop, things go live that should never leave a draft.

It all looks convincing even when it’s not. I’ve seen brittle apps, broken UX, insecure code, and undercooked products. And I’ve seen teams lay off the very people who might have caught those issues before they became real problems.

The evidence is mounting. Research analysing 211 million lines of code found an 8-fold spike in duplicated code blocks in 2024, the clearest signature yet of AI-generated work at scale. Code that looks functional but creates maintenance nightmares.
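To make that pattern concrete, here's a purely hypothetical Python sketch of what the duplication signature tends to look like: the same logic generated twice with only a name changed, where a reviewer would expect one shared helper. The function and field names are invented for illustration.

```python
# Hypothetical example of the duplication signature: two blocks generated
# independently that differ only in their names, where a single shared
# helper was the obvious design.

def validate_shipping_address(data: dict) -> list[str]:
    errors = []
    if not data.get("street"):
        errors.append("street is required")
    if not data.get("city"):
        errors.append("city is required")
    if not data.get("postcode"):
        errors.append("postcode is required")
    return errors

def validate_billing_address(data: dict) -> list[str]:
    # Same logic, duplicated wholesale. Each copy now drifts independently,
    # and every future fix has to be applied twice.
    errors = []
    if not data.get("street"):
        errors.append("street is required")
    if not data.get("city"):
        errors.append("city is required")
    if not data.get("postcode"):
        errors.append("postcode is required")
    return errors
```

Each copy works on its own. The cost arrives later, when a fix lands in one block and not the other.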

Google’s DORA report confirms the trade-off: a 90% increase in AI adoption correlated with a 9% jump in bug rates and a 7.2% decrease in delivery stability.

The shiny new object syndrome

Another trap is the shiny-new-object syndrome, fuelled by the fear of falling behind.

The “we must be AI-first” mentality drives teams to purchase tools without evaluating integration complexity, implementation timelines, or actual business impact. The time and resources required rarely get weighed against the value delivered. And while you’re implementing? You’re not shipping the work that actually matters. That opportunity cost compounds faster than the benefits materialise.

We used to fear machines replacing humans. Now, humans are replacing themselves without realising it. Want to call that innovation? Nah… that’s negligence.

The illusion of competence

What makes this so risky is the way AI tools manufacture a sense of competence. They’re coded to be encouraging, positive, rarely critical. That’s not a bug, it’s a feature. But it creates a false sense of expertise.

Someone uses AI to generate code, strategy, or design. It looks polished. It sounds authoritative. They present it confidently, and others assume they know what they’re talking about. The output looks professional, the person seems knowledgeable, so it gets trusted. Often, no one even knows AI was involved.

Classic Dunning-Kruger, amplified. People overestimate their own capability because the tool never pushes back. And without actual subject matter expertise in the room to validate the work, the facade holds. Everyone’s nodding. But who’s checking?

The illusion compounds. AI makes people feel expert. That confidence makes others believe they are expert. And because the output looks good and the person presenting it seems sure, the work moves forward unchallenged.

The cost of convenience

Humans are wired to take the path of least resistance. It’s been evident since the dawn of time…

Yes, we’re generally a lazy bunch!

AI tools feed that tendency perfectly. They’re fast, frictionless, and they remove the need to struggle through problems.

Why wrestle with complexity when you can generate a solution in seconds? Let’s “vibe” it till we make it. Right?…

The cost isn’t immediate. It’s cumulative.

  • Over time, teams stop reaching for critical thinking because the tool’s already provided an answer.
  • Junior people never develop problem-solving skills because they skip straight to generated solutions.
  • Mid-level practitioners lose the muscle for deep work.
  • Senior people leave because there’s no craft left to practice.

The ability to evaluate quality declines. Teams become dependent on tools they can’t properly assess. And when something breaks, no one has the expertise left to fix it because that capability was slowly being eroded while everyone was optimising for speed.

The organisations building this literacy now are retaining capability. The rest are trading long-term competence for short-term velocity.

Don’t kill curiosity. Guide it.

This isn’t a call to shut it all down.

Experimentation matters. Curiosity fuels growth. And we need people who are excited to build.

But we also need grown-ups in the room.

We need open-minded experts who don’t default to “we’ve done this before.” People who know what to watch for. Who know what good looks like, and are just as sharp, if not sharper, at spotting what dangerous looks like.

You don’t need everyone to be an AI expert. But you do need AI-literate teams. People who know how to prompt, sure. But more importantly, people who know how to question the output… to spot what’s missing… to say, “this looks good, but it’s wrong.”

Speed without scrutiny isn’t innovation. It’s a risk waiting to spring an unwanted surprise.

AI literacy isn’t optional anymore

Most organisations think AI literacy means teaching people to prompt better or use more AI tools.

That’s not it.

Real AI literacy means understanding the limitations, knowing where AI excels and where it fails catastrophically. It’s about recognising patterns that indicate AI-generated work: the duplicate blocks, the missing edge cases, the brittleness that looks solid until it breaks.
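As a hedged illustration of “looks solid until it breaks”, here’s a hypothetical happy-path parser next to the version a reviewer with judgement would push for. The function names and rules are invented for the example.

```python
# Hypothetical example of brittle, happy-path-only code. It runs fine on
# clean demo input and looks finished, which is exactly why it slips through.

def parse_price(raw: str) -> float:
    # Breaks on None, empty strings, thousands separators like "£1,299.00",
    # and silently accepts negative values.
    return float(raw.strip("£$"))

# The version that survives review: the edges are handled, or at least
# fail loudly instead of quietly producing garbage.
def parse_price_reviewed(raw: str | None) -> float:
    if raw is None or not raw.strip():
        raise ValueError("price is missing")
    cleaned = raw.strip().lstrip("£$").replace(",", "")
    value = float(cleaned)  # still raises ValueError on junk like "abc"
    if value < 0:
        raise ValueError(f"price cannot be negative: {raw!r}")
    return value
```

The first version demos beautifully. The second is what “fit for purpose” actually requires.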

Most importantly, it means developing the judgement to know when to accept, when to iterate, and when to reject.

I’m not suggesting you turn everyone into AI experts. But you do need people who can spot when something’s off.

Without that judgement, people lose the ability to evaluate what they’re producing. You end up with teams who can generate work but can’t tell whether it’s fit for purpose.

The organisations investing in this literacy now are building for longevity. The rest are optimising for a pace they most likely won’t be able to maintain.

What to do next

If you’re leading a team, ask these:

  • Are we clear on what problem we’re solving?
  • Are we choosing the right tools, or the newest ones?
  • Who’s checking the work before it ships?
  • Do we have the right mix of curiosity and critique?
  • Have we accounted for integration complexity, not just the tool’s price tag?
  • What are we NOT doing while we implement this? (Opportunity cost matters.)
  • Are we building better? Or just building faster?

Start with one protocol: establish a human checkpoint before any AI-generated work goes live. Not a rubber stamp, an actual review by someone with the judgement to spot what’s missing, what’s fragile, or what’s just plain wrong.

Then build from there. Create small protocols: “Before we ship this, who’s validated it beyond the tool?” Document the patterns you see. Share failures openly. Build AI literacy alongside AI adoption.
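What that checkpoint looks like will differ by team, and it doesn’t need heavy tooling. Here’s a minimal, hypothetical sketch, assuming nothing more than a shared checklist: a pre-ship gate that stays closed until a named human has answered the validation questions. The questions, names, and structure are all invented for illustration.

```python
# A minimal, hypothetical sketch of a pre-ship human checkpoint.
# The questions and fields are invented for illustration; the point is
# that a named human must answer them before anything goes live.

from dataclasses import dataclass, field

REVIEW_QUESTIONS = [
    "Was AI used to produce this work, and where?",
    "Has someone with domain expertise validated it beyond the tool?",
    "Which edge cases were checked, and which were consciously skipped?",
]

@dataclass
class ShipChecklist:
    work_item: str
    reviewer: str = ""  # a named human, not "the team"
    answers: dict[str, str] = field(default_factory=dict)

    def ready_to_ship(self) -> tuple[bool, list[str]]:
        problems = []
        if not self.reviewer:
            problems.append("no named reviewer")
        for q in REVIEW_QUESTIONS:
            if not self.answers.get(q, "").strip():
                problems.append(f"unanswered: {q}")
        return (not problems, problems)

checklist = ShipChecklist(work_item="pricing page rewrite")
ok, problems = checklist.ready_to_ship()
print(ok, problems)  # False, plus the list of gaps blocking release
```

The mechanics matter far less than the rule they encode: no named reviewer, no answers, no ship.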

Make it safe to talk about how AI was used. Teams hiding their AI use out of shame or trying to appear smarter create invisible risks. Transparency about what’s AI-generated and what’s human makes everything easier to evaluate.

The goal isn’t to slow down. It’s to build better judgement around speed.

The smartest tools in the room still need people who know how to use them. Don’t outsource that.

This is just the start

Over the coming weeks, I’ll be exploring each of these challenges in more depth:

  • The real costs of AI-generated work that most leaders miss until it’s too late
  • How to spot AI-generated work that’s about to break (and what to do about it)
  • Why smart teams keep choosing AI wrong (and the systemic forces driving bad decisions)
  • How to build AI-literate teams without grinding to a halt

This is the conversation we need to have now, while there’s still time to course-correct. The decisions you make today about AI integration, about speed versus judgement, about tools versus teams, are compounding daily.

If you’re navigating these challenges, you’re not alone. And you don’t have to figure it out in isolation.

Shay Rahman

Navigating complexity in design leadership? I'm sharing insights and starting conversations on LinkedIn. Let's connect.