
A disconnect exists between the most experienced software developers and many others: those who confidently assert that artificial intelligence can solve everything more efficiently, more creatively, and faster than mere humans. Meanwhile, the most competent developers (and designers, and makers in general) are quietly grimacing. We know that AI is nowhere near as capable as the hype would have you believe. We are watching a car crash in slow motion.

David Dunning and Justin Kruger described a meta-cognitive blind spot, now known as the Dunning–Kruger Effect: people with low ability in a domain overestimate their competence, while experts tend to underestimate theirs. Those who lack the skills to recognise their own mistakes become overconfident, while those who truly understand a domain are acutely aware of its complexity.

In the AI era, the Dunning–Kruger Effect has found a new stage. The people furthest from the realities of software design (senior managers, consultants, or people who “used ChatGPT once and were impressed”) display the highest confidence that AI can replace deep technical work.

Trivialising another person’s competence is itself a symptom of the Dunning–Kruger Effect. When highly experienced developers caution that AI’s results can look correct while being dangerously wrong, they’re not being resistant to change; they’re acting from experience. The irony is that overconfident AI advocates often interpret expert restraint as a lack of imagination. In reality, it’s quite the opposite: experts see the hidden complexity that others cannot.

In practice, experienced developers can make good use of AI as a productivity amplifier: an assistant that helps with boilerplate, exploration, or prototyping. But they do so with discernment and scepticism, because they know the boundary between useful automation and untrustworthy suggestions. They are still in control, still scrutinising the code carefully.

When experienced engineers use AI, they can achieve real acceleration, but that acceleration depends on foundational knowledge. They know that clean, safe, efficient software isn’t just about producing an answer that works once. It’s about maintainability, test coverage, edge cases, and correctness under changing conditions. AI can generate code that looks elegant but fails all of those tests. It’s easy to make software appear right while being dangerously wrong; that’s why experience remains non-negotiable.
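
To make that concrete, here is a deliberately contrived sketch (the function names and figures are invented for illustration, not taken from any particular model’s output): a discount helper that passes a quick spot check yet mis-rounds real money, because binary floating point cannot represent values like 2.675 exactly.

```python
from decimal import ROUND_HALF_UP, Decimal


def apply_discount_naive(price: float, percent: float) -> float:
    """Looks elegant and passes a happy-path check... and mishandles currency."""
    return round(price * (1 - percent / 100), 2)


def apply_discount(price: str, percent: str) -> Decimal:
    """Uses Decimal so monetary rounding behaves predictably."""
    fraction = 1 - Decimal(percent) / Decimal(100)
    return (Decimal(price) * fraction).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


# Happy path: both versions agree, so the naive one "works once".
assert apply_discount_naive(100.0, 10) == 90.0
assert apply_discount("100.00", "10") == Decimal("90.00")

# Edge case: 5.35 is stored as a binary float slightly below 5.35, so half of
# it rounds down and the customer is short-changed a cent.
print(apply_discount_naive(5.35, 50))  # 2.67 -- wrong
print(apply_discount("5.35", "50"))    # 2.68 -- correct
```

The naive version would sail through a demo; only someone who knows to probe rounding edge cases would catch it.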

The paradox is that AI is only effective and safe in the hands of competent, experienced people.

AI itself can also suffer from the Dunning–Kruger Effect. Every developer who has ever used Claude Code will attest to its ridiculous overconfidence even when nothing works at all! Studies such as “Do Code Models Suffer from the Dunning–Kruger Effect?” and “Large Language Models Are Overconfident” show that models systematically overestimate their accuracy, particularly when their actual performance is lowest. AI mirrors the same psychological bias as inexperienced, overconfident people.
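
As a rough sketch of what “overestimating accuracy” looks like in numbers (the confidence figures below are invented; the cited papers define their own benchmarks and metrics), one can compare a model’s average self-reported confidence against its measured accuracy:

```python
# Hypothetical records: (model's self-reported confidence, was the answer correct?)
# These numbers are made up purely to illustrate the calculation.
records = [
    (0.95, False), (0.90, True), (0.99, False), (0.85, True),
    (0.92, False), (0.97, True), (0.88, False), (0.93, False),
]

avg_confidence = sum(conf for conf, _ in records) / len(records)
accuracy = sum(correct for _, correct in records) / len(records)

print(f"average stated confidence: {avg_confidence:.0%}")  # 92%
print(f"measured accuracy:         {accuracy:.0%}")        # 38%
print(f"overconfidence gap:        {avg_confidence - accuracy:+.0%}")  # +55%
```

A large positive gap is the signature of overconfidence those studies describe.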

In effect, AI doesn’t just reflect Dunning–Kruger; it amplifies it. The result is a “force multiplier” effect for misplaced confidence, especially when decision-makers lack the grounding to know when the system is wrong.

It is more important than ever to temper the wild promises of AI with the experience of practitioners.