If you’ve spent any time in software development circles, you’ve probably heard of the expert beginner. The concept, coined by Erik Dietrich,1 describes a developer who has hit a local peak of confidence: skilled enough to feel like an expert in their narrow context, but unaware of how much they don’t know. They’ve stopped growing because nothing around them is telling them they need to. Their code works. Their tickets close. Their PRs get merged.
It’s an expansion of the Dunning-Kruger effect2, but with a key twist: the expert beginner isn’t just temporarily overconfident early in their career. They get stuck there. Success reinforces their habits. Seniority insulates them from pushback. Years pass and they never realize their ceiling was a false one.
I’ve been thinking about this concept a lot lately, and I’m worried we’re about to create an entire generation of expert beginners. Except this time, we won’t even call it a problem. We’ll call it a feature.
The Floor Is Gone
To become an expert beginner the old way, you still had to build something. You had to write enough code, ship enough features, and accumulate enough working solutions that you felt justified in your confidence. There was a floor of foundational knowledge required to even reach the plateau.
Prompt development removes that floor.
Today, a developer can get working software: software that ships, that users touch, and that stakeholders sign off on, all without ever developing the mental models that would let them know it’s brittle, unscalable, or fundamentally wrong. The feedback loop that once said “this is good enough” now fires much earlier, and with far more social reinforcement around it.
The danger isn’t that AI makes developers lazy. The danger is that AI makes a lack of depth look like mastery. AI hides rot well.
The Celebration Problem
Here’s what worries me most: this won’t feel like a problem. It will feel like progress.
A junior developer in 2026 who can ship a feature in a day using AI prompts will be praised. Their velocity is real. Their output is tangible. The fact that they couldn’t debug it without AI, couldn’t reason about its performance characteristics, and couldn’t anticipate how it fails under load: none of that surfaces in a sprint review. None of that shows up in a performance review.
By the time it does surface, the attribution is murky. It’s not your code that failed. It’s just a bug.
The expert beginner of old at least owned their code enough to defend it. The new version may not understand it well enough to know there’s something to defend.
I want to be fair here. Raising the floor has real value. More people shipping more things is genuinely good, but I think we’re underestimating the cost of doing it at the expense of the ceiling.
Junior vs. Senior: The Divide Widens
For senior developers, AI is a genuine force multiplier. They bring the judgment to know when the output is wrong, when the architecture is naive, when the abstraction is leaking. They can prompt toward something good because they already have a model of what good looks like. AI compresses their execution time without compressing their wisdom.
For junior developers, the risk is that AI short-circuits the struggle that builds that wisdom in the first place.
A lot of what makes a senior developer isn’t knowledge they were handed in a book. It’s scar tissue. Debugging a production outage at 2am. Inheriting a nightmare codebase and having to make sense of it. Watching a clever solution become a maintenance burden six months later. These experiences build the intuition that makes AI output actually useful rather than just plausible.
If AI removes those experiences, or delays them long enough that the junior never connects cause and effect, you get developers who are fluent in prompting but have no internal model of the system they’re building. They know how to ask for the answer. They don’t know how to question it.
The “Good Enough” Horizon
“Good enough” is self-sealing.
In most business contexts, software doesn’t need to be good. It needs to work today, satisfy a stakeholder, and not visibly break. AI is extraordinarily good at producing exactly that. The gap between “AI output” and “good software” is often invisible to everyone in the room except the person with the experience to see it.
And if that person is increasingly rare, or increasingly drowned out by the speed of AI-assisted delivery, the standard itself drifts. “Good enough” doesn’t just become acceptable; it becomes the definition of good. There’s no longer a reference point for better.
This is the part that keeps me up at night. Not that individual developers will stagnate; that’s happened in every era. But that the shared standard of the industry could erode quietly, and we won’t notice because everything still appears to work.
What Might Push Back
I don’t want to be all doom here. There’s a reasonable counterpoint: mediocre developers have always existed, and the industry has survived. Frameworks, IDEs, and code generators have each been accused of dumbing things down. Each time, the developers who cared about craft found a way forward.
And there’s something worth noting about who engages with this question seriously. The developers who are asking “am I becoming an expert beginner?” are almost certainly not. That kind of self-reflection is itself evidence of a growth mindset. The trap is, by definition, most dangerous for those who aren’t asking.
The developers who will thrive long-term are probably those who use AI as a mirror, constantly asking why it generated this, is this actually right, and what assumptions it is making, rather than treating it as an oracle. But that disposition requires a kind of epistemic humility3 that’s hard to teach and even harder to sustain when everything around you is saying you’re doing great.
Where That Leaves Us
I don’t think the answer is to reject AI tooling. That ship has sailed and it was never really the point. The tools are real, the productivity gains are real, and pretending otherwise isn’t useful.
But I do think the senior developers and engineering leaders reading this have a responsibility to think about what we’re building for. Are we building teams of people who can reason about software, or teams of people who can ship software? Those used to be more closely related than they are today.
And if you’re earlier in your career: seek out the hard problems on purpose. Don’t let AI remove the friction that would have taught you something. The scar tissue matters. The 2am debugging session matters. The bad abstraction you built and had to live with: that matters too.
Good enough has always been the enemy of great. The new version of that enemy just gives you a lot more confidence while it’s working.
