Every organisation celebrating its AI productivity gains is also, quietly, depriving its junior people of the experiences that make them leaders.
Hardly anyone is talking about this yet. They will be.
Here is what is happening. AI is extraordinarily good at handling the messy, repetitive, lower-order work — the drafting, the analysis, the summarising, the first-pass decisions. For senior professionals with deep experience, this is a genuine multiplier. They can do in hours what once took days. Their judgment, shaped by years of getting things wrong, is now applied at a far higher rate.
For junior people, the story is different.
They are using the same tools. They are producing work that looks polished. But they are skipping the part where people actually learn.
A Harvard Business Review piece from February this year put it precisely: because AI now handles the messy, repetitive tasks that once built judgment, junior employees miss chances to develop it. Organisations risk ending up with managers who have never done the underlying work — and thin leadership pipelines as a result.
This is not about AI being bad. It is about what we sacrifice when we optimise only for output.
Think about how judgment actually develops. It is not built in classrooms or through reading frameworks. It is built through friction. Through making a call, being wrong, understanding why, and adjusting. Through doing the unglamorous work at ground level — the first draft that gets torn apart, the data analysis that leads nowhere, the recommendation that does not land. These are not inefficiencies to be automated away. They are the load-bearing experiences of a leadership pipeline.
The stress that builds judgment is exactly the kind of stress we are now designing out of the work.
I wrote about antifragile teams earlier this year. The core idea, drawn from Nassim Nicholas Taleb, is that systems do not just survive stress — they improve because of it. Muscles grow. The immune system strengthens through exposure. Teams develop adaptive habits through repeated contact with reality.
Antifragility does not emerge from protection. It emerges from calibrated exposure to difficulty.
What we are doing with AI, inadvertently, is building fragility into the next generation of leaders. We are raising a cohort that can produce high-quality outputs without developing the internal architecture to evaluate those outputs. They can ask the right questions of a tool. But they cannot always tell whether the answer is any good.
Deloitte’s 2026 Global Human Capital Trends research found that 60 percent of executives now regularly use AI to support their decisions. That is not surprising. What concerns me is the downstream question nobody is asking: what does that shift do to the people coming up behind them, who are never required to develop the same muscles?
Senior leaders get sharper. Junior talent gets thinner. The gap compounds quietly.
This is not something that shows up in your productivity metrics. The output looks the same — often better. The problem surfaces five years from now, when you look at your leadership bench and realise that people have portfolios but not instincts. They have tools but not judgment. They have never been wrong in a way that cost them something.
So what should leaders do?
Do not slow down AI adoption. That is not the answer, and it is not realistic. But do deliberately design judgment-building back into the work. Create moments where junior people are required to form a view before the AI does. Ask them not just to present an output but to explain why they trust it. Restore the stretch experience — the task that is slightly beyond someone’s current ability, where failure is possible and learning is guaranteed.
Think about the small after-action review. The question at the end of a decision: what did we learn? Not just what did we decide, but what did we understand about the situation that we did not understand before?
These are the 1 percent improvements that compound into judgment, the same way marginal gains compound into culture. Not dramatic. Not loud. Just deliberate.
AI will not hollow out your leadership pipeline. Passively deploying AI without protecting the conditions that grow leaders will.
The organisations that understand this will have a significant advantage. Not because they use AI more. Because they use it in a way that strengthens the humans alongside it, rather than bypassing the hard work that makes humans capable in the first place.
That is the kind of antifragility worth designing for.
