Here’s a thought experiment. You’re playing Monopoly. The game is well underway. One player owns most of the board. And then something unusual happens: that player discovers they no longer need the other players to land on their properties to collect rent. The income just arrives anyway.
What happens to the other players? They keep rolling the dice for a while. They pass Go, collect $200. But they’re no longer economically necessary to the game. The surplus is being generated without them.
This is the question AI is forcing us to ask about the real economy. Not “will AI take my job?” — though that matters too. The deeper question is: if AI does an increasing share of the productive work, who gets the benefit?

What the data is actually showing
I’ve written before about what’s happening to work at the level of individual roles and careers. This article is about the structural picture underneath that — the economic logic that’s being disrupted.
But let’s start with a specific finding, because it illustrates the pattern clearly.
In November 2025, a research team at Stanford published a study using payroll data from ADP — the largest payroll processor in the US, covering 25 million workers. The researchers tracked employment and salary changes through September 2025, linking payroll records to established measures of AI exposure by occupation.
Here’s what they found.
Overall unemployment? Barely changed. Around 4.9% through the period. Stable.
But for workers aged 22–25 in high-AI-exposure jobs — software development, customer service, writing, legal support, financial analysis — employment dropped by around 20% since late 2022. Employment for workers aged 35 and over in the same roles grew by more than 8% over the same period.
The pattern begins almost exactly when ChatGPT launched in November 2022. It holds across industries, within individual firms, for both men and women. The authors tested for alternative explanations — the tech sector correction, interest rate effects, post-pandemic normalisation. None of them account for it.
The adjustment isn’t showing up in the headline number. It’s showing up in who isn’t getting hired.
Why the headline numbers are misleading
This is worth pausing on, because the stable unemployment figure is frequently cited as evidence that AI's impact on work has been overstated.
But there are structural reasons why we should expect a lag.
First, unemployment statistics measure only those actively seeking work — not those who’ve given up, not those working one hour a week, not those whose hours have been reduced from full-time to part-time as their employer absorbs new AI capability. Underemployment is the more sensitive instrument.
Second, firms adjust at the margins before they restructure. When a new technology arrives that can do what junior staff used to do, the easiest move isn’t to fire those people — it’s to stop hiring replacements when they leave, and to slow the intake of new graduates. That’s exactly what the Stanford data shows: not mass layoffs, but a quiet freeze at the entry level.
Third, and most importantly: displacement takes time to compound. A firm that can now complete 90% of its knowledge work with AI assistance doesn’t immediately restructure. It absorbs the capability. It keeps its existing people. It hires offshore to smooth the transition. The structural reckoning — the point at which the economics force a genuine reset — comes later. The unemployment signal, when it arrives, will be a lagging indicator.
We’re likely in the transitional period right now.
The experience buffer — and its limits
The Stanford paper offers a useful explanation for why experience protects workers, at least for now.
AI substitutes codified knowledge — documented processes, established workflows, things that can be written down and handed off to a system. It complements tacit knowledge: the judgment that comes from years of practice, the client relationships built over time, the ability to know what to do in a situation that doesn’t quite fit any of the established patterns.
Entry-level workers supply mostly codified knowledge. That’s what you’re hired for when you don’t yet have experience: the ability to follow documented processes reliably. AI is now competitive with that.
Experienced workers have accumulated tacit knowledge. That’s harder to substitute — and AI may actually make experienced workers more productive, not less, by handling the codified work while freeing them to apply judgment.
This is reassuring, to a point. But there’s a problem embedded in it.
If entry-level roles are where tacit knowledge is developed — if the apprenticeship years are where people accumulate the judgment that makes them valuable later — then eliminating entry-level roles also eliminates the pipeline. Who trains the next generation of experienced workers? How do people develop tacit knowledge if they never get the years it requires?
This isn’t a hypothetical. It’s already a live question in law, accounting, consulting, and technology. The people at the top of those professions are fine. The path for the people trying to get there is getting narrower.
The question our economic system hasn’t faced before
All of this points toward something larger than a conversation about skills or career advice.
Our economic system is built on a foundational assumption: human labour is a necessary input to production. Capital needs workers. Workers need wages. Wages create purchasing power. Purchasing power creates demand. The whole system coheres because human effort is woven into the process of creating value.
This has been true through every previous wave of technological change. The steam engine displaced certain kinds of physical labour, but created others. The computer displaced certain kinds of clerical work, but created others. The system absorbed the disruption and generated new categories of work that hadn’t existed before.
The question now is whether that pattern will hold.
Previous technologies displaced specific tasks. They couldn’t think. They couldn’t write. They couldn’t reason across domains, adapt to novel situations, or handle the open-ended complexity of professional knowledge work. Humans retained a comparative advantage in those areas, and those areas kept expanding.
What happens if that changes? What happens if AI becomes capable enough, across enough domains, that the general case for human labour — not this specific task, but the overall claim that human effort is economically necessary — starts to weaken?
For the first time in the history of capitalism, that question is genuinely on the table.
Two futures
There is an optimistic version of this story, and I don’t want to dismiss it.
In the optimistic version, AI generates so much additional productivity that society becomes genuinely wealthier — not just the small group who owns the systems, but broadly. The surplus gets redistributed through mechanisms we develop: shorter working weeks, new forms of social income, collective ownership of AI infrastructure, an expansion of what we count as valuable human contribution. People have more time for family, community, care, creativity, and meaning. The historical parallel isn’t the horses displaced by the automobile — it’s the liberation of enormous human energy toward things that matter more.
That future is possible. I genuinely believe it is.
But it requires deliberate choices. It doesn’t arrive automatically. And the direction of travel right now — the way AI investment is structured, the way productivity gains are currently flowing, the absence of policy frameworks adequate to the change — is not pointing cleanly toward it.
The less optimistic version doesn’t require catastrophe. It just requires drift: the gradual emergence of a two-tier economy where a relatively small group remains complementary to AI systems, and a much larger group finds their economic participation becoming structurally optional from capital’s point of view. Not sudden collapse. Just a slow narrowing of who the economy is really for.
That’s not a stable arrangement. History is reasonably clear on what happens when large groups of people find themselves economically superfluous.
What I’m not saying
I’m not saying capitalism is evil, or that technology is the enemy, or that we’re heading inevitably toward catastrophe.
I’m saying that an economic system built on the necessity of human labour is encountering something it wasn’t designed to handle: a technology that may, for the first time, make human labour genuinely optional at scale. That system will need to adapt. The adaptation won’t happen automatically. It will require enough people to understand what’s happening clearly enough to make deliberate choices about it.
That’s not panic. It’s not paralysis. It’s the beginning of the conversation we need to be having.
The gap between what’s actually happening and what most people currently understand is enormous. Closing that gap — one conversation at a time, in rooms full of curious people willing to face the question honestly — is exactly what Future Together is here for.
What are you seeing in your field — and what questions does this raise for you? Join the conversation at our next monthly meetup →