Charles N. Garrison

What is happening to work?

AI · Future Of Work · AI Impact · Economic Change · Prepare Now · Knowledge Work · Robotics

A therapist I know has been seeing clients for years. She’s good at her work — she listens well, asks the right questions, holds space for people in ways that matter.


A few months ago, a client she’d been seeing regularly stopped coming. She found out why through a mutual contact: he was talking to an AI chatbot instead.

Her response wasn’t what you might expect. She didn’t dismiss it. She said something like: “I don’t believe AI can replace the real human connection in therapy. But I’m not sure that will matter to people once it’s good enough — and available at 2am when they need it.”

She’s worried about her profession. She’s also noticed that more people who could never afford regular therapy are now getting support they couldn’t access before.

She’s holding two true things at once. That’s probably the most honest response to what’s happening to work right now.


The shape of change

Here’s something worth understanding about how transformative technologies work.

Steam engines were invented around 1700. For the next 120 years, they got steadily better — roughly 20% more efficient each decade. During all of that time, the number of working horses in the United States kept growing. There was more demand for transportation than ever, and horses still filled most of it.

Then, between 1930 and 1950, 90% of the horses in the US disappeared.

Progress was steady. Replacement was sudden.

The same pattern showed up in chess. In 2000, a human grandmaster could expect to win 90% of their games against a computer. Ten years later, the same grandmaster would lose 90% of their games. Not a gradual decline. A reversal. One decade.

This is what most people miss when they say “but AI still makes mistakes.” Of course it does. Steam engines were inefficient for 120 years. The question isn’t whether the technology is perfect today. The question is: what happens when it crosses the threshold?

Because when it does, it doesn’t cross gradually.


What’s already happening

I’m not speculating about the future here. I’m describing what I’m watching happen to people in my world right now.

A sales professional described how he uses AI to run deep research on target organisations — analysing buying signals, financial patterns, public information — more thoroughly than a whole research team could have managed before. He’s not being replaced. He’s operating at a different level. But the roles that used to sit between him and that research capability? Those are gone.

A creative I know uses AI to ingest a client’s entire back-catalogue of marketing material — years of it — and understand their voice, their tone, their patterns. Then they plan the next stage of the work from that foundation. A skill that used to take months of absorbed experience, compressed into an afternoon.

A web designer recently lost a client. Not because his work was bad. Because the client discovered they could do it themselves.

The pattern underneath all of these is the same: work that used to require significant time, expertise, or headcount is becoming something one person — or no person — can do with the right tools.


Capable, but not yet fully formed

When people push back — “AI makes mistakes, it’s not really replacing anyone yet” — I find myself reaching for an analogy.

Think of current AI as an adolescent adult. Capable enough to be doing real work, but still inexperienced, still needing oversight, still making the kinds of mistakes that come from capability without full judgment.

An adolescent adult isn’t someone you dismiss. They’re someone you take seriously — and plan around. Because they’re not going to stay at this level.

The AI tools available right now are approximately that: capable enough to be genuinely useful across a wide range of work, unreliable enough that you still need to check their output, and getting measurably better every few months. The people who understand this aren’t debating whether AI is currently good enough. They’re watching how fast it’s maturing.


When AI does what the human system can’t

Two stories came to me recently — both about medical diagnosis — that complicate the “AI makes mistakes” objection in an important way.

In the first, a patient had been struggling to get a clear answer from their doctor. Not because the doctor was incompetent, but because there wasn’t enough time. Modern medicine runs on appointments that are too short to work through a complicated case. The AI had no such constraint. It kept asking questions, worked through the complexity, and arrived at a diagnosis.

In the second, a younger man had a condition that typically affects older women. The standard treatment protocols were designed around that demographic — his circumstances fell completely outside the normal parameters. Doctors couldn’t give him a plan that fitted his situation. So he spent six months carefully tracking his condition, fed all of it to an AI, and received a treatment program built specifically for him.

These aren’t stories about AI being infallible. They’re stories about the human system having structural limits that AI doesn’t share. It doesn’t run out of appointment time. It doesn’t have a mental model of the “typical” patient that crowds out the unusual one. It can hold the full complexity of an edge case without reverting to the nearest pattern.

When people say “but AI makes errors” — yes. And so does the existing system. Sometimes the question isn’t whether AI is perfect; it’s whether it performs better than what people actually have access to.


Physical work is next — but the timeline is different

So far I’ve been talking about knowledge work: work that happens on screens, in documents, in conversations and decisions. If that describes your job, the honest answer is that you should expect significant disruption in the near term. Not distant future. Near term.

For physical work, the timeline is longer — but the direction is the same.

Right now, you can subscribe to a humanoid robot for your home. Today. These robots are slow and clumsy. They’re also getting better very quickly. In factories and warehouses, humanoid robots are already operating. In a timber mill near where I live, traditional fixed robots have already replaced workers who used to do those jobs by hand.

The complex physical work — navigating rough terrain, operating in unpredictable environments, doing things that require a human body reading conditions in real time — that work has a longer runway. Not because it’s safe, but because the technology is further behind. The direction is the same.

The last category to be displaced will likely be work where genuine human connection is the product itself. The relationships a good real estate agent builds over decades. The pastoral role of a religious leader. The specific human presence that people seek out and can’t substitute.

But even here — as the therapist’s story shows — “irreplaceable human connection” turns out to be a softer wall than we assumed.


A framework for thinking about your own situation

Rather than asking “will AI replace my job?” — which is probably too binary to be useful — try asking a different set of questions.

How much of my work happens on a screen, with information? If most of what you do is reading, writing, analysing, deciding, communicating, you are already inside the zone of rapid change. The question isn’t whether your work is affected; it’s how you’re positioning yourself in relation to that change.

How quickly is “good enough” approaching for what I specifically do? Some tasks in your role are already being handled by AI tools. Others are further off. Understanding which is which — and watching how that boundary moves — is more useful than optimism or pessimism about your industry in general.

What’s the genuinely human part of my work? Not “I enjoy doing it” or “it requires expertise.” What specifically requires you — your relationships, your judgment formed from lived experience, your physical presence? That part is worth doubling down on.


What I’m not saying

I’m not saying this is hopeless. I’m not saying you should panic about your career, or that every job is about to disappear overnight.

What I am saying is that the pattern is clear, the direction is set, and the pace is faster than most people outside the technology world currently understand. The horses didn’t notice the steam engines for 120 years. Then 90% of them were gone in two decades.

The people in the best position right now are the ones paying attention — not to reassure themselves that everything will be fine, but to understand what’s actually happening and think clearly about what it means for them.

That’s not something any individual can work out alone. It requires conversation — with colleagues, with people in other industries, with people who are watching different parts of the change.

That’s what we’re here for.


What are you seeing in your field? Join the conversation at our next monthly meetup →