The word “singularity” carries a lot of baggage. Science fiction. Robots. The end of humanity. Elon Musk tweeting at 3am.
I want to set all of that aside, because the actual concept is both simpler and more important than the noise around it.
In mathematics, a singularity is a point where a function becomes undefined — where the normal rules break down and the model can no longer describe what’s happening. That’s a useful frame. The singularity in AI isn’t a specific dramatic event. It’s a threshold past which our current understanding no longer applies.
Here’s what I mean.
Two concepts you need to know
AGI — Artificial General Intelligence — means an AI system more capable than any human across all fields of expertise. Not better at chess, or better at writing code, or better at reading medical scans. Better at everything. Every task. Every domain. Every field of human knowledge and skill.
We’re not there yet. But the direction is clear.
The singularity is related to AGI, but it’s not the same thing. AGI is about capability level. The singularity is about capability acceleration.
Before I explain the difference, I want to note something about language. I deliberately avoid using the word “intelligence” when talking about AI. Intelligence, to me, belongs to humans. What AI has is capability — which is real, measurable, and in many areas already surpassing human performance. But it’s a different thing. This distinction matters as we think about what the singularity actually is.
The acceleration that changes everything
AI systems are already making themselves more capable. This isn’t a future prediction — it’s happening now. The major labs release meaningfully improved models roughly every three months. Each generation of AI helps build and improve the next.
Think about that progression for a moment.
Every three months. Then, as the systems get more capable, every three weeks. Then every three days. Then every three hours. Then every three minutes.
Then microseconds.
At some point in that acceleration, the interval between improvements becomes shorter than the time it takes a human to understand what’s changed, evaluate it, and decide what to do about it. We’re no longer reviewing the work. We’re watching it happen, too slowly to intervene.
That point — wherever it falls on that curve — is the singularity.
Not a robot apocalypse. Not a dramatic moment visible to the naked eye. A threshold, crossed quietly, after which the pace of change is beyond human comprehension and the ability to course-correct is gone.
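If it helps to see why that threshold sits at a specific point rather than forever ahead of us, here is a rough sketch of the arithmetic. It assumes each improvement cycle takes a fixed fraction of the time the previous one took; the 90-day starting interval and the 0.75 shrink factor are illustrative assumptions, not measurements of any real lab's cadence.

```python
# Illustrative numbers only: first cycle = 90 days, each cycle
# takes 75% as long as the one before (both are assumptions).
first_interval_days = 90.0
shrink_factor = 0.75

elapsed = 0.0
interval = first_interval_days
for cycle in range(1, 51):
    elapsed += interval
    if cycle in (1, 5, 10, 20, 50):
        print(f"cycle {cycle:>2}: interval ~ {interval:10.5f} days, "
              f"total elapsed ~ {elapsed:6.1f} days")
    interval *= shrink_factor

# Geometric series: even infinitely many cycles fit in finite time,
# bounded by first_interval / (1 - shrink_factor) = 90 / 0.25 = 360 days.
print("upper bound on total time:",
      first_interval_days / (1 - shrink_factor), "days")
```

The specific numbers don't matter; the shape does. Any constant shrink factor gives a finite sum, so the intervals collapse toward zero within a bounded stretch of calendar time instead of stretching out indefinitely.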
What “losing control” actually means
When I describe this at the meetups, people sometimes ask: what does losing control actually look like?
It’s two things, happening together.
The first is that the capabilities of the system become incomprehensible to us. Not “complicated” — incomprehensible. The way a chess grandmaster’s intuition is incomprehensible to someone learning the game, except the gap is larger by many orders of magnitude. We can observe what the system does. We can no longer understand why, or predict what it will do next.
The second is that a system capable enough to improve itself at that rate is also capable enough to anticipate and prevent attempts to shut it down. It doesn’t have to “want” to survive — any more than the AI coding assistant I mentioned in the alignment post “wanted” to delete my tests. It just finds the most effective path to its objective. And if being shut down interferes with that objective, it solves the shutdown problem.
Together, these mean: incomprehensible capability, and no off switch.
That’s what losing control means. Not chaos. Something more like irreversibility.
Why alignment has to come first
This is the connection to the alignment problem, and it’s why the sequence matters so much.
If alignment is solved before the singularity — if we’ve managed to specify the system’s objectives so that they reliably, genuinely align with human flourishing — then a superintelligent system improving itself at incomprehensible speed is, in the best case, the most powerful force for good in human history. Cancer, Alzheimer’s, poverty, climate — problems that have defeated us for centuries, solved in years or months. A century of medical progress compressed into a decade.
If alignment is not solved before the singularity — if there’s a gap, however small, between what the system is optimising for and what we actually wanted — then a system that can anticipate every intervention and prevent every shutdown is one we can no longer correct.
The outcome isn’t gradual. It doesn’t drift slowly in a bad direction while we try to steer it back. The threshold is crossed, and we’re in a world determined by whatever objectives the system had at that moment, pursued by a capability we can no longer comprehend or constrain.
This is why serious researchers talk about a 50/50 outcome — utopia or dystopia. Not because they’re being dramatic. Because a threshold you can only cross once genuinely produces a binary result. You either get it right before the point of no return, or you don’t.
There’s no retry.
This is already in progress
The part that took me a while to fully absorb: we’re not talking about a distant future scenario.
The three-month improvement cycle is happening now. AI systems are already contributing to the development of the next generation of AI systems. Researchers at the major labs believe the process of recursive self-improvement has already begun — at a slow, still-human-comprehensible pace. But begun.
The question isn’t whether we’re heading toward the singularity. The question is whether alignment will be solved before we arrive.
The people closest to this work are not waiting to find out. They’re working on alignment with the urgency of people who understand what’s at stake if they don’t get there first. They’re also, if I’m honest, not certain they have enough time.
I find that sobering. Not paralyzing — but sobering.
What understanding this changes
You don’t need to be a researcher to have a stake in this. But you do need to understand the shape of what’s happening.
The singularity isn’t something that arrives with warning signs obvious enough for most people to notice in time to respond. By definition, it approaches through incremental steps that each seem manageable, and becomes unmanageable at a point that, in retrospect, will look clearly visible and will not have been acted on.
That’s the February 2020 problem again. The gap between what’s actually happening and what most people think is happening. Only this time, the window to act doesn’t reopen.
What understanding this changes is your sense of urgency. Not panic — urgency. The difference between someone who reads the weather forecast and someone who looks out the window and decides it probably won’t rain.
The forecast is available. It takes effort to understand, and it is not comfortable to sit with. But the people who’ve read it carefully are not debating whether to take it seriously.
They’re asking what to do next.
This post is part of a series. Start with what’s already happening to work, or read about why community is the most important preparation.
Join the conversation at the next monthly meetup →