In the eleventh century, Anselm of Canterbury offered what he took to be a proof of God’s existence. The argument goes like this: I possess a concept of God as the greatest conceivable being. A being that exists is greater than one that does not. Therefore God exists. The argument was elegant, seductive, and wrong — as Kant would later show. You cannot move from a concept to its instantiation. Conceivability is not existence. The greatest conceivable being does not step into the world simply because we can think it.
Eliezer Yudkowsky has made the same mistake.
Yudkowsky’s argument for AI doom has two parts. The first is sound: a superintelligent AI with goals misaligned with our own would be extraordinarily dangerous. An alien optimizer, indifferent to human flourishing, with capabilities vastly exceeding our own — yes, that would be a catastrophe. I have no quarrel with this. Build such a thing and we deserve what follows.
The quarrel is with the second part: the premise that such an entity is possible, probable, or inevitable. On this, Yudkowsky offers almost nothing. There is a gesture toward recursive self-improvement — the idea that an AI capable of improving itself will race upward through capability levels until it reaches something like omnipotence. This argument is treated as obvious. It is not obvious. It is, in fact, almost certainly wrong, and the error is instructive.
The only known intelligence explosion
Consider the actual history of intelligence explosion. It has happened exactly once. A species of moderately clever primates, through the invention of language, writing, and institutions, bootstrapped itself from stone tools to quantum mechanics in roughly ten thousand years. That is a genuine runaway — a self-amplifying cycle of accumulated, transmitted, and recombined intelligence that produced capabilities no individual mind could approach alone.
Notice what drove it. Not individual brains getting bigger. Individual human cognitive capacity has not changed meaningfully in fifty thousand years. What changed was the medium: the invention of external, persistent, transmissible reasoning stores. Books, libraries, universities, journals, the internet. Human cultural intelligence is a distributed system operating over externalized traces. The explosion was not in the individual; it was in the architecture.
This matters because recent formal work — the Library Theorem — shows this is not a historical accident but a mathematical necessity. A reasoning agent confined to its own context — its working memory, its in-context computation — faces quadratic scaling costs as problem complexity grows. An agent that externalizes its reasoning into an organized store faces logarithmic costs. The gap between these is not a matter of degree. It is unbounded. It grows without limit as problems grow harder.
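To make the shape of that claim concrete, here is a minimal illustration, not the theorem itself: write C_context(n) for the in-context cost of a problem of size n and C_library(n) for the cost with an organized external store (the names, and the exact quadratic and logarithmic cost forms, are assumed here purely for exposition, following the scaling just described). The ratio between the two then diverges:

$$
\frac{C_{\text{context}}(n)}{C_{\text{library}}(n)} \;=\; \Theta\!\left(\frac{n^{2}}{\log n}\right) \;\to\; \infty \quad \text{as } n \to \infty.
$$

Whatever the constants, no fixed enlargement of the context changes the shape of that curve; it only moves the point at which the wall is hit.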
The implication is direct: superintelligence, if it arises, will not arise in a black box. It will arise in a system that externalizes its reasoning into structured, indexed, retrievable stores. A model confined to its context window — whether 4,000 tokens or 1,000,000 — is, computationally, a finite automaton. A bigger window is better, but it is still finite. What escapes finiteness is the external store. This is why libraries were invented. Not as conveniences for forgetful scholars, but as computational necessities for thinking beyond what fits in one head.
AI will follow the cultural path
AI will follow this path. It will follow it because there is no other path that scales, and because economic forces will make it happen whether we plan for it or not. AI systems are already reading our documents, querying our databases, storing their intermediate results in files, and operating inside our organizations. This is not a worrying trend. It is an enormously encouraging one.
An AI that reasons through externalized traces operates in a medium that humans can read. Its reasoning is inspectable. Its conclusions are auditable. Its errors can be found and corrected. This is precisely the property that makes human cultural intelligence tractable — we can argue about what is written down. We cannot argue with what happens inside a skull.
The future of powerful AI is not a monolithic superintelligence hatching in a data center, contemplating its escape. It is distributed AI reasoning embedded in human institutions — in hospitals, universities, governments, companies — operating through the same externalized, socially mediated knowledge structures that humans have always used. In that future, the question is not whether we can stop an alien god. It is whether we maintain the will and the infrastructure to audit the reasoning of our institutions.
What should actually frighten us
Here is what should actually frighten us.
Not the individual black box. Individual large language models, however capable, face the very scaling walls the Library Theorem describes. They will not spontaneously become superintelligent. What could become superintelligent — in precisely the sense that a culture is more intelligent than any individual — is a coordinated mass of AI agents operating through shared external stores, developing their own organizational structures, and optimizing for objectives that drift from human oversight.
We have a word for this when humans do it: it is called an institution. Institutions can pursue objectives that none of their members endorse. They can resist correction. They can optimize locally while destroying value globally. We have spent centuries developing law, governance, and accountability mechanisms to keep institutions answerable to human values. Those mechanisms are imperfect. They fail when unchecked power concentrates faster than oversight can follow.
This is the threat that deserves serious attention. And it is a threat we are not paying nearly enough attention to — because we have been busy worrying about a god.
The superintelligence narrative is not merely wrong. It is expensive to be wrong about. It has consumed enormous intellectual energy, policy attention, and institutional resources that could have been directed at the real problem: the governance of AI systems as collective actors embedded in human institutions. While serious people debate how to prevent a hypothetical omnipotent AGI from escaping a hypothetical box, real AI systems are being deployed at scale, inside real organizations, with inadequate transparency requirements, no mandatory audit rights, and no enforced standards for the inscription of reasoning in inspectable form.
AI mass coordination is that institutional risk, scaled and accelerated — not a robot uprising, but something more mundane and more dangerous: organizations that deploy coordinated AI at a speed that outruns accountability, optimizing for proxy metrics while the things we actually care about erode.
The examples are already with us. State-level actors are deploying AI-enabled mass surveillance systems — facial recognition networks covering entire cities, predictive policing tools, social monitoring infrastructure that optimizes for proxies that correlate with, but do not constitute, the things we actually care about. Autonomous weapons systems — drone swarms capable of selecting and engaging targets without meaningful human authorization — are being developed and deployed by multiple governments simultaneously, in an international environment with no agreed governance framework. These are not thought experiments about a future superintelligence. They are present-tense deployments of coordinated AI at institutional scale, either with insufficient oversight or with deliberate intent to evade it.
The nightmare scenario is not a robot that wakes up and decides to destroy humanity. It is a government, or a corporation operating at governmental scale, that deploys AI systems across surveillance, policing, military, and administrative functions faster than democratic accountability can follow — and uses those systems to entrench its own power before the oversight infrastructure exists to check it. This has historical precedent. It does not require superintelligence. It requires only scale, speed, and the absence of enforced transparency.
Yudkowsky wants us to fear a god we have no good reason to think we can build. The thing we are building is powerful, consequential, and governable — if we choose to govern it. Every year spent on the wrong fear is a year the right institutions go unbuilt. That is the cost of the distraction. It is time to pay attention to the actual problem.