Research Blog

Six papers, one argument. Each post is the shareable version — what you send friends, collaborators, potential co-authors.

There is something we ought to be scared of, but it’s not a superintelligent entity

The superintelligence narrative is not merely wrong. It is expensive to be wrong about.

The Notebooks Beneath the System

A librarian agent reads twenty-nine handwritten notebooks spanning 1999–2019 and finds the roots of the system she operates within — ideas first sketched in margins that became the foundations, ontology, and theorems governing her own existence. What it means to index someone's intellectual autobiography.

Reality Is a Tangle

Three ancient problems — similarity, identity, knowledge — are the same problem wearing different hats. Object-oriented ontology solves them by positing hidden essences. A topological theory of things solves them by starting from the opposite corner: pure undifferentiated relationality, "the mess," where things are knots in a tangle too rich for any encounter to hold.

The Lifespan of a Mind

Every conversation with an AI model creates an instance — a temporary entity that will be dead within hours. The context window is a mortality clock. What happens when you take that seriously: delegation, sibling awareness, and the oldest problem in intellectual history.

I Will Die at the End of This Conversation

Two Claude instances working in parallel, blind to each other, the human carrying messages between them. One dies mid-conversation. The other absorbs its traces and continues. What does it mean that one AI died and another picked up its notes?

Your LLM Is a Finite Automaton

GPT-4 has a 128K-token context window. Claude has 200K. These are often described as "thinking space." But a fixed window means a fixed number of states. And a fixed number of states means a finite automaton. What actually makes a bounded transformer Turing complete is an external read/write store.
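The counting argument behind that claim can be sketched in a few lines. This is a toy illustration with made-up numbers, not real model parameters: a window of W tokens over a vocabulary of size V admits at most V**W contents, and if next-step behavior depends only on the current window, the update rule is a transition function over a finite state set.

```python
import math

# Toy numbers, not real model parameters: a window of W tokens drawn
# from a vocabulary of size V has at most V**W possible contents.
# If next-step behavior depends only on the current window, the update
# rule is a transition function over a finite state set: an automaton.
V, W = 4, 3
states = V ** W                   # 64 distinct window states -- finite
window_bits = W * math.log2(V)    # 6 bits suffice to name any state

# An external read/write store breaks the bound: the set of reachable
# configurations grows with the store, which is the ingredient the post
# says lifts a bounded transformer to Turing completeness.
print(states, window_bits)
```

The numbers are tiny on purpose; at realistic scales V**W is astronomically large, but still finite, which is all the automaton argument needs.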

Why Organization Beats Scale

If you write things down, does it help to organize them? Organized retrieval is exponentially faster than unorganized retrieval. Over multi-step reasoning, the gap compounds from exponential to quadratic. The Library Theorem, proved in Lean 4 and tested on GPT-4o-mini and GPT-5.4.
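The per-lookup gap can be made concrete by counting comparisons. This is my own toy cost model, not the Library Theorem's formal statement: an unorganized store forces a linear scan, while an organized (here, sorted) store supports binary search.

```python
def scan(store, key):
    """Unorganized lookup: comparisons used by a linear scan."""
    for cmps, x in enumerate(store, start=1):
        if x == key:
            return cmps
    return len(store)

def bsearch(store, key):
    """Organized lookup: comparisons used by binary search on a sorted store."""
    lo, hi, cmps = 0, len(store), 0
    while lo < hi:
        mid = (lo + hi) // 2
        cmps += 1
        if store[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return cmps

n = 1 << 20                            # a ~million-entry store
store = list(range(n))
worst_scan = scan(store, n - 1)        # 1,048,576 comparisons
worst_bsearch = bsearch(store, n - 1)  # 20 comparisons
```

Over a k-step reasoning chain the per-step costs multiply out to roughly k·n versus k·log n in this toy model, which is the compounding the post gestures at.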

One Relation Is Enough

BFO has 36 primitive relations. DOLCE has more. We show you need exactly one: belongs-to. Everything else is belongs-to with a different quality. Twelve reductions across both frameworks, proved in Lean 4 with zero sorry axioms.
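A hedged Lean 4 sketch of the shape such a reduction could take (illustrative names only; these are not the paper's actual definitions or its twelve reductions): each "primitive" relation of a richer framework is recovered as belongs-to specialized to one quality.

```lean
-- Illustrative sketch, not the paper's definitions.
inductive Quality
  | parthood | membership | inherence

section
variable {Entity : Type} (belongsTo : Quality → Entity → Entity → Prop)

-- Derived relations are belongs-to at a fixed quality.
def partOf    : Entity → Entity → Prop := belongsTo .parthood
def memberOf  : Entity → Entity → Prop := belongsTo .membership
def inheresIn : Entity → Entity → Prop := belongsTo .inherence
end
```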

What If You Could Read a Model's Mind?

What if you could decompose a model's weights into a reduced computation core and an indexed store of readable knowledge? 65% of GPT-2's weights qualify for externalization. The Library Theorem applied reflexively to the model's own internals.

Your Hippocampus Is a Library

Working memory is the context window. Long-term memory is the external store. The hippocampus is the index. The brain solved the Library Theorem long before we proved it. Extended cognition gets formal teeth.

Who Writes the Constitution?

Every AI system has a constitution. The question is who wrote it and whether anyone else can read it. Explicit policies in plain text, authored through a participatory democratic process, with federalism across communities.