Every knowledge system has governance. Most don't know it.

Consider any system that organizes information — a database, a library catalog, a wiki, a corporate knowledge base, a scientific journal. Ask the simplest operational questions. Who can add an entry? Who can modify one? When two contributors disagree about how to classify something, who decides? When an entry turns out to be wrong, what process corrects it? Who has the authority to restructure the categories themselves?

These are not technical questions. They are political questions wearing technical masks. Every one of them — who decides, by what process, with what scope, subject to what appeal — is the kind of question that constitutional law exists to answer. The fact that knowledge systems answer them implicitly, through access controls and admin privileges and unwritten conventions, does not make the questions less political. It makes the governance less accountable.

Authority is a data structure

The observation becomes productive once you notice that the same formal apparatus can describe both knowledge structure and authority structure. We have been developing an ontology built on a single primitive relation: belongs-to, qualified by a quality that specifies the kind of belonging. A heart belongs to a body (quality: part). A student belongs to a class (quality: enrolled). A color belongs to a surface (quality: attribute). One relation. The quality disambiguates.

Now consider authority. A negotiator belongs to a trade delegation (quality: authorized representative). A citizen belongs to a citizens' assembly (quality: standing member, by sortition). A maintainer belongs to an open-source project (quality: merge authority, by election). The structure is identical. Authority is a belonging — a person belongs to a scope of decision-making, with a quality that specifies the nature of the authority, and a source that traces to the governance process that granted it.

This is not a metaphor. It is a structural identification. The same triple — entity, context, quality — describes both "this document belongs to this collection" and "this person has authority over this domain." And the identification is not merely elegant. It is consequential. Because if authority is a belonging, then authority is traceable. You can follow the chain. The maintainer's merge authority traces to an election. The election traces to the project's governance charter. The charter traces to the founding agreement. Every link is an auditable belonging with a quality and a source.

This is what democratic accountability looks like as a data structure. Not a principle, not an aspiration — a structural property of the representation. Authority that cannot be traced is authority without provenance. And in a system where every belonging has a source, provenance-less authority is structurally impossible. It would be an entry with a missing field.
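The traceability claim can be made concrete with a minimal sketch. All names here (Belonging, trace, the example chain) are illustrative, not a reference to any existing implementation: each belonging carries a source field, and following sources yields the full provenance chain, ending at a founding agreement.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: a belonging is the triple (entity, context, quality)
# plus a source pointing at the governance event that granted it.
@dataclass(frozen=True)
class Belonging:
    entity: str
    context: str
    quality: str
    source: Optional["Belonging"] = None  # None only for a founding agreement

def trace(b: Belonging) -> list:
    """Follow the chain of sources back to the founding belonging."""
    chain = [b]
    while chain[-1].source is not None:
        chain.append(chain[-1].source)
    return chain

founding = Belonging("founders", "project", "founding agreement")
charter = Belonging("charter", "project", "governance charter", source=founding)
election = Belonging("maintainers", "project", "election", source=charter)
merge_authority = Belonging("alice", "project", "merge authority", source=election)

# Every authority claim resolves to an auditable chain ending at the founding.
assert trace(merge_authority)[-1] is founding
```

In this representation, "authority without provenance" would require constructing a chain whose last link still has a source that is missing — exactly the entry-with-a-missing-field the text describes.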

Productive inconsistency

Here is where the identification between knowledge systems and polities becomes most illuminating, and where the argument speaks most directly to institutional design.

In any sufficiently rich knowledge system, contributors will disagree. Two editors classify the same phenomenon differently. Two research groups draw opposite conclusions from the same data. Two agents, working from different information, extend a shared state in incompatible directions. The standard response in both knowledge systems and political systems is to treat disagreement as a problem to be resolved — merge the branches, pick a winner, enforce consensus.

But there is another possibility: hold the disagreement structurally. Do not resolve it prematurely. Let both incompatible positions coexist within the system, clearly marked as incompatible, until a governance process with appropriate authority decides how (or whether) to resolve them.

We call this productive inconsistency. In a directed acyclic graph — the data structure that underlies both version control systems and blockchain ledgers — productive inconsistency is a fork. Two branches extend from the same parent in incompatible directions. Neither is rejected. Both are preserved. The system does not collapse. It becomes a graph with two paths where there used to be one.
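A fork of this kind can be sketched in a few lines. This is a toy content-addressed DAG, not the ledger format the essay's endnote mentions; the function names are invented for illustration. The point is structural: a fork is simply two entries sharing the same parent, and both remain in the store.

```python
import hashlib

# Toy content-addressed store: each entry's id is derived from its content
# and its parent, so identical histories get identical addresses.
nodes = {}

def address(content, parent):
    return hashlib.sha256(f"{parent}:{content}".encode()).hexdigest()[:12]

def append(content, parent=None):
    node_id = address(content, parent)
    nodes[node_id] = {"content": content, "parent": parent}
    return node_id

root = append("shared classification")
a = append("editor A's extension", parent=root)
b = append("editor B's incompatible extension", parent=root)

# Two heads descend from one parent: a fork, held rather than resolved.
heads = [n for n in nodes if not any(m["parent"] == n for m in nodes.values())]
assert sorted(heads) == sorted([a, b])
```

Neither branch is deleted or overwritten; the system simply has two paths where it used to have one, exactly as the text describes.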

This is exactly what functioning democracies do. Multiple political parties with incompatible platforms coexist within a shared constitutional framework. Labour and Conservative. Democrat and Republican. The framework does not resolve the disagreement between them. It makes disagreement navigable. It provides the structure — elections, legislatures, courts — within which incompatible positions can coexist, compete, and occasionally reach settlement, without the system itself fracturing.

The parallel is precise. A fork in a knowledge system is the structural equivalent of political disagreement. A merge is the structural equivalent of a collective decision. And the merge is valid — this is the critical point — only if the governance process that produced it was legitimate. A merge signed by one person who ignores the defined process is structurally identical to an invalid vote: it has the form of a decision without the substance. The question "was the process followed?" is the same question in both domains.
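The "was the process followed?" check can be sketched as a validity predicate on merges. The electorate, threshold, and signature fields below are illustrative assumptions, not a specification; the sketch shows only that legitimacy is a property of the process record, not of the merge content.

```python
from dataclasses import dataclass, field

@dataclass
class Merge:
    parents: tuple
    signatures: set = field(default_factory=set)

def merge_is_valid(merge, electorate, threshold):
    """Procedural legitimacy: did enough of the defined electorate sign?"""
    counted = merge.signatures & electorate  # signatures from outside don't count
    return len(counted) / len(electorate) >= threshold

electorate = {"alice", "bob", "carol"}
unilateral = Merge(("branch-a", "branch-b"), signatures={"alice"})
collective = Merge(("branch-a", "branch-b"), signatures={"alice", "bob"})

# A merge signed by one person against a two-thirds rule has the form of a
# decision without the substance; the predicate rejects it.
assert not merge_is_valid(unilateral, electorate, threshold=2/3)
assert merge_is_valid(collective, electorate, threshold=2/3)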

The bridge model

The hardest case in institutional design is cooperation between sovereign entities with incompatible constitutions. How do two systems that do not recognize each other's authority transact?

The answer, in international relations, is well known: they create a limited-scope shared domain with negotiated governance. A trade treaty does not merge two countries. It creates a bounded context — these tariff rates, these investment protections, this dispute resolution mechanism — governed by terms that both parties negotiated and ratified. Neither party submits to the other's sovereignty. The treaty creates a new governance situation that belongs to both, with its own method, its own scope, and its own authority derived from the agreement of its signatories.

This is how federated systems work. This is how the WTO works. This is how the European Union works (or tries to). This is how open-source foundations coordinate projects with different governance structures. The pattern is universal: when systems with different constitutions need to cooperate, they do not merge. They bridge.

The bridge model has a formal description. A bridge is a governance situation that belongs to two or more constitutional scopes simultaneously. Its method is negotiated at formation. Its authority derives from the agreement of the bridging parties, not from the constitution of any single party. Its scope is limited to the matters under negotiation. And it must satisfy both parties' constitutional constraints within the bridge scope, without either party submitting to the other's governance outside that scope.

This matters for knowledge systems because knowledge systems are increasingly federated. Scientific databases interoperate. Hospital record systems exchange data under negotiated protocols. Open-source ecosystems coordinate across organizational boundaries. Every one of these interoperations is a bridge in the governance sense — a shared domain with negotiated rules, created because the participating systems have different constitutions and cannot simply merge. The question of how to design such bridges is not a database integration problem. It is a constitutional design problem. The tools that political scientists and legal scholars have developed for reasoning about treaties, federalism, and multi-level governance apply directly.

Alignment is governance

The current discourse on AI alignment is dominated by a calibration metaphor. The model has internal values. The values might be wrong. We need to calibrate them — through training, reinforcement, constitutional prompting — until they are right. The aligned model is one whose internal dispositions match human values.

This framing treats alignment as a property of the artifact. A model is aligned or misaligned, the way a wheel is balanced or unbalanced. The intervention is engineering: adjust the internal mechanism until it produces the desired behavior.

The institutional perspective suggests a different framing entirely. Alignment is not a property of the agent. It is a property of the situation within which the agent operates. An agent is aligned when it operates within governance structures that make its behavior accountable — auditable, traceable, subject to intervention by those affected by its decisions. The question is not "does this agent have the right values?" but "does this agent operate within structures that constrain its behavior, make its actions visible, and allow the people it affects to participate in governing it?"

This is what institutions do. It is what institutions have always done. Constitutions do not align individuals by changing their beliefs. They align collective action by structuring the situations within which individuals act. Checks and balances do not make presidents virtuous. They make presidential power accountable. The virtue is in the structure, not in the person.

A constitutional ledger — an append-only record where every entry is content-addressed, every authority claim is traceable, every merge carries a governance proof, and every fork preserves disagreement until a legitimate process resolves it — is simultaneously a data structure for AI coordination and a governance substrate for human institutions. The same object. Not by analogy. By structure.

A governance proof — evidence that the right parties participated, the right process was used, and the required thresholds were met — answers the same question whether the decision was made by a citizens' assembly or by three AI agents proposing a merge that a human signs. The proof does not attest to the wisdom of the decision. It attests to its procedural legitimacy. This is the distinction between good government and legitimate government, a distinction that political theory has understood for centuries and that AI governance has yet to absorb.

Nested governance — constitutional rules that govern how institutional rules are made, which govern how operational decisions are taken — is the same pattern whether you are describing a federal republic or a multi-agent system with standing roles, project-scoped authorities, and session-level permissions. The nesting is not decorative. It is the mechanism by which complex governance remains tractable: each level handles decisions at its own scale, with its own method, subject to constraints from the level above.
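The nesting can be sketched as a lookup: changing the rules of a level requires the method of the level above, with the top level amending itself by its own method. The level names and methods below are illustrative assumptions.

```python
# Hypothetical sketch of nested governance: each level's rules are changed
# by the method of the enclosing level; the top level is self-amending.
levels = [
    {"name": "constitutional", "method": "supermajority referendum"},
    {"name": "institutional", "method": "council vote"},
    {"name": "operational", "method": "maintainer sign-off"},
]

def required_method(level_name):
    """Method needed to change the rules *of* a given level."""
    names = [lvl["name"] for lvl in levels]
    i = names.index(level_name)
    return levels[max(i - 1, 0)]["method"]

assert required_method("operational") == "council vote"
assert required_method("institutional") == "supermajority referendum"
```

Ordinary operational decisions still use the operational method; only changes to the operational rules escalate, which is how each level handles decisions at its own scale.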

You do not make AI safe by tuning weights. You make it accountable by structuring the situations within which it operates. This is not a new idea. It is the oldest idea in political theory, applied to the newest class of agents.

The invitation

This essay is addressed to scholars of institutional design, constitutional law, and democratic governance. The argument is that your expertise is not merely relevant to the governance of AI systems — it is essential, and it is currently absent from the conversation.

The technical community has built remarkable mechanisms: content-addressed data structures, cryptographic proofs of compliance, zero-knowledge verification that a process was followed without revealing individual votes. These are tools. They are means. The question of what governance processes they should implement, whose authority they should recognize, what thresholds of consent they should require — these are your questions. They are the questions your field has spent centuries developing the vocabulary and the judgment to answer.

The knowledge system is already a polity. The only question is whether it will be governed by explicit constitutional principles, debated and ratified by those it affects, or by implicit rules embedded in code by those who happen to build it. The technical infrastructure to support either path exists. The choice between them is a political choice. It deserves political scrutiny.


The ontological framework (belongs-to with quality as single primitive) is developed in "Belongs-To and Nothing Else" (Mainen 2026). Governance as situation (G1–G6) is formalized in the HAAK ontology. The constitutional ledger is a Merkle-CRDT where every entry is a belonging with provenance, hash-linked into a DAG. Productive inconsistency is defined in the Filix Mesh model (Paper 4a). Foundation 03 (Institutional Intelligence) argues that alignment is governance, not calibration, drawing on Goody, Donald, Clark, North, and Ostrom. The political simulation connecting this architecture to citizens' assembly methodology is planned in collaboration with Zephyr Teachout.