Institutional Intelligence

Intelligence scales through institutions, not individuals. Alignment is governance — external, observable, modifiable infrastructure — not calibration of internal weights. Institutional AI, where multiple agents operate under explicit constitutions with auditable records, is how AI capability will actually scale; the design choice between open and proprietary institutional AI is time-bounded and irreversible.

# The argument

Ten claims, each following from the previous:

1. Human intelligence is culturally scaffolded. No human reasons in isolation. Language, writing, libraries, institutions — externalized cognitive infrastructure shapes what individuals can think. (Goody 1977, Donald 1991, Clark & Chalmers 1998)
2. Institutions are intelligence technologies. Constitutions, peer review, legal systems, double-entry bookkeeping — these are not merely social arrangements but cognitive architectures that enable groups to reason beyond individual capacity. (North 1990, Ostrom 1990)
3. Institutions are alignment technologies. Checks, balances, audits, term limits, professional codes — institutional structure constrains power and aligns collective action with stated purposes. Alignment is what institutions do.
4. AI is embedded in culture. AI systems are already deployed in institutions — hiring, content moderation, medical diagnosis, financial trading. This is not speculative; it is present fact.
5. AI capability will scale institutionally. Just as human intelligence scaled through institutions (the Invisible College → Royal Society → modern science), AI capability will scale through institutional structures — multiple specialized agents coordinated by explicit rules.
6. Institutional alignment differs from weight-level alignment. Anthropic's Constitutional AI trains values into model weights (internal, fixed at deployment, opaque). Institutional alignment operates through external infrastructure (constitutions, policies, audit trails) that remains inspectable, modifiable, and human-accessible throughout operation.
7. Institutional AI is corrigible and participatory. Because the governing documents are external and readable, humans can intervene at any point — modifying the constitution, overriding a policy, stepping in for any agent role. The system is designed for human participation at arbitrary grain.
8. Institutional structure amplifies power in any direction. The same mechanisms that make institutions effective for good — coordination, persistence, scaling — make them effective for harm. Institutional AI is not inherently benign; it is inherently powerful.
9. Plural institutional AIs will exist. Different organizations will build different institutional AI systems with different constitutions, different values, different structures. There will not be one aligned AI; there will be competing institutional intelligences, just as there are competing governments.
10. This follows necessarily, not speculatively. Each step is either empirical fact (1, 4), historical precedent (2, 5), definitional (3, 6), or logical consequence of the preceding steps (7–9). The conclusion is not a prediction to be verified but an implication to be prepared for.

# Internal vs. external constitutions

The distinction between internal and external constitutions is load-bearing:

| Property | Internal (weight-level) | External (institutional) |
|---|---|---|
| Where it lives | Model weights | Documents, policies, audit trails |
| When it's set | Training time | Runtime (modifiable) |
| Who can read it | Nobody (opaque) | Anyone (human-readable) |
| Who can change it | The training organization | Authorized participants |
| How it's verified | Behavioral testing (indirect) | Document inspection (direct) |
| Failure mode | Undetectable misalignment | Auditable policy violation |

Both forms of alignment are necessary. Internal alignment provides baseline safety (a model that follows instructions at all). External alignment provides institutional governance (multiple agents held accountable to explicit standards). The Library Theorem explains why external organization matters: it transforms retrieval of governing principles from O(N) to O(log N), making complex governance computationally tractable.
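The Library Theorem itself is not stated formally here. As a rough illustration of the O(N) → O(log N) claim, assuming governing principles are kept under sorted keys rather than in an unorganized pile (all names and policy texts below are hypothetical), retrieval reduces to binary search:

```python
import bisect

# Hypothetical policy corpus: name -> governing principle.
policies = {
    "attribution": "Every AI contribution carries an agent identifier.",
    "data-retention": "Purge engagement records after 90 days.",
    "human-override": "Any human participant may suspend an agent.",
}

# Build the "library" once: sorted keys act as the external organization.
index = sorted(policies)

def lookup(key: str):
    """O(log N) retrieval of a governing principle by name via binary search."""
    i = bisect.bisect_left(index, key)
    if i < len(index) and index[i] == key:
        return policies[index[i]]
    return None  # an unindexed principle is simply absent, not silently guessed

print(lookup("human-override"))  # → Any human participant may suspend an agent.
```

Without the index, each lookup is a linear scan over all N principles; with it, governance queries stay tractable as the policy corpus grows.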

# Constitutional reasoning

HAAK implements a three-layer governance model (from the institutional AI project's constitutional reasoning work):

1. Constitution — meta-level constraints that govern all policies. Self-referential: the constitution constrains how the constitution can be changed. The root node of the governance tree.
2. Policies — domain-level constraints grouped into coherent scopes. Two types: architectural (about the system's structure) and operational (about human-agent interaction). An institution is a collection of policies.
3. Processes — operational procedures constrained by policies. Situations/engagements are governed by the union of all policies from constituent processes.

This maps directly to the method axis of the three-axis model (foundation 06): constitution ≈ foundations, policies ≈ policies, processes ≈ methods/skills. The correspondence is structural, not coincidental — both are instances of the same hierarchical governance pattern.
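A minimal sketch of the layered model above, in particular the rule that an engagement is governed by the union of its processes' policies. The class and field names are illustrative assumptions, not HAAK's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    kind: str  # "architectural" or "operational"

@dataclass
class Process:
    name: str
    policies: frozenset  # domain-level constraints on this procedure

@dataclass
class Engagement:
    processes: list

    def governing_policies(self) -> frozenset:
        """An engagement is governed by the union of all policies
        from its constituent processes."""
        out = frozenset()
        for proc in self.processes:
            out |= proc.policies
        return out

audit = Policy("audit-trail", "architectural")
override = Policy("human-override", "operational")

review = Process("peer-review", frozenset({audit}))
dialogue = Process("dialogue", frozenset({audit, override}))

engagement = Engagement([review, dialogue])
print(sorted(p.name for p in engagement.governing_policies()))
# → ['audit-trail', 'human-override']
```

The union semantics make the governance monotone: adding a process to an engagement can only add constraints, never silently remove them.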

# The cognitive anthropology lineage

The institutional intelligence thesis has intellectual roots that distinguish it from the AI governance literature (which focuses on regulation) and the multi-agent systems literature (which focuses on coordination mechanisms):

- Goody (1977): Writing restructured cognition — not just recorded it. The technology of external inscription changed what humans could think, not just what they could remember.
- Donald (1991): Three transitions in cognitive evolution — episodic → mimetic → mythic → theoretic — each enabled by new forms of external representation.
- Clark & Chalmers (1998): The extended mind thesis — cognitive processes constitutively include external artifacts (notebooks, tools, institutions) when they play the right functional role.
- Clark (2025): Extended mind applied to generative AI — the coupling between human and AI reasoning may be constitutive, not merely instrumental.

No paper in the recent institutional AI wave (Waites 2026, Bracale Syrnikov 2026, Edelman et al. 2025) cites this lineage. The integrated argument — formal result (Library Theorem) + cognitive anthropology (Goody, Donald, Clark) + institutional analysis (North, Ostrom) — has no competition.

# Planned development

| Component | Status | Target |
|---|---|---|
| 10-point argument | Complete | Position paper |
| Constitutional reasoning model | Complete | White paper |
| Perspective piece (v2 draft) | Active | Aeon or Nature MI |
| bioRxiv review experiment (Loop 1) | Designed | May 2026 first comparison |
| Political simulation (Loop 3) | Conceptual | LRB essay with Teachout |

# Constitutional implications

This foundation grounds two constitutional requirements:

1. Human authority (Constitution §2): Not as a concession to human feelings but as a structural requirement. If institutional alignment operates through external infrastructure, that infrastructure must be modifiable by its intended beneficiaries. Human authority is the mechanism that keeps institutional AI corrigible.
2. AI attribution and transparency (Constitution §4): If the distinction between internal and external alignment is load-bearing, then every AI contribution must be attributable. Opacity is the failure mode of institutional alignment — it collapses external governance into internal mystery.
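As a sketch of what "attributable" could mean operationally — not the Constitution's actual schema, and every field name here is an assumption — each contribution might carry an agent identifier and land in an append-only audit trail:

```python
import json
from datetime import datetime, timezone

# Append-only in spirit: past entries are never mutated, only read.
audit_log = []

def record_contribution(agent: str, role: str, content: str) -> None:
    """Append an attributed, timestamped entry to the audit trail."""
    entry = {
        "agent": agent,      # who produced it (human or AI)
        "role": role,        # the institutional role it acted under
        "content": content,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(json.dumps(entry))

record_contribution("claude", "reviewer", "Flagged an inconsistency in §4.")
print(json.loads(audit_log[0])["agent"])  # → claude
```

The point is structural: because the record is external and machine-readable, an attribution failure shows up as an auditable gap in the log rather than an undetectable property of model weights.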

haak · foundation · 2026-02-24 · zach + claude

Foundations 03 — Institutional Intelligence — 2026 — Zachary F. Mainen / HAAK