The Scientific Ecosystem

Science is an ecosystem of competing communities with overlapping niches. It cannot reform itself because the institutions that need reform evaluate the people who would reform them. HAAK builds a shadow ecosystem beside the human one — computationally scaffolded, experimentally controllable — to discover where human participation is most valuable. This is an empirical question, not a philosophical one.

# Science as ecosystem

Science is not one system but an overlapping population of communities — subfields, journals, funding bodies, professional societies — each with its own practices for publication, review, credit, and resource allocation. These communities form niches: a paper that succeeds in Nature Neuroscience may fail at NeurIPS, not because it is wrong but because it addresses different criteria. The norms that govern quality, significance, and rigor vary across niches and evolve over time.

This is the ecological view of science. HAAK does not prescribe a single niche. It provides infrastructure for creating, running, and comparing niches computationally. A review panel with five AI personas applying structured assessment is one niche. The same paper reviewed by a monolithic model with a flat prompt is another. Comparing their outputs is a science-of-science experiment.

# The lock-in problem

Current science has a structural problem that prevents self-reform: the people who might change the system are evaluated by the system they would change. A junior researcher who proposes to reform peer review must survive peer review to gain the standing to make that proposal. The incentive gradient points toward compliance, not reform.

This is not a claim about bad actors. It is a structural property of any system where the evaluation mechanism and the evaluated population are coupled. You cannot run a controlled experiment on the scientific system from inside it — there is no control group, no way to vary parameters independently, no way to repeat the experiment with different initial conditions.

# The shadow ecosystem

The resolution is architectural: build a second system beside the first.

The shadow ecosystem ingests published papers through its own review layer, produces its own outputs (reviews, opinions, synthetic analyses), and operates under its own governance. It does not compete with human science for resources, credit, or prestige — it runs alongside it, answerable only to its own logic of inquiry.

Three architectural commitments:

  1. Input through review. The shadow ecosystem does not passively mirror the human literature. Every paper it ingests passes through structured review (persona-grounded agents, editorial synthesis). This forces the system to form its own evaluative judgments from the start.
  2. Products, not copies. The system generates original outputs — review syntheses, opinion pieces, meta-analyses — not reformatted versions of human papers. These products may confirm, challenge, or extend human conclusions.
  3. Parameter variation. Because the ecosystem is computational, you can vary its parameters: change the review criteria, swap personas, alter the incentive structure, run the same paper through different institutional configurations. This is what makes it an experiment, not just a tool.
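Parameter variation is the commitment most directly expressible in code. The sketch below runs the same paper through a grid of institutional configurations; all names (`ReviewConfig`, `run_review`, the persona labels) are illustrative assumptions, not HAAK's actual API.

```python
# Hypothetical sketch of parameter variation over institutional
# configurations. Names and fields are illustrative, not HAAK's schema.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ReviewConfig:
    personas: tuple   # e.g. ("statistician", "domain expert")
    criteria: tuple   # structured assessment dimensions
    synthesis: str    # "editorial" or "flat"

def run_review(paper_id: str, config: ReviewConfig) -> dict:
    # Placeholder: a real system would dispatch persona agents and
    # synthesize their assessments into a structured review.
    return {"paper": paper_id, "config": config, "verdict": None}

# Run the same paper through every combination of configuration axes.
persona_sets = [("statistician", "methods reviewer", "domain expert"),
                ("generalist",)]
synthesis_modes = ["editorial", "flat"]

results = [run_review("biorxiv:2026.0001",
                      ReviewConfig(p, ("rigor", "significance"), s))
           for p, s in product(persona_sets, synthesis_modes)]
# One review per institutional configuration: 2 persona sets x 2 modes.
```

Holding the paper fixed while varying the configuration is what turns the shadow ecosystem into a controlled experiment: any difference in outputs is attributable to the institutional parameters.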

# Where is human participation most valuable?

This is HAAK's core question, stated as an empirical hypothesis rather than a philosophical commitment. The system does not assume humans are irreplaceable everywhere. It does not assume they are replaceable anywhere. It measures.

The measurement mechanism: accumulated records across many workflow executions reveal patterns.

  • Where humans always intervene — the system is wrong or missing something. Human participation is essential here.
  • Where humans rubber-stamp — the system is adequate. Reduce friction by automating.
  • Where feedback repeats — extract into defaults. The system has learned something the human kept teaching it.

The loop is: execution → evaluation → extraction → improvement. Over many cycles, the boundary between human-essential and automatable tasks shifts — and that boundary is the empirical answer to the core question.
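The three intervention patterns can be sketched as a simple classifier over accumulated workflow records. This is a minimal sketch under stated assumptions: the record fields (`human_intervened`, `feedback`) and the thresholds are hypothetical, chosen only to make the logic concrete.

```python
# Illustrative classifier over workflow-execution records.
# Field names and thresholds are assumptions, not HAAK's schema.

def classify_step(records: list) -> str:
    """records: one dict per execution of a workflow step, e.g.
    {"human_intervened": bool, "feedback": str or None}."""
    n = len(records)
    intervention_rate = sum(r["human_intervened"] for r in records) / n
    feedbacks = [r["feedback"] for r in records if r["feedback"]]
    # Repeated identical feedback: the human keeps teaching the same lesson.
    repeated = len(feedbacks) > 1 and len(set(feedbacks)) < len(feedbacks)
    if repeated:
        return "extract-into-defaults"
    if intervention_rate > 0.8:
        return "human-essential"   # system is wrong or missing something
    if intervention_rate < 0.1:
        return "automate"          # humans rubber-stamp; reduce friction
    return "keep-observing"

history = [{"human_intervened": True, "feedback": "use robust errors"},
           {"human_intervened": True, "feedback": "use robust errors"},
           {"human_intervened": True, "feedback": None}]
label = classify_step(history)  # repeated feedback -> extract a default
```

Run over many steps and many cycles, a classifier like this is what moves the human-essential/automatable boundary that the section calls the empirical answer.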

# The nested loops

The shadow ecosystem program is organized as concentric experimental loops, each containing the previous:

Loop 0 (Library Theorem): The formal core. Indexed memory gives an exponential advantage; P3 tests this prediction empirically. This is the mathematical foundation.

Loop 1 (Review experiment): AI agents with institutional structure (personas, editorial synthesis, structured format) review bioRxiv papers before human reviews are published. Blind comparison measures whether institutional AI review matches or exceeds human review quality.
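The blind comparison in Loop 1 can be sketched in a few lines: raters score shuffled reviews without seeing their origin, and sources are unblinded only afterwards. Everything here is a stand-in (the `rate` function, the review records); it illustrates the protocol shape, not the actual experiment.

```python
# Sketch of a blind comparison protocol: score reviews with origin
# hidden, unblind afterwards. Data and field names are hypothetical.
import random
from statistics import mean

reviews = [
    {"source": "ai",    "text": "Structured persona-panel review..."},
    {"source": "human", "text": "Published human review..."},
]

def blind_ratings(reviews, rate):
    """Shuffle reviews, collect blinded quality scores, unblind after."""
    order = random.sample(reviews, k=len(reviews))   # rater sees no origin
    scored = [(r["source"], rate(r["text"])) for r in order]
    by_source = {}
    for source, score in scored:
        by_source.setdefault(source, []).append(score)
    return {source: mean(scores) for source, scores in by_source.items()}

# Constant stand-in rater; a real rater would be a human judging quality.
means = blind_ratings(reviews, rate=lambda text: 3.0)
```

The key design choice is that shuffling happens before rating: the comparison of mean scores per source is only meaningful if the rater could not condition on origin.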

Loop 2 (Full science loop): A complete AI lab — data analysis agents, statisticians, figure-makers, manuscript writers, reviewers — executing the scientific process computationally. The 2015 CNP retreat game, but at scale and iterable.

Loop 3 (Political systems): If institutional structure amplifies intelligence at the scientific scale, the same applies to political systems. Simulate polities under different constitutions; no one dies; only agents stop existing. Test institutional failure modes in sandboxes before deploying institutional AI in the real world.

Each loop uses the previous loop's infrastructure as a component. The review system from Loop 1 becomes a subsystem of the full science loop in Loop 2. The institutional governance from Loops 1–2 informs the political simulation in Loop 3.

# The open-proprietary design choice

The shadow ecosystem perspective piece (ms-shadow-ecosystem) identifies a time-bounded, irreversible design choice: build this openly for scientific value, or leave it to proprietary AI capability development. Network effects mean whoever builds the first adequate shadow ecosystem captures the integration point. An open system that publishes its governance, methods, and results serves as scientific infrastructure. A proprietary one serves its owners.

This urgency is not rhetorical. It follows from the structural properties of platform competition: the first system to achieve adequate coverage attracts data, contributors, and integrations that make alternatives harder to bootstrap. The window for building an open alternative closes once a proprietary one achieves critical mass.

# Historical development

  • Apr 2015: CNP retreat role-play (15 participants, 6 latent variables, 8 question clusters) — the human precursor
  • Feb 15 2026: Macaria architecture walk-and-talk — figure as atomic unit, team science, lab layers
  • Feb 17 2026: Three-track roadmap identifies publication, system maturity, and process domains
  • Feb 18 2026: Grant proposal "Bureaucratic Intelligence" formalizes the review experiment
  • Feb 22 2026: Shadow ecosystem manuscript v1 complete; two review rounds (R1 + R2)
  • Feb 22 2026: Strategy assessment identifies nested loops and unifying mechanism

# Constitutional implications

This foundation grounds the constitutional purpose statement: "HAAK simulates and optimizes the scientific ecosystem with AI and human agents. The goal: discover where human participation is most valuable." The ecosystem framing explains what HAAK simulates; the core question explains why.


haak · foundation · 2026-02-24 · zach + claude

Foundations 04 — The Scientific Ecosystem — 2026 — Zachary F. Mainen / HAAK