This document extends [[05_perception-memory]] to cover the hardest case yet identified: a person who sits at a computer to write a text about relational situational ontology, using a large language model to complete their thought. The scenario is maximally hard for four reasons: the AI agent bundles actor, method, and material properties at simultaneously high values; it lacks the persistent cross-session internal domain that the perception analysis established as central to cognitive actors; the act of writing produces a transitive chain of representations the ontology cannot yet track; and the scenario is self-referential — it describes the process by which this ontology is being produced.
The running example is (a) the person-AI writing session described above. Three analogues: (b) a composer who hums a melody into a notation app that harmonises it automatically, (c) a surgeon who dictates an operative note and a transcription system completes standard phrases, (d) a student who writes the first sentence of an argument and asks a tutor to continue it.
| Section | What it covers |
|---|---|
| Extensions | Four new definitions (P8–P11) in dependency order |
| The scenario | Six acts mapped to the definitions |
| What the scenario reveals | The hardest tensions; the self-referential closure |
# Extensions
Definition P8 (Expressive method). An expressive method is a method-type that takes an internal material as primary input and produces an external material-token as output, where the output is intended to represent the internal material to other actors or to the external domain. Expression is the productive inverse of perception (Definition P4). Perception takes an external material and produces an internal one; expression takes an internal material and produces an external one.
Expression is always lossy. No external material fully encodes its originating internal state. The composer's notation captures pitch and rhythm but not the imagined timbre. The surgeon's dictation captures procedure but not tactile judgment. The student's first sentence captures the opening claim but not the full argument. The person's prompt captures part of the idea but not the idea itself. The external material produced by expression represents the internal material that caused it (Definition P6 applies), but structural correspondence is always partial.
The expressive method and the representation relation. An external material produced by expression represents the internal material: it was causally produced from it; it preserves some features; it can persist independently (the text remains after the thought has faded or changed). The text is not the thought; it is a representation of the thought.
Definition P9 (Dispositional state). A dispositional state is a property of an actor — produced by a prior transformative process — that shapes how the actor responds to inputs without itself being a percept or memory of any particular situation. It is a tendency rather than a record: not a representation of a past situation but a readiness to behave in certain ways given certain inputs.
How dispositional states are produced. They are acquired through learning or training — prior methods that transformed the actor by exposing it to many materials over time. A doctor who has examined thousands of patients has acquired a dispositional state: a clinical intuition not attributable to any particular patient but shaped by all of them. A large language model that has processed vast amounts of text has a dispositional state encoded in its parameters: a trained readiness to continue any text in statistically natural ways.
Dispositional state and memory. A dispositional state differs from memory (Definition P7) in a critical respect. A memory represents a particular past situation — it is a specific internal material with temporal reference, causally traceable to a particular perceptual act. A dispositional state represents no particular situation — it is a compression of many prior transformations into a standing readiness. The LLM's parameters are not memories of specific texts; they are the statistical aggregate of a vast corpus. This aggregate constitutes a kind of collective memory: the encoded residue of prior human expression, made available as a standing disposition.
Definition P10 (Generative method). A generative method is a method-type that takes one or more external materials as input and produces a new external material whose properties are not fully present in any single input, drawing on the actor's dispositional state to supply what the input does not specify. A generative method does not merely transform its input — it extends it.
The completion operation. The canonical form of a generative method is completion: given a partial material (a prompt, a melody fragment, a partial sentence), the actor uses its dispositional state to produce a continuation that coheres with the partial material and draws on patterns not explicitly present in it. The large language model's response to a prompt is a completion in this sense.
Completion is constrained but not determined. The continuation produced by a generative method must cohere with the input. But among all coherent continuations, the actor selects those its dispositional state has shaped it to find natural. The model does not complete the prompt randomly; it completes it in ways its training has encoded as statistically apt.
What is completed. A generative method can complete an external material (the prompt-text) without ever accessing the internal material (the idea) the prompt was meant to represent. Whether the completion also coheres with the idea depends on whether the prompt adequately expressed the idea — that is, on the quality of the prior expressive method.
Definition P11 (Joint product). A joint product is a material-token produced by two or more actors through interleaved methods, where no single actor's contribution is sufficient to produce the material alone and the material's properties cannot be fully attributed to any single contributor.
Joint products and authorship. A joint product does not have a unique author. The book written by a person with AI assistance is a joint product: the person contributes the internal materials (ideas, intentions, direction), the expressive methods (formulating prompts, selecting and editing completions), and the organisational judgment. The AI contributes the generative methods (completing prompts, extending arguments, supplying vocabulary and structure). Neither contribution alone produces the book.
Joint products are not merged actors. Two actors producing a joint product remain distinct. The joint product is a material; the actors who produced it retain their separate identities and relational property profiles.
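The dependency structure of P8–P11 can be sketched as a minimal type model. This is an illustration only, not part of the ontology's formal vocabulary: all names (`InternalMaterial`, `express`, `generate`, the feature sets) are invented for the sketch, and feature sets stand in crudely for the richer structure the definitions describe.

```python
from dataclasses import dataclass, field

@dataclass
class InternalMaterial:
    """P1-style internal material: exists only inside an actor's internal domain."""
    features: set[str]

@dataclass
class ExternalMaterial:
    """A persistent, shareable material-token."""
    features: set[str]
    produced_by: list[str] = field(default_factory=list)  # contributing actors

def express(actor: str, idea: InternalMaterial, loss: set[str]) -> ExternalMaterial:
    """P8: expressive method. Lossy by definition — the output token
    preserves only some features of the originating internal material."""
    return ExternalMaterial(features=idea.features - loss, produced_by=[actor])

def generate(actor: str, prompt: ExternalMaterial,
             disposition: set[str]) -> ExternalMaterial:
    """P10: generative method. Extends the input, drawing on the actor's
    dispositional state (P9) to supply what the prompt does not specify."""
    return ExternalMaterial(features=prompt.features | disposition,
                            produced_by=prompt.produced_by + [actor])

# P11: a joint product lists more than one contributor, and its features
# come neither wholly from the idea nor wholly from the disposition.
idea = InternalMaterial({"claim", "structure", "timbre"})
prompt = express("person", idea, loss={"timbre"})       # expression loses "timbre"
completion = generate("model", prompt, {"vocabulary"})  # completion extends the prompt

assert completion.produced_by == ["person", "model"]    # joint product (P11)
assert "timbre" not in completion.features              # lost features are not recovered
```

The sketch also exhibits the point of P8's lossiness: what expression drops, generation cannot restore, because the generative method sees only the external token, never the idea.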
# The Scenario
Act 1 (sits at the computer). The person-actor moves to a new domain-instance: the computer workstation. The workstation carries resources: hardware, text editor, and the AI accessible through a network interface. A new engagement begins: the person's intention is to write a text about relational situational ontology. Mapped cleanly to existing definitions.
Act 2 (has an idea about relational ontology). The person has an internal material — a thought, a conceptual claim, an emerging formulation. This internal material was produced by prior perceptual and cognitive methods: reading, conversation, reflection. It exists within the person's internal domain and is not directly accessible to any other actor. Mapped to P1 (internal material), P7 (memory of prior discourse).
Act 3 (types the prompt). Method: write_prompt — an [[expressive method]] (P8). Input: the internal material (the idea). Output: the prompt text — a new external material-token. The prompt represents the idea but not completely. Some features of the idea are encoded in the prompt's vocabulary and structure; others are lost. The internal material persists unchanged after expression. Mapped to P8.
Act 4 (the model responds with a completion). Method: generate — a [[generative method]] (P10). Actor: the large language model. Input: the prompt (external material). The model's [[dispositional state]] (P9) — its trained parameters — supplies what the prompt does not specify. Output: the completion text — a new external material-token.
> ⚡ Tension P.D (The AI's ontological kind). The large language model bundles actor-property, method-property, and material-property simultaneously at high values. Relative to the prompt: high actor-property (it transforms input to output). Relative to the person using it: high method-property (it is a repeatable defined procedure). Relative to the infrastructure running it: high material-property (it is processed by hardware, operated through an interface). No single axis-property dominates. The AI is the most extreme case of property bundling the ontology has encountered. Proposed term: instrument — an entity whose actor, method, and material properties are jointly high. A tool that is also an agent. Unresolved.
> ⚡ Tension P.E (The AI lacks persistent cross-session internal domain). The perception analysis established that cognitive actors are domains — they accumulate internal materials over time. The AI violates this at the cross-session level: it has no memory of prior conversations, no accumulated percepts from past interactions. Each session begins fresh. Within the session, the AI has a temporary internal domain — the context window — carrying the conversation as internal materials. But this domain undergoes material-token termination (P5) at session end. The AI is an actor whose episodic memory is empty across sessions. It has a vast dispositional state (training) but no episodic record. Ontology gap: no name for a cognitive actor without cross-session internal domain.
> ⚡ Tension P.F (Transitive representation). If the completion is good, it coheres not only with the prompt but with the person's original internal idea: idea → [expression] → prompt → [generation] → completion. The completion represents the idea at one remove, through the prompt as intermediary. This transitive representation is possible only because the prompt preserved enough features of the idea for the model's dispositional state to extend it coherently. Definition P6 defines representation as a two-term relation. Here it composes across steps. Ontology gap: representational transitivity is unnamed. Deferred to [[02_relations]].
Act 5 (the person reads the completion). Method: read — a [[perceptual method]] (P4). Input: the completion text (external material). Output: a new internal material — a percept of the completion. The person compares this with their original internal idea. If the completion coheres, the person's internal state is enriched; if not, the person modifies the prompt and repeats. Mapped to P4, P3.
Act 6 (the book grows). Accepted completions are incorporated into the text-material-token. The book is a [[joint product]] (P11): the person contributes ideas, direction, selection; the model contributes generative extension. The book has properties that neither contributor alone would have produced. Authorship is distributed and cannot be reduced to either party. Mapped to P11.
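Acts 3 through 6 form a loop: express, generate, perceive and compare, then accept or re-prompt. A toy sketch of that control flow, under loud assumptions — `coheres` is a crude stand-in for the person's internal comparison in Act 5, and `toy_model` merely echoes part of the prompt in place of a real language model:

```python
def coheres(completion: str, idea: str) -> bool:
    """Stand-in for Act 5's judgment (read the completion, compare it with
    the idea). Any real criterion lives in the person's internal domain."""
    return idea in completion

def writing_session(idea: str, model, max_rounds: int = 3) -> list[str]:
    """Acts 3-6 as a loop: express (P8), generate (P10), perceive and
    compare (P4), then grow the joint product (P11) or revise the prompt."""
    book: list[str] = []                       # the growing joint product
    prompt = f"Write about: {idea}"            # Act 3: lossy expression of the idea
    for _ in range(max_rounds):
        completion = model(prompt)             # Act 4: generative completion
        if coheres(completion, idea):          # Act 5: perception and comparison
            book.append(completion)            # Act 6: the book grows
            break
        prompt = f"{prompt} (focus on {idea})" # revise the prompt and repeat
    return book

# Toy model: echoes the part of the prompt after the colon.
toy_model = lambda p: p.split(": ", 1)[1]
```

Note what the loop makes explicit: the model only ever receives `prompt`, never `idea` — coherence with the idea is checked by the person, after the fact, exactly as Tension P.F describes.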
# What the Scenario Reveals
Implication 1: The instrument. The AI writing scenario introduces a kind of object — provisionally an instrument — that bundles actor, method, and material properties at simultaneously high values. A hammer has material-property and some method-property but minimal actor-property. A procedure manual has method-property but no actor-property. A human doctor has actor-property and embodied method-property but low material-property. The AI approaches high values on all three simultaneously. This triple bundling may require a new concept beyond the four that emerged from the correspondence exercise.
Implication 2: The actor without episodic memory. Every cognitive actor previously discussed accumulates internal materials across situations. Memory is the mechanism by which past situations shape present action. The AI breaks this pattern. Its dispositional state encodes a vast aggregate of prior human expression — richer than any individual human's — but it has no episodic memory of prior situations it has been in. This is a genuinely new kind of cognitive object: an actor whose dispositional state is immense but whose episodic memory is empty. The actor/domain bundling is temporary, expiring at session end.
Implication 3: The self-referential closure. The scenario is not merely an illustration. It is the actual situation of this document's production. The text being written is this ontology. The person is the human author of HAAK. The AI is the agent producing these definitions. The prompts are the human's messages; the completions are these pages. The book is HAAK itself.
This means the ontology must be able to describe the relational situation of its own production — and by its own commitments, it can. The relational situational ontology holds that objects are revealed through situations, that no situation exhausts the object, and that all ontological properties are relational. This session is one situation in which the ontology is revealed. The situation is partial — it captures the ontology in one state, on one day, in one collaboration. The full extent of the ontology is not exhausted here, any more than the full extent of the apple is exhausted by the moment it is seen. What is revealed here is real; it is not everything.
The self-referential closure is not a paradox. It is what the ontology predicts about itself.
ontology · AI writing scenario · 2026-02-25 · zach + claude
Ontology 06 — The AI Writing Scenario — 2026 — Zachary F. Mainen / HAAK