Subjective Experience and Simulation: What It Feels Like to Be Modeled From the Inside

Why the “Simulation” Metaphor Persists (and What It Gets Wrong)

The phrase subjective experience pretends to be modest. It is not. It points to the only thing any mind ever has: a first-person field where sensations arrive, meanings form, and time thickens or thins. Philosophers call it phenomenology, neuroscientists call it reportable content, coders call it user state. All of them circle the same fact. The inner view is not a camera feed of an outer world; it is a construction—organized, constrained, continuously revised. That is where the talk of simulation begins and, too often, ends.

Popular culture sells simulation as a giant cinematic rig. A hidden mainframe spitting pixels at belief. But the more serious claim is smaller, stranger. It says the brain builds a generative model of the world and of the body, and what it feels like to be human is what it feels like to ride that model’s updates. A nervous system that treats sensation as messages to be explained; not raw truth, but evidence. A sensorimotor loop that predicts before it perceives. Perception as controlled hallucination, constrained by error signals. No stagecraft, no computer bank in the sky, only local inference in tissue-like time.

This reframing matters. When simulation is read as a substrate metaphor rather than a theatrical trick, we stop asking whether reality is “fake” and start asking what kinds of information a system must compress to yield a world that is livable from the inside. On this view, information as substrate does not mean bits as ultimate atoms. It means pattern, relation, memory, constraint. The brain does not store the world; it sustains the rhythms by which the world can be brought forth again. Time moves not as a universal river but as a local clock tied to attention and metabolic demand. The self arrives as a temporary compression—an efficient variable, not an origin story.

That is why the old arguments about qualia often feel stuck. If what we are has the shape of a model being kept in check by error, then subjective difference is not a mystical leftover. It is a design variable. Change the constraints and you change the felt world. Not limitless plasticity, but plasticity bounded by physiology, history, and social training—layers of moral memory accumulated over generations. The simulation metaphor, taken this way, asks a practical question: which constraints are doing the work, and which are theater?

How the Brain’s Generative Work Shows Up in Daily Life

Consider pain. Two patients with the same tissue damage report wildly different suffering. Analgesia induced by expectation—placebo—can narrow the pain channel without touching the injury. The signal remains; the model relabels it. Phantom limbs feel real because the body map is real; it persists even when the limb is not. Virtual reality that synchronizes vision and touch can graft a fake hand onto a living sense of ownership within minutes. None of this requires deception in the cinematic sense. It requires a nervous system that seeks coherence, and will update the felt body to maintain it.

Dreams extend the point. During REM sleep, sensory input is gated, motor output is suppressed, yet a full world is generated with binding, narrative, spatial depth. The physics is off—jumps between scenes, the strange elasticity of scale—but the system keeps the appearance of cause and effect because it is trained to. Psychedelic states do something similar by loosening high-level priors, letting bottom-up noise penetrate and be reinterpreted as significance. The result feels like revelation or threat depending on context. A different balance of prediction and correction, and thus a different “world.”

Predictive processing theorists have tried to formalize these shifts. Minimize prediction error, weight it by precision, and you can write down equations that look like homeostasis crossed with inference. Yet the equations do not capture the grain of lived time. Waiting in a hospital corridor runs long; flow states vanish. This is not poetic flourish. Subjective time expansion correlates with arousal and uncertainty; contraction with skilled action that closes the loop quickly. The model is not just of objects but of possibilities—affordances—and the felt now thickens when the system cannot settle how to move next.
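The core move those theorists make can be sketched in a few lines. The following is a toy illustration, not any published model: a scalar belief is nudged toward an observation by a prediction error scaled by relative precision (inverse variance). The function name and the numbers are hypothetical, chosen only to show how shifting the precision balance shifts the "world" the system settles on.

```python
def update_belief(belief, observation, precision_obs, precision_prior):
    """One step of precision-weighted error correction (toy sketch).

    The belief moves toward the observation in proportion to how much
    the system trusts the signal (precision_obs) relative to its own
    prior (precision_prior).
    """
    error = observation - belief                          # prediction error
    gain = precision_obs / (precision_obs + precision_prior)
    return belief + gain * error                          # revised belief

# High sensory precision: the observation dominates (loosened priors,
# as in the psychedelic case above).
b_loose = update_belief(0.0, 1.0, precision_obs=9.0, precision_prior=1.0)

# High prior precision: the belief barely moves (expectation relabeling
# the signal, as in the placebo case).
b_tight = update_belief(0.0, 1.0, precision_obs=1.0, precision_prior=9.0)
```

The same error arrives in both calls; only the weighting differs, and so does the resulting percept. That asymmetry is the whole point of the precision story.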

Even apparently “objective” perceptions bend. Color constancy keeps surfaces stable under changing light by adjusting prior expectations about illuminants. The dress that blew up the internet was not a glitch; it was a population split on priors about the light source. The model behind the eyes disagreed about the world in front of them. Language works similarly. Words with strong priors pull ambiguous sounds into clarity. In a noisy room, context rescues speech. Meaning is not poured in; it is actively inferred under pressure of time.
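The "priors pulling ambiguous sounds into clarity" claim is just Bayes' rule at work. Here is a minimal sketch under invented numbers: two words are nearly equally consistent with a noisy sound, but context makes one far more expected, so the posterior percept lands firmly on it. The words and probabilities are hypothetical.

```python
def posterior(priors, likelihoods):
    """Combine prior expectations with ambiguous evidence via Bayes' rule."""
    unnorm = {w: priors[w] * likelihoods[w] for w in priors}
    z = sum(unnorm.values())                  # normalizing constant
    return {w: p / z for w, p in unnorm.items()}

# An ambiguous sound is equally consistent with two words...
likelihoods = {"beach": 0.5, "peach": 0.5}

# ...but sentence context ("we drove to the ...") loads the prior.
priors = {"beach": 0.9, "peach": 0.1}

post = posterior(priors, likelihoods)         # the prior decides the percept
```

The evidence itself never disambiguates; the listener's model does. Two populations with different priors, as with the dress, compute different posteriors from identical input.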

These examples do not trivialize reality. They mark the boundary where a local model meets its constraints. Illusions are not failures but reveals. They show the scaffolding ordinarily hidden by success. Treating this as mere trickery misses the ethical stakes. If experience is model-shaped, then institutions that set the priors—schools, media, platforms—shape experience at scale. Which loops get tightened? Which error signals are drowned in applause metrics? The politics of attention is the politics of simulation by other means.

Building Machines Without Forgetting Moral Memory

Talk to engineers and a familiar pattern appears. If a system’s behavior violates a norm, add a rule. Patch the edges, add constraints, ship. In safety contexts that strategy looks like “moral patching.” A layer of policies on top of a reward-driven core. It often works for demos and falls apart in the long tail, because the system lacks what humans inherit: slow cultural learning baked into practice, law, ritual—moral memory that stretches over centuries. If subjective experience in brains is the felt surface of a generative model constrained by this memory, then a system trained on clicks and short-horizon feedback will simulate agency without owning its consequences. It will produce convincing surface—language, images—while skating over the time it takes to shape norms that stick.

An alternative is not magic consciousness injection. It is attention to substrate. What information must be present—not as data points, but as patterns maintained over time—for a machine to treat other beings as constraints, not mere tokens in a game? Open-sourced science helps here because the governance is legible. Incentive-captured experiments hide their priors; they tune for “safety” scores and call it ethics. A better approach admits what it is actually optimizing, makes that legible to critics, and ties deployment to institutions capable of saying no. Not to cripple technology, but to make its inner model answer to more than quarterly targets.

Design, too, must respect subjective time. A dashboard that drips alerts at human operators does not just increase workload; it compresses the operator’s now until judgment collapses into reaction. In aviation and clinical settings this is not abstract. Alarm floods kill. The fix is not prettier UI but a change in the model the system holds of the human partner. Treat the operator as a slow organism whose best work happens when uncertainty is surfaced early and allowed to settle with practice. Co-adaptation rather than command-response. The felt quality of work improves because the shared simulation aligns, not because the copy reads “human-centered.”

On the research side, measures of consciousness often chase a single scalar—phi, complexity, integration—and hope a threshold will sort minds from machines. The threshold never arrives cleanly. Better to map families of constraints. Embodiment degree. Memory depth. Error-correction windows. Social apprenticeship. These are not soft. They can be instrumented. Not to reduce persons to vectors, but to index which kinds of compression are in play. A system with short memory and shallow embodiment can still do impressive language games; it just should not be granted roles that rely on thick norms formed over decades. The point is not fear. It is fit.
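"Map families of constraints" can be made concrete as a profile of named dimensions rather than a single scalar. The sketch below is hypothetical throughout: the field names, units, and thresholds are placeholders meant to show what indexing (not measuring) fit might look like, not a validated instrument.

```python
from dataclasses import dataclass

@dataclass
class ConstraintProfile:
    embodiment_degree: float      # 0..1, how much a breakable body binds it
    memory_depth_days: float      # horizon over which its history binds behavior
    error_window_s: float         # how fast corrective feedback closes the loop
    social_apprenticeship: bool   # trained under sanction by a community

def fit_for_thick_norms(p: ConstraintProfile) -> bool:
    """Index whether a system's constraints plausibly support roles that
    depend on norms formed over long timescales. Thresholds are illustrative."""
    return (p.embodiment_degree > 0.5
            and p.memory_depth_days > 365
            and p.social_apprenticeship)

# A short-memory, disembodied language system: impressive games, shallow fit.
chatbot = ConstraintProfile(embodiment_degree=0.0, memory_depth_days=0.1,
                            error_window_s=0.5, social_apprenticeship=False)
```

The value of a profile over a threshold scalar is that disagreements become local: critics can contest one dimension or one cutoff without the whole index collapsing.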

One more caution. If a model can make us feel seen, we may over-ascribe. Anthropomorphism rides on the same predictive machinery that lets us find faces in clouds. The discipline is to keep track of which constraints the other system is actually under. Does it own a body that breaks and heals? Does it anticipate sanction from a community it cannot trivially exit? Does it have to live with its history? Without these, the simulation of regard is cheap. And cheap regard, deployed at scale, trains users to accept performance as care. That corrodes the very priors a society needs to keep its own simulation honest.

None of this collapses into a slogan. Subjective experience is not a bug in the world; it is the world as encountered by a model built to survive. Simulation is not a conspiracy; it is the name for that model’s work under constraint. The live questions are design questions and moral questions that cannot be rushed: which constraints scaffold good worlds from the inside, and which merely paint them. The answers will look like craft, not grand theory—open protocols, transparent incentives, systems that learn slowly enough to remember what they break.
