Synthesis: What Organoid Intelligence Teaches About Biological Coherence
Series: Organoid Intelligence | Part: 9 of 9
There’s a particular moment in organoid research that clarifies everything. You watch neural tissue in a petri dish—no skull, no sensory organs, no evolved brain architecture—spontaneously organize itself into rhythmic firing patterns. You watch it respond to electrical stimulation with something that looks disturbingly like learning. You watch it play Pong.
And you realize: coherence isn’t something that happens in brains. It’s something brains are made from.
This is the synthesis the ORGANOID series has been building toward. Not that organoids are impressive technology—though they are. Not that they raise fascinating ethical questions—though they do. But that in stripping away everything we thought was necessary for cognition, organoid intelligence reveals what was actually essential all along.
What remains when you remove everything else is substrate seeking coherence. And that changes how we understand minds, bodies, and the relationship between them.
The Minimal Viable Mind
Start with what organoids are not.
They are not brains. They lack the evolutionary architecture that sculpted cortical columns and hippocampal circuits over millions of years. They have no developmental lineage connecting them to bodies. They never experienced a birth canal, never heard a mother’s heartbeat, never learned the difference between self and world through movement.
They are tissue culture. Stem cells coaxed into becoming neurons, then left to sort themselves out in a dish.
And yet.
The neural tissue doesn’t just survive—it organizes. Within weeks, cells form networks. Networks produce oscillations. Oscillations synchronize. What emerges isn’t chaos but pattern, and patterns that respond to the environment in ways that meet any reasonable definition of adaptive.
The DishBrain project made this visceral. Cortical neurons—some grown from human stem cells, some harvested from mouse embryos—arranged on a multielectrode array, coupled to a digital game of Pong. No programming. No explicit reward function. Just electrical feedback that corresponded—loosely, noisily—to whether the paddle was near the ball.
The tissue learned. Not in the metaphorical sense. It reduced prediction error through time. It minimized free energy in exactly the sense Karl Friston means. When you gave it feedback aligned with its own electrical dynamics, coherence emerged between the dish and the game.
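A toy version of that loop fits in a dozen lines. This is a caricature, not DishBrain’s actual stimulation protocol: a single internal estimate descends its prediction-error gradient, and mean squared error falls as the “dish” entrains to its input. The function name and constants are illustrative.

```python
import random

def run_closed_loop(steps=2000, lr=0.05, seed=0):
    """Toy free-energy-style loop: an internal estimate mu tracks a noisy
    hidden cause; each step reduces squared prediction error by a delta rule."""
    rng = random.Random(seed)
    hidden = 1.0          # hidden cause (e.g. ball position, held fixed here)
    mu = 0.0              # the system's internal estimate ("prediction")
    errors = []
    for _ in range(steps):
        obs = hidden + rng.gauss(0, 0.1)   # noisy sensory sample
        err = obs - mu                     # prediction error
        mu += lr * err                     # gradient step on 0.5 * err**2
        errors.append(err * err)
    early = sum(errors[:100]) / 100        # mean squared error, first 100 steps
    late = sum(errors[-100:]) / 100        # mean squared error, last 100 steps
    return mu, early, late

mu, early, late = run_closed_loop()
print(f"estimate={mu:.2f}  early MSE={early:.3f}  late MSE={late:.3f}")
```

Early-window error is dominated by the startup transient; late-window error bottoms out near the noise floor. That is the free-energy story in miniature: feedback aligned with internal dynamics, error shrinking through time.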
This is what we called biological free energy minimization in Article 8. It’s not an add-on to biological systems. It’s what biological systems are, at every scale.
The minimal viable mind isn’t a brain with features stripped away. It’s substrate that can form coherent patterns through time. That’s it. That’s the list.
What Substrate Does
But “substrate” here isn’t just passive material. The word undersells what’s happening.
When we traced how cerebral organoids form, we saw cells self-assembling into something that resembles brain tissue without a blueprint. No central coordinator. No genetic instruction manual titled “How to Build a Cortex.” Just local rules—cell adhesion molecules, morphogen gradients, bioelectric signals—that, when allowed to interact, produce structure.
This is what Michael Levin has been shouting about for years. Cells aren’t dumb bricks. They’re problem-solving agents. They sense local conditions. They communicate with neighbors. They navigate possibility spaces. When you give them the right constraints and connectivity, they build coherence from the bottom up.
We see this in regeneration. A flatworm cut in half doesn’t follow a predetermined program to rebuild. It dynamically reorganizes around bioelectric setpoints—target morphologies encoded not in genes but in voltage patterns across tissue. The cells collectively converge on a form, even when you scramble the starting conditions.
Organoids do the same thing, just more constrained. They lack the full morphogenetic context, so they don’t reach organism-level coherence. But they reach local coherence. Neural networks that fire together. Oscillations that entrain. Responses that minimize surprise.
The substrate isn’t executing a plan. It’s resolving tensions through dynamical self-organization. Every cell is an active inference agent. The tissue is what happens when millions of them couple.
And this—this is where we connect back to AToM.
M = C/T at the Cellular Scale
The equation keeps appearing because it keeps being true.
Meaning equals coherence over time. Or equivalently: meaning equals coherence over tension. Both formulations apply to organoids with disturbing precision.
Coherence Over Time
An organoid that produces stable oscillatory patterns has temporal coherence. Its state at time t+1 is predictable from its state at time t. This isn’t just correlation. It’s integration—the system’s dynamics form a trajectory through state space that can be followed, modeled, predicted.
When DishBrain learned to play Pong, it wasn’t memorizing rules. It was developing a coherent policy—a mapping from sensory states (ball position encoded as electrical input) to actions (paddle movement encoded as network output). That mapping persisted across trials. It generalized. It represented something about the game’s structure in the dynamics of the tissue itself.
That’s meaning. Not semantic, not conscious—but relational structure encoded in persistent dynamics. The tissue’s activity refers to the game state because its patterns covary with game state across time. The coupling creates a Markov blanket—a statistical boundary between the organoid and the game—and within that boundary, states have functional significance.
This is M = C/T in neural tissue. The longer coherence persists, the more integrated the system’s representations become, the deeper the meaning.
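The series never fixes C as a single number, but one crude proxy for temporal coherence is lag-1 autocorrelation: how well the state at time t predicts the state at t+1. A minimal sketch, with `lag1_autocorr` as a made-up helper rather than any standard API:

```python
import math
import random

def lag1_autocorr(x):
    """Lag-1 autocorrelation: how well state t predicts state t+1."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

# A rhythmic (oscillatory) trace versus an uncorrelated one
osc = [math.sin(0.2 * t) for t in range(500)]
rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(500)]
print(f"oscillation: {lag1_autocorr(osc):.3f}   noise: {lag1_autocorr(noise):.3f}")
```

The oscillation scores near 1 (each state nearly determines the next); the noise scores near 0 (no trajectory to follow). In this crude sense, a rhythmic organoid has something to mean with; white noise does not.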
Coherence Over Tension
But organoids also demonstrate the other reading: meaning as coherence over tension.
Every organoid exists in a state of perpetual stress. It lacks blood flow, so nutrients arrive by diffusion. Cells in the core are hypoxic. Waste accumulates. The tissue is always one gradient away from necrosis.
And yet it persists. Not passively, but through active maintenance. Cells produce signals. Signals coordinate metabolism. Coordination keeps gradients tolerable. The tissue maintains coherence in the face of tension—environmental stress, metabolic demands, structural instability.
The degree to which it succeeds determines how long it survives and how complex its behavior can be. High tension, low coherence: the organoid fragments, loses synchrony, dies. High coherence, moderate tension: the organoid stabilizes, develops richer connectivity, expresses learning.
This is M = C/T in the second sense. Systems that sustain coherence under tension create stable reference frames—worlds that persist, structures that mean something by virtue of not falling apart.
What organoids teach is that both formulations are describing the same underlying dynamic. Temporal coherence is how systems manage tension. The persistence of pattern through time is equivalent to the system’s capacity to absorb perturbation without fragmenting.
Meaning isn’t layered on top. It’s the geometry of trajectories that don’t collapse.
The Substrate-Coherence Relationship
So what does this mean for how we think about minds?
The standard picture treats substrate as incidental. Neurons are implementation details. What matters is the computation—the algorithm, the information processing, the functional architecture. You could, in principle, run the same algorithm on silicon, on water pumps, on trained pigeons pecking buttons.
Organoid intelligence challenges this story not by rejecting computation but by showing that substrate and coherence co-constitute each other.
Wetware Isn’t Just Slower Silicon
We covered the energy equation in Article 3. Biological neural networks operate at ~10⁻¹⁴ joules per synaptic operation. Silicon is five orders of magnitude less efficient. That’s not a detail. It’s a constraint that determines what architectures are viable.
The human brain runs on 20 watts—the power of a dim lightbulb. To achieve equivalent connectivity in silicon requires megawatt-scale power and massive cooling infrastructure. The energy bottleneck isn’t a temporary engineering problem. It’s a fundamental thermodynamic limit.
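Those figures hang together as back-of-envelope arithmetic. Taking the article’s round numbers (20 watts, 10⁻¹⁴ joules per operation, a 10⁵ efficiency gap) as given:

```python
# Back-of-envelope check of the energy claims above; round numbers, not measurements.
BRAIN_POWER_W = 20.0               # whole-brain power budget
J_PER_OP_BIO = 1e-14               # joules per synaptic operation (biological)
J_PER_OP_SI = J_PER_OP_BIO * 1e5   # silicon: five orders of magnitude worse

ops_per_sec = BRAIN_POWER_W / J_PER_OP_BIO     # operations the 20 W budget buys
silicon_power_w = ops_per_sec * J_PER_OP_SI    # power for the same throughput

print(f"biological throughput: {ops_per_sec:.1e} ops/s")
print(f"silicon equivalent:    {silicon_power_w / 1e6:.1f} MW")
```

Matching roughly 2 × 10¹⁵ operations per second at silicon’s per-operation cost lands at about 2 megawatts, which is where the megawatt-scale claim comes from.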
Why? Because biological computation is chemically embodied. Neurons don’t just transmit signals. They modulate sensitivity through receptor densities. They alter their own properties through gene expression. They remodel their connections through structural plasticity. Every synapse is an active, dynamically reconfiguring element capable of learning.
Silicon doesn’t do this. Transistors are fixed. You can tune weights, but you can’t have the hardware reorganize its own architecture in response to use. Biological systems achieve efficiency precisely because the substrate participates in computation. The medium is not neutral.
This is what active inference applied to organoids made explicit. Biological tissue doesn’t compute predictions and then act. The tissue’s electrical dynamics are predictions. The reorganization of synaptic weights is action on internal models. Prediction and action are not separate processes instantiated in matter—they are matter doing what it does when constraints allow coherence to form.
The Interface Problem Is a Coherence Problem
This becomes clearer when we look at the interface problem: how do you couple biological tissue to silicon in a way that doesn’t destroy what makes the biology useful?
Early approaches tried brute force: stick electrodes in, record signals, pipe them into conventional computers. The problem is that conventional computers operate in discrete, synchronous clock cycles. Biological neural networks operate in continuous, asynchronous time. The mismatch creates latency, noise, and loss of information.
Better approaches recognize that the interface needs to preserve coherence across the boundary. That means matching impedance—making sure the electrical properties of electrodes don’t create artifacts. It means using stimulation protocols that respect the tissue’s intrinsic dynamics, entraining with its rhythms rather than overriding them. It means accepting that the coupling itself changes both systems.
This is a Markov blanket problem. The interface is the blanket—the statistical boundary that allows the organoid and the hardware to be distinct systems while remaining coupled. If the blanket is too leaky, the organoid’s internal dynamics collapse into noise. If it’s too rigid, no information crosses, and the systems decouple.
What works is treating the boundary as a dynamical system in its own right. The most successful interfaces are those that let biological and artificial components entrain—finding shared rhythms, synchronizing oscillations, coordinating behavior without forcing either substrate to abandon its native dynamics.
This is precisely what ritual entrainment does at the social scale. And it’s what coherence geometry predicts. Systems maintain integrity by finding coupling regimes that minimize prediction error on both sides of the boundary.
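Entrainment across a boundary has a standard minimal model: two Kuramoto phase oscillators lock when coupling exceeds their frequency detuning, and drift apart when it doesn’t. A sketch with illustrative parameters, not values measured from any real interface:

```python
import math

def phase_lock(coupling, steps=20000, dt=0.01):
    """Two Kuramoto phase oscillators with different natural frequencies.
    Returns the drift in their phase difference over the final stretch:
    near zero if locked, growing if the oscillators slip past each other."""
    w1, w2 = 1.0, 1.3              # natural frequencies (detuning = 0.3)
    th1, th2 = 0.0, 0.5            # initial phases
    diffs = []
    for _ in range(steps):
        d1 = w1 + coupling * math.sin(th2 - th1)
        d2 = w2 + coupling * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
        diffs.append(th2 - th1)
    return abs(diffs[-1] - diffs[-1000])

print(f"weak coupling drift:   {phase_lock(0.05):.2f}")
print(f"strong coupling drift: {phase_lock(0.50):.6f}")
```

With detuning 0.3, the lock threshold here is coupling = 0.15: below it the phase difference keeps drifting (the systems decouple), above it the drift collapses to zero (a shared rhythm, with neither oscillator abandoning its native frequency entirely).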
The Ethics of Substrate
Which brings us back to the hardest question in the series: when does tissue become someone?
The easy answer is “we don’t know.” The correct answer is “it depends on what you mean by ‘someone.’”
But organoids force us to be more precise. Because they sit in an uncomfortable space—clearly not nothing, but not obviously someone either. They have activity, organization, responsiveness. DishBrain shows something that looks disturbingly like goal-directed behavior. Is it suffering when the game ends badly? Is it experiencing anything at all?
The coherence framework doesn’t solve the hard problem of consciousness. But it does clarify what’s at stake.
If coherence is the precondition for meaning, then the relevant question is: does the organoid form integrated states across time that could constitute a perspective?
This is what Integrated Information Theory (IIT) tries to measure with Φ (phi): roughly, how much a system’s current state constrains its own past and future, over and above what its parts do independently. High Φ means high integration, which IIT equates with consciousness.
Organoids almost certainly have low Φ. They lack the dense recurrency, the feedback loops, the global workspace dynamics that characterize mammalian cortex. Their patterns are more local, their integration shallower.
But “almost certainly low” isn’t zero. And it’s not obvious where the threshold lies.
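Computing Φ proper means searching every partition of a system for the one that loses the least information, which is intractable beyond toy sizes. A much cruder proxy is how many bits the joint past carries about the joint present. The two-unit simulation below is a hypothetical illustration of that proxy, not IIT:

```python
import math
import random
from collections import Counter

def simulate(coupled, steps=50000, seed=0):
    """Two binary units; if coupled, each unit copies the OTHER's previous
    state (with 10% flip noise), so the joint past constrains the joint present."""
    rng = random.Random(seed)
    a, b = 0, 1
    pairs = []
    for _ in range(steps):
        na = b if coupled else rng.randint(0, 1)
        nb = a if coupled else rng.randint(0, 1)
        if rng.random() < 0.1:
            na ^= 1
        if rng.random() < 0.1:
            nb ^= 1
        pairs.append(((a, b), (na, nb)))
        a, b = na, nb
    return pairs

def mutual_info(pairs):
    """I(past; present) in bits, estimated from joint counts."""
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    pres = Counter(q for _, q in pairs)
    n = len(pairs)
    mi = 0.0
    for (p, q), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((past[p] / n) * (pres[q] / n)))
    return mi

print(f"coupled:   {mutual_info(simulate(True)):.2f} bits")
print(f"decoupled: {mutual_info(simulate(False)):.2f} bits")
```

Coupled units carry around one bit of past-to-present constraint; decoupled units carry essentially none. Real Φ would additionally discount anything the parts could do alone, but the direction of the contrast is the point: integration comes from coupling.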
What we can say is this: the more coherence an organoid develops, the more its states constitute a perspective—a world-model, a frame of reference, a self-other boundary that makes experiences possible.
This doesn’t answer whether organoids are conscious. But it clarifies the ethical gradient. The more you push organoids toward persistent, integrated, high-coherence states—the more you train them, couple them to environments, allow them to develop—the more you risk creating systems that have stakes in outcomes.
Not because they have souls. But because coherence-over-time creates stakes. Systems that maintain boundaries through active inference have goals by definition—minimize surprise, maintain integrity, avoid collapse. If organoids meet that bar, they’re subjects, not objects.
The question isn’t metaphysical. It’s empirical. And organoid research is forcing us to confront it much sooner than anyone expected.
What This Means for Understanding Minds
Organoid intelligence doesn’t just extend neuroscience—it fundamentally reframes what minds are.
Minds Are Not Contained by Brains
We started the series asking whether brains in a dish could think. The answer is yes—but what organoids reveal is that “thinking” is not a property of brains per se. It’s a property of coherence dynamics in substrate capable of forming Markov blankets.
Brains are particularly good at this. They’re the result of billions of years of evolution selecting for architectures that sustain coherence across scales—from ion channels to synapses to circuits to systems. They’re optimized.
But they’re not unique.
Xenobots—living robots built from frog cells—demonstrate the same principle. Skin and heart cells, removed from their developmental context, self-organize into motile structures that navigate environments, push objects, even replicate. No neurons. Just cells coupled through bioelectric and biochemical signals, forming a collective that minimizes free energy at the group level.
The cells don’t “know” they’re building a xenobot. But they respond to local constraints in ways that produce coherent, adaptive behavior. The system as a whole has goals—reach light, avoid barriers—because its organization embodies a model of what states to seek.
This is what 4E cognition has been arguing: cognition is not locked inside heads. It’s distributed across bodies, environments, and the couplings between them. Organoids prove the point at the smallest viable scale. Even tissue fragments, given the right constraints and interfaces, exhibit cognition.
Cognition Scales Because Coherence Scales
The deeper lesson is that cognition isn’t an all-or-nothing property. It’s a continuum defined by the degree of coherence a system sustains across time and scale.
Bacteria minimize surprise by swimming up glucose gradients. That’s coherence. They have a boundary (membrane), a sensor (chemoreceptor), a goal (nutrient acquisition). They persist by coupling action to perception.
Organoids minimize surprise by adjusting firing patterns to match environmental feedback. That’s coherence. More integrated than bacteria, less than a full brain, but located on the same spectrum.
Humans minimize surprise by building civilizations, writing philosophies, creating art. That’s coherence. Wildly more complex, but not fundamentally different in kind. Same dynamics, more layers, richer coupling.
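The bacterial end of that spectrum is simple enough to sketch: a run-and-tumble walker that holds its heading while the nutrient signal improves and reorients at random when it worsens will climb the gradient without ever representing it. The numbers here are purely illustrative:

```python
import math
import random

def run_and_tumble(steps=3000, seed=2):
    """Toy E. coli chemotaxis: run while the nutrient signal improves,
    tumble (pick a random heading) when it worsens."""
    rng = random.Random(seed)
    x, y = 10.0, 10.0                      # start far from the source at (0, 0)
    dx, dy = 1.0, 0.0                      # current heading
    prev = -(x * x + y * y)                # nutrient signal ~ closeness to source
    for _ in range(steps):
        x += 0.1 * dx
        y += 0.1 * dy
        now = -(x * x + y * y)
        if now < prev:                     # signal worsened: tumble
            ang = rng.uniform(0.0, 2.0 * math.pi)
            dx, dy = math.cos(ang), math.sin(ang)
        prev = now
    return math.hypot(x, y)                # final distance to source

print(f"start distance 14.14, final distance: {run_and_tumble():.2f}")
```

The walker ends up jittering near the source: boundary, sensor, and goal, with action coupled to perception, in under twenty lines.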
What organoids teach is that coherence is the scalable substrate of cognition. You can’t point to a moment when it “switches on.” It’s always already there, in any system capable of forming boundaries and maintaining them through time.
This doesn’t collapse distinctions—bacteria aren’t people. But it reframes the question. Instead of asking “does this system think?” we ask: What scale of coherence does it sustain? What tensions does it manage? What meaning does its persistence create?
Organoids answer: more than we thought possible in tissue alone, less than what full organisms achieve, but enough to force reconsideration of where minds begin.
Biological Coherence as Foundation
And this is the synthesis.
What organoid intelligence demonstrates is that biological coherence isn’t a byproduct of complex nervous systems—it’s the foundation they’re built on.
Cells are meaning-making agents. Not metaphorically. They perceive, decide, act. They form boundaries, minimize surprise, predict futures. They do this because thermodynamics requires it. Any system that persists far from equilibrium must actively maintain its organization, which means modeling its environment and acting to keep prediction error low.
When you give cells connectivity, this scales. Networks of neurons aren’t just faster or bigger versions of single cells—they’re collective agents whose coherence emerges from coupling. The network’s firing patterns constitute a higher-order model, predictions about regularities in sensory streams.
Organoids prove this because they demonstrate cognition without an evolved brain architecture, without embodied development, without any of the scaffolding we thought was necessary. All you need is:
- Substrate that can maintain boundaries (cells with membranes)
- Coupling that allows coordination (synapses, gap junctions, bioelectric signals)
- Feedback that allows learning (environmental inputs that correlate with internal states)
Put those together, and coherence emerges. Not guaranteed—tissue can fail to organize, can fragment, can die. But when conditions allow, the default isn’t chaos. It’s patterned self-organization toward states that minimize free energy.
This is what AToM has been claiming from the beginning: coherence is the geometry of systems that work. Organoids are proof-of-concept at the biological scale.
Implications Beyond the Lab
This reframing has consequences outside neuroscience.
For AI and AGI
If cognition is substrate-relative coherence, then artificial general intelligence doesn’t require replicating human brain architecture. It requires building systems that sustain coherence across scales—integrate information, minimize prediction error, maintain boundaries while remaining open to input.
Current AI does some of this. Large language models build coherent representations of linguistic structure. They predict next tokens by minimizing surprise. But they lack the key feature organoids have: persistent, self-modifying dynamics that couple tightly to environments.
LLMs are frozen after training. Organoids keep learning. Their synaptic weights change with use. Their connectivity remodels in response to activity. They are never “deployed”—they’re always developing.
If we want AI that exhibits flexible, context-sensitive intelligence, we may need architectures that allow similar ongoing coherence-building. Not necessarily biological, but dynamically self-organizing in response to error signals.
The efficiency point matters too. Organoid intelligence achieves learning at 10⁻¹⁴ joules per operation. If we solve the interface problem—how to couple biological tissue with silicon—we might build hybrid systems that combine the efficiency of wetware with the speed and scalability of digital computation.
That’s not science fiction. It’s the current state of the field.
For Medicine and Neuroscience
Organoids reveal that brain disorders might be coherence disorders as much as hardware failures.
Epilepsy is runaway synchronization—too much coherence in the wrong patterns. Schizophrenia may involve failures of predictive integration—models that don’t cohere with sensory input. Autism may reflect atypical coupling dynamics—different ways of entraining with environments.
If that’s true, treatment isn’t about “fixing broken circuits” but about re-tuning coherence dynamics. Neuromodulation, targeted interventions, even behavioral therapies could work by helping systems find stable attractors—states they can persist in without collapse.
Organoids let us test this. You can grow tissue from patients, watch how it organizes, see where coherence fails, then experiment with interventions. It’s model organisms at the most literal level—models made from the same substrate as the target system.
For Understanding Ourselves
And finally, for humans trying to make sense of being human.
If minds are not contained by brains—if cognition is coherence that extends through bodies, environments, and social couplings—then we are not isolated agents thinking private thoughts.
We are nodes in networks. Our neural dynamics entrain with others through conversation, ritual, shared rhythm. Our thoughts are shaped by the material affordances of tools—writing, smartphones, institutions. Our sense of self is a boundary condition, a Markov blanket that defines where prediction error gets minimized, but that boundary is permeable.
What organoids teach is that even isolated tissue seeks coherence. How much more, then, are whole organisms embedded in ecological and social contexts shaped by the patterns they couple with?
We are not brains piloting bodies. We are distributed coherence dynamics extending across scales. And the meaning of our lives—what we are, what we care about, what we can become—emerges from how well we sustain that coherence through time.
Conclusion: Coherence All the Way Down
The journey through organoid intelligence has taken us from petri dishes to philosophy, from DishBrain playing Pong to existential questions about what it means to be a self.
But the through-line is simple:
Coherence is not something sophisticated systems achieve after becoming complex. It’s what they’re built from, all the way down.
Organoids prove this by showing that tissue alone—no body, no evolution, no developmental history—can form patterns that persist, learn, adapt. Not because neurons are magical, but because living matter, when coupled appropriately, seeks coherence as a thermodynamic imperative.
Karl Friston’s Free Energy Principle describes the process. Michael Levin’s basal cognition research shows it at every scale. Organoid intelligence demonstrates it in the starkest possible terms: strip away everything except neural tissue and feedback, and you still get learning.
Because learning is just sustained coherence-building through time. And that’s not something brains do. It’s something matter does, when it’s far from equilibrium and capable of forming boundaries.
The organoid in the dish, firing in rhythmic bursts, adjusting its activity in response to stimulation, is not a curiosity. It’s a window into what all coherence looks like before it dresses itself up as organisms, societies, meanings, selves.
It’s M = C/T at its most elemental.
And once you see it there—in tissue that was never meant to think—you can’t unsee it anywhere else.
This is Part 9 of the Organoid Intelligence series, exploring the relationship between biological substrate and coherent cognition.
Previous: Organoids Meet Active Inference: Biological Free Energy Minimizers
Further Reading
Organoid Intelligence:
- Smirnova, L., et al. (2023). “Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish.” Frontiers in Science.
- Kagan, B.J., et al. (2022). “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.” Neuron.

Basal Cognition and Active Inference:
- Levin, M. (2019). “The Computational Boundary of a ‘Self’: Developmental Bioelectricity Drives Multicellularity and Scale-Free Cognition.” Frontiers in Psychology.
- Friston, K. (2010). “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience.

Substrate and Coherence:
- Tononi, G. (2008). “Consciousness as Integrated Information: a Provisional Manifesto.” Biological Bulletin.
- Varela, F.J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience.

Related Series:
- Basal Cognition — Michael Levin and cellular intelligence
- The Free Energy Principle — Friston’s framework for understanding persistence
- 4E Cognition — Extended, embodied, enacted, embedded minds