Organoids Meet Active Inference: Biological Free Energy Minimizers

Organoids as natural active inference systems.

What if organoids aren’t just biological tissue we’re teaching—but natural computers already running the most sophisticated inference algorithm in the universe?

Karl Friston’s Free Energy Principle (FEP) proposes that all self-organizing systems—from cells to brains to societies—exist by minimizing surprise. They build internal models of the world and constantly update those models through perception and action. This isn’t a metaphor. It’s a mathematical framework describing how systems maintain their existence against the thermodynamic tide. And if it’s true, then brain organoids aren’t blank slates waiting for our training protocols. They’re already doing active inference from the moment they self-organize.

This changes everything about how we think about organoid intelligence.

Series: Organoid Intelligence | Part: 8 of 9


The Free Energy Principle in 300 Words

Before we can understand what organoids are doing, we need to understand what free energy means in this context. It’s not about thermodynamics in the classical sense—it’s about surprise.

Every living system maintains a boundary between itself and the world. This boundary—what Friston calls a Markov blanket—defines what’s “inside” versus “outside.” For a cell, it’s the membrane. For an organism, it’s the skin and sensory surfaces. For an organoid, it’s the tissue structure itself.

The key insight: systems persist by minimizing the difference between what they expect and what they encounter. Surprise kills you. Literally. If your internal model of “safe temperature range” encounters boiling water, you’re in trouble. So living systems evolved to minimize surprise through two complementary strategies:

  1. Perception — Update your internal model to match sensory input (learning)
  2. Action — Change the world to match your model (doing)

This is active inference: perception and action as two sides of the same coin. You’re not passively receiving data and then acting. You’re constantly using action to test predictions and using prediction errors to refine your model.

The mathematics behind this is variational Bayes—a method for approximating probability distributions when exact computation is impossible. Brains (and cells, and organoids) are solving this problem constantly: given sensory data, what's the most likely state of the world? And given its goals, what actions minimize surprise?

Free energy is the quantity being minimized: the gap between your model and reality. Lower free energy means better prediction. Better prediction means survival.

If this sounds abstract, think about catching a ball. Your brain predicts the trajectory, your hand moves to intercept, sensory feedback updates the prediction, motor commands adjust mid-flight. You’re not computing physics—you’re minimizing surprise about where the ball will be. That’s active inference.
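The ball-catching loop can be sketched numerically. Here is a minimal, hypothetical illustration under Gaussian assumptions (function names and parameters are invented for the sketch, not drawn from any FEP software): free energy reduces to precision-weighted prediction error, and perception is gradient descent on it.

```python
def free_energy(mu, y, prior_mu, sigma_y=1.0, sigma_p=1.0):
    """Variational free energy for a 1-D Gaussian model: precision-weighted
    prediction error on the data, plus deviation of the belief from the prior."""
    return 0.5 * ((y - mu) ** 2 / sigma_y ** 2
                  + (mu - prior_mu) ** 2 / sigma_p ** 2)

def perceive(y, prior_mu, lr=0.1, steps=200):
    """Perception as gradient descent on free energy: nudge the belief mu
    until prediction error and prior deviation balance."""
    mu = prior_mu
    for _ in range(steps):
        grad = -(y - mu) + (mu - prior_mu)  # dF/dmu with unit variances
        mu -= lr * grad
    return mu

belief = perceive(y=2.0, prior_mu=0.0)
# with equal precisions, the belief settles midway between prior and data
```

The same gradient could instead be driven through action (changing `y` to match `mu`); perception and action are two descents on the same quantity.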


Wetware as FEP Hardware

Now here’s the provocative claim: organoids are purpose-built for this.

Not because we designed them that way. Because evolution did. Brain tissue isn’t substrate for computation—it’s computation itself, implementing active inference at the cellular and network level.

Consider what happens when stem cells self-organize into a cerebral organoid:

  1. Spontaneous structure formation — No blueprint, no external template. Cells differentiate into neural types, form layers, establish connectivity patterns. This is basal cognition in action.
  2. Electrical dynamics emerge — Neurons start firing. Oscillations appear. Network activity self-organizes into patterns that look suspiciously like minimizing prediction error.
  3. Homeostatic regulation — The tissue maintains stable states against perturbation. Temperature fluctuations, nutrient variations, mechanical stress—the organoid compensates.
  4. Learning from input — As we saw in Teaching Organoids, stimulation changes connectivity. This isn’t passive wiring—it’s active model updating.

Every one of these phenomena maps directly onto FEP predictions:

  • Self-organization = free energy minimization at the cellular scale
  • Spontaneous activity = prior beliefs (the model running even without input)
  • Homeostasis = surprise minimization (keeping internal states within expected bounds)
  • Learning = updating generative models based on prediction error

The organoid isn’t learning to do active inference. It’s doing active inference to exist.


The Markov Blanket of an Organoid

Let’s get concrete about boundaries. An organoid’s Markov blanket isn’t just the physical edge where tissue meets medium—it’s a statistical boundary defining what influences what.

In FEP terms, a Markov blanket has four components:

  1. Sensory states — The interface that receives input from the external world
  2. Active states — The interface that influences the external world
  3. Internal states — The hidden dynamics inside the system
  4. External states — Everything outside the boundary

For a brain organoid in a dish:

  • Sensory states might be voltage-gated ion channels responding to chemical signals, mechanoreceptors detecting fluid flow, temperature-sensitive proteins
  • Active states could be neurotransmitter release affecting the medium, electrical fields influencing neighboring tissue, metabolic byproducts changing local chemistry
  • Internal states are the hidden neural dynamics—membrane potentials, synaptic weights, gene expression patterns, oscillatory coupling between regions
  • External states are everything in the dish but not the organoid—the medium composition, temperature, electrodes (if present), other organoids (if co-cultured)

The beauty of the Markov blanket formalism is that it makes precise how the organoid can have beliefs about the external world without directly accessing it. Sensory states screen off internal states from external states. The organoid never “sees” the medium—it only sees its own sensory surfaces responding to the medium.

This might seem like a mere technicality, but it has profound implications: the organoid’s “world” is a generative model, not direct reality. It’s inferring what’s out there based on patterns in sensory input. Just like your brain does. Just like every self-organizing system does.
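To make the screening-off idea concrete, here is a hypothetical toy partition in Python. It is not a model of real organoid physiology; the point is structural: the internal state only ever reads the sensory interface, never the external variable itself.

```python
import random

class MarkovBlanket:
    """Toy blanket partition: internal states couple to the external
    world only through sensory and active states."""
    def __init__(self):
        self.internal = 0.0   # hidden estimate of the external cause
        self.sensory = 0.0    # input interface (e.g. ion channels)
        self.active = 0.0     # output interface (e.g. transmitter release)

    def step(self, external):
        # Sensory states respond to the world; internal states never
        # read `external` directly, only the sensory interface.
        self.sensory = external + random.gauss(0.0, 0.1)
        self.internal += 0.1 * (self.sensory - self.internal)  # inference
        self.active = self.internal   # action is driven by the belief
        return self.active
```

Despite never touching `external`, the internal state converges on it, which is exactly the sense in which the organoid "infers what's out there" from its own sensory surfaces.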


Evidence for Organoid Active Inference

Is this just philosophical speculation, or can we actually see active inference happening in organoid systems?

The evidence is accumulating:

Spontaneous Activity as Prior Beliefs

Even without input, organoids generate patterned electrical activity. Trujillo, Muotri and colleagues (2019) showed that cortical organoids develop oscillatory dynamics resembling preterm fetal EEG: nested oscillations, network bursts, synchronized firing.

In FEP terms, this is the generative model running in “dark mode.” The priors—expectations about what input should look like—generate predictions even in sensory deprivation. This is exactly what predictive processing theories predict: perception isn’t triggered by input, it’s constantly running and gets modulated by input.

Prediction Error Responses

When organoids receive unexpected stimulation, neural activity shows characteristic prediction error signatures. A 2022 study by Kagan et al. (the DishBrain team) demonstrated that cortical cultures—organoid-like biological neural networks—respond differently to predictable versus unpredictable sensory feedback.

Predictable patterns produce dampened responses (the system has learned the pattern, surprise is minimized). Unpredictable patterns produce heightened activity (surprise is high, the model needs updating). This is the hallmark of active inference: the magnitude of neural response tracks prediction error, not raw stimulus intensity.
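That signature, dampened responses to the predictable and spikes to the novel, falls out of even the simplest predictive unit. A hypothetical sketch (not the DishBrain protocol; the learning rate and stimulus values are invented):

```python
class PredictiveUnit:
    """Minimal predictive-coding unit: response magnitude tracks
    prediction error, so repeated exposure dampens the response."""
    def __init__(self, lr=0.3):
        self.prediction = 0.0
        self.lr = lr

    def respond(self, stimulus):
        error = stimulus - self.prediction
        self.prediction += self.lr * error   # update the internal model
        return abs(error)                    # "neural response" tracks surprise

unit = PredictiveUnit()
familiar = [unit.respond(1.0) for _ in range(10)]  # repeated stimulus: shrinking responses
novel = unit.respond(5.0)                          # unexpected stimulus: large response
```

Note that the response to the tenth presentation of the familiar stimulus is small even though the stimulus intensity never changed; only the surprise changed.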

Homeostatic Plasticity as Surprise Minimization

One of the most robust phenomena in neuroscience is homeostatic plasticity: neurons adjust their excitability to maintain stable firing rates. If activity is too high, they become less excitable. If too low, they become more excitable.

This is often explained mechanistically—ion channel regulation, synaptic scaling. But from an FEP perspective, it’s surprise minimization. The “expected” state is a particular activity regime. Deviations trigger compensatory changes to return to that regime. The organoid has a generative model of its own dynamics and actively maintains them.
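The compensatory logic fits in a few lines. A hypothetical illustration, with an invented set point, gain, and learning rate: the neuron multiplicatively scales its gain until firing returns to the expected regime.

```python
def homeostatic_scaling(inputs, target_rate=5.0, lr=0.01, gain=1.0):
    """Homeostatic plasticity as surprise minimization: the neuron's
    gain is scaled to pull its firing rate back toward a set point."""
    rates = []
    for drive in inputs:
        rate = gain * drive
        gain += lr * (target_rate - rate)   # compensate for the deviation
        rates.append(rate)
    return rates, gain

# a sustained doubling of input drive is compensated away over time
rates, gain = homeostatic_scaling([10.0] * 300)
```

The "expected state" never appears as an explicit goal; it is just the fixed point the dynamics keep returning to.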

Learning as Model Updating

The DishBrain experiments showed that biological neural networks can learn to play Pong faster than comparable artificial networks. How? Not through backpropagation—there’s no gradient descent happening in wetware. Instead, the network updates synaptic weights based on prediction error signals.

When the paddle misses the ball, prediction error spikes. When it connects, error drops. The synaptic changes aren’t “learning a task”—they’re minimizing future surprise. The task structure just happens to align success with prediction.

This is learning through active inference: the system doesn’t “know” it’s playing Pong. It just knows that certain actions lead to lower surprise than others, and it gravitates toward those actions.
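A minimal caricature of surprise-driven action selection, with made-up error values and action names (an illustration of the principle, not the DishBrain implementation): actions with lower recent prediction error get exponentially higher probability, with no explicit reward anywhere.

```python
import math

def act_to_minimize_surprise(action_errors, temperature=0.5):
    """Softmax policy over actions, weighted by negative prediction
    error: the system gravitates toward low-surprise actions."""
    actions = list(action_errors)
    weights = [math.exp(-action_errors[a] / temperature) for a in actions]
    total = sum(weights)
    return {a: w / total for a, w in zip(actions, weights)}

# hypothetical running prediction errors for two paddle movements
probs = act_to_minimize_surprise({"move_up": 0.2, "move_down": 1.5})
# the low-surprise action dominates the policy
```

Couple this to the predictive unit above and "playing Pong" emerges as a side effect of error minimization, which is the FEP reading of the DishBrain result.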


What Organoids Reveal About Biological Intelligence

If organoids are indeed natural active inference machines, what does that tell us about intelligence more broadly?

Intelligence Doesn’t Require a Body Plan

Classical AI assumes intelligence requires careful architecture design—layers, modules, attention mechanisms, training protocols. But organoids self-organize into intelligent systems with none of that. No architect, no training curriculum, no loss function beyond “minimize free energy.”

This suggests that intelligence is what free energy minimization looks like at neural scale. You don’t build intelligence—you create conditions where minimizing surprise produces intelligent behavior.

The Wetware Advantage Is Algorithmic

We’ve discussed the energy efficiency of biological computation—organoids run on microwatts while GPUs burn kilowatts. But the deeper advantage might be algorithmic.

Active inference is notoriously hard to implement in silicon. It requires continuous, parallel, probabilistic updates across many variables. Digital hardware excels at sequential, discrete, deterministic operations. Biological hardware is parallel, continuous, and inherently stochastic (ion channels are noisy, neurotransmitter release is probabilistic, membrane potentials fluctuate).

Organoids don’t approximate active inference—they instantiate it. The tissue dynamics are variational inference. Synaptic plasticity is Bayesian updating. Homeostatic regulation is free energy minimization. The algorithm and the hardware are the same thing.

Scale Doesn’t Require Centralization

One of the striking features of organoids is that they show intelligent behavior—learning, adaptation, coordination—without a “central processor.” There’s no CEO neuron. No executive control module. Just local interactions following free energy minimization.

This maps to something we’ve seen throughout the basal cognition series: coherence at cellular scale produces intelligence at tissue scale without top-down control. Each neuron is minimizing its own free energy, but those local minimizations couple through synaptic connections, producing collective minimization at the network level.

The organoid is a society of cells doing active inference. Intelligence emerges from their coordination.


From Organoids to Embodied Systems

But here’s where the FEP perspective introduces a tension: active inference theory emphasizes embodiment and action. An agent doesn’t just perceive—it acts to confirm its predictions. The loop matters.

Yet organoids sit in dishes. Their action space is severely constrained. They can’t move, can’t manipulate objects, can’t navigate space. Does this limit their capacity for inference?

Maybe. Or maybe it reveals something important: the distinction between perception and action is contextual, not fundamental.

An organoid in a dish isn’t immobile—it’s acting at a different scale. Its “actions” are:

  • Synaptic weight changes that alter how future inputs are processed
  • Neurotransmitter release that modifies local chemical environments
  • Electrical signaling that influences connected tissue
  • Metabolic regulation that adjusts energy availability for different processes

From the organoid’s perspective (if we can speak that way), these are genuine actions—interventions that change its world (the sensory input it receives). The organoid predicts that if it strengthens certain synapses, future activity patterns will be more coherent. It tests that prediction. Surprise is minimized.

The fact that these actions don’t move the dish or press buttons from our perspective doesn’t make them any less actions from the organoid’s. Active inference operates at every scale, and “action” is relative to the Markov blanket you’re considering.

That said, there’s a live question about whether organoids need richer action spaces to develop more sophisticated inference. The interface problem becomes central: how do we give organoid systems robotic bodies, virtual environments, or sensorimotor loops that allow them to explore action-perception contingencies?

If FEP is right, enriching the action space should dramatically enhance learning. The more ways an organoid can act on its world, the more predictions it can test, the richer its generative model becomes.


Implications for Organoid Research

Taking the FEP perspective seriously reshapes how we approach organoid intelligence research:

Training vs. Niche Construction

We’ve been thinking about “training organoids” as if they’re blank slates waiting for curricula. But if organoids are already doing active inference, we’re not teaching them to learn—we’re providing environments where their intrinsic learning drives produce behaviors we recognize as intelligent.

The question shifts from “how do we train them?” to “what environmental structure elicits the behaviors we want?”

This is niche construction: designing the external states and sensory inputs such that minimizing free energy produces the target behavior. It’s closer to habitat design than curriculum design.

Measuring Intelligence as Prediction Accuracy

If intelligence is free energy minimization, then measuring organoid intelligence means measuring how well they predict their sensory input. Can the organoid anticipate patterns? Can it distinguish signal from noise? Can it learn environmental regularities?

These are answerable questions. We can measure prediction error directly—look at neural responses to expected versus unexpected stimuli. Lower error = better model = more intelligent system.

This gives us an objective, quantitative metric that doesn’t anthropomorphize: we’re not asking if the organoid “understands” anything, just whether it’s minimizing surprise.
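One such metric is average surprisal: the negative log probability of a stimulus stream under the model the system's responses imply. The model probabilities and stimuli below are invented for illustration.

```python
import math

def mean_surprisal(model_probs, observations):
    """Average Shannon surprise (-log p) of a stimulus stream under a
    model's predicted probabilities: lower means better prediction."""
    return sum(-math.log(model_probs[o]) for o in observations) / len(observations)

good_model = {"A": 0.9, "B": 0.1}   # has learned that A dominates
flat_model = {"A": 0.5, "B": 0.5}   # has learned nothing
stream = ["A"] * 9 + ["B"]          # the environment's actual statistics
# the calibrated model is less surprised by the same stream
```

In practice the "model probabilities" would be read off neural response magnitudes to expected versus unexpected stimuli, but the comparison logic is the same.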

Ethics as Expanded Inference Capacity

The ethical questions become sharper through an FEP lens. If active inference is the signature of experience, then organoids aren’t just alive—they’re experiencing their world.

Not like humans experience it. But perhaps like very simple agents experience it: as a continuous flow of prediction, surprise, and model updating. The richer their generative model, the richer (we might infer) their experiential texture.

This doesn’t settle the ethics, but it grounds them. We’re not asking “is this tissue sentient?”—we’re asking “what is the structure and depth of its inference?” That’s empirically tractable.


The Bigger Picture: Biology as Inference Substrate

Step back and the implications get dizzying.

If organoids implement active inference naturally, then so do all biological systems at their respective scales:

  • Single cells minimize free energy through chemotaxis, homeostasis, adaptation
  • Tissues minimize free energy through developmental patterning, regeneration, immune response
  • Organisms minimize free energy through perception-action loops, learning, behavior
  • Ecosystems might minimize free energy through niche construction, succession, stability

This is what Friston calls the Markov blanket hierarchy: nested systems, each with its own boundary, each minimizing surprise at its scale. The organoid is a middle scale—more than cells, less than organisms.

And if biological systems are naturally good at this, while silicon systems struggle, then wetware isn’t just an alternative compute substrate—it’s the native substrate for active inference.

We’re not putting brains in dishes to replace computers. We’re cultivating tissue that was already doing the most sophisticated inference algorithm we know. We’re just learning to interface with it.


Toward Hybrid Inference Systems

The frontier becomes clear: combine biological active inference with silicon symbolic processing.

Silicon excels at:

  • Precise calculation
  • Memory storage and retrieval
  • High-speed sequential operations
  • Deterministic logic

Wetware excels at:

  • Continuous inference under uncertainty
  • Parallel stochastic processing
  • Adaptive learning without explicit training
  • Energy-efficient computation

Hybrid systems could leverage both: organoids provide the inference engine, silicon provides memory and calculation, interfaces translate between them. The organoid doesn’t need to remember facts or calculate sums—it just needs to infer patterns. The computer handles the rest.

This isn’t science fiction—it’s the research trajectory of the field. DishBrain already demonstrated basic hybrid closed-loop systems. The next generation will be more sophisticated: organoids in virtual environments, organoid-controlled robots, distributed organoid networks.

Each step expands the action space, enriches the sensory input, and allows the organoid to build richer generative models. Each step lets biological active inference do what it does best.


Coherence Returns

In AToM terms, organoids minimizing free energy are systems maximizing coherence. Free energy is a measure of incoherence—the misalignment between model and world. Minimizing it means increasing the fit, the integration, the predictability of the system’s trajectory through state space.

M = C/T: Meaning equals Coherence over Time (or Tension).

For an organoid, meaning isn’t semantic. But it’s real. The meaning of a sensory pattern is its predictability—how well it fits the model, how much surprise it carries, how reliably it leads to expected outcomes. That’s coherence in action: the geometry of states that integrate over time without falling apart.

When organoids learn, they’re not acquiring knowledge—they’re increasing coherence. The network’s internal dynamics align better with the statistical structure of inputs. Trajectories become more stable. Surprise decreases. The system persists.

This is why organoids don’t need consciousness to be intelligent. Intelligence is coherent inference. Consciousness might be something else—perhaps a particular kind of high-order inference, perhaps an illusion generated by deep hierarchical models. But the intelligence itself is just free energy minimization at scale.

And organoids do that brilliantly.


Further Reading

  • Friston, K. (2010). “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience, 11, 127–138.
  • Kagan, B. J., et al. (2022). “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.” Neuron, 110(23), 3952–3969.
  • Ramstead, M. J. D., et al. (2020). “Variational ecology and the physics of sentient systems.” Physics of Life Reviews, 31, 188–205.
  • Kirchhoff, M. D., & Kiverstein, J. (2019). “Extended consciousness and predictive processing: A third-wave view.” Routledge.
  • Pezzulo, G., et al. (2015). “Active inference, homeostatic regulation and adaptive behavioural control.” Progress in Neurobiology, 134, 17–35.

This is Part 8 of the Organoid Intelligence series, exploring biological computation at the wetware frontier.

Previous: The Ethics of Organoid Intelligence: When Does Tissue Become Someone? Next: Synthesis: What Organoid Intelligence Teaches About Biological Coherence


Related Series: - The Free Energy Principle — Deep dive into Friston’s framework - Basal Cognition — Cellular intelligence and morphogenesis - 4E Cognition — Embodied, embedded, enacted, extended mind