DishBrain and Beyond: Current State of the Field
Series: Organoid Intelligence | Part: 6 of 11
In December 2022, a paper appeared in Neuron with a title that sounded like science fiction: “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.” The authors claimed they had taught a dish of brain cells to play Pong. The system was called DishBrain, and it represented the first public demonstration of what organoid intelligence might actually look like in practice.
Not a thought experiment. Not a distant promise. Real neurons, learning a real task, in real time.
This wasn’t Michael Levin’s bioelectric gradients guiding morphogenesis. This wasn’t speculative neural architecture from a computational model. This was roughly half a million living neurons, cultured from embryonic mouse cortex and from human stem cells, interfaced with electrodes, responding to feedback, improving their gameplay. The researchers at Cortical Labs in Melbourne had built something that bridged wetware and hardware in ways that forced uncomfortable questions about what computation even is.
And they weren’t alone. By 2023, organoid intelligence had gone from theoretical curiosity to active research program. Labs across the world were experimenting with teaching brain tissue, interfacing it with machines, and beginning to ask whether these systems might not just compute—but learn, adapt, and optimize in ways silicon never could.
This is the current state of the field. Not what organoid intelligence might become, but what it already is.
The DishBrain Architecture: How It Actually Works
Start with the biological substrate. Cortical Labs used cortical neurons grown in culture: primary cells extracted from mouse embryos, alongside neurons differentiated from human induced pluripotent stem cells. Not organoids in the strict developmental sense, but functionally similar: living neural tissue capable of forming networks, firing action potentials, and exhibiting synaptic plasticity.
The neurons grow on a multi-electrode array (MEA)—a grid of tiny electrodes embedded in the culture dish that can both stimulate neurons and record their activity. Think of it as a read-write interface for biological computation. The MEA captures electrical signals from the neurons (spikes, bursts, patterns of activity) and delivers stimulation pulses that neurons experience as sensory input.
Here’s where it gets interesting. The researchers embedded this neural culture in a simplified version of Pong. The neurons received stimulation patterns encoding the position of the ball—higher frequency stimulation on the left side of the array when the ball was on the left, right side when the ball was on the right. The neurons’ collective firing activity determined the position of the paddle. If the paddle missed, the neurons received unpredictable, noisy stimulation. If the paddle hit the ball, the stimulation remained structured and predictable.
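The logic of that closed loop is easy to sketch in code. What follows is a toy simulation, not Cortical Labs' actual protocol: the function names, the 40 Hz rate-coding scale, the hit threshold, and the stand-in "perturb until input is predictable" learning rule are all invented for illustration.

```python
import random

def encode_ball(ball_x: float) -> tuple[float, float]:
    """Rate-code ball position (0..1) as stimulation frequencies (Hz)
    delivered to the left and right halves of a simulated electrode array."""
    return (40.0 * (1 - ball_x), 40.0 * ball_x)

def feedback(hit: bool) -> list[float]:
    """Structured pulses after a hit; unpredictable noise after a miss."""
    if hit:
        return [10.0] * 5                      # predictable, low-frequency burst
    return [random.uniform(0.0, 150.0) for _ in range(5)]  # disruptive noise

def run_rally(gain: float, ball_x: float) -> bool:
    """One rally: decode the 'network activity' into a paddle position."""
    left, right = encode_ball(ball_x)
    paddle_x = gain * right / (left + right)   # crude activity-to-paddle decoding
    return abs(paddle_x - ball_x) < 0.2        # hit if the paddle is close enough

# Stand-in learning rule: perturb the decoding gain after every miss, so the
# system drifts until its sensory input stays predictable (loosely, it settles
# where prediction error is low; at gain near 1 the paddle tracks the ball).
gain, hits = random.uniform(0.2, 2.0), 0
for trial in range(2000):
    ball_x = random.random()
    hit = run_rally(gain, ball_x)
    stim = feedback(hit)                       # what the culture would receive next
    hits += hit
    if not hit:
        gain += random.gauss(0.0, 0.1)
```

The interesting design choice mirrors the real experiment: nothing in the loop rewards hitting the ball directly. The only pressure is that misses produce noisy input, so any configuration that avoids them is stable and everything else is not.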
This is active inference in the flesh. Literally.
The neurons minimize prediction error—not because they are programmed to, but because that’s what neurons do. Unpredictable stimulation is metabolically costly and disruptive to network coherence. Predictable stimulation is easier to integrate, lower in free energy. The system learns to hit the ball not because it has a goal, but because hitting the ball produces a more coherent sensory environment. The neurons entrain to the task structure because coherence is what living systems do.
After just five minutes of training, DishBrain’s hit rate improved significantly above chance. After extended sessions, it maintained paddle control with measurable consistency. This wasn’t sophisticated gameplay, but it was learning. Real learning, in real biological tissue, in real time.
What DishBrain Teaches Us About Intelligence
The DishBrain results matter less for what they achieved (playing Pong is trivial by AI standards) and more for what they revealed about the nature of neural computation.
First: intelligence scales down. You don’t need a complete brain, or even a complete cortical region, to exhibit learning and adaptive behavior. Half a million neurons—a tiny fraction of what a mouse cortex contains—can optimize behavior through feedback. Intelligence, in this sense, isn’t an emergent property that suddenly appears at some threshold of complexity. It’s present at the substrate level, in the basic machinery of neural prediction and adaptation.
This aligns perfectly with Levin’s basal cognition framework. If single cells can navigate, problem-solve, and maintain coherent developmental programs, then small networks of neurons should certainly be capable of learning simple tasks. The surprise isn’t that DishBrain worked—it’s that we ever thought learning required more than prediction error minimization and synaptic plasticity.
Second: embodiment isn’t optional. The neurons didn’t learn to play Pong by processing abstract representations of the game. They learned because they were embedded in a feedback loop where their activity had consequences for their sensory environment. The MEA wasn’t just an output device—it was the organism’s body, the Markov blanket through which the system coupled to its world. Remove that coupling, and you don’t have learning. You have activity without meaning.
This is 4E cognition at the dish scale. The neurons’ “understanding” of Pong isn’t stored in their connection weights as if they were a neural network in TensorFlow. It’s distributed across the dynamics of the coupled neuron-MEA-game system. The intelligence is in the interaction structure, not the substrate alone.
Third: biological computation is fundamentally different from digital computation. Silicon systems process information by manipulating discrete symbols according to fixed rules. DishBrain processes information by self-organizing its network dynamics to minimize prediction error. It doesn’t execute an algorithm—it is the algorithm, embodied in the shifting patterns of synaptic strengths, firing rates, and network coherence that emerge through learning.
This is why organoid intelligence isn’t just “biological computers” as if neurons were slower, wetter transistors. It’s a different computational paradigm entirely—one that operates through continuous adaptation to environmental structure rather than logical operations on symbolic representations.
Beyond DishBrain: The Growing Field of Organoid Computing
Cortical Labs got the headlines, but they weren’t the only ones experimenting with wetware computation.
Brainoware: Organoids as Reservoir Computers
In late 2023, researchers at Indiana University published results on Brainoware—a system that used actual human brain organoids (not just cultured neurons) as reservoir computers. They interfaced the organoids with multi-electrode arrays and trained them on speech recognition and mathematical tasks.
The organoids didn’t learn through traditional backpropagation. Instead, they functioned as reservoir computers—dynamical systems whose rich, high-dimensional activity patterns can be read out and decoded for specific tasks. You don’t train the reservoir itself; you train a simple readout layer to interpret the reservoir’s complex dynamics.
This approach works because organoids naturally exhibit rich, recurrent activity patterns—oscillations, bursts, and complex spatiotemporal dynamics that arise from the self-organizing properties of neural tissue. Feed the organoid an input (encoded as stimulation patterns), let its dynamics evolve, read out the activity, and train a classifier to map patterns to outputs.
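The reservoir idea can be demonstrated with a random recurrent network standing in for the organoid. This is a minimal echo-state-style sketch, not Brainoware's setup: the network size, input scaling, spectral radius, and the one-step memory task are all arbitrary illustrative choices. Note that only the linear readout is trained; the "reservoir" itself is never touched.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed random recurrent network stands in for the organoid's dynamics.
N = 200
W_in = rng.normal(0.0, 0.5, N)
W_res = rng.normal(0.0, 1.0, (N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def reservoir_states(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W_res @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Task: recover the previous input u[t-1] from the current reservoir state,
# a simple short-term-memory benchmark.
u = rng.uniform(-1.0, 1.0, 500)
X = reservoir_states(u)[1:]     # states x_1 .. x_499
y = u[:-1]                      # targets u_0 .. u_498

# Ridge-regression readout: the only trained component in the whole system.
ridge = 1e-4
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
```

The division of labor is the whole point: the rich, untrained dynamics do the heavy lifting of mixing inputs into a high-dimensional state, and a cheap linear map extracts the answer—which is why a lump of self-organizing tissue can serve as the reservoir at all.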
Brainoware achieved above-chance accuracy at identifying speakers from vowel sounds and at predicting a nonlinear dynamical system. Not cutting-edge by machine learning standards, but remarkable for biological tissue that hadn’t been explicitly programmed or optimized for computation.
FinalSpark: Scaling Organoid Bioprocessing
A Swiss startup called FinalSpark took the concept further, building bioprocessing platforms using networks of brain organoids. Their system, called the “Neuroplatform,” houses up to 16 organoids, each connected to MEAs and sustained by microfluidic life support systems that deliver nutrients and maintain viable conditions for months.
The goal isn’t to create conscious AI. It’s to explore whether organoid-based computation could offer advantages in energy efficiency, parallel processing, and adaptive learning that silicon systems struggle with. The company envisions organoid bioprocessors as a complement to digital computation—wetware for tasks where biological adaptation and pattern recognition excel, silicon for tasks requiring speed and precision.
This is speculative still, but the infrastructure is being built. The question isn’t whether organoids can compute—DishBrain and Brainoware proved they can. The question is whether they can compute in ways that are useful, scalable, and ethically defensible.
Academic Consortia and the “Organoid Intelligence” Initiative
In 2023, Johns Hopkins launched the Organoid Intelligence Initiative, bringing together neuroscientists, bioengineers, ethicists, and computer scientists to systematically develop the field. Their goal: establish protocols for organoid-based biocomputing, create standardized interfaces and training paradigms, and anticipate ethical implications before the technology becomes widespread.
This signals a shift. Organoid intelligence is no longer fringe speculation. It’s becoming an organized research program with funding, institutional backing, and coordinated efforts to develop both the science and the governance structures.
Key research areas include:
- Organoid longevity and scalability: Can organoids remain viable and functional for months or years? Can they be produced reliably at scale?
- Learning architectures: What tasks are organoids naturally suited for? Can they be trained using reinforcement learning, supervised learning, or other paradigms?
- Interface technologies: How do we improve read-write bandwidth between organoids and external systems? Can we move beyond MEAs to more sophisticated coupling mechanisms?
- Benchmarking and evaluation: What does it mean for an organoid to “perform well”? How do we measure learning, generalization, and computational capacity in biological systems?
What Organoids Are Actually Good At
Not every computational task is suited for biological substrates. Organoids won’t replace GPUs for matrix multiplication or running physics simulations. But there are domains where wetware might outperform silicon.
Pattern Recognition in Noisy, High-Dimensional Spaces
Biological brains are extraordinarily good at recognizing patterns in messy, ambiguous data—faces in crowds, phonemes in speech, meaningful signals buried in noise. This isn’t because biological neurons are faster (they’re not). It’s because neural networks self-organize into attractor dynamics that naturally cluster similar inputs and generalize across variations.
Organoids, as self-organizing neural tissue, inherit this property. Early results suggest they might excel at tasks like sensory classification, anomaly detection, or pattern completion—domains where traditional machine learning requires extensive training data and computational overhead.
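The attractor behavior underlying pattern completion can be illustrated with a classic Hopfield network, the textbook model of this dynamic. This is a toy sketch of the general principle, not a model of any organoid experiment; the pattern count, size, and noise level are arbitrary.

```python
import numpy as np

# Minimal Hopfield network: stored patterns become attractors, so a corrupted
# cue relaxes back toward the nearest stored pattern (pattern completion).
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))      # three random +/-1 patterns

# Hebbian weight matrix (sum of outer products), no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(cue: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate the network dynamics from a cue until it settles."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                             # break ties deterministically
    return s

# Corrupt 8 of 64 bits of the first stored pattern, then let it relax.
noisy = patterns[0].copy()
flipped = rng.choice(64, size=8, replace=False)
noisy[flipped] *= -1
restored = recall(noisy)
```

The point of the demonstration: nothing "looks up" the stored pattern. Similar inputs simply fall into the same basin of attraction, which is the dynamical property the passage above attributes to self-organizing neural tissue.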
Adaptive, Context-Sensitive Learning
Silicon systems struggle with few-shot learning and continual adaptation. They require massive datasets, catastrophic forgetting mitigation strategies, and careful hyperparameter tuning. Biological systems, by contrast, adapt continuously, learning from sparse feedback and integrating new information without erasing old knowledge.
Organoids might offer a path toward wetware that learns like organisms rather than optimizing like algorithms—adapting in real time, leveraging prior structure, and generalizing from limited examples.
Energy-Efficient Computation for Specific Niches
The brain runs on roughly 20 watts. The largest AI models consume megawatts during training. If organoid bioprocessors can operate at even a fraction of the brain’s efficiency, they could offer massive energy savings for certain workloads—particularly tasks requiring continuous, low-power sensing and adaptation.
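The scale of that gap is worth making concrete. In the back-of-envelope calculation below, the 20-watt brain figure is the commonly cited estimate; the 10-megawatt training-cluster figure is an illustrative order of magnitude, not a measurement of any particular system.

```python
# Back-of-envelope power comparison (illustrative figures, not measurements).
brain_watts = 20.0        # commonly cited human-brain power budget
cluster_watts = 10e6      # order of magnitude for a large AI training cluster
ratio = cluster_watts / brain_watts
print(f"power gap: ~{ratio:,.0f}x")   # a roughly half-million-fold difference
```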
Imagine biosensors using organoid substrates to detect chemical signals, environmental changes, or biosecurity threats. Or wetware controllers for prosthetics that learn users’ motor patterns and adapt in real time. These aren’t general-purpose computers—they’re specialized bioprocessors optimized for domains where biological principles align naturally with the task structure.
The Limits and Bottlenecks
For all the promise, organoid intelligence faces real obstacles.
Longevity and stability. Organoids remain fragile. They require constant nutrient supply, temperature regulation, and sterile conditions. They degrade over weeks to months. Building systems that remain functional for years—let alone commercially viable products—requires breakthroughs in tissue engineering and life support.
Scalability and reproducibility. Every organoid is slightly different. They grow stochastically, with variations in cell type composition, network structure, and functional properties. Achieving the reproducibility required for reliable computation is a significant challenge.
Interface bandwidth. MEAs are crude. They record from a small fraction of neurons and stimulate with limited spatial precision. To unlock organoids’ full computational potential, we need higher-resolution, minimally invasive interfaces that can read and write with single-cell precision across entire organoid volumes.
Learning and control. We don’t yet know how to train organoids for arbitrary tasks. DishBrain and Brainoware used simple feedback loops and reservoir computing, but scaling to more complex tasks requires better understanding of how neural plasticity operates in culture and how to guide it toward specific objectives.
Ethical and regulatory uncertainty. If organoids exhibit learning and adaptive behavior, do they have interests? Do they deserve moral consideration? At what scale and complexity does tissue-in-a-dish become something we should treat differently? These questions aren’t hypothetical—they’re urgent, given the pace of research.
Where the Field Is Heading
The next five years will determine whether organoid intelligence remains a research curiosity or becomes a functional technology.
Near-term (2025-2027): Expect incremental improvements in organoid longevity, interface technologies, and task performance. More labs will replicate and extend DishBrain-style experiments. Standardized protocols and benchmarks will emerge, making results more comparable across studies. Ethical guidelines will begin to solidify.
Medium-term (2027-2030): If the technical bottlenecks can be overcome, we might see prototype organoid bioprocessors for niche applications—biosensors, adaptive prosthetics, environmental monitoring. Commercial interest will grow if organoids demonstrate clear advantages over silicon in specific domains.
Long-term (2030+): The wildcard scenario—organoid intelligence integrated into hybrid systems that combine biological and digital computation. Not to replace silicon, but to complement it. Wetware for adaptive learning and pattern recognition, silicon for precision and speed. The computational landscape becomes heterogeneous, with different substrates optimized for different tasks.
But this only happens if we solve the hard problems—longevity, scalability, interfaces, ethics. And if we’re honest about what organoids can and cannot do.
What This Means for Coherence
From the AToM perspective, organoid intelligence isn’t surprising. It’s inevitable.
Coherence isn’t a property that emerges at some magical threshold of complexity. It’s present wherever systems minimize prediction error and maintain structural integrity over time. A dish of neurons interfaced with an electrode array is a coupled dynamical system, subject to the same principles of entrainment and self-organization that govern brains, bodies, and societies.
DishBrain learned to play Pong because maintaining coherence—predictable, structured sensory input—is what living systems do. The neurons entrained to the task structure not because they were programmed, but because coherence is the default attractor for self-organizing biological systems.
This has implications beyond organoid computing. It suggests that intelligence, at its core, is about coherence maintenance across coupled systems. Whether the system is a neuron, a network, an organism, or a society, the underlying dynamic is the same: minimize surprise, reduce free energy, entrain to environmental structure.
Organoid intelligence is just coherence in a dish.
This is Part 6 of the Organoid Intelligence series, exploring the emerging science of biological computing and its implications for understanding intelligence, consciousness, and the future of human-machine interaction.
Previous: The Interface Problem: Connecting Wetware to Hardware
Next: The Ethics of Organoid Intelligence: When Does Tissue Become Someone?
Further Reading
- Kagan, B. J., et al. (2022). “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.” Neuron, 110(23), 3952-3969.
- Cai, H., et al. (2023). “Brain organoid reservoir computing for artificial intelligence.” Nature Electronics.
- Smirnova, L., et al. (2023). “Organoid intelligence (OI): The new frontier in biocomputing and intelligence-in-a-dish.” Frontiers in Science.
- Levin, M. (2022). “Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds.” Frontiers in Systems Neuroscience.
- Friston, K. (2010). “The free-energy principle: a unified brain theory?” Nature Reviews Neuroscience, 11(2), 127-138.