Computation
Coherence in artificial systems.
Computation is where Ideasthesia studies coherence outside living tissue. Silicon systems do not metabolize, heal, or reproduce the way organisms do, but they still face related structural problems: how to preserve information, compose parts, generalize across contexts, recover from noise, and keep a model stable enough to act without becoming too rigid to learn.
This hub collects the mathematical and engineering paths through that territory. It is the bridge between biological cognition and artificial intelligence.
Applied Category Theory
Applied Category Theory is the mathematical language of compositional systems. Objects, morphisms, functors, natural transformations, string diagrams, and monads look abstract at first, but they answer a practical question: how do complex systems preserve structure while transforming?
This path is useful for machine learning, physics, linguistics, programming, and any domain where relationships matter more than the internal nature of the things being related.
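The "practical question" above has a concrete face in code. A minimal sketch (my own, not from the series): the functor law says that mapping a composite function over a container gives the same result as mapping each function in turn, which is exactly what "preserving structure while transforming" means for the list functor.

```python
# Functor law sketch: fmap(g . f) == fmap(g) . fmap(f) for the list functor.

def compose(g, f):
    """Plain function composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def fmap(fn):
    """Lift a function on elements to a function on lists (the list functor)."""
    return lambda xs: [fn(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2

xs = [1, 2, 3]
lifted_once = fmap(compose(g, f))(xs)          # map the composite in one pass
lifted_twice = compose(fmap(g), fmap(f))(xs)   # map f, then map g
assert lifted_once == lifted_twice == [4, 6, 8]
```

The point is not the arithmetic but the guarantee: any transformation built this way composes without surprises, which is why the language scales to complex systems.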
Mechanistic Interpretability
Mechanistic Interpretability asks what neural networks are actually doing inside. Circuits, features, superposition, sparse autoencoders, and model editing all treat AI systems as objects that can be opened rather than oracles that must simply be trusted.
This series is about legibility. If a model coheres, we should be able to ask how. If it fails, we should be able to locate the failure.
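"Opening" a model rather than trusting it can be shown in miniature. A toy sketch (weights and setup are mine, purely illustrative): hand-build a tiny ReLU network, read its hidden activations across all inputs, and locate which unit implements AND.

```python
# Toy interpretability sketch: find the hidden unit that acts as an AND circuit
# by inspecting internal activations, not just input-output behavior.

def relu(x):
    return max(0.0, x)

# Hypothetical hand-set weights: unit 0 fires only when both inputs are 1 (AND),
# unit 1 fires whenever either input is 1 (OR).
W_hidden = [([1.0, 1.0], -1.5),   # unit 0: x1 + x2 - 1.5 > 0 only for (1, 1)
            ([1.0, 1.0], -0.5)]   # unit 1: fires for any input containing a 1

def hidden_activations(x1, x2):
    """Return the internal activations -- what we open the model to see."""
    return [relu(w1 * x1 + w2 * x2 + b) for (w1, w2), b in W_hidden]

# Probe every input and find the unit whose firing pattern matches AND.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
traces = {x: hidden_activations(*x) for x in inputs}
and_units = [i for i in range(2)
             if all((traces[x][i] > 0) == (x == (1, 1)) for x in inputs)]
assert and_units == [0]   # unit 0 is the AND circuit
```

Real circuit analysis works on learned weights at vastly larger scale, but the stance is the same: the failure or success of a behavior is located in identifiable internal structure.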
Alternative Computing Architectures
Hyperdimensional Computing explores robust computation in high-dimensional vector spaces. Neuromorphic Computing studies chips that spike, adapt, and operate more like nervous tissue than conventional processors.
Together, these paths show why the future of computation may not be bigger versions of the same architecture. Brains are efficient because they compute differently.
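The robustness claim behind hyperdimensional computing is easy to demonstrate. A minimal sketch (parameters and names are my assumptions): random bipolar vectors in ten thousand dimensions are nearly orthogonal, so a stored vector survives heavy corruption and is still recovered by similarity search against the codebook.

```python
# Hyperdimensional robustness sketch: flip 30% of a hypervector's entries
# and it remains closest to its original among unrelated random vectors.
import random

random.seed(0)
DIM = 10_000

def rand_hv():
    """Random bipolar hypervector in {-1, +1}^DIM."""
    return [random.choice((-1, 1)) for _ in range(DIM)]

def similarity(a, b):
    """Normalized dot product; near 0 for unrelated random hypervectors."""
    return sum(x * y for x, y in zip(a, b)) / DIM

codebook = {name: rand_hv() for name in ("apple", "boat", "cloud")}

# Corrupt 'apple' by flipping 30% of its coordinates.
noisy = list(codebook["apple"])
for i in random.sample(range(DIM), k=3_000):
    noisy[i] = -noisy[i]

best = max(codebook, key=lambda name: similarity(noisy, codebook[name]))
assert best == "apple"   # recovered despite 30% corruption
```

Graceful degradation of this kind, holographic rather than address-based storage, is one reason these architectures behave more like nervous tissue than like conventional memory.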
Retrieval And Agents
Graph RAG moves retrieval beyond naive vector similarity into structured knowledge. Test-Time Compute Scaling studies the new economics of inference: when it is better to spend more compute thinking at inference time than to train a larger model. Active Inference Applied translates surprise-minimizing agents into buildable systems.
These series are the applied edge of the computation hub. They ask how artificial systems can search, reason, plan, and update without losing coherence.
Why Computation Belongs Here
The biology section shows coherence in wetware. Computation shows coherence in formal systems, learned representations, hardware, and agents. The substrate changes, but the central question remains: what lets a system preserve useful structure while continuing to adapt?