Computation

Coherence in artificial systems: from category theory to neuromorphic hardware.

Silicon is learning to cohere. Not through magic — through math we're only now understanding. Neural networks aren't black boxes if you know where to look. Category theory isn't abstract nonsense — it's the compositional structure that makes learning possible.

Applied Category Theory
The new mathematical language for compositional systems. Functors, morphisms, string diagrams. Why this abstract math is eating machine learning, physics, and linguistics.
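A toy sketch, not any particular library's API: the point of functoriality is that mapping a composite morphism is the same as composing the mapped morphisms. The list functor stands in for any structured container; all names here are illustrative.

```python
from typing import Callable, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def compose(g: Callable[[B], C], f: Callable[[A], B]) -> Callable[[A], C]:
    """Morphism composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

def fmap(f: Callable[[A], B]) -> Callable[[list[A]], list[B]]:
    """The list functor's action on morphisms: lift f to act on whole lists."""
    return lambda xs: [f(x) for x in xs]

double = lambda x: x * 2
describe = lambda x: f"value={x}"
xs = [1, 2, 3]

# Functor law: mapping the composite equals composing the mapped arrows.
assert fmap(compose(describe, double))(xs) == compose(fmap(describe), fmap(double))(xs)
```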
Mechanistic Interpretability
Reading the minds of AI systems. Superposition, circuits, sparse autoencoders. What we're learning about coherence by looking inside neural networks.
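A minimal numpy sketch of the sparse-autoencoder objective behind much of this work, under illustrative assumptions: the layer sizes, initialization, and L1 coefficient are made up, and a real SAE is trained by gradient descent on millions of model activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_dict = 64, 512                        # activation width and dictionary size (illustrative)
W_enc = rng.normal(0, 0.02, (d_model, d_dict))
W_dec = rng.normal(0, 0.02, (d_dict, d_model))
b_enc = np.zeros(d_dict)

def sae_forward(x, l1_coeff=1e-3):
    """Encode activations into an overcomplete feature basis, ReLU for sparsity,
    decode, and return the reconstruction-plus-sparsity loss."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)       # sparse feature activations
    x_hat = f @ W_dec                            # reconstruction of the original activations
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.abs(f).sum(axis=-1).mean()
    return f, x_hat, recon + sparsity

acts = rng.normal(size=(8, d_model))             # stand-in for a batch of model activations
features, recon, loss = sae_forward(acts)
```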
Hyperdimensional Computing
Computing in 10,000 dimensions. Kanerva's insight that high-dimensional vectors might be how brains actually compute. Efficient, robust, and weirdly biological.
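A sketch of the three core operations, assuming bipolar hypervectors and made-up concept names. At 10,000 dimensions, random vectors are almost orthogonal, which is what makes the query at the end work.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000                                    # Kanerva-scale hypervector dimensionality

def hv():
    """Random bipolar hypervector; random pairs are nearly orthogonal at this dimension."""
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    """Binding (element-wise multiply): associates two items; result resembles neither."""
    return a * b

def bundle(*vs):
    """Bundling (majority vote): superposes items; result resembles each ingredient.
    Ties fall to zero here; bundling an odd number of vectors avoids that."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product: ~0 for unrelated vectors, ~1 for identical ones."""
    return float(a @ b) / DIM

# Encode the record {colour: red, shape: square}, then query the colour back.
colour, red, shape, square = hv(), hv(), hv(), hv()
record = bundle(bind(colour, red), bind(shape, square))
assert sim(bind(record, colour), red) > 0.3     # unbinding recovers a noisy copy of 'red'
assert sim(bind(record, colour), square) < 0.1  # and not the unrelated filler
```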
Neuromorphic Computing
Chips that spike like neurons. Intel Loihi, event-based sensing, liquid neural networks. Why brain-inspired hardware will be 1000x more efficient than GPUs.
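The basic unit these chips implement is the leaky integrate-and-fire neuron: the membrane voltage leaks, integrates input, and emits a discrete spike event when it crosses threshold. A small sketch with illustrative constants; actual neuromorphic hardware runs this dynamics in parallel, event-driven silicon rather than a Python loop.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: voltage decays toward rest, integrates input,
    and fires a spike (then resets) whenever it crosses threshold."""
    v, spike_times = 0.0, []
    for t, i_t in enumerate(input_current):
        v += (dt / tau) * (-v + i_t)            # leaky integration of the input current
        if v >= v_thresh:
            spike_times.append(t * dt)          # output is a sparse list of spike events
            v = v_reset
    return spike_times

# Constant drive above threshold yields a regular spike train.
spikes = lif_neuron(np.full(200, 1.5))
```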
Graph RAG
Knowledge graphs for AI retrieval. Beyond naive vector search to structured reasoning. How to give AI systems actual knowledge, not just pattern-matching.
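A toy sketch of the retrieval half, with a hand-built adjacency-list graph and made-up entities: instead of pulling the nearest text chunks, you expand seed entities along typed edges and hand the model explicit triples. In practice the graph comes from entity and relation extraction, and the seeds come from linking the query to nodes.

```python
# Toy knowledge graph: subject -> list of (relation, object) edges.
graph = {
    "Loihi": [("made_by", "Intel"), ("is_a", "neuromorphic chip")],
    "Intel": [("produces", "Loihi")],
    "neuromorphic chip": [("inspired_by", "spiking neurons")],
}

def retrieve_subgraph(seeds, hops=2):
    """Expand seed entities along graph edges and return structured triples
    for the prompt, rather than isolated text chunks."""
    frontier, triples = set(seeds), set()
    for _ in range(hops):
        next_frontier = set()
        for subj in frontier:
            for rel, obj in graph.get(subj, []):
                triples.add((subj, rel, obj))
                next_frontier.add(obj)
        frontier = next_frontier
    return sorted(triples)

# Serialize the retrieved triples as explicit facts in the model's context.
facts = [f"{s} {r} {o}" for s, r, o in retrieve_subgraph(["Loihi"])]
```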
Test-Time Compute Scaling
The new scaling law. Why thinking harder beats training bigger. From o1 to o3 and the economics of inference.
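One of the simplest ways to buy accuracy with inference compute is self-consistency: sample many candidate answers and keep the majority. A sketch in which `sample_answer` is a hypothetical stand-in for one stochastic pass through a reasoning model.

```python
from collections import Counter
import random

def solve_with_more_compute(prompt, sample_answer, n_samples=16):
    """Self-consistency: spend n_samples inference passes and return the
    majority-vote answer plus its agreement rate."""
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n_samples

# Dummy sampler standing in for a model that is right most of the time.
noisy_model = lambda prompt: random.choice(["408", "408", "408", "398"])
answer, agreement = solve_with_more_compute("What is 17 * 24?", noisy_model)
```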
Active Inference Applied
From theory to code. Generative models, expected free energy, message passing. How to actually build agents that minimize surprise.
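A minimal sketch of the central quantity, assuming a discrete generative model with made-up numbers: expected free energy splits into risk (predicted observations diverging from preferred ones) and ambiguity (expected uncertainty of the likelihood mapping), and the agent picks the policy that minimizes their sum.

```python
import numpy as np

def expected_free_energy(q_s, A, log_c):
    """Expected free energy of one policy.
    q_s:   predicted state distribution under the policy, shape (n_states,)
    A:     likelihood P(o|s), shape (n_obs, n_states)
    log_c: log prior preferences over observations, shape (n_obs,)"""
    q_o = A @ q_s                                                    # predicted observations
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - log_c))               # KL[Q(o) || preferences]
    ambiguity = -np.sum((A * np.log(A + 1e-16)).sum(axis=0) * q_s)   # expected likelihood entropy
    return risk + ambiguity

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                       # P(o|s): reasonably precise sensing
log_c = np.log(np.array([0.8, 0.2]))             # the agent prefers observation 0
policies = {"stay": np.array([0.2, 0.8]),        # predicted states under each policy
            "move": np.array([0.9, 0.1])}
best = min(policies, key=lambda name: expected_free_energy(policies[name], A, log_c))
```

Here "move" wins: it makes the preferred observation likely without adding ambiguity, which is the sense in which the agent minimizes expected surprise.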
The Intelligence of Energy
The hub for the compute-efficiency and energy-constraints series. Why thinking costs joules, and what that means for AI and biology.
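One concrete anchor for "thinking costs joules" is Landauer's bound, the thermodynamic floor on erasing a bit. The comparison below is a back-of-the-envelope sketch, not a measurement.

```python
from math import log

K_B = 1.380649e-23                      # Boltzmann constant, J/K
T = 300.0                               # room temperature, K

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2 joules.
landauer_j_per_bit = K_B * T * log(2)   # ~2.9e-21 J
print(f"Thermodynamic floor at {T:.0f} K: {landauer_j_per_bit:.2e} J per erased bit")
# Practical digital logic sits many orders of magnitude above this floor,
# which is where the headroom for more efficient computation comes from.
```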