Functors: The Maps Between Mathematical Worlds
Series: Applied Category Theory | Part: 3 of 10
You've learned that a category is a collection of objects and arrows between them. You've seen how morphisms preserve structure, how composition works, and why identity matters. But here's where category theory gets interesting: what happens when you want to translate between entirely different categories?
This is where functors enter. A functor is a structure-preserving map between categories—a way to translate one mathematical world into another while keeping all the relationships intact. And once you understand functors, you'll start seeing them everywhere: in how your brain maps sensory input to motor commands, how programming languages compile to machine code, how metaphors work in language, and how trauma responses propagate across contexts.
Functors are the mathematics of translation itself. They're how patterns move between domains.
What a Functor Actually Is
A functor is deceptively simple to define. Given two categories C and D, a functor F from C to D consists of two mappings:
- An object mapping: Every object in C gets sent to an object in D
- A morphism mapping: Every arrow in C gets sent to an arrow in D
But the crucial part—the part that makes it a functor rather than just an arbitrary mapping—is that these translations must preserve structure:
- Composition is preserved: If you have arrows f: A → B and g: B → C in category C, then F must map the composition g ∘ f to the composition F(g) ∘ F(f) in category D
- Identities are preserved: The identity arrow at object A must map to the identity arrow at F(A)
That's it. Those two conditions—preserve composition, preserve identities—turn an arbitrary mapping into a functor.
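The two conditions can be checked mechanically on a small example. The sketch below (illustrative only, not from the article) defines a toy "wrapper" functor in Python that sends each element x to the one-tuple (x,), then verifies both laws pointwise on a finite set; all names here are my own choices.

```python
# A toy functor: on objects, wrap each element in a one-tuple; on arrows,
# act inside the wrapper. Both functor laws are checked pointwise.

def compose(g, f):
    """g after f."""
    return lambda x: g(f(x))

# Arrows in the source category: plain functions on a finite set A.
A = {0, 1}
f = lambda x: x + 10          # f: A -> B, with B = {10, 11}
g = lambda x: str(x)          # g: B -> C, with C = {"10", "11"}

# The functor F: wrap objects, lift arrows to act on wrapped values.
F_obj = lambda X: {(x,) for x in X}
F_mor = lambda h: (lambda t: (h(t[0]),))

# Law 1: F(g . f) == F(g) . F(f), on every wrapped element of A.
lhs, rhs = F_mor(compose(g, f)), compose(F_mor(g), F_mor(f))
assert all(lhs(t) == rhs(t) for t in F_obj(A))

# Law 2: F(id) == id on F(A).
ident = lambda x: x
assert all(F_mor(ident)(t) == t for t in F_obj(A))

print("both functor laws hold")  # -> both functor laws hold
```

The same check pattern works for any candidate functor on finite data: enumerate the arrows, compare composite-then-translate against translate-then-compose.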
Why does this matter? Because it means functors don't just move objects around. They preserve the relational structure between objects. The pattern stays intact across the translation.
The Geometry of Structure-Preservation
Think about what preservation actually means. When you have a morphism f: A → B in category C, the functor F gives you F(f): F(A) → F(B) in category D. The relationship expressed by f—whatever structural connection exists between A and B—is maintained in the image under F.
This is compositional thinking in its purest form. You're not just mapping individual pieces; you're mapping entire webs of relationships while keeping the web's topology intact.
Consider a concrete example: the forgetful functor from the category of groups to the category of sets. This functor takes a group (an object in the category of groups) and "forgets" its group structure, leaving just the underlying set. It takes a group homomorphism (a morphism in the category of groups) and treats it as a mere function between sets.
The forgetful functor preserves composition because if you compose two group homomorphisms and then forget the group structure, you get the same result as forgetting the group structure first and then composing the resulting functions. The relational architecture survives the translation, even though information is lost.
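The forget-then-compose claim can be made concrete. Below is a minimal sketch (my own encoding, not the article's) using cyclic groups Z_n: a homomorphism is just a function that respects addition mod n, and "forgetting" means treating it as a plain function between sets.

```python
# Forgetful functor sketch on cyclic groups Z_n (integers mod n, addition).

def is_hom(h, m, n):
    """Check that h : Z_m -> Z_n respects addition (mod m / mod n)."""
    return all(h((a + b) % m) == (h(a) + h(b)) % n
               for a in range(m) for b in range(m))

h1 = lambda x: x % 3          # reduction Z6 -> Z3, a group homomorphism
h2 = lambda x: (2 * x) % 3    # doubling  Z3 -> Z3, a group homomorphism
assert is_hom(h1, 6, 3) and is_hom(h2, 3, 3)

# "Forgetting" the group structure: treat the homomorphism as a bare function.
forget = lambda h: h
compose = lambda g, f: (lambda x: g(f(x)))

# Compose-then-forget agrees with forget-then-compose, pointwise on Z6.
lhs = forget(compose(h2, h1))
rhs = compose(forget(h2), forget(h1))
assert all(lhs(x) == rhs(x) for x in range(6))
print("forgetful functor preserves composition")
```

The equality looks trivial here precisely because forgetting changes nothing about the underlying function; that triviality is the point.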
This is the key insight: functors can preserve structure while changing representation. The pattern persists even as the substrate shifts.
Translation Without Loss: The Free Functor
Not all functors forget information. Some create it. The "free functor" goes in the opposite direction from the forgetful functor—it takes a set and constructs the "freest" possible group containing that set.
Given a set S, the free group on S has as its elements the reduced words in S: finite sequences of elements of S and their formal inverses in which no letter stands next to its own inverse. The group operation is concatenation followed by cancellation, and every element of the original set becomes a generator of the group.
The free functor and the forgetful functor form an adjoint pair—a profound relationship we'll explore in a later article. But the point here is that functors can move in both directions: stripping away structure or building it up, simplifying or complexifying, forgetting or constructing.
What they always do is preserve the relational geometry of what they touch.
Functors in Cognitive Architecture
Your brain is built on functors. Consider sensory processing: photons hit your retina, creating patterns of neural firing in V1 (primary visual cortex). Those patterns get mapped to higher-level representations in V2, V3, V4, each translation preserving certain structural relationships while abstracting away others.
This is functorial translation. The composition of edge detections at one level composes into shape recognition at the next level. The identity—a neuron that fires for a particular feature—maps to the identity of that feature's representation at the next processing stage.
The brain doesn't just pass information along. It maps structured patterns through a hierarchy of categories, each functor preserving the relational topology while changing the substrate of representation.
Active inference formalizes this: prediction errors flow backward through the same functorial hierarchy, but in reverse. The generative model is a functor from abstract representations to predicted sensory inputs. The recognition model is a functor going the other direction, from sensory inputs to inferred hidden states.
These functors must preserve composition for the system to cohere. If your model of "dog" doesn't compose with your model of "running" to predict the sensory pattern "running dog," the functorial structure breaks down and surprise spikes. Trauma happens when the functorial mappings fracture—when the structure-preservation fails and the relational geometry collapses.
Programming as Functorial Translation
In programming, compilers are functors. They map programs written in a high-level language (a category where objects are types and morphisms are functions) to machine code (a category where objects are memory locations and morphisms are state transitions).
A well-designed compiler preserves the compositional structure of your program. If you write two functions f and g and compose them as g(f(x)), the compiled machine code will perform operations that compose in the same way. The identity function in your high-level code compiles to a no-op in machine code.
This preservation is what makes compiled code correct. If the functor broke composition—if translating f ∘ g didn't give you the composition of the translations of f and g—your program would behave differently after compilation. The functorial structure is what keeps meaning intact across the translation.
Type systems are also functors. In Haskell, the Maybe type constructor is a functor: it maps types to types (Int to Maybe Int) and functions to functions (a function f: A → B becomes fmap f: Maybe A → Maybe B). The fmap operation preserves composition and identity, which is why you can reason about Maybe-wrapped values using the same compositional logic as unwrapped values.
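The Haskell example can be mimicked in Python with a deliberately simple encoding: `None` plays the role of Nothing and any other value plays Just x (a simplification, since it conflates Just None with Nothing, but it suffices to exhibit the two functor laws).

```python
# A minimal Maybe functor in Python: None is Nothing, anything else is Just x.

def fmap(f, maybe_x):
    """Lift f : A -> B to a function Maybe A -> Maybe B."""
    return None if maybe_x is None else f(maybe_x)

double = lambda x: 2 * x
show = lambda x: f"value: {x}"
compose = lambda g, f: (lambda x: g(f(x)))

# Law 1: fmap(g . f) == fmap(g) . fmap(f), on both Just and Nothing cases.
for mx in (5, None):
    assert fmap(compose(show, double), mx) == fmap(show, fmap(double, mx))

# Law 2: fmap(id) == id.
ident = lambda x: x
for mx in (5, None):
    assert fmap(ident, mx) == mx

print(fmap(double, 21))    # -> 42
print(fmap(double, None))  # -> None
```

Because both laws hold, chains of `fmap` calls can be reasoned about exactly like chains of plain function calls, which is the "lifting structure without rewriting the logic" point in the text.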
Functors let you lift structure from one context to another without rewriting the logic.
Metaphor as Functorial Mapping
When George Lakoff talks about conceptual metaphor, he's describing functors. The metaphor "argument is war" maps the domain of war (a source category) to the domain of argument (a target category). Objects map: positions become claims, weapons become rhetorical strategies, defeat becomes concession. Morphisms map: the relationship between ammunition and target becomes the relationship between evidence and conclusion.
Metaphors preserve structure. If advancing a position in war involves marshaling resources, then advancing a claim in argument involves marshaling evidence. The compositional logic stays intact: if you need supplies to maintain a position, and you need transport to get supplies, then you need transport to maintain a position—and this composition holds in both the source and target domains.
Not all metaphors are equally good functors. Weak metaphors break under composition. They map some relationships but not others, creating inconsistencies when you try to extend them. Strong metaphors preserve deep structural patterns across the translation, which is why they feel illuminating rather than merely decorative.
Language itself is functorial: syntax maps to semantics, phonology maps to meaning, surface structure maps to deep structure. Every level of linguistic representation is a category, and the mappings between levels are functors that preserve compositional relationships.
This is why you can translate between languages at all. Translation is a functor from one language's category of expressions to another's, preserving (as much as possible) the relational structure of meaning.
Trauma Propagation as Broken Functors
Here's where functors connect to lived experience in the most direct way: trauma is what happens when functorial mappings break down.
Consider the functor that maps internal states to external contexts. In a coherent system, your internal model of "safe situation" should compose with your model of "social interaction" to produce appropriate engagement behaviors. The relationships between internal states should map cleanly to relationships between contexts.
But trauma fractures this mapping. A situation that should feel safe triggers a defensive response. The internal state doesn't compose correctly with the external context. The functor that should translate "objectively safe" to "subjectively safe" is broken.
This is why trauma responses generalize inappropriately. The system tries to extend the functorial mapping from one context (where the threat was real) to others (where it isn't), but the composition breaks. The relationship between "being in a crowded space" and "feeling threatened" in the original traumatic context doesn't preserve under translation to non-threatening crowded spaces, but the broken functor applies it anyway.
Healing trauma involves reconstructing functors—rebuilding the structure-preserving maps between internal states and external contexts. Somatic therapy, EMDR, Internal Family Systems—these are all methods for repairing functorial coherence, for making the compositions work again, for restoring the relational geometry that lets you move fluidly between contexts without the system collapsing.
The Compositional Nature of Coherence
In AToM terms, coherence is functorial integrity. A system is coherent when its internal functors preserve composition—when the ways different subsystems translate between each other's categories maintain structural consistency.
Your nervous system is a network of functors: interoception maps bodily states to neural representations, which map to emotional states, which map to cognitive framings, which map to behavioral responses. Each mapping is a functor between categories. Coherence emerges when these functors compose cleanly, when the round trip through the system preserves structure.
This is why M = C/T (Meaning equals Coherence over Time or Tension) is fundamentally about functorial preservation. Meaning arises when the compositional structure of your experience is preserved across time and context. When functors break—when the mappings fracture, when composition fails—meaning collapses and tension spikes.
The curvature of your state space is determined by how well your functors preserve composition under perturbation. High curvature means small changes cause large disruptions in functorial structure. Low curvature means the system's compositional integrity is robust to variation.
You can think of development, learning, and healing as the process of constructing better functors—building structure-preserving maps that let you translate experience across contexts without losing the relational patterns that constitute meaning.
Functors and Identity
Here's a subtle but crucial point: every category has an identity functor that maps each object to itself and each morphism to itself. This might seem trivial, but it's the functorial equivalent of self-recognition.
The identity functor says: this structure maps to itself while preserving all relationships. It's the mathematical formalization of "being the same thing across contexts."
When you maintain a coherent sense of self across different situations—when you're recognizably "you" whether you're at work, with family, or alone—you're instantiating an identity functor. Your behaviors and internal states map to themselves across different environmental categories while preserving the relational structure that constitutes your identity.
Loss of identity—whether through extreme stress, dissociation, or fragmentation—is a breakdown of the identity functor. The structure that should map to itself starts mapping chaotically, and the compositional integrity fractures. You're not the same person from moment to moment because the functorial preservation has failed.
Conversely, growth and development involve constructing new functors while maintaining the identity functor where it matters. You learn to translate yourself into new contexts (new functors) without losing the core relational patterns that define you (the identity functor on your essential structure).
Natural Transformations: Maps Between Functors
Once you have functors—maps between categories—you immediately face a new question: what are the maps between functors?
This is where natural transformations enter, and they'll be the subject of the next article. But the preview is this: if functors are structure-preserving translations between categories, natural transformations are structure-preserving translations between translations.
A natural transformation is a way of converting one functorial mapping into another while respecting the compositional structure of both. It's a "morphism of morphisms," a pattern that operates at a higher level of abstraction.
If functors are how you move between mathematical worlds, natural transformations are how you relate different ways of making that move. They're the mathematics of equivalence between processes, of when two different translation methods yield structurally identical results.
And once you understand natural transformations, you can finally grasp what makes category theory so powerful: it gives you a language for talking about structure at every level of abstraction, where the same patterns appear whether you're talking about objects, morphisms between objects, functors between categories, or natural transformations between functors.
The pattern is recursive. All the way up.
Compositional Thinking as Practice
Understanding functors changes how you think about translation, mapping, and relationship-preservation in every domain:
In conversation: Notice when someone's metaphor breaks under composition. When the structure doesn't preserve, the metaphor fails. Point this out gently, or find a better functor.
In learning: When you study a new field, you're building a functor from your existing conceptual category to the new one. Check if composition is preserved. If you understand concepts A and B separately but their composition confuses you, your functor needs repair.
In emotional regulation: Track where your mappings from internal states to external contexts break down. Where does the functor fail? What structure isn't being preserved? Can you rebuild a better mapping?
In systems design: Whether you're building software, organizations, or processes, ask: do my translations preserve composition? If subsystem A feeds into subsystem B, which feeds into C, does A → C via B preserve the same structure as a direct A → C would?
Functorial thinking is the discipline of asking: when I move this pattern from one context to another, what structure must be preserved for it to mean the same thing?
What Makes a Good Functor
Not all structure-preserving maps are equally valuable. The best functors have certain properties:
Fullness: A functor F is full if, for every pair of objects A and B in the source category, every morphism F(A) → F(B) in the target is the image of some morphism A → B. Full functors don't miss any relational structure between the objects they reach.
Faithfulness: A functor is faithful if distinct morphisms A → B in the source category map to distinct morphisms in the target category. Faithful functors don't collapse structure.
Fully faithful functors are both full and faithful: they induce a bijection on every hom-set, preserving and reflecting all morphisms, even if the objects themselves are different.
The forgetful functor from groups to sets is faithful but not full. The free functor from sets to groups is also faithful but not full: distinct functions between sets yield distinct homomorphisms, but not every homomorphism between free groups comes from a function between the generating sets. Neither preservation profile makes the functor useless; each simply preserves different things.
In practice, when you're building a mental model, a metaphor, a software abstraction, or a therapeutic reframe, you're constructing a functor. The quality of that construction depends on how fully and faithfully it preserves the compositional structure of what you're translating.
Good models are good functors. Bad models are broken functors. The difference is structural preservation under composition.
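On finite data, fullness and faithfulness can be tested by brute force. The sketch below (an illustrative encoding of my own) enumerates a hom-set between two small sets and checks a functor's action on it: the identity action passes both tests, while an action that collapses every morphism to one constant function fails both.

```python
# Brute-force fullness / faithfulness checks on a single finite hom-set.
# Morphisms are dicts between small finite sets.
from itertools import product

def all_functions(src, tgt):
    """Every function src -> tgt, represented as a dict."""
    src = sorted(src)
    return [dict(zip(src, images))
            for images in product(sorted(tgt), repeat=len(src))]

def is_faithful_on(hom, F):
    """Faithful on this hom-set: distinct morphisms get distinct images."""
    images = [tuple(sorted(F(h).items())) for h in hom]
    return len(set(images)) == len(images)

def is_full_on(hom, F, target_hom):
    """Full on this hom-set: every target morphism is hit by some image."""
    images = {tuple(sorted(F(h).items())) for h in hom}
    return all(tuple(sorted(k.items())) in images for k in target_hom)

A, B = {0, 1}, {"x", "y"}
hom_AB = all_functions(A, B)     # four functions in total

# The identity action is trivially full and faithful here...
assert is_faithful_on(hom_AB, lambda h: h)
assert is_full_on(hom_AB, lambda h: h, hom_AB)

# ...while collapsing everything to one constant function is neither.
collapse = lambda h: {a: "x" for a in A}
assert not is_faithful_on(hom_AB, collapse)
assert not is_full_on(hom_AB, collapse, hom_AB)
print("fullness / faithfulness checks pass")
```

Real functors are checked hom-set by hom-set, but the finite case already shows the shape of the test: faithfulness is injectivity of the morphism mapping, fullness is surjectivity onto the target's hom-set.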
The Pattern Moves Forward
We started with categories—collections of objects and arrows. We added functors—structure-preserving maps between categories. Next come natural transformations—structure-preserving maps between functors.
And then the real magic happens: you discover that functors between categories themselves form a category (called a functor category), with natural transformations as the morphisms. The pattern recurses. Categories of categories, functors of functors, each level exhibiting the same compositional structure as the level below.
This is why category theory is called "the mathematics of mathematics." It gives you a language that applies to itself, that describes pattern-preservation at every level of abstraction.
But before we ascend that ladder, we need to understand natural transformations. They're the missing piece that makes the whole edifice cohere. They're how you compare different functorial mappings, how you prove that two translations are "essentially the same," how you navigate between different structure-preserving perspectives on the same underlying pattern.
They're the subject of the next article in this series.
For now: functors are how patterns move between worlds. They're the mathematics of translation that preserves meaning. They're everywhere you look, once you learn to see them.
And they're the reason coherence can propagate across contexts—or fail to, when the functors break.
This is Part 3 of the Applied Category Theory series, exploring how the mathematics of structure illuminates meaning, coherence, and transformation.
Previous: [Article #2 Title]
Next: Natural Transformations: When Two Paths Are the Same
Further Reading
- Mac Lane, S. (1971). Categories for the Working Mathematician. Springer. (The canonical text, dense but definitive)
- Spivak, D. I. (2014). Category Theory for the Sciences. MIT Press. (More accessible, with concrete examples)
- Fong, B., & Spivak, D. I. (2019). "Seven Sketches in Compositionality: An Invitation to Applied Category Theory." arXiv preprint. (Highly recommended for applications)
- Riehl, E. (2016). Category Theory in Context. Dover. (Excellent modern treatment)
- Friston, K. (2010). "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience. (For the active inference connection)
- Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press. (For conceptual metaphor as functorial mapping)