We Are All Hive Minds
Intelligence without a central planner
It’s common to think that intelligence and decision-making require a central thinker. We humans do smart stuff all the time, and that’s because each of us is (seemingly) a centralized thinker, able to weigh different options and constraints holistically to find solutions. Groups work similarly: while we occasionally hold referendums, we usually appoint leaders to make decisions for us, electing representatives or hiring managers.
Yet many intelligent systems work without a central authority. Eusocial insects like ants are a canonical example of swarm intelligence, where individual ants making local decisions collectively solve problems facing the whole colony.
For example, take the foraging of desert harvester ants. These ants scavenge for seeds that have been blown into their environment by the wind. But foraging in a desert is hard—being out in the sun in low humidity conditions takes a lot of water, which they can only replenish through eating seeds. The colony needs to balance water loss against potential gain. Having lots of ants out foraging means losing water quickly, and sometimes foraging conditions just aren’t very good. But other times, there might be a copious amount of seeds recently blown into the area, so sending out lots of ants to forage will be worth the harvest.
How does the colony decide how many ants to send out? It’s easy to think maybe some central planner does it—the colony has a queen, after all. But the queen isn’t really the leader of the colony, she’s more like the colony’s reproductive organ. She stays deep in the colony, away from where all the foraging ants are making decisions.
Instead, it’s solved through a feedback loop of collective behavior. In the morning, a fixed group of ants (patrollers) initiates foraging. They go out, and when they find a seed, they return. Other potential foragers hang out in the cool comfort of the nest tunnels. If one of those potential foragers keeps bumping into ants returning from outside, that’s a signal that conditions must be good: foragers keep finding things worth bringing back to the nest. Better get out there to help!
The ant is performing a sort of evidence accumulation. The frequency of ants returning is evidence of foraging conditions. Each interaction with a returning ant is a bit of information, and if enough interactions occur in a short enough period of time, that means conditions are “good enough”.
In technical terms, the ant can be modeled as a leaky integrator—each interaction adds evidence that decays over time, until the evidence hits a certain threshold and the ant reacts. This sort of leaky integrator model is often used to model neurons as well—neurons accumulate electrical charge from their inputs until the combined signal crosses a threshold, and the neuron fires.
The threshold for “good enough” can change based on other conditions, like the humidity. The ant knows the local weather conditions and so can adjust how many forager run-ins she (all workers are female) needs to have before venturing out. This local threshold adjustment makes the system flexible and globally adaptive.
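To make the leaky-integrator idea concrete, here’s a toy sketch in Python. Everything here is illustrative: the threshold, decay rate, and humidity scaling are numbers I made up, not measurements from real ants.

```python
def will_forage(interactions, threshold=5.0, decay=0.9):
    """Leaky integrator: each encounter with a returning forager adds
    evidence, which leaks away between encounters. The ant heads out
    once accumulated evidence crosses the threshold.

    `interactions` is a list of 0/1 flags, one per time step
    (1 = bumped into a returning forager)."""
    evidence = 0.0
    for met_forager in interactions:
        evidence = decay * evidence + met_forager
        if evidence >= threshold:
            return True
    return False

def humidity_adjusted_threshold(base=5.0, humidity=0.5):
    """Drier air -> higher bar before risking water loss outside.
    (Purely illustrative scaling; humidity runs from 0 to 1.)"""
    return base * (2.0 - humidity)

busy_morning = [1] * 20            # a returning forager every step
quiet_morning = [1, 0, 0, 0] * 5   # sparse encounters
print(will_forage(busy_morning))   # True: frequent encounters
print(will_forage(quiet_morning))  # False: evidence leaks away first
```

With a decay of 0.9 and a threshold of 5, sparse encounters leak away faster than they accumulate, so the ant stays in; a steady stream of returning foragers pushes her over the threshold within a few steps.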
This forms a closed-loop system, allowing the forager ant colony to solve a colony-wide problem (how many foragers do we send out) by having ants make decisions locally. Each ant is doing something simple, but the colony-wide behavior solves a hard problem.
This is just one example of the collective intelligence of ants and other eusocial insects, which also work together on problems like deciding where to start a new nest.
The intelligent decision-making of ant colonies is neat, but perhaps not surprising—a bunch of little brains doing something collectively smart makes some sense. But there’s an example of similarly smart system-level decisions arising from local rules without any brains involved at all: slime molds.
Slimy Decisions Without a Brain
Physarum polycephalum is a weird creature. It doesn’t have any membranes separating cells within it (though it does have many cell nuclei), so it is technically one giant cell. It forms networks of interconnected veins as it explores its environment, looking for food sources like bacteria or fungi. In good conditions, it can grow to over 900 cm².
Why am I talking about slime molds? Because they are another example of decision-making happening via local rules being followed.
The different veins of a slime mold branch out in the environment. When one hits food, it triggers biochemical mechanisms that shuttle more cytoplasm to that vein. This thickens the vein—at the expense of all other veins. The thickening is more extreme if more food is found.
This forms a feedback loop. If a vein isn’t finding food, and food is being found elsewhere, it will shrink down (negative feedback). If a vein is finding lots of food, it grows larger (positive feedback).
These simple feedback rules allow the slime mold to find the shortest paths to food—even navigating mazes in labs. Local rule-following by the veins leads to global problem-solving.
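The vein feedback loop can be caricatured in a few lines of Python. This is not a real Physarum model (actual models track flow through tubes); it just shows how a conserved resource plus food-proportional reinforcement makes productive veins grow at the others’ expense.

```python
def redistribute(thicknesses, food_found, rate=0.5):
    """One feedback step: each vein's thickness moves toward its share
    of the food being found. Total cytoplasm is conserved, so one vein
    growing necessarily means the others shrink."""
    total = sum(thicknesses)
    total_food = sum(food_found) or 1.0
    return [t + rate * (total * f / total_food - t)
            for t, f in zip(thicknesses, food_found)]

veins = [1.0, 1.0, 1.0]   # three equally thick veins
food = [5.0, 1.0, 0.0]    # vein 0 finds lots of food, vein 2 none
for _ in range(10):
    veins = redistribute(veins, food)
print([round(v, 2) for v in veins])   # [2.5, 0.5, 0.0]
```

The vein finding nothing withers toward zero while the productive vein fattens, and the total cytoplasm never changes—the tradeoff is built into the conservation rule, not decided by anything.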
Other experiments have shown slime molds can make tradeoffs between food quality and risks—specifically, they avoid the risk of light (which can damage the cell), but can be coaxed into the light if the patch of food is high quality enough.
The slime mold doesn’t have a brain or nerves. There’s no central decision-maker within it, only complex biochemical reactions that integrate different forms of information. These biochemical mechanisms aren’t fully understood, but we know some of what they do: veins that detect food get more cytoplasm, those encountering light get less. The amount of cytoplasm within the cell is fixed, so more going to one vein means less in another. The feedback loop means the slime mold will navigate its environment, making tradeoffs.
The Creepy Crawlies in our Heads
Okay, here’s the payoff of all of this: slime molds can make decisions without a brain, but brains themselves don’t have a central decision-maker. Brains don’t have brains.

It’s easy to imagine there’s some place in the brain where “we” reside, some central office of decision-making. But that picture just pushes the problem back a step. If there were one place the decision is made, what would we find inside? Certainly not a little executive with their own little brain making decisions, because if so we would just have to open up their little head and find how the decisions are made in there.
With ant colonies, we took the ant-level point of view of how following local rules solved colony-wide problems. Similarly, with the slime mold, we looked at what an individual vein detects to solve slime-mold-wide problems. With brains, we can do the same thing: take the neuron-level view of how local rule-following solves brain-level, person-level problems.
Just as ants integrate information about their interactions with returning foragers, neurons integrate their synaptic inputs. Neurons act as a leaky integrator and fire when the activity of upstream neurons hits a certain threshold.
Part of what makes the brain so amazing and capable of solving problems much more abstract than those of an ant colony is the flexibility of the local rules neurons follow. Neurons adjust how they connect with other neurons based on the activity they receive. These adjustments allow a population of neurons, without a centralized planner, to respond to patterns in their environment.
One common rule neurons follow is Hebbian learning: cells that fire together, wire together. A neuron that finds itself firing shortly after an upstream neuron strengthens that connection. Why might that be useful? A caveat first: this is all a vast simplification meant to get across a general principle (local rules can find structure), not a unified picture of learning, vision, development, or the brain.
Imagine a neuron, D, receiving input from three upstream neurons, A, B, and C. Each responds to contrast in different patches in the visual field—they become active when a particular area in a scene contains both lightness and darkness.
If the areas A and B respond to are right next to each other, and the area C responds to is further away, an interesting thing happens. Because of the structure of our world—where natural objects have defined “edges” to them—A and B will tend to fire together often. If there is contrast in one area, it is more likely that there is contrast right next to it, because the objects that fill our visual field have edges, and edges have contrast. So the firing of A and B will be correlated, just because what they are responding to is correlated in our environment. Meanwhile, because C is responding to an area further away, it won’t be correlated (or at least, less correlated) with the other two.
Whenever A and B fire together, they’re more likely to drive D to fire with them. Since A and B are correlated, this will happen regularly. Meanwhile, C will most often fire by itself, and be less likely to drive D to fire—it has to drive the activity all by itself.
The result is D becomes something special: an edge detector. No neuron knows that it’s detecting edges, it’s just following local reinforcement rules. It becomes sensitive to a new pattern in the environment, one that A and B on their own are not sensitive to.
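Here’s a deliberately tiny simulation of that story. The input schedules, starting weights, learning rate, and threshold are all invented for illustration; the point is only that a local Hebbian rule, with no knowledge of “edges”, ends up favoring the correlated inputs.

```python
# A and B watch neighboring patches, so an edge drives both at once;
# C watches a distant patch and fires on its own schedule.
A = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1]
B = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1]   # perfectly correlated with A
C = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0]   # uncorrelated with the edge

w = {"A": 0.3, "B": 0.3, "C": 0.3}   # starting synaptic strengths onto D
threshold, lr = 0.5, 0.1

for a, b, c in zip(A, B, C):
    x = {"A": a, "B": b, "C": c}
    drive = sum(w[k] for k in w if x[k])
    if drive >= threshold:                  # D fires
        for k in w:
            if x[k]:
                w[k] = min(1.0, w[k] + lr)  # fired together: strengthen
            else:
                w[k] = max(0.0, w[k] - lr)  # stayed silent: weaken

print(w)   # {'A': 1.0, 'B': 1.0, 'C': 0.0}
```

A and B reliably co-drive D, so their synapses ratchet up; C usually fires when D is silent, and on the occasions D does fire without C, C’s synapse gets knocked down. D ends up listening only to the correlated pair—an edge detector, by accident.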
Slime mold veins are reinforced by finding food, and when they fail to find food, they wither away. Synaptic connections do something similar, but instead of being strengthened when they find food, they are strengthened when they find correlation, structure, patterns.
This is all extremely oversimplified and shouldn’t be taken as anything close to a complete picture of edge detection, the brain, or plasticity. In the actual brain, Hebbian-like learning interacts with inhibitory competition, normalization, pre-structured inputs, and many other mechanisms we have varying levels of understanding of. But what I hope this does is give a flavor of how a simple local rule can lead to self-organization that intelligently picks out structure from our world.
Edge detection is one low-level pattern, but there are patterns in patterns: objects form certain shapes out of their patterns. Different kinds of objects occur with each other, and have different meanings to us and the behavioral responses we can have to them. Neurons acting based on local information find patterns in the patterns in the patterns, creating the complex cognitive representations we use to navigate our world.
Sliming Through the Skull
The amazing pattern-finding talked about above might feel like it comes from the sophisticated wiring already present in the brain—and to some extent, it does. But how does a brain get wired up in the first place?
Our DNA is not a blueprint. It’s more like a recipe or a set of instructions that, when followed, produce a human. But those instructions are context sensitive. Based on various factors of their environment, like the chemical signals they encounter, different genes are activated in the individual cell, producing different proteins that have different effects. As the cells making us up divide, they each get their copy of those instructions, and their different environment leads to different genes being activated, giving rise to different cell types.
The brain itself doesn’t develop as a fully wired-up organ that just slowly unfurls into the mature version.
Neurons are born, migrate to their rough neighborhoods, and then extend axons tipped with growth cones. These growth cones, like slime mold veins, respond to local chemical cues. Some molecules attract them and others repel them. Other cells, following their own instructions based on their local chemical cues, emit these chemicals. With each cell following their own local ruleset, neurons connect, and neural circuits form.
Now, this leads to an odd problem. DNA doesn’t tell every neuron exactly what it should connect to. Neurons in some areas of the brain are tightly constrained in the connections they can make, but others are more promiscuous. It turns out evolution hit on a neat trick to avoid specifying every connection: make more connections than you need, then keep the useful ones and get rid of the rest. This is called synaptic pruning—baby brains go through a process of massively reducing the number of connections they have, sharpening the useful signals and discarding the useless ones.
Of course, evolution (and DNA) aren’t able to judge which connections are useful. There’s no central planner or evaluator. Instead, there needs to be a local rule that neural connections follow.
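One candidate local rule is “use it or lose it”. Here’s a toy sketch (the counts, cutoff, and usage model are arbitrary, and real pruning involves molecular machinery this ignores entirely): overproduce connections, let each synapse tally how often it participates in activity, and prune the laggards.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

n_synapses = 50
# Overproduce: far more connections than will ultimately survive,
# each with a random initial strength.
strengths = [random.random() for _ in range(n_synapses)]

# Local rule: each synapse counts its own participation in activity.
use_counts = [0] * n_synapses
for _ in range(200):                     # 200 rounds of activity
    for i, s in enumerate(strengths):
        if random.random() < s:          # stronger synapses get used more
            use_counts[i] += 1

PRUNE_BELOW = 60                         # rarely used? pruned.
survivors = [i for i in range(n_synapses) if use_counts[i] >= PRUNE_BELOW]
print(f"{len(survivors)} of {n_synapses} synapses survive")
```

No global evaluator decides which synapses matter; each one lives or dies by a statistic it can compute locally.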
Once wired up, the environment of a neuron changes in an important way: the activity of upstream neurons—or lack thereof—becomes part of the local environment a neuron is responding to.
This is where the story of development joins up with the story above about neurons finding patterns. Some of the same mechanisms mentioned above for how neurons pick structure out of the world are also at work in the building of a brain to begin with.
Many aspects of development are deeply dependent on experience. The activity of neurons in the retina affects downstream neurons. If there is no such activity during certain critical phases, the visual cortex doesn’t develop properly. We know this from experiments in which kittens whose eyes were sewn shut during these critical periods were left blind for life.
Not all activity is equal, though. When an animal is raised in an environment containing only horizontal or vertical contours, neurons in its visual cortex come to prefer those orientations, and the animal becomes worse at detecting other orientations.
While there are critical periods where plasticity is stronger and some of the foundational structures are being put down, this process doesn’t simply end when development ends. The brain remains in a continuous feedback loop with the environment. Some researchers have described the visual cortex as in a state of dynamic equilibrium, capable of changing in response to altered visual environments. It is always ready to make changes—it’s the relative stability of features in our environment that keeps it in place.
Where there is instability, our neurons continue to change to accommodate the change in the patterns they’re seeing. If you’re suddenly hearing sounds with a different underlying structure—for example, learning a new language—neurons will change what they respond to through this process of reacting to local rules. If the patterns change, the patterns the neurons find change too, without a central planner needing to rewire the whole shebang.
Control Loops and Emergent Behavior
Each ant in a colony is following simple rules based on the information it has around it. This leads to colony-level behavior that is intelligent, dynamic, and flexible. It solves the problems that the colony has, because each ant performs a simple function that, combined with other ants, creates a robust control loop.
When a slime mold spreads its veins in search of food, it’s a chemical process. Without a nervous system, it reacts to the environment in seemingly intelligent ways. Chemical and mechanical reactions in each vein result in decisions that solve its problems, lead it to food, and leave it better off.
When we take the neuron-level view of the brain, we see a similar story. Each neuron responds based on its local environment: chemical gradients and electrical signals. It isn’t trying to solve the brain-level problem of detecting objects in the environment, instead just following rules that result in pulling out patterns from its direct input.
And yet, from following these local rules, it’s able to solve brain-wide problems of finding the structure and patterns that allow us to navigate the world successfully. Just as ant colonies regulate foraging without a leader and slime molds solve mazes without a nervous system, the human brain builds and adapts itself without a central control.
Just as a colony is not a single ant, we are not an individual neuron. We are instead what the neurons combine and add up to. We are a colony of neurons, acting in concert to produce a robust system that navigates this chaotic world by responding to its structure to solve the problems it faces.
If you enjoyed this, please hit the “Like” ❤️ button, restack, or share this article to help others find it.
If you enjoy Cognitive Wonderland, consider supporting it by becoming a paid subscriber at whatever level feels comfortable for you.
If you’re a Substack writer and have been enjoying Cognitive Wonderland, consider adding it to your recommendations. I really appreciate the support.