Cognitive Wonderland

The Idea of the Brain

A history of neuroscience, the mind/body problem, and whether the brain is a computer, inspired by the book by Matthew Cobb

Tommy Blanchard
Jan 02, 2026

Where is “the seat of thought”? In the modern day, we take it for granted that it’s the brain. But for much of history, we didn’t know that. Thinkers thought we think (polyptoton!) with the heart.

People proposed at various times that the brain might have something to do with this whole thinking thing, but it wasn’t until the early 17th century that there was general agreement among European intellectuals that the brain is the thinking organ.

For most of the history of modern science, we lacked an understanding of the basic principles of the functioning of the nervous system. Matthew Cobb’s book, The Idea of the Brain, takes us through this history and up to the present, even speculating about the future of neuroscience in the final chapter.

Early experiments showed the importance of the nerves—cutting them or tying them off made it so muscles didn’t contract. What wasn’t known was how this was possible. What could be traveling through the nerve to contract the muscle? And how could it move so quickly? Various ideas floated around—“animal spirits” or perhaps hydraulics.

It wasn’t until the late 1700s that Luigi Galvani performed experiments with frog legs, showing that electric current running through nerves caused muscles to contract. These ideas also influenced a young Mary Shelley, becoming part of the inspiration for Frankenstein, and thus (arguably) the start of science fiction as a genre.

There was well over a century between when we figured out that the brain, not the heart, was the seat of thought, and when we figured out that its basic way of functioning used electricity. It was many more years before we figured out the nature of that electrical signaling (ion gradients and pumps)—and research on these mechanisms continues in the present day.

It was another hundred years after Galvani that it was understood that the brain is made out of cells, and it wasn’t until the 1950s that we understood there are small gaps between most neurons (the synaptic cleft), through which chemical signals are sent from one neuron to another.

What was shocking to me about the history was less the long gaps between these discoveries, or their relative recency, than how other elements of neuroscience history fit in. Psychiatrists were attempting to treat schizophrenia with drugs before there was a general understanding that there were neurotransmitters in the brain. Theorists were trying to come up with a grand unified theory of the brain before we understood how brain cells communicate.

Two regions of the brain, Wernicke’s area and Broca’s area, are pointed to as major discoveries in cognitive neuroscience. Briefly: both areas are involved in language. If Wernicke’s area is damaged, a person is still able to speak words—but the words are a meaningless word salad, and the person is unable to understand language or recognize that they are not making sense. If Broca’s area is damaged, on the other hand, language comprehension is fine, but the person has difficulty producing words.

This gives us a classic “double dissociation”, providing evidence that these two processes (fluently producing words versus language comprehension) are functionally distinct.

I had learned all this in an introductory cognitive psychology class as an undergraduate. What I didn’t realize is that Broca and Wernicke made these discoveries in the 1870s—before we had an understanding of basic brain neurophysiology, and more than a century before the advent of fMRI, the gold standard of modern-day neuroimaging (and the source of all those “brain lighting up” images you see).

I hadn’t realized these discoveries about the neuroscience of language occurred when our understanding of the brain was still in its infancy. But it also raises a question: in the modern day, with the benefit of modern technology and foundational knowledge of the biology of the brain, why haven’t we figured out more of the brain? Cobb points to some of the same examples I’ve pointed to in the past: we don’t have a full understanding of the brain of nematodes, which have only 302 neurons, and we haven’t even figured out the 30-neuron circuit that controls lobster stomachs.

The answer, in short, seems to be that the brain is just really complex, and our tools for studying it are worse than you might imagine.

To tackle this complexity, neuroscience has become an enormous field. It encompasses everything from studying the molecular mechanisms that allow ions into a brain cell to isolating the brain regions involved in creative endeavors. Every year, tens of thousands of neuroscientists gather for the annual Society for Neuroscience conference in the USA, with rows upon rows of posters displaying in-progress scientific research. They’re broadly organized by category, and I know from experience that I’m unlikely to understand anything more than a row or two away from my own poster.

Modern neuroscience is big. Cobb breaks down contemporary neuroscience into a few major strands: circuit-based approaches, like connectomes; computational neuroscience; attempts to understand the chemistry of the brain; and attempts to localize different functions in different parts of the brain. You can make a plausible case that each of these might hold the key to figuring out the brain, but each also has severe limitations.

My favorite paragraph of the entire book is the very last one, where Cobb, speculating on the future of neuroscience, lists over a dozen possible ways the future of our understanding of the brain could play out. The point he’s making is that it’s hard to know what the future holds: while it’s easy to point to the flaws in any one approach, it’s hard to say that any one of them won’t end up holding the key to cracking the mysteries of the brain.

On the importance of theory

One of the threads that weaves its way throughout the history Cobb presents is the mind-body problem. It was articulated by Leibniz in his famous passage from 1714:

It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

— Gottfried Leibniz, Monadology, sect. 17

Even if we could see all the motions in a “thinking, feeling, perceiving” machine, we wouldn’t see the thinking, feeling, or perceiving. So, the argument goes, we have an unbridgeable gap. This is the 300-year-old version of what in modern philosophy of mind is called the hard problem: you can’t explain the subjective experiences of consciousness via the form and structure of matter.

The passage is interesting because it seems to prove too much.

Of course if you walked around inside a large, complex machine with no theoretical understanding of it, you wouldn’t “see” what it does. Imagine walking around the inside of an enormous computer. If you had no understanding of computer theory and no blueprint, you wouldn’t see that it was Turing complete—the property of general-purpose computers that means they can solve any computable problem. You wouldn’t see the specific programs being run or calculations being performed, just wires becoming electrified and magnets completing circuits.

The point is, without a theoretical understanding of what’s going on, we can’t see what properties emerge from the interaction of parts.

Conscious experience and neural activity may feel qualitatively different from each other, but we’ve learned numerous times how seemingly qualitative differences can arise from interactions among parts.

Goal-directed behavior can emerge from simple control loops. Even in simple organisms like E. coli, we can see them: proteins form simple appendages (flagella) that let the cell move around, and other proteins form receptors that detect the sugars it consumes. E. coli alternates between a “tumbling mode”, in which it randomly reorients, and a “running mode”, in which it swims smoothly in one direction. When the receptors detect that the sugar concentration is rising, they modulate the activity of the flagella so the cell tumbles less often—runs up the sugar gradient last longer, and the cell drifts towards the food.

The tumbling and running modes used by E. coli to direct itself towards food.

We understand how this behavior, which allows E. coli to swim towards a food source, works. It’s accomplished by “dead” molecules that, when combined, allow detection of and action towards a goal (food). The goal-directedness of simple life arises in a comprehensible way from the “qualitatively different” behavior of the chemicals it is composed of.
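The control loop is simple enough to sketch in a few lines of code. This is a minimal illustrative simulation, not a biochemical model—the concentration field, tumble probabilities, and step size are all invented for the sake of the example:

```python
import math
import random

def sugar(x, y):
    # Toy concentration field: sugar is densest at the origin (invented for illustration)
    return math.exp(-(x**2 + y**2) / 100.0)

def chemotaxis(steps=2000, seed=0):
    rng = random.Random(seed)
    x, y = 20.0, 20.0                      # start away from the food source
    heading = rng.uniform(0, 2 * math.pi)
    last_c = sugar(x, y)
    for _ in range(steps):
        c = sugar(x, y)
        # The core of the control loop: tumble (randomly reorient) more often
        # when concentration is falling, so runs up the gradient last longer
        p_tumble = 0.1 if c > last_c else 0.6
        if rng.random() < p_tumble:
            heading = rng.uniform(0, 2 * math.pi)   # tumbling mode
        x += math.cos(heading)                       # running mode: one step forward
        y += math.sin(heading)
        last_c = c
    return math.hypot(x, y)                # final distance from the food source

# Despite having no map and no memory beyond "was that better or worse?",
# the cell reliably ends up near the sugar peak
print(chemotaxis())
```

Nothing in the loop "wants" anything; biasing a random walk on a single comparison is enough to produce behavior that looks goal-directed from the outside.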

We once thought design was qualitatively distinct from “dumb” unguided processes, requiring intelligence. We now know variation coupled with selective pressures can produce staggeringly complex design, as in the evolution of life.

Large Language Models have taught us that you can capture the meaning of language through statistical relationships among words. Math and word meanings seem like qualitatively distinct things, but the fact that these models can capture the semantic meanings of words and sentences suggests otherwise.
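The idea that statistics over word contexts can carry meaning is older than LLMs—it’s the core of distributional semantics. A toy sketch (the corpus, window size, and similarity measure are all invented for illustration): words that appear in similar contexts end up with similar co-occurrence vectors.

```python
import math
from collections import Counter

# A tiny made-up corpus
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
    "the king ruled the land",
    "the queen ruled the land",
]

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of the words appearing near it."""
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            vec = vectors.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    vec[words[j]] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so their vectors are more
# similar to each other than either is to "ruled"
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["ruled"]))
```

Real models use vastly more data and learned, dense vectors rather than raw counts, but the principle—meaning from patterns of use—is the same.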

I think people who worry about the hard problem have a particular conception of consciousness, where experience is sort of something added on top of physical goings-on (hence thought experiments like p-zombies and the inverted spectrum). We don’t see the many physical things going on in the brain involved in an experience, so it’s easy to imagine those experiences are free-floating. Just like goal-directedness, design, and semantic meaning, it seems experience is qualitatively different from the physical functioning of a brain!

But experience doesn’t vary independently of the physical functions and behavior. If you change the conscious discriminations someone is able to make, you change their experience, and vice-versa. You can’t swap the experiences of the colors red and green and have everything else remain the same—our color experiences are entangled with their relations to other colors. The experience is inextricably tied to the function—our ability to discriminate a color and situate it based on its relations to the other colors.

If you wonder, “Why should this feel this way?”, ask yourself, “What would happen if it felt another way?” The answer is, you would act differently. We can account for why the color red appears the way it does—if it appeared otherwise, it would lose its relations with the other colors. If pain wasn’t painful, we wouldn’t avoid it. We don’t have access to all of the physiological stuff going on under the hood that gives our experiences their incredible richness, but this doesn’t mean the experience floats free of it. And the more we understand about the functions being performed by the physical goings-on, the closer those physical goings-on get to the experience itself.

I suspect the hard problem feels appealing to some people because it feels odd to offer a third-person explanation for a first-person feeling; these feel like different sorts of things. And they are different things! But why should we expect an explanation of an experience to feel like an experience? If someone explains their experience to me, I might be able to use my imagination to simulate a similar experience. My ability to simulate the experience based on a description, however, isn’t a necessary component of its being an explanation.

So yes, if we shrunk ourselves down and looked at the brain, we wouldn’t see “consciousness”. There’s no simple fMRI experiment someone can devise to suddenly make sense of subjective experience. But this doesn’t mean an understanding of consciousness in terms of brain functioning isn’t possible—it just means the brain is complicated and we need better theories, something we already knew.

Is the brain a computer?

This last section is for paid subscribers, as a thank you for their support. If you would like to support Cognitive Wonderland or join our community, consider becoming a paid subscriber.
