Understanding understanding
What does it mean to "get" something?
Reminder that we’re starting a Cognitive Wonderland book club! Join us in reading and discussing cool books on science and philosophy.
Every semester, hundreds of thousands of students cram all kinds of knowledge into their heads to prepare for exams. In many subjects, it's very possible the students could, around exam time, rattle off facts their professors couldn't—logic and math students could recite the exact steps of long proofs, history students could give the exact dates of a huge number of events, business students could… do whatever business students do.
Yet only the most delusional students would think, after a semester of studying something, they have a deeper understanding of the subject matter than their professors. The students may have many things memorized, but they lack some deeper connections that the professors presumably have. There's a difference between rote memorization and understanding material.
But what does it mean to understand? What is it the professors have that the cramming students don't?
Brittle Understanding and Counterfactuals
I teach graduate-level data science courses for a couple of different university programs. Some of these programs are Master's degree programs aimed at professionals who want to up-skill. The widely disparate backgrounds of the student population are a big challenge to teaching this demographic.
In an ideal world, basic programming would be a prerequisite, but alas, I often end up needing to teach a group that includes at least a few students with limited to non-existent programming skills. My job isn't to teach programming—I teach machine learning and natural language processing. But to really learn these topics, you need to get your hands dirty with examples—which requires programming.
The non-coders need a lot of guidance, while I want to at least offer those who can code some opportunities to do something deeper. The imperfect solution I've landed on is doing tutorials where I walk students through, step-by-step, example code that does something very similar to what they need to do in the assignments. The assignments are a bit of a "choose-your-own-adventure", allowing students who are skilled coders to try making extensive changes to the guts of the template I provide, while the less confident ones can make do with copying and pasting and changing the values of a couple of parameters.
Both the coders and the non-coders can do the assignments, but the coders understand what the code is doing at a deeper level than the non-coders.
If I changed the requirements of the assignments a little bit, or changed the example code I give them, the coders would be able to figure things out and do it. In contrast, the non-coders have a brittle grasp on the code—change anything, and it's unlikely they'll complete the assignment.
The coders have something the non-coders don't: they know the interrelations between the lines of code and the effects of each line. Based on this knowledge, they can see what a change to the code would produce. They know (or can figure out) what aspects of the output depend on which lines of code.
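As a toy illustration of the kind of counterfactual question a coder can answer (the function and values here are my own example, not taken from the course assignments), consider a few lines where every output value depends on one intermediate step:

```python
def normalize(scores):
    """Scale a list of scores so they sum to 1."""
    total = sum(scores)  # every output value depends on this line
    return [s / total for s in scores]

print(normalize([2, 3, 5]))  # [0.2, 0.3, 0.5]
```

Someone who understands the dependencies can answer "what if `total` were `max(scores)` instead of `sum(scores)`?" without running anything: the outputs would become `[0.4, 0.6, 1.0]` and would no longer sum to 1. Someone with only a brittle grasp can copy, paste, and run the code, but can't predict the effect of the change.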
Knowing the dependencies and relationships in a process is one of the key features philosophers often point to in theories of what it means to understand. One way of testing for understanding is to test whether the person knows what would have happened if something were different. Being able to answer counterfactual questions shows the person has something that goes beyond surface-level memorization, since it requires reasoning based on a deeper structure of the phenomenon.
Understanding code means understanding what each line contributes to the process. Similarly, understanding how a device works means knowing how each part contributes to the functioning of the whole—what would happen to a bicycle if you removed the chain? The pedals would still turn, but they wouldn't cause the back wheel to turn, so you wouldn't move forward. The question reveals knowledge of how the parts affect each other.
Counterfactual questions can probe if someone knows what a specific part in a process does, but not all this structural knowledge is equally valuable. A coder might know what a particular line of code does without seeing how it fits into the larger structure. You could keep piling up such piecemeal facts, but at some point you would have to get out a pen and paper to work out the implications, and that doesn't feel like understanding. A real understanding is deeper because it explains more with less.
Compression, Unification, and Mechanism
The philosopher Daniel Wilkenfeld argues for a theory of understanding as a form of compression. If you've just memorized some facts, there aren't any additional useful implications you can draw out of that. But if you really understand something, you're able to see the implications of it. You can make more with less.
Wilkenfeld points to the example of Adam Smith's The Wealth of Nations, which provides a few simple principles of supply and demand that can be unpacked to understand a broad range of facts about the structure and functioning of society. There is a relatively small kernel, but unpacking the implications of it is basically all of macroeconomics. Anyone can memorize and write down the related equations, but actually understanding them is a whole other ball game.
The professors who give their students exams might not have as many facts rote memorized as their students, but they have an understanding that allows them to unpack general principles and produce more information. Students who have only memorized things can regurgitate the same facts, but they lack the underlying structure that would allow them to produce more insights.
Wilkenfeld’s view highlights how a small kernel of principles can generate a wide range of implications. A related theory is to think of understanding as unification. When you are able to unify more facts under the same umbrella, you have a deeper understanding.
Newton's theory of gravity unified the movement of the planets with objects falling on Earth. There was a deeper principle that connected these two phenomena—gravity, a physical force related to mass that attracts objects together. We went from two different kinds of phenomena to one that presents itself in different ways, which is why Newton's theory was so groundbreaking.
Of course, Newton's theory ended up being wrong in important ways—it was superseded by Einstein's theory of relativity. But it did correctly identify that falling on Earth was the same sort of thing as planets orbiting. Parts of the structure Newton saw remained intact, even if other aspects were changed.
It's unclear how far these examples from physics and economics stretch, though. It's a common observation that biologists seek different kinds of explanations than physicists—biologists often seek mechanisms, while physicists seek laws. They almost seem opposite—mechanisms are explained in terms of reduction, by taking one thing and breaking it into multiple smaller pieces, while unification under a single law is taking multiple different things and showing them to be one thing.
Perhaps in an abstract way these can be seen as the same, though—when we explain the different parts of a mechanism, we usually expect those parts to act in ways that are unified with other phenomena under more general laws. If I explain each of the pieces of how a bicycle works, it's usually in terms of underlying laws of basic physical mechanics.
Abstraction and Mental Simulations
Regardless of how we reconcile understanding as unification versus understanding as mechanistic explanation, I think there's an interesting analogue in cognitive science. Abstraction and mental simulations are often raised as playing central roles in our reasoning.
Abstraction is a unifying force. To categorize things, we often abstract away the ways they're different to leave behind the ways they're the same—we see two cats as still "cats" because, though they may differ in color and size, they are alike in other ways, like going "meow" and purring. Our ability to abstract underlies the concepts we use to carve things up to make predictions and reason about the world.
Similarly, cognitive scientists often talk of mental simulations. What would happen if I changed this line of code? My mental representation of the code, if it contains the right depth, allows me to imagine what would happen. If I know how each line interacts to produce the final outcome, I can simulate what would happen without running the code itself. Having a mental model with the right mechanistic parts and their relationships allows mental simulation.
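A minimal sketch of what such a mental simulation looks like for code (again my own toy example, not from the post's assignments): given a small loop, someone with the right mental model can trace the state changes in their head and predict the output before ever running it.

```python
def sum_of_squares(values):
    """Accumulate the squares of a list of numbers."""
    total = 0
    for v in values:
        total += v * v  # for [1, 2, 3]: adds 1, then 4, then 9
    return total

# Mental trace for [1, 2, 3]: total goes 0 -> 1 -> 5 -> 14
print(sum_of_squares([1, 2, 3]))  # 14
```

The trace in the comment is exactly the kind of offline simulation a rich mental model supports: you step through the mechanism's parts and their interactions without needing the external system at all.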
So I think that these theories of understanding connect pretty deeply with cognitive science accounts of reasoning. Abstraction lets us compress many cases into a single concept, while a rich mental representation of the mechanisms behind a complex system allows us to mentally simulate how that system will act in different counterfactual circumstances. To understand something is to be able to reason about it.
Memorization only gives isolated facts. Understanding gives you a structured model that you can manipulate, extend, and apply to new situations.
What the cramming students lack, and the professors have, are deeper internal models of the subjects they're studying. The professors have the right abstractions and can run rich mental simulations. Understanding isn't about how much you know, but about having the right conceptions and mental models to use what you know.
Please hit the ❤️ “Like” button below if you enjoyed this post, it helps others find this article.
If you’re a Substack writer and have been enjoying Cognitive Wonderland, consider adding it to your recommendations. I really appreciate the support.




Excellent piece, Tommy!
I was lucky enough to have 3 clear examples of this occur during my undergrad physics degree, 35 years ago.
The first came after my second summer, spent building out what would be our small university’s first computer lab (as a means to share costs & own after-hours cycles for my boss’ computational simulations of brown dwarfs & what later became known as Type 1a supernovae — my 1st, 3rd, and 4th year summer jobs).
I was approached by a Sociology professor and asked to TA her “Stats for Jocks” class, using the new lab to introduce computers & SPSS to them. Few ever “got” the concept of data that could be analyzed algorithmically: “what is your favourite food” vs “on a Likert-style scale, rate these foods.”
The second was a counterexample: I never could “get” quantum mechanics, though I could, as the old physics adage goes, “shut up and calculate.”
The last (and most profound) was the day that I “got” statistical mechanics, and was able to derive the ideal gas law (PV = NkT) from first principles. It was a moment of transcendence, one whose example of emergent properties of complex systems has informed my subsequent career in IT, information security, and risk management — as well as deepening my appreciation for Foundation, politics, and, coming full circle, sociology (among many other topics).
Keep helping people learn, and understand understanding 🙏
Thanks, Tommy.
"Knowing the dependencies and relationships in a process is one of the key features philosophers often point to in theories of what it means to understand." This sums it up for me.
I tend to think of 'understanding' in terms of having a mental map of the relationships between different aspects of the data set. So in my field of psychotherapy, I might represent that as eg a family tree: Freud begat Berne's Transactional Analysis, which would be a cousin of cognitive analytic therapy etc.