Abstract concepts
In the previous post, I talked about how we learn concepts and the fuzzy boundaries they have. This post connects those ideas to how I think about philosophy—and to one of my biggest concerns with much of academic philosophy.
Models of concept learning offer a useful way to think about concepts: we learn them through a combination of sensory experience and the contexts in which we hear words used. For abstract concepts, sensory experience doesn't play a role, so all we have is the context given by other words.
We learn abstract concepts like "knowledge", "remembering", and "justice" by hearing them used in certain situations, in relation to other concepts we have some concrete grounding in. By learning what situations they're used in, we're able to abstract a pattern. We then try to match that pattern in the future to recognize when these terms apply.
The point is, we rarely learn these concepts by hearing a dictionary definition. We don't have strict, clear-cut rules for when they apply. We learn through experience and develop an "I know it when I see it" conception. In semantic space—a conceptual space that represents the relationships between different words—there are regions we would consider "remembering" or "knowing", and we should expect these regions to have fuzzy boundaries based on how we learn them.
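To make that picture a little more concrete, here's a toy sketch of fuzzy regions in a semantic space. It's my own illustration, not something from the post or any particular model of concept learning: the vectors are made up, and cosine similarity to a prototype stands in for whatever the brain actually does with learned representations.

```python
import numpy as np

# Toy illustration (mine, not from any particular model of concept learning):
# treat a concept as a region around a prototype in a vector "semantic space".
# All vectors below are made up; a real system would use learned embeddings.

prototype_remembering = np.array([0.9, 0.1, 0.4])  # hypothetical prototype

cases = {
    "recalling your childhood home": np.array([0.85, 0.15, 0.45]),
    "a vague feeling of deja vu": np.array([0.6, 0.4, 0.5]),
    "looking up a fact in a book": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a, b):
    """Closeness of two points in semantic space (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Membership is graded, not rule-based: cases near the prototype clearly
# count as "remembering", distant ones clearly don't, and the middle is
# fuzzy -- there's no sharp boundary anywhere in the space.
for label, vec in cases.items():
    sim = cosine_similarity(prototype_remembering, vec)
    print(f"{label}: similarity to 'remembering' = {sim:.2f}")
```

The point of the sketch is just that membership comes in degrees: nothing in the space draws a hard line where "remembering" stops.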
Philosophy: Sharpening Concepts
A big part of what I see as the job of philosophy is sharpening our concepts. "What is knowledge?" is a question with lots of philosophy vibes (replace "knowledge" with any abstract concept to generate ideas for undergraduate philosophy courses—maybe I should make an app for that to take advantage of the surely enormous "philosophy professors looking for course ideas" market).
Determining what we mean by knowledge is a central question in epistemology (a word that literally means "the study of knowledge").
Philosophers have long debated what it means to know something. A concise definition might be: you know something when you have a justified, true belief. But that definition runs into the problem of Gettier cases:
A desert traveller is searching for water. He sees, in the valley ahead, a shimmering blue expanse. Unfortunately, it’s a mirage. But fortunately, when he reaches the spot where there appeared to be water, there actually is water, hidden under a rock. Did the traveller know, as he stood on the hilltop hallucinating, that there was water ahead?
(As an amusing historical aside, this example comes from Dharmottara, writing around 770 A.D.—approximately 1200 years before the Edmund Gettier paper after which Gettier cases are named.)
The idea is that the traveler has a belief that is justified (they seem to see water) and true (there is water), but the justification is not connected to what makes it true (they saw a mirage, not the actual water).
Did the traveler know there was water? This is a funny case—the mirage misled them, but misled them to something that was true.
The case has some features of knowledge, but it differs from our prototypical cases in important ways.
There is a deep theoretical and experimental literature on these cases, trying to tease out exactly which features are important to our concept of knowledge. People differ on whether they count cases like this as knowledge—a minority seem to, and there may be some cultural differences.
I think it's useful (and fun!) to take our abstract concepts and try to sharpen them so we can get more precise about what these words mean. It can enrich our understanding of the world and the things we value.
All that said, at some point arguing about abstract philosophical concepts becomes about as interesting as arguing about the right definition of chair: it isn't illuminating anymore and it lacks any real purpose.
Empty Ideas
In "Empty Ideas", Peter Unger criticizes much of the philosophical literature as, well, arguing about empty ideas.
One of the vivid examples he goes through is Davidson's Swampman thought experiment:
Suppose lightning strikes a dead tree in a swamp; I am standing nearby. My body is reduced to its elements, while entirely by coincidence (and out of different molecules) the tree is turned into my physical replica. My replica, The Swampman, moves exactly as I did; according to its nature it departs the swamp, encounters and seems to recognize my friends, and appears to return their greetings in English. It moves into my house and seems to write articles on radical interpretation. No one can tell the difference. But there is a difference. My replica can't recognize my friends; it can't recognize anything, since it never cognized anything in the first place. It can't know my friends' names (though of course it seems to), it can't remember my house. It can't mean what I do by the word 'house', for example, since the sound 'house' it makes was not learned in a context that would give it the right meaning—or any meaning at all. Indeed, I don't see how my replica can be said to mean anything by the sounds it makes, nor to have any thoughts.
— Donald Davidson, Knowing One's Own Mind
When Davidson says the Swampman can't recognize or remember, he doesn't mean the Swampman can't call to mind this information. Swampman acts in every way like Davidson would have. Davidson just means we shouldn’t describe these activities as remembering or recognizing, since they aren't connected to the requisite history.
Peter Unger's critique of this is quite simple: None of this matters.
It just doesn't matter whether we say Swampman is recognizing, knowing, and remembering. Swampman will go about acting exactly the same. We can imagine two worlds: one where we accept Davidson's definitions of these cognitive terms (where a certain history is required), and one where we use different terms ("schmecognizing", "schmowing", and "schmemembering") in exactly the same way, minus the history requirement. Swampman schmecognizes but doesn't recognize.
What does it matter if we adopt one term or the other? Whether he is remembering or schmemembering, Swampman is going to go about his life, writing philosophy and interacting with people. The difference in these terms will not predict anything different about the world.
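The argument can be put in almost programmer's terms. Here's a toy sketch (my framing, not Unger's or Davidson's) of two vocabularies that differ only in a causal-history requirement that no behavior ever depends on:

```python
# Toy sketch (my framing, not Unger's or Davidson's): two vocabularies that
# differ only in a causal-history requirement nothing downstream depends on.

def retrieves(agent):
    """What both Davidson and Swampman observably do: act on stored info."""
    return agent["stored"] is not None

def remembers(agent):
    # Davidson's usage: retrieval only counts with the right causal history.
    return retrieves(agent) and agent["caused_by_past_experience"]

def schmemembers(agent):
    # Alternative usage: retrieval counts regardless of history.
    return retrieves(agent)

davidson = {"stored": "my house", "caused_by_past_experience": True}
swampman = {"stored": "my house", "caused_by_past_experience": False}

# Behavior is identical either way; only the label we apply differs.
for name, agent in [("Davidson", davidson), ("Swampman", swampman)]:
    print(f"{name}: acts the same = {retrieves(agent)}, "
          f"remembers = {remembers(agent)}, schmemembers = {schmemembers(agent)}")
```

Nothing that predicts behavior ever consults the history flag, which is the point: the two vocabularies describe exactly the same world.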
Unger uses the term "concretely empty ideas" for conceptual differences like this that have no bearing on the world, and argues that if this is all we're doing in philosophy, we're not doing anything useful.
Swampman probes a weird gray area in some of our concepts. It exposes their fuzziness—Davidson doesn't think it makes sense to apply these cognitive concepts to Swampman; others have different intuitions.
But does it matter that the concept is fuzzy in this area?
When our concepts matter
In other fields, terms are reworked or made more precise because they have some relevance. Biologists fiddle with the definitions of species because, in certain research areas, definitions break down. Talking about the ability of animals to reproduce with each other as the defining trait of a species doesn't help when you're looking at fossils of long-extinct animals or single-celled creatures that don't sexually reproduce. It's worth thinking about a more useful definition for species to help guide research in those areas, which is exactly what happens. Philosophers of biology can (and do) help by clarifying the concepts in these areas.
If Swampman highlighted some issue with our concept of "memory" that led to cleaner conceptualizations and helped cognitive science research on memory, it would obviously be useful. But that isn't the case: no one in memory research is concerned about lightning striking a swamp and creating an exact replica. Swampman isn't addressing some conceptual issue that makes memory research hard.
Cognitive concepts like "believing" and "remembering" are all highly complex abstractions of brain processes. Any complex biological process will have fuzzy definitional edges. The right approach to these gray areas is to ask why this ambiguity matters.
We argue about terms like "justice" and "equality" because we attach value to these concepts. We argue both about the definition of justice and about how to apply it, because those choices make concrete differences in things like the tax policy we adopt. Is it "just" to have a progressive taxation system?
By clarifying what we mean by "justice", we can have a more productive debate. We can pin down whether we disagree about the concept of "justice" or about how we are applying it. We can identify where our differences in values (or understanding) lie more easily if we take care to be precise about our concepts.
It's easy to imagine situations where our concept of "knowledge" matters because we need to make a value judgment. In a court of law, someone might be criminally liable for running someone over if they knew the person was lying behind their car, but not liable if they did not know the person was behind their car. These concepts matter.
Is Davidson suggesting that we shouldn’t hold the Swampman criminally liable if he runs someone over, because he would be incapable of "knowing"? I doubt it.
Most of our concepts are just labels we use to simplify the world. There's no Platonic form of "remembering"—it's just a neat trick the lump of meat in our skulls can do, extracting information that it stored in its synapses from a previous experience (unless you're Swampman schmemembering, in which case it's information stored in synapses but not from a previous experience). It's a useful term because we do it a lot, so it's nice to have an easy way to refer to it.
We shouldn't be too worried if the concept breaks down when trying to apply it to science fiction scenarios—if those scenarios became reality with practical implications, we would come up with better conceptions and terminology for those spaces, just like biologists do with the concept of species.
Our concepts are fuzzy. It can be fun and interesting to sharpen them by plumbing our intuitions and looking at the logical implications. But at some point, the fact that a concept still has fuzzy areas ceases to be an interesting discovery, and the discussions become so far removed from practical significance that we have to ask ourselves: what are we even doing here?
Unger argues that pretty much all of mainstream philosophy is just empty ideas. I'm not so pessimistic—I think there is a ton of interesting and exciting modern philosophy. There are philosophers working hand-in-hand with scientists to help tackle complex conceptual and theoretical issues, and political philosophers and ethicists working out thorny questions about values. But I share his concern that much of analytic philosophy is spent on empty ideas, and this was a big part of why I left academic philosophy. More on that soon.