30 Comments

I'm about as pessimistic as Unger. A ton of interesting and exciting philosophy? Maybe there's stuff I don't know about or we have different interests, but could you give some examples of what you have in mind?

As an aside, there are several points where your remarks sound just like what a pragmatist would say. These in particular:

"The difference in these terms will not predict anything different about the world."

"The right approach to these gray areas is to ask why this ambiguity matters."

Not sure if you're into pragmatism at all since I'm new to reading your work but the approach you favor here is one I'm very sympathetic to.

author

In terms of interesting (to me) philosophy:

I think some of the work in philosophy of perception (e.g., by A.S. Barwich or M. Chirimuuta) offers interesting and helpful characterizations and clarifications of the conceptual landscape.

"Inventing Temperature" is an interesting exploration of what measurement means and how science progresses.

When I wanted a better understanding of what it means to understand/explain something, I found I immediately went for the philosophy on the topic to get a sense of the conceptual landscape of theories of explanation--not sure if this counts, but I have "Explaining Explanation" and "Theories of Explanation" on my to-read list.

There's some creative stuff Eric Schwitzgebel has worked on about AI/consciousness and ethics that I think is interesting.

More generally, there are examples of philosophers working directly with scientists on difficult conceptual areas in their fields--I'm most familiar with it happening in philosophy of cognitive science, philosophy of biology, and philosophy of physics.

Also, because philosophy's domain is incredibly wide, there are lots of examples of philosophers writing on more general topics in ways that are at least more accessible to the public, and I think those count as important contributions--for example, the journal "Think", or Eric Schwitzgebel's "A Theory of Jerks", etc.

In terms of being a pragmatist: I'm sympathetic to it, not sure if I would label myself as such but that might be out of ignorance rather than principle.


Fair enough, quite a bit of that might be interesting. I suppose it's a matter of emphasis; it's the cliche of the glass half full vs. half empty, when we might actually see things similarly. I think there's interesting philosophy out there. But there's so much stuff I think isn't very productive or useful that it tends to loom large, giving me the sense that there's not that much.

author

Yeah, I'm not optimistic about the proportion of philosophy that's productive! It's just that there's a lot of philosophy, and some small portion of it is good, so there's a large absolute amount that's good.


Have you read much experimental philosophy?

author

Only a bit


Some of it is pretty cool. I critique a lot of it, but that doesn't mean it's not at times really fascinating.

Aug 28 · Liked by Tommy Blanchard

Based.

Aug 18 · Liked by Tommy Blanchard

Love the article, Tommy, and a good provocation. Two thoughts come to mind. First, I think honing our understanding of abstract ideas is a bit like learning how to fine-tune the resolution on a microscope. Each setting we work through allows us to better make sense of real-life scenarios. Someone with a high-resolution understanding of the concept of justice might be far more capable of navigating and guiding complex moral issues, for example.

But I also agree that time spent refining the lens can become its own trap. One of the worlds I move through is leadership philosophy, and I observe an interesting dynamic at play there: many people constantly debate and philosophise about what leadership ought to be, but then no one actually does anything, rendering the entire exercise moot. Concepts and ideas also need to be tested 'on the field' to validate their applicability.

author

Thank you! And I totally agree--conceptual sharpening is absolutely useful in many instances, and the issue is one of "taking it too far". Cheers!

Aug 16 · Liked by Tommy Blanchard

Have you read “Language vs. Reality: Why Language is Good for Lawyers and Bad for Scientists” by N. J. Enfield? It might be of interest; it gets into this topic from the linguistics side rather than the philosophy side.

https://mitpress.mit.edu/9780262548465/language-vs-reality/

author

I haven't heard of it; thanks for sharing it with me! Looks interesting.

Aug 15 · Liked by Tommy Blanchard

It's because of empty ideas that I still believe in the logical positivists' criterion of empirical significance: statements are meaningless unless some empirical observation can confirm or disconfirm them. "It's raining" is a meaningful statement because you can look out the window and see if it's true or not. But "truth is beauty" is meaningless because there aren't any facts about the world that could confirm or disconfirm it.

Aug 15 · Liked by Tommy Blanchard

“Peter Unger's critique of this is quite simple: None of this matters”

I haven’t read Empty Ideas yet, but I wonder if Unger considers the objection that there are non-ridiculous arguments to the effect that knowledge is intrinsically valuable (more so than beliefs that are simply true). If those arguments pan out, then it seems like it will matter (to some extent) whether swamp-man really knows things. You might object that swamp-man does know things because epistemic externalism is true or whatever, but there are non-ridiculous arguments for internalism, so it seems like a hard sell.

author

Unger has two main theses: Modern philosophy is full of concretely empty ideas, and it is full of analytically empty ideas. The case I outline is mostly about the concretely empty side--it doesn't matter for any practical purpose to distinguish knowing from schmowing.

The analytically empty side is that the abstract conceptual work in philosophy is so far removed from the real world that it's only relevant within academic philosophy. He's very critical of the definition->counter-example->definition->counter-example cycle, and thinks a lot of the distinctions philosophers are making are only relevant to debates that have spun in that loop long enough to become untethered from anything anyone else would care about. So I guess a response he could give to the objection that knowledge is valuable and the concept therefore matters is to say: sure, knowledge matters, but not all conceptual distinctions about that concept matter.

Aug 15 · Liked by Tommy Blanchard

Excellent. Unger wasn’t someone I was familiar with. Thank you.


Thanks for the excellent read, Tommy. I was hoping to ask if you could share your source for the parable by Dharmottara. It was such an apt example for Gettier problems that it seems like we'd all be better off citing "Dharmottara's Mirage" as a byword for this particular challenge to the standard definition of "Knowledge", and it's already inspired a certain epistemic question. Perhaps I can clarify.

It's rather clear, to some segment of those of us interested in the topics where large language models and theories of consciousness overlap, that when LLMs produce a response that delivers true information, this event is in some sense always a happy accident rather than the result of an attempt by the LLM to report on its "beliefs".

That is to say, when we ourselves report "the sky is blue", it seems to be widely assumed that we are reporting on a belief typically justified by our own experience of looking up on a cloudless day, by which ranges of the spectrum the word "blue" is commonly held to refer to, and so on, whereas the justification for an LLM lies entirely within the parameters of its model, which link "sky" tokens (I'm speaking loosely, mixing the CS/ML sense of "token" with the linguistic one, apologies) to "blue" tokens. Insofar as this may be the case, it seems correct to say that all responses of an LLM are in fact "hallucinations"; it's just that some happen to be true.

In reading Dharmottara's mirage, it occurred to me to wonder: is it possible that all knowledge works the same way? That is, is it actually the case that we "know" things, or is every fact we can report only accidentally true?


"If Swampman highlighted some issue with our concept of "memory" that led to cleaner conceptualizations that helped cognitive science research on memory, it would obviously be helpful. But this isn't the case, no one in memory research is concerned about lightning striking a swamp and creating an exact replica. This isn't addressing some conceptual issue making memory research hard."

Here and in the following paragraphs you may be selectively applying the criterion of 'practical relevance'. The swampman example plausibly reveals that remembering essentially requires the right kind of causal connection between some initial experience and the recollecting event. Swampman plausibly lacks this connection, and to that extent we doubt that he remembers.

The courtroom example you give to show the practical relevance of epistemology can easily be adapted to the case of swampman. It is easy to imagine that some verdict turns on whether someone remembered that the car was blue or whether he in fact never saw the car but for whatever reason truly believed the car was blue. Since memory is typically regarded as a source of knowledge, it's not surprising at all that we can construct a parallel case for memory.

author

Can you come up with an example where the conceptual distinction swampman is supposed to make is relevant? Not just an example of unreliable memory or of a justification not being tied to a belief, but one where the differentiation between remembering and swampman's schmemembering is important.


Nice article! In a way, Peter Unger continues the tradition of Wittgenstein, detailing his ideas in a less opaque and dense way but in the end arriving at the same conclusion: that some philosophical arguments or problems can be eliminated by exposing the confusion about specific terms in language that they result from. But I would say that this is one of the features of language: it can be interpretative, subjective, and changing over time. It is not just a static concept; it moves with us through time and evolves through societal movements. I would say quite the opposite: a lot of the problems arise from trying to be too rigid with language. As an example, Dijkstra has a really fitting quote (especially in the current craze of LLMs we're in right now): "The question of whether machines can think is about as relevant as the question of whether submarines can swim." In my mind, this shows exactly the tension between people trying to be too rigid with language and people leaving it too open-ended.


Where does the meaning of “the brain” come from? It cannot be ‘in the brain’, as that would imply that the content of the brain is bigger than the brain, which approximates Russell’s paradox. It cannot be outside of the brain if all meaning is internal to the brain. Where is Meaning? Where is Brain?

author

I don't follow. Why can't it be in the brain? What kind of meaning are you referring to?


The idea of “brain” is distinct from what it signifies, and nothing can signify itself by itself (that would imply that a thing is not identical to itself: a violation of the law of identity). All we have is the idea of a thing, which tells us nothing about the process of its signification.

Direct self-reference either collapses to pure self-identity, which is trivially true of everything but does not entail consciousness, or presupposes itself as something in excess of self-identity, as a concept of itself constructed in terms of itself and contained within itself, and is therefore both identical and not identical to itself; the result is either a contradiction or no reflexive consciousness.

This conclusion can be made more salient by the analogy of a sentence that refers to itself: ‘This sentence is true’. The law of identity is implicitly violated by equivocating between the identity of the sentence ‘This sentence is true’ and the word ‘sentence’ in the sentence. It can be demonstrated that these two instances of ‘sentence’ are not the same identity. In fact, the phrase ‘This sentence’ does not refer to anything at all: substitution of the whole sentence for every recurrent instance of the phrase ‘This sentence’ results in infinite regress and an empty subject: “(((((…) is true) is true) is true)…)”. The sentence cannot be meaningfully completed; when consistently parsed it does not make sense and is not even a sentence.

author

I'm not following. Let me lay out my understanding.

You're concerned with how the brain can store the concept of "brain". To me, concepts are information, and we know the brain can store information.

You seem concerned with the self-referential aspect of the brain storing information about itself. But the information we store about the brain is clearly less than what it can store.

I'm not sure what your concerns about consciousness are here or how they connect with this.


Concepts are not just information, but meaningful information, and meaningful to someone. The computer can store complete design information about itself, but this does not make the computer meaningful to itself, let alone conscious. The ‘meaning’ of information is something extra that the information does not contain but must be ascribed from somewhere else.

My objection is rather that direct self-signification (of itself by itself as itself, at any level) is logically impossible, and the premise that ‘what’ the concept of the brain signifies contains the concept of the brain that signifies what it is, is logically circular and empty of meaning. We (conscious selves) conceive of the concept of the brain but this does not imply that its meaning is ‘in’ the brain, hence the question about the location of meaning.

author

Concepts take on meaning from direct sensory experience and their association with other concepts (see for example models of concept learning like this: https://psycnet.apa.org/fulltext/2018-20732-001.html)

I'm still not clear on why you think self-referencing concepts are impossible, and I'm unclear on how, if that were the case, any view could account for the fact that we clearly do have concepts about what we are.


There are two main types of self-referencing concepts: 1) statements purporting to be logically circular, for example ‘this sentence is true’, which are either meaningless or contradictory depending on how they are parsed; 2) reflexive ascriptions, for example ‘I am thinking’, which signify reflexive consciousness but are not directly self-referential, tacitly referring to some other contextual meanings in terms of which the Self in question is objectified.


I agree that all concepts are meaningfully related, cross-referential, including the concept of sensory experience, but this testifies only to the fact that meanings are already conceived of, not where or how meaning arises for consciousness. Sensory experience is meaningful to us, but not ‘because of’ sensory experience, since it is still only a concept. Positing the meaning of sensory experience as the source of meaning is begging the question.
