46 Comments
Lance S. Bush:

I'm about as pessimistic as Unger. A ton of interesting and exciting philosophy? Maybe there's stuff I don't know about or we have different interests, but could you give some examples of what you have in mind?

As an aside, there are several points where your remarks sound just like what a pragmatist would say. These in particular:

"The difference in these terms will not predict anything different about the world."

"The right approach to these gray areas is to ask why this ambiguity matters."

Not sure if you're into pragmatism at all, since I'm new to reading your work, but the approach you favor here is one I'm very sympathetic to.

Tommy Blanchard:

In terms of interesting (to me) philosophy:

I think some of the work in philosophy of perception (e.g., by A.S. Barwich or M. Chirimuuta) offers interesting and helpful characterizations and clarifications of the conceptual landscape.

"Inventing Temperature" is an interesting exploration of what measurement means and how science progresses.

When I wanted a better understanding of what it means to understand/explain something, I found I immediately went for the philosophy on the topic to understand the conceptual landscape of theories of explanation--not sure if this counts, but I have "Explaining Explanation" and "Theories of Explanation" on my to-read list.

There's some creative stuff Eric Schwitzgebel has worked on about AI/consciousness and ethics that I think is interesting.

More generally there are examples of philosophers working directly with scientists on difficult conceptual areas in their fields--I'm most familiar with it happening in philosophy of cognitive science, philosophy of biology, philosophy of physics.

Also, because philosophy's domain is incredibly wide, there are lots of examples of philosophers writing on more general topics in ways that are at least more accessible to the public, and I think those count as important contributions--for example, the journal "Think", or Eric Schwitzgebel's "A Theory of Jerks", etc.

In terms of being a pragmatist: I'm sympathetic to it, not sure if I would label myself as such but that might be out of ignorance rather than principle.

Lance S. Bush:

Fair enough, quite a bit of that might be interesting. I suppose it's a matter of emphasis; the cliché of a glass half full vs. half empty, when we might see things similarly. I think there's interesting philosophy out there. But there's so much stuff I think isn't very productive or useful that it tends to loom large, giving me the sense that there's not that much.

Tommy Blanchard:

Yeah, I'm not optimistic about the proportion of philosophy that's productive! It's just that there's a lot of philosophy, and some small portion of it is good, so there's a large absolute amount that's good.

Lance S. Bush:

Have you read much experimental philosophy?

Tommy Blanchard:

Only a bit

Lance S. Bush:

Some of it is pretty cool. I critique a lot of it, but that doesn't mean it's not at times really fascinating.

Ian Jobling:

It's because of empty ideas that I still believe in the logical positivists' criterion of empirical significance: statements are meaningless unless some empirical observation can confirm or disconfirm them. "It's raining" is a meaningful statement because you can look out the window and see if it's true or not. But "truth is beauty" is meaningless because there aren't any facts about the world that could confirm or disconfirm it.

Dmitrii Zelenskii:

Mathematical models, for one, are often sufficiently removed from the world to lack empirical evidence other than the math itself.

Nathan Barnard:

I think the strongest response to this is that it's a self-defeating position. It doesn't seem like "statements are meaningless unless they can be verified by sense data" can itself be verified by sense data.

I think the part of philosophy that's on firmest ground, as being valuable and distinctive to philosophy (or formal disciplines more broadly), is laying out the relationships between different premises and conclusions in normative domains. It's very valuable philosophical work, for instance, to show that data underdetermines theory, so one is forced to make some (ostensibly) non-empirical assumptions about how to interpret data, like having a simplicity prior.

I think we just in general really don't have a good theory (and plausibly provably can't have one, because of Gödel's second incompleteness theorem and Tarski's undefinability theorem) about which premises one should ultimately accept.

I think one of the best critiques of philosophy in practice is that lots of the best work in this vein has been done by mathematicians, computer scientists and economists, rather than philosophers (although much has also been done by philosophers, and the foundations-of-mathematics project that produced lots of this work was, I think, spearheaded by Frege and Russell, who worked out of philosophy departments, although Russell started as a mathematician).

Ian Jobling:

The criterion of empirical significance is not an empirical observation, so there's no question of confirming it empirically. Rather, it's a definition of what "meaningful" means. It's justified to the extent that it clarifies the way we use the term.

Nathan Barnard:

I think this is a plausible position to take; I do think it's circular, though (which is fine, but it does commit one to an anti-foundationalist position).

Taking the theory on its own terms, then: I think that in ordinary language we apply "meaningful" to statements that can't be verified empirically. For instance, when I say "we should use experiments to change our beliefs about propositions about the external world," ordinary language would, I think, count that as a meaningful statement even though it isn't empirically verifiable.

I think we then start to argue about the definition of "meaningful". This is the other standard objection to logical positivism: that there's no analytic-synthetic distinction, so we can't argue about what "meaningful" means (if we aren't appealing to what it appears to denote in ordinary language) without appealing to some normative criterion for how to define it.

You might be interested in this blog post which argues for your position though. https://runtimeverification.blog/2025/07/11/a-defense-of-logical-positivism/

Ian Jobling:

Your use of "should" in your example suggests that you are making a statement about moral value. You're saying that it's morally good to use experiments to change beliefs. Such statements are made meaningful by reference to some moral theory that allows such statements to be confirmed or disconfirmed. So utilitarians would judge whether using experiments in this way contributes to happiness, and Kantians would evaluate whether you could coherently will "we should do experiments to change beliefs" to be a universal law. Rawlsians could evaluate whether that's something that someone would choose in the original position. And so forth.

Nathan Barnard:

I'm claiming that one requires a normative criterion to decide how one should define "meaningful" if one doesn't appeal to the ordinary-language definition, which I think is the only one that could be supported empirically. Normative criteria aren't required to be moral, though; they could be epistemic, for instance (e.g., "one should change one's beliefs using conditionalisation" would be an epistemic should).

So concretely, if we aren't using the ordinary-language definition of "meaningful", then I think you have to argue for some definition of it, e.g. that we should only count statements as meaningful if they can be argued about via appeals to sense data. But I think that position really is self-defeating, if you accept that this isn't how we use "meaningful" in ordinary language. If one does accept that, then it seems like one is appealing to some criterion which can't be adjudicated via external sense data, because that would consist in defining "meaningful" in terms of a psychological attitude.

Scott Ko:

Love the article, Tommy, and a good provocation. Two thoughts come to mind. First, I think honing our understanding of abstract ideas is a bit like learning how to fine-tune the resolution on a microscope: each setting we work through allows us to better make sense of real-life scenarios. Someone with a high-resolution understanding of the concept of justice might be far more capable of navigating and guiding complex moral issues, for example.

But I also agree that time spent refining the lens can be its own trap. One of the worlds I move through is that of leadership philosophy, and I observe this interesting dynamic at play: many people constantly debate and philosophise about what leadership ought to be, but then no one actually does anything, rendering the entire exercise moot. Concepts and ideas also need to be tested 'on the field' to validate their applicability.

Tommy Blanchard:

Thank you! And I totally agree--conceptual sharpening is absolutely useful in many instances, and the issue is one of "taking it too far". Cheers!

Curious Sarah:

Have you read “Language vs. Reality: Why Language is Good for Lawyers and Bad for Scientists” by NJ Enfield? Might be of interest, it gets into this topic from the linguistics side rather than the philosophy side.

https://mitpress.mit.edu/9780262548465/language-vs-reality/

Tommy Blanchard:

I haven't heard of it, thanks for sharing with me! Looks interesting

Nathan Ormond:

You should check out Mark Wilson's Imitation of Rigor, which is a good complement to Unger's work and, I think, does a slightly better job than Unger with its criticisms ( https://academic.oup.com/book/39060?login=false )

Tommy Blanchard:

Ooh thanks, this looks really good

Ryan Bromley:

I think there's a problem with trying to draw lines of definition in a continuous universe that expresses itself in spectra.

Where exactly does blue become green? Where does chemistry end and biology begin? Where is the boundary between what I call myself and everything else?

Definition is a tool employed for communication. The problem is that people forget that it's a sort of shorthand, imprisoning themselves within the walls they construct in their minds.

Aleksy:

Based.

Amos Wollen:

“Peter Unger's critique of this is quite simple: None of this matters”

I haven’t read Empty Ideas yet, but I wonder if Unger considers the objection that there are non-ridiculous arguments to the effect that knowledge is intrinsically valuable (more so than beliefs that are simply true). If those arguments pan out, then it seems like it will matter (to some extent) whether swamp-man really knows things. You might object that swamp-man does know things because epistemic externalism is true or whatever, but there are non-ridiculous arguments for internalism, so it seems like a hard sell.

Tommy Blanchard:

Unger has two main theses: Modern philosophy is full of concretely empty ideas, and it is full of analytically empty ideas. The case I outline is mostly about the concretely empty side--it doesn't matter for any practical purpose to distinguish knowing from schmowing.

The analytically empty side is that the abstract conceptual work in philosophy is so far removed from the real world, it's only relevant in academic philosophy. He's very critical of the definition->counter-example->definition->counter-example cycle, and thinks a lot of the distinctions philosophers are making are only relevant to debates that have spun in that loop long enough to become untethered from anything anyone else would care about. So I guess a response he could give to the idea that knowledge is valuable and therefore the concept matters is to say that sure, knowledge matters, but not all conceptual distinctions about that concept matter.

Nathan Ormond:

When do you foresee it mattering? Do you spend a lot of time in Swamps?

Amos Wollen:

Well um yeah my uncle is Shrek

John:

Excellent. Unger wasn’t someone I was familiar with. Thank you.

Dogscratcher:

“Most of our concepts are just labels we use to simplify the world.”

Preach it. This is a truly useful concept

Castineliel Molineux:

Thanks for the excellent read, Tommy. I was hoping to ask if you could share your source for the parable by Dharmottara? It was such an apt example for Gettier problems that it seems like we'd all be better off citing "Dharmottara's Mirage" as a byword for this particular challenge to the standard definition of "knowledge", and it's already inspired a certain epistemic question. Perhaps I can clarify.

It's rather clear, to some segment of those of us interested in the topics where Large Language Models and theories of consciousness overlap, that when LLMs produce a response which delivers true information, in some sense this event is always a happy accident rather than the result of an attempt by the LLM to report on its "beliefs".

That is to say, when we ourselves report "the sky is blue", it seems to be widely assumed that we are reporting on a belief typically justified by our own experience of looking up on a cloudless day, by which ranges of the spectrum the word "blue" is commonly held to refer to, etc., whereas the justification for an LLM lies entirely within the parameters of its model, which link "sky" tokens (I'm speaking loosely, mixing CS/ML terms for "token" with linguistics here, apologies) to "blue" tokens. Insofar as this may be the case, it seems correct to say that all responses of an LLM are in fact "hallucinations"; some responses just happen to be true.
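The loose picture of parameters linking "sky" tokens to "blue" tokens can be caricatured with a toy bigram model in Python. To be clear, this is a deliberately crude sketch of the statistical-association idea, nothing like a real transformer, and the corpus and names here are invented for illustration:

```python
from collections import Counter, defaultdict

# A tiny invented corpus; the "model" is nothing but next-word counts.
corpus = "the sky is blue . the sky is blue . the sky is grey .".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1  # count how often word b follows word a

def predict(word):
    """Return the most frequent continuation: no perception, no belief."""
    return nxt[word].most_common(1)[0][0]

print(predict("is"))  # prints "blue"
```

The model outputs "blue" only because "blue" followed "is" more often in its training data; that the sky really is blue is, in the sense above, a happy accident.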

In reading Dharmottara's mirage, it occurred to me to wonder: is it possible that all knowledge works the same way? That is, is it actually the case that we "know" things, or is every fact we can report only accidentally true?

Pageturner:

"If Swampman highlighted some issue with our concept of "memory" that led to cleaner conceptualizations that helped cognitive science research on memory, it would obviously be helpful. But this isn't the case, no one in memory research is concerned about lightning striking a swamp and creating an exact replica. This isn't addressing some conceptual issue making memory research hard."

Here and in the following paragraphs you may be selectively applying the criterion of 'practical relevance'. The Swampman example plausibly reveals that remembering essentially requires the right kind of causal connection between some initial experience and the recollecting event. Swampman plausibly lacks this connection, and to that extent we doubt that he remembers.

The courtroom example you give to show the practical relevance of epistemology can easily be adapted to the case of Swampman. It is easy to imagine that some verdict turns on whether someone remembered that the car was blue or whether he in fact never saw the car but, for whatever reason, truly believed the car was blue. Since memory is typically regarded as a source of knowledge, it's not surprising at all that we can construct a parallel case for memory.

Tommy Blanchard:

Can you come up with an example where the conceptual distinction swampman is supposed to make is relevant? Not just an example of unreliable memory or a justification not being tied to a belief, but where the differentiation of remembering and swampman's schmemembering is important.

Darius Chira:

Nice article! In a way, Peter Unger continues the tradition of Wittgenstein, detailing his ideas in a less opaque and dense way but in the end arriving at the same conclusion: that some philosophical arguments or problems can be, in a way, eliminated by showing the confusion of specific terms in language that they result from. But I would say that that is one of the features of language: it can be interpretative, subjective, and changing over time. It is not just a static concept; it moves with us through time and evolves through societal movements. I would say quite the opposite: a lot of the problems arise from trying to be too rigid with language. As an example, Dijkstra has a fitting quote (especially in the current craze of LLMs we're in right now): "The question of whether machines can think is about as relevant as the question of whether submarines can swim." In my mind, this shows exactly the tension between people trying to be too rigid with language and people leaving it too open-ended.

Michael Kowalik:

Where does the meaning of “the brain” come from? It cannot be ‘in the brain’, as that would imply that the content of the brain is bigger than the brain, which approximates Russell’s paradox. It cannot be outside of the brain if all meaning is internal to the brain. Where is Meaning? Where is Brain?

Tommy Blanchard:

I don't follow. Why can't it be in the brain? What kind of meaning are you referring to?

Michael Kowalik:

The idea of “brain” is distinct from what it signifies, and nothing can signify itself by itself (that would imply that a thing is not identical to itself: a violation of the law of identity). All we have is the idea of a thing, which tells us nothing about the process of its signification.

Direct self-reference either collapses to pure self-identity, which is trivially true of everything but does not entail consciousness, or presupposes itself as something in excess of self-identity, as a concept of itself constructed in terms of itself and contained within itself, therefore is both identical and not identical to itself, therefore either a contradiction or no reflexive consciousness.

This conclusion can be made more salient by the analogy of a sentence that refers to itself: ‘This sentence is true’. The law of identity is implicitly violated by equivocating between the identity of the sentence ‘This sentence is true’ and the word ‘sentence’ in the sentence. It can be demonstrated that these two instances of ‘sentence’ are not the same identity. In fact, the phrase ‘This sentence’ does not refer to anything at all: substitution of the whole sentence for every recurrent instance of the phrase ‘This sentence’ results in infinite regress and an empty subject: “(((((…) is true) is true) is true)…)”. The sentence cannot be meaningfully completed; when consistently parsed it does not make sense and is not even a sentence.
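The substitution regress in the last step can be mechanised in a few lines of Python, purely as an illustration of the parsing point (not an argument in itself): substituting the whole sentence for the phrase ‘This sentence’ never eliminates the phrase.

```python
# Repeatedly substitute the whole current sentence for "This sentence".
s = "This sentence is true"
for _ in range(3):
    s = s.replace("This sentence", "(" + s + ")", 1)

print(s)
# However deep the nesting grows, exactly one "This sentence" always
# remains inside: the subject of the sentence is never fully spelled out.
```

Each pass wraps the old sentence in parentheses and reintroduces the phrase one level deeper, so the substitution never terminates.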

Tommy Blanchard:

I'm not following. Let me lay out my understanding.

You're concerned with how the brain can store the concept of "brain". To me, concepts are information, and we know the brain can store information.

You seem concerned with the self-referential aspect of the brain storing information about itself. But the information we store about the brain is clearly less than what it can store.

I'm not sure what your concerns about consciousness are here or how they connect with this.

Michael Kowalik:

Concepts are not just information, but meaningful information, and meaningful to someone. The computer can store complete design information about itself, but this does not make the computer meaningful to itself, let alone conscious. The ‘meaning’ of information is something extra that the information does not contain but must be ascribed from somewhere else.

My objection is rather that direct self-signification (of itself by itself as itself, at any level) is logically impossible, and the premise that ‘what’ the concept of the brain signifies contains the concept of the brain that signifies what it is, is logically circular and empty of meaning. We (conscious selves) conceive of the concept of the brain but this does not imply that its meaning is ‘in’ the brain, hence the question about the location of meaning.

Tommy Blanchard:

Concepts take on meaning from direct sensory experience and their association with other concepts (see for example models of concept learning like this: https://psycnet.apa.org/fulltext/2018-20732-001.html)

I'm still not clear why you think self-referencing concepts are impossible, and unclear how, if that were the case, any view could account for the fact that we clearly do have concepts about what we are

Michael Kowalik:

There are two main types of self-referencing concepts: 1) statements purporting to be logically circular, for example, ‘this sentence is true’ which are either meaningless or contradictory, depending on how they are parsed; 2) reflexive ascriptions, for example, ‘I am thinking’, which signify reflexive consciousness but are not directly self-referential, tacitly referring to some other contextual meanings in terms of which the Self in question is objectified.

Michael Kowalik:

I agree that all concepts are meaningfully related, cross-referential, including the concept of sensory experience, but this testifies only to the fact that meanings are already conceived of, not where or how meaning arises for consciousness. Sensory experience is meaningful to us, but not ‘because of’ sensory experience, since it is still only a concept. Positing the meaning of sensory experience as the source of meaning is begging the question.

Nathan Ormond:

I believe this is a mistaken, representationalist view of language. Check out P.M.S. Hacker's work on this ( https://link.springer.com/book/10.1007/978-3-031-57559-4 )

Michael Kowalik:

I agree, and this was part of the point: the representational view entails a contradiction, with all instances of meaning (all thought) allegedly ‘contained’ (as a representation) in just one instance of meaning (“brain”). Alternatively, we may understand meaning non-representationally, “brain” being just what “brain” means, nothing beyond meaning, which is arguably comprised of logical relations between different instances of meaning (qualities, quantities and categories).

This looks like an interesting reference, thank you!
