Discussion about this post

Grognoscente

There's an important difference between reductionism and eliminativism. Reductionism, in most philosophically rigorous senses of the term, sees higher-level phenomena or theories as *grounded* in lower-level phenomena or theories. When a sufficiently strong connection has been made, the lower-level phenomena actually provide *evidence* for the higher-level theories. When this is the case, it is indeed wrong to say that the lower-level theory is the only legitimate way of describing the higher-level phenomena. Both theories really are just offering different descriptions of the same thing.

Eliminativism, on the other hand, is a possible explanation for a *failure* of reduction. One reason a high-level theory might fail to reduce to a lower-level theory is that the higher-level theory is simply wrong. This is the reason, to give just one example, why phlogiston theory does not reduce to modern models of oxidation. The world just doesn't have a referent for something with the properties posited of phlogiston.

Re: computer languages, it's important to remember that we started at a relatively low level of physical theory. The more abstract theories that constitute the various languages were developed within the known constraints of the physical system that ultimately has to implement them, and this ensures relatively smooth reducibility. When it comes to human thought and behavior, though, the story has been much messier. We didn't start with a rich understanding of the physical implementation level; we started with a bunch of "middle world" folk concepts and discovered the deeper dynamics and regularities only later. It's an open question, then, which of our folk or higher-level scientific concepts actually *can* be grounded in the implementation level, at least without radical changes in their posited properties.
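
To make the "smooth reducibility" point concrete, here's a minimal sketch (Python purely for illustration): the standard library's `dis` module shows the lower-level bytecode description that a high-level expression reduces to. The reduction is clean precisely because the high-level language was engineered on top of that implementation level.

```python
# A high-level description of addition, and the lower-level description
# it reduces to. The mapping is smooth by design.
import dis

def add(a, b):
    return a + b

dis.dis(add)
# Output varies by Python version, but looks roughly like:
#   LOAD_FAST     a
#   LOAD_FAST     b
#   BINARY_OP     + (BINARY_ADD on older versions)
#   RETURN_VALUE
```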

Smooth reductions are rare in science. Most inter-level theory articulations are fudged in various ways: the higher-level theory works well enough, for enough purposes, that we continue to use it even though it breaks down in places where the lower-level theory doesn't. We can model a fluid as a continuous medium using the Navier-Stokes equations for many purposes, but if we want to know how it behaves inside a cell or in highly rarefied conditions, we have to start taking individual molecular effects into account (because fluids aren't *really* continuous media). There's a lot of this sort of thing in science, and that's fine; modeling everything at the implementation level would be far too computationally burdensome. But these concessions to practicality don't (imo) license us to say that the different-leveled theories are equivalent or even equally well-evidenced.
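
To put a number on when the continuum idealization breaks down: the conventional diagnostic is the Knudsen number (molecular mean free path over the characteristic length scale). A rough sketch, using the usual textbook regime thresholds, which are conventions rather than sharp physical boundaries:

```python
# Knudsen number: Kn = mean_free_path / characteristic_length.
# When Kn is small, the continuum (Navier-Stokes) picture works;
# when it isn't, molecular effects have to be modeled explicitly.

def flow_regime(mean_free_path_m: float, length_scale_m: float) -> str:
    kn = mean_free_path_m / length_scale_m
    if kn < 0.01:
        return f"Kn={kn:.2g}: continuum (Navier-Stokes is fine)"
    elif kn < 0.1:
        return f"Kn={kn:.2g}: slip flow (continuum with corrections)"
    elif kn < 10:
        return f"Kn={kn:.2g}: transitional (molecular effects matter)"
    return f"Kn={kn:.2g}: free molecular flow"

# Air at sea level has a mean free path of roughly 68 nm.
# A 1 m pipe vs. a 100 nm channel (roughly cell-machinery scale):
print(flow_regime(68e-9, 1.0))      # continuum
print(flow_regime(68e-9, 100e-9))   # molecular effects matter
```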

Mike Smith

Excellent article. I came at this from the same direction, learning about the layers of abstraction in computer technology long before reading anything about consciousness and the brain. (My day job is in IT. Over my career I've programmed everything from machine code to web apps.)

If we came across computing technology in nature, we'd likely describe the higher-level concepts as "emergent" from the lower ones. Some of us might despair of ever understanding how Windows or Mac OS could even conceivably emerge from all those transistors. And even though we actually understand everything about these human-made systems, those levels of abstraction are crucial for working with them.
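
The "conceivably emerge" part can even be demoed in miniature. A toy sketch (Python standing in for hardware): build everything from NAND, the canonical universal gate, and by the time you reach an adder the natural description has already shifted from gates to arithmetic.

```python
# Every function below is built only from NAND (the transistor-level
# primitive), yet the top-level behavior is naturally described in
# higher-level vocabulary: addition.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    return xor(a, b), and_(a, b)          # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_(c1, c2)                # (sum, carry_out)

# 1 + 1 with carry-in 0: sum bit 0, carry 1, i.e. binary 10 = 2.
print(full_adder(1, 1, 0))  # (0, 1)
```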

Once we understand this, it seems clear that much of the mystery of the mind amounts to not having those intermediate layers yet. Of course, the nature of biology puts this into a completely different category of difficulty compared to engineered systems. But understanding it doesn't seem so inconceivable anymore.

