There's an important difference between reductionism and eliminativism. Reductionism, in most philosophically rigorous senses of the term, sees higher-level phenomena or theories as *grounded* in lower-level phenomena or theories. When a sufficiently strong connection has been made, the lower-level phenomena actually provide *evidence* for the higher-level theories. When this is the case, it is indeed wrong to say that the lower-level theory is the only legitimate way of describing the higher-level phenomena. Both theories really are just offering different descriptions of the same thing.
Eliminativism, on the other hand, is a possible explanation for a *failure* of reduction. One reason a high-level theory might fail to reduce to a lower-level theory is that the higher-level theory is simply wrong. This is the reason, to give just one example, why phlogiston theory does not reduce to modern models of oxidation. The world just doesn't have a referent for something with the properties posited of phlogiston.
Re: computer languages, it's important to remember we started at a relatively low level of physical theory. The more abstract theories that comprise the various languages were developed within the known constraints of the physical system that ultimately has to implement them, and this ensures relatively smooth reducibility. When it comes to human thought and behavior, though, the story has been rather messier. We didn't start with a rich understanding of the physical implementation level; we started with a bunch of "middle world" folk concepts and discovered deeper dynamics and regularities only later. It's an open question, then, which of our folk or higher-level scientific concepts actually *can* be grounded in the implementation level--at least without radical changes in their posited properties.
Smooth reductions are rare in science. Most inter-level theory articulations are fudged in various ways such that the higher-level theory works well enough for enough purposes that we continue to use it even though it breaks down in some places where the lower-level theory doesn't. We can model a fluid as a continuous medium using the Navier-Stokes equations for a lot of purposes, but if we want to know how it behaves inside a cell or in highly rarefied conditions, we have to start taking individual molecular effects into account (because fluids aren't *really* continuous media). There's a lot of this sort of thing in science, and that's fine; modeling everything at the implementation level would be way too computationally burdensome. But these concessions to practicality don't (imo) license us to say that these different-leveled theories are equivalent or even equally well-evidenced.
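Incidentally, the continuum breakdown even has a standard quantitative test: the Knudsen number, the ratio of a gas's molecular mean free path to the length scale you care about. A rough sketch (using the textbook kinetic-theory formula for a hard-sphere gas; the molecular diameter for air is an approximate value, not a precise constant):

```python
import math

# Boltzmann constant (J/K) and an approximate hard-sphere diameter for air (m)
K_B = 1.380649e-23
D_AIR = 3.7e-10

def knudsen(length_scale_m, temp_k=300.0, pressure_pa=101325.0, diameter_m=D_AIR):
    """Knudsen number Kn = mean free path / characteristic length.

    Kinetic theory gives the mean free path of a hard-sphere gas as
    lambda = k_B * T / (sqrt(2) * pi * d^2 * p).
    Continuum models like Navier-Stokes are trustworthy only when Kn << 1.
    """
    mean_free_path = K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)
    return mean_free_path / length_scale_m

# A 1 cm channel at atmospheric pressure: Kn ~ 1e-5, the continuum fiction works.
# The same channel at ~1 Pa (rarefied): Kn approaches 1, individual molecules matter.
```

The point being that the "fudge" is well understood: we know roughly where the higher-level theory stops being usable, and we keep using it everywhere else anyway.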
Excellent article. I came at this from the same direction, learning about the layers of abstraction in computer technology long before reading anything about consciousness and the brain. (My day job is in IT. Over my career I've programmed everything from machine code to web apps.)
If we came across computing technology in nature, we'd likely describe the higher-level concepts as "emergent" from the lower ones. Some of us might even despair of ever understanding how Windows or Mac OS could even conceivably emerge from all those transistors. Even though we actually understand everything about these human-made systems, those levels of abstraction are crucial for working with them.
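To make that concrete, here's a toy sketch of the same idea in miniature: ordinary integer addition rebuilt from a single primitive gate, the way hardware ultimately builds everything from transistors. (This is an illustrative reconstruction, not how any particular chip is actually wired.)

```python
# Every gate below is built from NAND alone, the way digital logic is
# ultimately built from transistors.
def nand(a, b): return 1 - (a & b)
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, cin):
    """One bit of addition: returns (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, cin), or_(and_(a, b), and_(s1, cin))

def add8(x, y):
    """Add two 8-bit numbers one bit at a time (ripple carry, mod 256)."""
    carry, result = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result
```

Nobody reasons about `add8` at the NAND level once it works; "addition" is the abstraction you actually use, even though nothing but gates is "really" there.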
Once we understand this, it seems clear that much of the mystery of the mind amounts to not having those intermediate layers yet. Of course, the nature of biology puts this into a completely different category of difficulty compared to engineered systems. But understanding it doesn't seem so inconceivable anymore.
Hi Tommy, great article as always. I’m struggling, though, with the connection between levels of abstraction/causality and “free will”. Just as you say, I think it’s clear that when I click submit on this comment, there are many levels of causality that can be invoked to understand why the comment appears on your post: from the level of my conscious thought, through my biology, passing through the software, and all the way down to the photons that move through fiber optic cables. That said, I don’t see how that implies “free will”. Does the software you mention in your analogy have free will? Did it decide to add the numbers, or could it have decided to do something else?
We can take this argument to the absurd by saying that US culture is a conscious entity whose “free will” elected Donald Trump on Tuesday. While the US is composed of many individuals (analogous to neurons or transistors), and its cultural identity and system of interconnections between those people (social media, etc.) form a level of abstraction where we can make causal arguments, that does not imply there exists some entity making decisions. Abstract levels of causality are an important and useful tool for understanding our reality, but I see nothing “free” or “willed” about them.
I think what we mean by "free will" is that we want to be the causes of our actions. The difficulty some have with this is understanding that we ourselves are part of the physical universe. So we are both caused and causers. Explaining that our behaviors are the product of various physical factors is just a different level of description, irrelevant to the question of whether we caused something.
In terms of software or the US having free will, we generally think of free will as being a product of the type of decision makers we are. Free will coming from mechanisms doesn't mean every mechanism has free will.
I don't disagree with causality, I just don't understand the point of arguing for some concept of "free will" without arguing for dualism (we share the physicalist perspective). I don't understand where the "causer" is if you don't see it as something separate.
We know there is emergent behavior in the complex organizations of diverse interconnected entities that creates an imperfect abstraction of causality over the base system (Think about the emergence of chemistry in the system of atomic physics or the emergence of biology in the system of organic chemistry). Even though this emergent causality is meaningful and predictive, it does not discount the fundamental nature of the underlying system and the fact that the rules and interactions of the underlying system drive the meta system. These levels of abstract emergence continue up to what we could call "psychology": this very strange and "personal" experience of the universe, which I still don't come close to explaining or understanding, that I don't believe makes any choices that can't be explained by neuroscience and other related disciplines.
I am sincerely trying to understand your perspective, for as much as I believe in my own I also believe in my fallibility. Thank you as always for sharing. I've been stuck with this Hofstadter quote for years and "I" think this is the perfect moment for it:
“The “I” (is) a hallucination hallucinated by a hallucination” -Strange Loop #641
I Am A Strange Loop (2007), Douglas Hofstadter
I don't think free will is a strongly emergent property. I think when we talk about free will, what we're referring to is the difference between actions we chose to do (a wink) and unconscious actions we didn't (a blink). What makes an action voluntary vs not? When can we say someone is responsible for their action?
We want to be the causers of our actions. Seeing the genetic or environmental factors that shape us as a threat to that is a category error: those are the things that gave rise to us, but that doesn't mean it isn't meaningful to say we cause our actions. Saying atoms follow deterministic laws and we are made out of atoms doesn't threaten our freedom to choose, because that's just describing what's happening at a different level than the level of our choices.
What gives rise to free will is our complex decision-making apparatus and our understanding of the world. These processes, which we should identify as parts of us, cause actions. Different automatic processes, which we identify with less, cause unconscious behaviors we don't see ourselves as responsible for, like blinking or our heart beating.
I am saying that "consciousness" is the emergent property, free will is a hallucination that conscious entities experience. I agree that "we" cause things, I am causing my computer to send you messages via the internet. I also agree that it is meaningful to talk about choice and decision, I am choosing to sit down and converse with you because I have decided that it is a fruitful endeavour.
I think maybe we disagree on what "we" or "I" means. As I understand it (and maybe you can correct me), a blink can originate from several different biological or neurological processes in the brain. If I quickly bring my hand up to your face, for example, you will blink in reaction--a decision made in the lentiform nucleus (??). If you then decide to respond to me with a wink of one eye, that would be a decision originating in the cerebral cortex, at a higher level of abstraction in the brain, but still just a brain function.
As for responsibility, I think an analogy to machine learning is appropriate. If you run a model and get a poor result, you don't blame the model and chastise it for making bad decisions (typical view of free will), but you also don't throw your hands up in the air and say, oh well, that's just how it is (strawman determinist view). You can try to understand how the model failed and adjust the parameters or its design to improve the result.
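To push the analogy one step further, here's a toy sketch of that debugging loop: a one-parameter "model" trained by gradient descent. With a badly chosen learning rate it blows up; nobody blames the model, you adjust the setting and rerun. (Purely illustrative, not a claim about how brains or real training pipelines work.)

```python
def train(learning_rate, steps=50):
    """Fit a single weight to minimize the loss (w - 3)^2 by gradient descent."""
    w = 0.0
    for _ in range(steps):
        gradient = 2 * (w - 3)      # derivative of (w - 3)^2
        w -= learning_rate * gradient
    return w

# train(1.1) oscillates and diverges: a "bad result".
# train(0.1) converges to w = 3: the fix was understanding the failure
# and adjusting a parameter, not chastising the model.
```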
A determinist doesn't have to forgo justice or the assignment of responsibility to individuals. I believe an active understanding of our behavior and its consequences is a powerful self-correcting mechanism that allows us to grow not only as individuals but as a society. We should not stop enforcing behavior correction for asocial actions, just as a parent does for a child, but we should employ a powerful empathy in understanding that, at the end of the day, that person got to where they were because of their genes and environment. This empathy should hopefully guide our correctional methods; Sapolsky talks about this at great length in Determined.
I agree with Sapolsky that a proper understanding of people as a product of circumstance is a lesson in empathy. But I also think it's important to recognize that, as a product of physical circumstances, we are still real things that make real choices.
Machine learning algorithms lack a lot of the properties of humans. They aren't social creatures with a deeply embedded understanding of their actions, empathy for others, an ability to reason out the consequences of their actions, etc. If a person willingly chooses to act, knowingly, without regard for the harm they are causing to others, we would correctly identify them as lacking some virtues--in other words, they're a bad person.
I'm not sure what you mean by real choices. I feel I, Erick, an actor/agent in the universe, make real choices based on my evolved design and circumstantial input. As a conscious entity I possess self-reflection, and thanks to neural plasticity and the adaptability of behavior, I can improve myself and reduce the harm I produce in the world. None of this requires invoking some concept of free will.
I don't really think our consciousness is literally an ML model; it was meant to be a vague analogy. Maybe we are closer to an amalgamation of a multitude of different models of varying size and complexity (see 1000 Brains). Like in the blink example, there are two distinct neurological structures making inferences and exerting control on the body based on stimulus and internal processes (self-reflection/attention). This is all guesswork though.
If someone makes a wrong choice (asocial behavior), they could not have made any other choice. The choice they made was based solely on their history and physical structure up to that point. Asocial behavior is indicative of maladaptive environmental stimulus (examples: violent family life, trauma, poor social education from parents/family/community, poor education) or genetic influences (examples of conditions with genetic ties and commonly considered asocial: sociopathy, narcissism, psychosis, autism). The person making the "bad" choice should be confronted in some way, and through their capacity to learn and our capacity to teach, they can be set on a prosocial path.
The social system itself also needs to be adaptive and self-reflective in order to thrive. Often, arbitrarily defined asocial behavior can cause more harm to people through its enforcement than through its permission (treatment of the neuroqueer community, for example), and it is upon the discovery of that imbalance that the social system evolves. All of this takes place without any "choosers" that are distinct from the complete physical, environmental, and historical description of each individual and their interactions. I still don't understand: where _is_ the chooser?
I'm gonna call it for today though, thank you for your insight. Please don't stop what you're doing, the world needs some more accessible knowledge right now. Have a good night.
"Free will is a great example of this. As I've argued elsewhere, I think the key to understanding free will is realizing there are multiple levels at which we can think about the causes of our actions. Seeing the deterministic laws of nature as precluding ourselves from having a causal impact is a category error."
I don’t think that there are any determinists who think that our actions don’t have a causal impact. That would be a very bizarre thing to believe. Rather, determinists think that our actions are fully explained by genes interacting with environment according to deterministic processes. See here: https://eclecticinquiries.substack.com/p/the-pseudoscience-of-free-will
"determinists think that our actions are fully explained by genes interacting with environment according to deterministic processes"
That's the category error I'm referring to. The idea that things can be explained at one level, and that that explanation precludes explanations at other levels, is incorrect. It's like explaining the pixels on a screen in terms of the current flowing through transistors and concluding that software doesn't do anything.
I don't believe that anyone has ever made the type of error you're criticizing here. Of course you can explain behavior at the level of self and world and then also explain it at the level of biochemistry. These two ways of viewing the matter are useful for different purposes. I'm not aware of anyone who has ever said that biochemical explanations of behavior are the only valid ones. Because why not go further and say that you have to explain behavior at the level of quarks and gravity and so forth? Determinists claim only that ultimately all behaviors are caused by biochemistry, and all higher level explanations must be compatible with biochemistry (consilience!), not that biochemistry is always the most useful framework for explanation.
When libertarians like Helen Steward or Kevin Mitchell argue for strongly emergent causal properties at the agent level, I view that as this kind of error. They are concerned that agents' causal power understood as weakly emergent from their constituent parts is not "real" enough, and so there needs to be something more.
When determinists wave away that kind of strongly emergent agent causality by pointing to other causal determiners and think we therefore need to do away with (or rethink) moral responsibility, they are making a similar mistake--they correctly reject the weirdness of strong emergence, but aren't accepting that the causal power of agents as a natural product of their constituent parts is just as real and capable of underpinning moral responsibility.
One aspect that you touched on that I think can be expressed in computational terms is “happiness”. As you mentioned, it is a higher level function that may be impacted by lower level actions.
I think of it as more of a probability function, where you can inhibit the eigenstate or promote it. Nothing can guarantee a specific result, but you can make a given state more or less likely.
Thank you for your interesting article!
https://whetscience.substack.com/p/happiness-is-an-uncertainty
Are you familiar with the concept of "INUS" conditions?
I won't challenge you to classify what the word "abstract" means when referring to something like "purpose" in a post in which it also describes a computational version of abstraction. Even the hard-core formalist nerds use it quite informally and inconsistently.
It's a bit of a weed-filled rabbit hole, so don't bother thinking much about it if you don't want to spin your wheels for a while!
It seems to me that most determinists make exactly such a leap, with the idea of grand unification being precisely the level that best explains it all. Dealing with compatible constraints and emergent behaviors seems more a feature of the compatibilist camp. Not that I think the labels themselves are particularly useful.