Eliminativist Interventions: Gallagher on Neglect and Free Will
In the previous post I tried to show how the shape of the free will debate can be explained in terms of the differences and incompatibilities between source-sensitive (causal) and source-insensitive (intentional) cognition. Rather than employ the overdetermined term, ‘free will,’ I considered the problem in terms of ‘choice-talk,’ the cognitive systems and language we typically employ when reporting behaviour. I then tried to show how this simple step sideways allows us to see the free will debate as a paradigmatic, intellectual ‘crash space,’ a point where the combination of heuristic neglect and intellectual innovation generates systematic cognitive illusions.
As it happened, I read Shaun Gallagher’s excellent Enactivist Interventions: Rethinking the Mind while picking at this piece, and lo, discovered that he too tackles the problem of free will. I wrote what follows as an inescapable consequence.
Gallagher’s approach to the question of free will is diagnostic, much like my own. First, he wants to characterize how the problem is canonically posed, the ‘common understanding of the question,’ then he wants to show how this characterization gets the problem wrong. Discussing Libet’s now infamous ‘free won’t’ experiment, he points out that “In the experimental situation we are asked to pay attention to all of the processes that we normally do not attend to, and to move our body in a way that we do not usually move it…” which is to say, precisely those processes choice-talk systematically neglects. As he writes:
“These experiments, however, and more generally the broader discussions of motor control, have nothing to tell us about free will per se. If they contribute to a justification of perceptual or epiphenomenal theories of how we control our movement, these are not theories that address the question of free will. The question of free will is a different question.”
But why is the free will question a different question? Gallagher offers two different reasons why the question of free will simply has no place in the neurophysiology of decision-making:
“The attempt to frame the question of free will in terms of these subpersonal processes—either to dismiss it or to save it—is misguided for at least two reasons. First, free will cannot be squeezed into the elementary timescale of 150–350 milliseconds; free will is a longer-term phenomenon and, I will argue, it involves consciousness. Second, the notion of free will does not apply primarily to abstract motor processes or even to bodily movements that make up intentional actions—rather it applies to intentional actions themselves, described at the most appropriate pragmatic level of description.”
Essentially, Gallagher’s offering his own level-of-description argument. The first reason choice-talk has no place in neurophysiological considerations is that it only applies to the time-scale of personal action, and not to the time-scale of neurophysiological processes. This seems a safe enough assumption, given the affinity of choice-talk with personal action more generally. The problem is that we already know that free will has no application in neurophysiology—that it is expunged. The question, rather, is whether source-talk applies to personal time-scales. And the problem, as we saw above, is that it most certainly does. We can scale up the consequences of Libet’s experiment, talk of the brain deciding before conscious awareness of our decisions. In fact, we do this whenever we use biomedical facts to assess responsibility. Certainly, we don’t want to go back to the days of condemning the character of kids suffering from ADHD and the like.
Gallagher’s second reason is that choice-talk only applies to the domain of intentional actions. He introduces the activity of lizard hunting to give an example of the applicability and inapplicability of choice-talk (and hence, free will). What’s so interesting here, from a heuristic neglect standpoint, is the way his thinking continually skirts around the issue of source-insensitivity.
“I am not at all thinking about how to move my body—I’m thinking about catching the lizard. My decision to catch the lizard is the result of a consciousness that is embedded or situated in the particular context defined by the present circumstance of encountering the lizard, and the fact that I have a lizard collection. This is an embedded or situated reflection, neither introspective nor focused on my body. It is ‘a first-person reflective consciousness that is embedded in a pragmatically or socially contextualized situation.’”
Gallagher’s entirely right that we systematically neglect our physiology in the course of hunting lizards. Choice-talk belongs to a source-insensitive regime of problem solving—Gallagher himself recognizes as much. We neglect the proximal sources of behaviour and experience, focusing rather on the targets of those sources. Because this regime exhibits source-insensitivity, it relies on select correlations, cues, to the systems requiring solution. A face, for instance, is a kind of cuing organ, allowing others to draw dramatic conclusions on the basis of the most skeletal information (think happy faces, or any ‘emoticon’). The physiology of the expression-animating brain completely eludes us, and yet we can make striking predictions regarding what it will do next given things like ancestral biological integrity and similar training. A happy face on a robot, on the other hand, could mean anything. This ecological dependence is precisely why source-insensitive cognitive tools are so situational, requiring the right cues in the right circumstances to reliably solve select sets of problems—or problem ecologies.
So, Gallagher is right to insist that choice-talk, which is adapted to solve in source-insensitive or ‘shallow’ cognitive ecologies, has no application in source-sensitive or ‘deep’ cognitive ecologies. After all, we evolved these source-insensitive modes because, ancestrally speaking, biological complexity made source-sensitive cognition of living systems impossible. This is why our prescientific ancestors could go lizard hunting too.
Gallagher is also largely right to say that sourcing lizard-hunting a la neuroscience has nothing to do with our experience of hunting lizards—so long as everything functions as it should. Sun-stroke is but one of countless, potential ‘choice-talk’ breakers here.
But, once again, the question is whether source-talk applies to the nature of lizard hunting—which it certainly does. How could it not? Lizard hunting is something humans do—which is to say, biological through and through. Biology causes us to see lizards. Biology also causes us (in astronomically complicated, stochastic ways) to hunt them.
Gallagher’s whole argument hinges on an apple and orange strategy, the insistence that placing neurophysiological apples in the same bushel as voluntary oranges fundamentally mistakes the segregate nature of oranges. On my account both choice-talk and source-talk possess their respective problem-ecologies while belonging to the same high-dimensional nature. Choice-talk belongs to a system adapted to source-insensitive solutions, and as such, possesses a narrow scope of application. Source-talk, on the other hand, possesses a far, far broader scope of application, so much so that it allows us to report the nature of choice-talk. This is what Libet is doing. His findings crash choice-talk because choice-talk actually requires source-neglect to function happily.
On Gallagher’s account, free will and neurophysiology occupy distinct ‘levels of description,’ the one belonging to ‘intentional action,’ and the other to ‘natural science.’ As with the problem ecology of choice-talk, the former level is characterized by systematic source-neglect. But where this systematic neglect simply demarcates the problem-ecology of choice-talk from that of source-talk in my account, in Gallagher it demarcates an ontologically exceptional, low-dimensional ecology, that of ‘first-person reflective consciousness… embedded in a pragmatically or socially contextualized situation.’
This is where post-cognitivists, having embraced high-dimensional ecology, toss us back into the intractable lap of philosophy. Gallagher, of course, thinks that some exceptional twist of nature forces this upon cognitive science, one that the systematic neglect of sources in things like lizard hunting evidences. But once you acknowledge neglect, the way Gallagher does, you have no choice but to consider the consequences of neglect. Magicians, for instance, are masters at manipulating our intuitions via neglect. Suppress the right kind of information, and humans intuit exceptional entities and events. Is it simply a coincidence that we both suffer source-neglect and intuit exceptional entities and events when reflecting on our behaviour?
How, for instance, could reflection hope to distinguish the inability to source from the absence of sources? Gallagher agrees that metacognition is ecological—that there is no such thing as the ‘disembodied intellect.’ “Even in cases where we are able to step back,” Gallagher writes, “to detach ourselves from the demands of the immediate environment, and to engage in a second-order, conceptual deliberation, this stepping back does not make thinking any less of an embodied/intersubjective skill.” Stepping back does not mean stepping out, despite seeming that way. Human metacognition is radically heuristic, source-insensitive through and through. Deliberative reflection on the nature of experience cannot but systematically neglect sources. This is why we hallucinate ‘disembodied intellects’ in the first place! We simply cannot, given our radically blinkered metacognitive vantage, distinguish confounds pertaining to neglect from properties belonging to experience. (The intuition, in fact, cuts the other way, which is why the ball of discursive yarn needs to be unraveled in the first place, why post-cognitivism is post.)
Even though Gallagher relies on neglect to relativize choice-talk to a particular problem-solving domain (his ‘level of description’), he fails to consider the systematic role played by source-insensitivity in our attempts to cognize cognition. He fails, in other words, to consider his own theoretical practice in exhaustively ecological terms. He acknowledges that it has to be ecological, but fails to consider what this means. As a result, he trips into phenomenological and pragmatic versions of the same confounds he critiques in cognitivism. Disembodied intellects become disembodied embodied intellects.
To be embodied is to be high-dimensional, to possess nearly inexhaustible amounts of natural information. To be embodied, in other words, is to be susceptible to source-sensitive cognition. Except, Gallagher would have you believe, when it’s not, when the embodiment involves intentionality, in which case, we are told, source-talk no longer applies, stranding us with the low-dimensional resources of source-insensitive cognition (which is to say, perpetual disputation). ‘Disembodied intellects’ (one per theorist) are traded for irreducible phenomenologies (one per theorist) and/or autonomous normativities (one per theorist), a whole new set of explananda possessing natures that, we are assured, only intentional cognition can hope to solve.
Gallagher insists that intentional phenomena are embodied, ‘implicit,’ as he likes to say, in this or that high-dimensional ecological feature, only at a ‘level of description’ that only intentional cognition can solve. The obvious problem, of course, is that the descriptive pairing of low-dimensional intentional phenomena like ‘free will’ with high-dimensional ecologies amounts to no more than a rhetorical device short some high-dimensional account of intentionality. Terms such as ‘implicit,’ like ‘emergent’ or ‘autopoietic,’ raise far more questions than they answer. How is intentionality ‘implicit’ in x? How does intentionality ‘emerge’ from x? Short some genuine naturalization of intentionality, very little evidences the difference between Gallagher’s ‘embodiment’ and haunting—‘daimonic possession.’
The discursively fatal problem, however, is that intentional cognition, as source-insensitive, relies on strategic correlations to those natures—and thus has no application to the question of natures. These are ‘quick and dirty’ systems adapted to the economical solution of practical problems on the fly. Only neglect makes it seem otherwise. This is why post-cognitivism, like cognitivism more generally, cannot so much as formulate, let alone explain, its explananda in any consensus-commanding way. On Gallagher’s account, institutional philosophy remains firmly in charge of cognitive scientific theorization, and will remain so in perpetuity as a ‘philosophy of nature’ (and in this respect, he’s more forthright than Hutto and Myin, who rhetorically dress their post-cognitive turn as an ‘escape’ from philosophy).
Ecological eliminativism suffers neither of these problems. Choice-talk has its problem-ecology. Source-talk has its problem-ecology. The two evolved on separate tracks, but now, thanks to radical changes in human cognitive ecology, they find themselves cheek by jowl, causing the former to crash with greater and greater frequency. This crash occurs, not because people are confusing ‘ontologically distinct levels of description,’ one exceptional, the other mundane, but because the kind of source-neglect required by the former does not obtain the way it did ancestrally. We should expect, moreover, the frequency of these crashes to radically increase as cognitive science and its technologies continue to mature. Continued insistence on ontologically and/or functionally exceptional ‘levels of description’ all but blinds us to this looming crisis.
Having acknowledged the fractionate and heuristic nature of deliberative metacognition, having acknowledged source-neglect, Gallagher now needs to explain what makes his exceptionalism exceptional, why the intentional events and entities he describes cannot be explained away as artifacts of inevitable heuristic misapplication. He finds neglect useful, but only because he neglects to provide a full account of its metacognitive consequences. It possesses a second, far sharper edge.
The robot in my grocery store has googly eyes. It cues my friendliness heuristic real good. The 2020s are shaping up to be very interesting! 🙂