Three Pound Brain



Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism*

by rsbakker

A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I distinguish between ‘ontological’ and ‘functional’ exemptions from the natural, even though I regard them as differences of degree, because the distinction maps onto stark differences in the kinds of commitments you find among the various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?

.

Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find them consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.

.

The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? This, I think, is one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.
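For what it’s worth, the question can be made inspectable. Here’s a minimal sketch (mine, and nothing more than a toy) that treats each phenomenon above as a node and each ‘consistently related to’ mention as an edge, using a hand-abbreviated subset of the lists. If the family were merely analogous, you would expect a sparse, patchy graph; instead you get a dense tangle with the same hubs recurring:

```python
from collections import Counter

# A toy rendering, not an argument: nodes are the phenomena listed above,
# edges are 'consistently related to' mentions (a hand-abbreviated subset).
relations = {
    "Soul": {"Ghost", "Choice", "Subjectivity", "Value", "God", "Agency", "Mind"},
    "Game": {"Correctness", "Rules/Norms", "Value", "Agency", "Purpose", "Reason"},
    "Aboutness": {"Correctness", "Rules/Norms", "Inference", "Content", "Truth"},
    "Correctness": {"Game", "Aboutness", "Rules/Norms", "Inference", "Truth"},
    "Rules/Norms": {"Game", "Aboutness", "Correctness", "Inference", "Mind"},
    "Inference": {"Game", "Correctness", "Rules/Norms", "Content", "Truth"},
    "Value": {"Correctness", "Rules/Norms", "Agency", "Reason", "Purpose"},
    "Agency": {"Game", "Choice", "Subjectivity", "Value", "Reason", "Mind"},
}

# Undirected edge set and overall density: how much of the possible
# interconnection is actually realized among the mentioned nodes?
nodes = set(relations) | {t for ts in relations.values() for t in ts}
edges = {frozenset((s, t)) for s, ts in relations.items() for t in ts}
possible = len(nodes) * (len(nodes) - 1) // 2
print(f"{len(nodes)} nodes, {len(edges)} edges, density {len(edges) / possible:.2f}")

# The phenomena cited most often across entries -- the 'hubs' of the family.
hubs = Counter(t for ts in relations.values() for t in ts)
print(hubs.most_common(5))
```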

Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena somehow discharge some kind of function seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rules/Norms, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?

 

* Originally posted June 16th, 2014

Eliminativist Interventions: Gallagher on Neglect and Free Will

by rsbakker


In the previous post I tried to show how the shape of the free will debate can be explained in terms of the differences and incompatibilities between source-sensitive (causal) and source-insensitive (intentional) cognition. Rather than employ the overdetermined term, ‘free will,’ I considered the problem in terms of ‘choice-talk,’ the cognitive systems and language we typically employ when reporting behaviour. I then tried to show how this simple step sideways allows us to see the free will debate as a paradigmatic, intellectual ‘crash space,’ a point where the combination of heuristic neglect and intellectual innovation generates systematic cognitive illusions.

As it happened, I read Shaun Gallagher’s excellent Enactivist Interventions: Rethinking the Mind while picking at this piece, and lo, discovered that he too tackles the problem of free will. I wrote what follows as an inescapable consequence.

Gallagher’s approach to the question of free will is diagnostic, much like my own. First, he wants to characterize how the problem is canonically posed, the ‘common understanding of the question,’ then he wants to show how this characterization gets the problem wrong. Discussing Libet’s now infamous ‘free won’t’ experiment, he points out that “In the experimental situation we are asked to pay attention to all of the processes that we normally do not attend to, and to move our body in a way that we do not usually move it…” which is to say, precisely those processes choice-talk systematically neglects. As he writes:

“These experiments, however, and more generally the broader discussions of motor control, have nothing to tell us about free will per se. If they contribute to a justification of perceptual or epiphenomenal theories of how we control our movement, these are not theories that address the question of free will. The question of free will is a different question.”

But why is the free will question a different question? Gallagher offers two different reasons why the question of free will simply has no place in the neurophysiology of decision-making:

“The attempt to frame the question of free will in terms of these subpersonal processes—either to dismiss it or to save it—is misguided for at least two reasons. First, free will cannot be squeezed into the elementary timescale of 150–350 milliseconds; free will is a longer-term phenomenon and, I will argue, it involves consciousness. Second, the notion of free will does not apply primarily to abstract motor processes or even to bodily movements that make up intentional actions—rather it applies to intentional actions themselves, described at the most appropriate pragmatic level of description.”

Essentially, Gallagher’s offering his own level-of-description argument. The first reason choice-talk has no place in neurophysiological considerations is that it only applies to the time-scale of personal action, and not to the time-scale of neurophysiological processes. This seems a safe enough assumption, given the affinity of choice-talk with personal action more generally. The problem is that we already know that free will has no application in neurophysiology—that it is expunged. The question, rather, is whether source-talk applies to personal time-scales. And the problem, as we saw above, is that it most certainly does. We can scale up the consequences of Libet’s experiment, talk of the brain deciding before conscious awareness of our decisions. In fact, we do this whenever we use biomedical facts to assess responsibility. Certainly, we don’t want to go back to the days of condemning the character of kids suffering ADHD and the like.

Gallagher’s second reason is that choice-talk only applies to the domain of intentional actions. He introduces the activity of lizard hunting to give an example of the applicability and inapplicability of choice-talk (and hence, free will). What’s so interesting here, from a heuristic neglect standpoint, is the way his thinking continually skirts around the issue of source-insensitivity.

“I am not at all thinking about how to move my body—I’m thinking about catching the lizard. My decision to catch the lizard is the result of a consciousness that is embedded or situated in the particular context defined by the present circumstance of encountering the lizard, and the fact that I have a lizard collection. This is an embedded or situated reflection, neither introspective nor focused on my body. It is ‘a first-person reflective consciousness that is embedded in a pragmatically or socially contextualized situation.’”

Gallagher’s entirely right that we systematically neglect our physiology in the course of hunting lizards. Choice-talk belongs to a source-insensitive regime of problem solving—Gallagher himself recognizes as much. We neglect the proximal sources of behaviour and experience, focusing rather on the targets of those sources. Because this regime exhibits source-insensitivity, it relies on select correlations, cues, to the systems requiring solution. A face, for instance, is a kind of cuing organ, allowing others to draw dramatic conclusions on the basis of the most skeletal information (think happy faces, or any ‘emoticon’). The physiology of the expression-animating brain completely eludes us, and yet we can make striking predictions regarding what it will do next given things like ancestral biological integrity and similar training. A happy face on a robot, on the other hand, could mean anything. This ecological dependence is precisely why source-insensitive cognitive tools are so situational, requiring the right cues in the right circumstances to reliably solve select sets of problems—or problem ecologies.

So, Gallagher is right to insist that choice-talk, which is adapted to solve in source-insensitive or ‘shallow’ cognitive ecologies, has no application in source-sensitive or ‘deep’ cognitive ecologies. After all, we evolved these source-insensitive modes because, ancestrally speaking, biological complexity made source-sensitive cognition of living systems impossible. This is why our prescientific ancestors could go lizard hunting too.

Gallagher is also largely right to say that sourcing lizard-hunting à la neuroscience has nothing to do with our experience of hunting lizards—so long as everything functions as it should. Sun-stroke is but one of countless potential ‘choice-talk’ breakers here.

But, once again, the question is whether source-talk applies to the nature of lizard hunting—which it certainly does. How could it not? Lizard hunting is something humans do—which is to say, biological through and through. Biology causes us to see lizards. Biology also causes us (in astronomically complicated, stochastic ways) to hunt them.

Gallagher’s whole argument hinges on an apples-and-oranges strategy, the insistence that placing neurophysiological apples in the same bushel as voluntary oranges fundamentally mistakes the segregate nature of oranges. On my account, both choice-talk and source-talk possess their respective problem-ecologies while belonging to the same high-dimensional nature. Choice-talk belongs to a system adapted to source-insensitive solutions, and as such, possesses a narrow scope of application. Source-talk, on the other hand, possesses a far, far broader scope of application, so much so that it allows us to report the nature of choice-talk. This is what Libet is doing. His findings crash choice-talk because choice-talk actually requires source-neglect to function happily.

On Gallagher’s account, free will and neurophysiology occupy distinct ‘levels of description,’ the one belonging to ‘intentional action,’ and the other to ‘natural science.’ As with the problem ecology of choice-talk, the former level is characterized by systematic source-neglect. But where this systematic neglect simply demarcates the problem-ecology of choice-talk from that of source-talk in my account, in Gallagher it demarcates an ontologically exceptional, low-dimensional ecology, that of ‘first-person reflective consciousness… embedded in a pragmatically or socially contextualized situation.’

This is where post-cognitivists, having embraced high-dimensional ecology, toss us back into the intractable lap of philosophy. Gallagher, of course, thinks that some exceptional twist of nature forces this upon cognitive science, one that the systematic neglect of sources in things like lizard hunting evidences. But once you acknowledge neglect, the way Gallagher does, you have no choice but to consider the consequences of neglect. Magicians, for instance, are masters at manipulating our intuitions via neglect. Suppress the right kind of information, and humans intuit exceptional entities and events. Is it simply a coincidence that we both suffer source-neglect and we intuit exceptional entities and events when reflecting on our behaviour?

How, for instance, could reflection hope to distinguish the inability to source from the absence of sources? Gallagher agrees that metacognition is ecological—that there is no such thing as the ‘disembodied intellect.’ “Even in cases where we are able to step back,” Gallagher writes, “to detach ourselves from the demands of the immediate environment, and to engage in a second-order, conceptual deliberation, this stepping back does not make thinking any less of an embodied/intersubjective skill.” Stepping back does not mean stepping out, despite seeming that way. Human metacognition is radically heuristic, source-insensitive through and through. Deliberative reflection on the nature of experience cannot but systematically neglect sources. This is why we hallucinate ‘disembodied intellects’ in the first place! We simply cannot, given our radically blinkered metacognitive vantage, distinguish confounds pertaining to neglect from properties belonging to experience. (The intuition, in fact, cuts the other way, which is why the ball of discursive yarn needs to be unraveled in the first place, why post-cognitivism is post.)

Even though Gallagher relies on neglect to relativize choice-talk to a particular problem-solving domain (his ‘level of description’), he fails to consider the systematic role played by source-insensitivity in our attempts to cognize cognition. He fails, in other words, to consider his own theoretical practice in exhaustively ecological terms. He acknowledges that it has to be ecological, but fails to consider what this means. As a result, he trips into phenomenological and pragmatic versions of the same confounds he critiques in cognitivism. Disembodied intellects become disembodied embodied intellects.

To be embodied is to be high-dimensional, to possess nearly inexhaustible amounts of natural information. To be embodied, in other words, is to be susceptible to source-sensitive cognition. Except, Gallagher would have you believe, when it’s not, when the embodiment involves intentionality, in which case, we are told, source-talk no longer applies, stranding us with the low-dimensional resources of source-insensitive cognition (which is to say, perpetual disputation). ‘Disembodied intellects’ (one per theorist) are traded for irreducible phenomenologies (one per theorist) and/or autonomous normativities (one per theorist), a whole new set of explananda possessing natures that, we are assured, only intentional cognition can hope to solve.

Gallagher insists that intentional phenomena are embodied, ‘implicit,’ as he likes to say, in this or that high-dimensional ecological feature, yet only at a ‘level of description’ that intentional cognition alone can solve. The obvious problem, of course, is that the descriptive pairing of low-dimensional intentional phenomena like ‘free will’ with high-dimensional ecologies amounts to no more than a rhetorical device short some high-dimensional account of intentionality. Terms such as ‘implicit,’ like ‘emergent’ or ‘autopoietic,’ raise far more questions than they answer. How is intentionality ‘implicit’ in x? How does intentionality ‘emerge’ from x? Short some genuine naturalization of intentionality, very little evidences the difference between Gallagher’s ‘embodiment’ and haunting—‘daimonic possession.’

The discursively fatal problem, however, is that intentional cognition, as source-insensitive, relies on strategic correlations to those natures—and thus has no application to the question of natures. These are ‘quick and dirty’ systems adapted to the economical solution of practical problems on the fly. Only neglect makes it seem otherwise. This is why post-cognitivism, like cognitivism more generally, cannot so much as formulate, let alone explain, its explananda in any consensus-commanding way. On Gallagher’s account, institutional philosophy remains firmly in charge of cognitive scientific theorization, and will remain so in perpetuity as a ‘philosophy of nature’ (and in this respect, he’s more forthright than Hutto and Myin, who rhetorically dress their post-cognitive turn as an ‘escape’ from philosophy).

Ecological eliminativism suffers neither of these problems. Choice-talk has its problem-ecology. Source-talk has its problem-ecology. The two evolved on separate tracks, but now, thanks to radical changes in human cognitive ecology, they find themselves cheek by jowl, causing the former to crash with greater and greater frequency. This crash occurs, not because people are confusing ‘ontologically distinct levels of description,’ one exceptional, the other mundane, but because the kind of source-neglect required by the former does not obtain the way it did ancestrally. We should expect, moreover, the frequency of these crashes to radically increase as cognitive science and its technologies continue to mature. Continued insistence on ontologically and/or functionally exceptional ‘levels of description’ all but blinds us to this looming crisis.

Having acknowledged the fractionate and heuristic nature of deliberative metacognition, having acknowledged source-neglect, Gallagher now needs to explain what makes his exceptionalism exceptional, why the intentional events and entities he describes cannot be explained away as artifacts of inevitable heuristic misapplication. He finds neglect useful, but only because he neglects to provide a fulsome account of its metacognitive consequences. It possesses a second, far sharper edge.

 

 

If Free-Will were a Heuristic…

by rsbakker

Ecological eliminativism provides, I think, an elegant way to understand the free-will debate as a socio-cognitive ‘crash space,’ a circumstance where ecological variance causes the systematic breakdown of some heuristic cognitive system. What follows is a diagnostic account, and as such will seem to beg the question to pretty much everyone it diagnoses. The challenge it sets, however, is abductive. In matters this abstruse, it will be the power to explain and synthesize that will carry the theoretical morning if not the empirical day.

As hairy as it is, the free-will debate, at least in its academic incarnation, has a trinary structure: you have libertarians arguing the reality of how decision feels, you have compatibilists arguing endless ways of resolving otherwise manifest conceptual and intuitive incompatibilities, and you have determinists arguing the illusory nature of how decision feels.

All three legs of this triumvirate can be explained, I think, given an understanding of heuristics and the kinds of neglect that fall out of them. Why does the feeling of free will feel so convincing? Why are the conceptualities of causality and choice incompatible? Why do our attempts to overcome this incompatibility devolve into endless disputation?

In other words, why is there a free-will debate at all? As of 10:33 AM December 17th, 2019, Googling “free will debate” returned 575,000,000 hits. Looking at the landscape of human cognition, the problem of free will looms large, a place where our intuitions, despite functioning so well in countless other contexts, systematically frustrate any chance of consensus.

This is itself scientifically significant. So far as pathology is the royal road to function, we should expect that spectacular breakdowns such as these will hold deep lessons regarding the nature of human cognition.

As indeed they do.

So, let’s begin with a simple question: If free-will were a heuristic, a tool humans used to solve otherwise intractable problems, what would its breakdown look like?

But let’s take a step back for a second, and bite a very important, naturalistic bullet. Rather than consider ‘free-will’ as a heuristic, let’s consider something less overdetermined: ‘choice-talk.’ Choice-talk constitutes one of at least two ways for us humans to report selections between behaviours. The second, ‘source-talk,’ we generally use to report the cognition of high-dimensional (natural) precursors, whereas we generally use choice-talk to report cognition absent high-dimensional precursors.

As a cognitive mechanism, choice-talk is heuristic insofar as it turns a liability into an asset, allowing us to solve social problems low-dimensionally—which is to say, on the cheap. That liability is source insensitivity, our congenital neglect of our biological/ecological precursors. Human cognition is fundamentally structured by what might be called the ‘biocomplexity barrier,’ the brute fact that biology is too complicated to cognize itself high-dimensionally. The choice-talk toolset manages astronomically complicated biological systems—ourselves and other people—via an interactional system reliably correlated to the high-dimensional fact of those systems given certain socio-cognitive contexts. Choice-talk works given the cognitive ecological conditions required to maintain the felicitous correlation between the cues consumed and the systems linked to them. Undo that correlation and choice-talk, like any other heuristic mechanism, begins to break down.
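To make that breakdown concrete, here’s a toy simulation (my illustration, assuming nothing about actual neural implementation): an attributor that reads cues is accurate exactly as long as the cue-to-system correlation holds, and collapses to chance when that correlation is undone:

```python
import random

# A toy 'crash space' (my illustration, not a cognitive model): a cue-reader
# that attributes choices is reliable only so long as the cue-to-system
# correlation underwriting it holds.
def choice_talk(cue_deliberate: bool) -> str:
    return "she chose it" if cue_deliberate else "something caused it"

def run(correlation: float, trials: int = 10_000) -> float:
    hits = 0
    for _ in range(trials):
        deliberate = random.random() < 0.5  # the system's actual state
        # The cue tracks the state only as reliably as the ecology allows:
        cue = deliberate if random.random() < correlation else not deliberate
        truth = "she chose it" if deliberate else "something caused it"
        hits += choice_talk(cue) == truth
    return hits / trials

print(run(0.95))  # ancestral ecology: cue tracks the system -> reliable
print(run(0.50))  # correlation undone: the heuristic collapses to chance
```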

Ancestrally, we had no means of discriminating our own cognitive constitution. The division of cognitive labour between source-sensitive and source-insensitive cognition is one that humans constitutively neglect: we have to be trained to discriminate it. Absent such discrimination, the efficacy of our applications turns on the continuity of our cognitive ecologies. Given biocomplexity, the application of source-sensitive cognition to intractable systems—and biological systems in particular—is not something evolution could have foreseen. Why should we possess the capacity to intuitively reconcile the joint application of two cognitive systems that, as far as evolution was concerned, would never meet?

As a source-insensitivity workaround, a way to cognize behaviour absent the ability to source that behaviour, we should expect choice-talk cognition to misfire when applied to behaviour that can be sourced. We should expect that discovering the natural causes of decisions will scuttle the intuition that those decisions were freely chosen. The manifest incompatibility between high-dimensional source-talk and low-dimensional choice-talk arises because the latter has been biologically filtered to function in contexts precluding the former. Intrusions of source-talk applicability, when someone suffers a head injury, say, could usefully trump choice-talk applicability.

Choice-talk, in fact, possesses numerous useful limits, circumstances where we suspend its application to better solve social problems via other tools. As radically heuristic, choice-talk requires a vast amount of environmental stage-setting in order to function felicitously, an ecological ‘sweet spot’ that’s bound to be interrupted by any number of environmental contingencies. Some capacity to suspend its application was required. Intuitively, then, source-talk trumps choice-talk when applied to the same behaviour. Since the biocomplexity barrier assured that each mode would be cued the way it had always been cued since time immemorial, we could, ancestrally speaking, ignore our ignorance and generally trust our intuitions.

The problem is that source-talk is omni-applicable. With the rise of science, we realized that everything biological can be high-dimensionally sourced. We discovered that the once-useful incompatibility between source-talk and choice-talk can be scotched with a single question: If everything can be sourced, and if sources negate choices, then how could we be free? The incompatibility that was once useful now powerfully suggests choice-talk has no genuinely cognitive applicability anywhere. If choice-talk were heuristic, in other words, you might expect the argument that ‘choices’ are illusions.

The dialectical problem, however, is that human deliberative metacognition, reflection, also suffers source-insensitivity and so also consists of low-dimensional heuristics. Deliberative metacognition, the same as choice-talk, systematically neglects the machinery of decision making: reflection consistently reports choices absent sources as a result. Lacking sensitivity to the fact of insensitivity, reflection also reports the sufficiency of this reporting. No machinery is required. The absence of proximal, high-dimensional sources is taken for something real, ontologized, becoming a property belonging to choices. Given metacognitive neglect, in other words, reflection reports choice-talk as expressing some kind of separate, low dimensional ontological order.

Given this blinkered report, everything depends on how one interprets that ontology and its relation to the high-dimensional order. Creativity is required to somehow rationalize these confounds, which, qua confounds, offer nothing decisive to adjudicate between rationalizations. If choice-talk were a heuristic, one could see individuals arguing, not simply that choices are ‘real,’ but the kind of reality they possess. Some would argue that choice possesses a reality distinct from biological reality, that choices are somehow made outside causal closure. Others would argue that choices belong to biological reality, but in a special way that explains their peculiarity.

If choice-talk were heuristic, in other words, you would expect that it would crash given the application of source-cognition to behaviours it attempts to explain. You would expect this crash to generate the intuition that choice-talk is an illusion (determinism). You would expect attempts to rescue choice to take the form either of insisting on its independent reality (libertarianism) or on its secondary reality (compatibilism).

Two heuristic confounds are at work, the first a product of the naïve application of source-talk to human decision-making, cuing us to report the inapplicability of choice-talk tout court, the second the product of the naïve application of deliberative metacognition to human decision-making, cuing us to report the substantive and/or functional reality of ‘choice.’

If choice-talk were heuristic, in other words, you would expect something that closely resembles the contemporary free-will debate. You could even imagine philosophers cooking up cases to test, even spoof, the ways in which choice-talk and source-talk are cued. Since choices involve options, for instance, what happens when we apply source-talk to only one option, leaving the others to neglect?

If choice-talk were heuristic, in other words, you could imagine philosophers coming up with things like ‘Frankfurt-style counterexamples.’ Say I want to buy a pet, but I can’t make up my mind whether to buy a cat or a dog. So, I decide to decide when I go to the pet store on Friday. My wife is a neuroscientist who hates cats almost as much as she hates healthy communication. While I’m sleeping, she inserts a device at a strategic point in my brain that prevents me from choosing a cat, and nothing else. None the wiser, I go to the pet store on Friday and decide to get a dog, but entirely of my own accord.

Did I choose freely?

These examples evidence the mischief falling out of heuristic neglect in a stark way. My wife’s device only interferes with decision-making processes to prevent one undesirable output. If the output is desirable, it plays no role, suggesting that the hacked subject chose that output ‘freely,’ despite the inability to do otherwise. On the one hand, surgical intervention prevents the application of choice-talk to cat buying. Source-talk, after all, trumps choice-talk. But since surgical intervention only pertains to cat buying, dog buying seems, to some at least, to remain a valid subject of choice-talk. Source neglect remains unproblematic. The machinery of decision-making, in other words, can be ignored the way it’s always ignored in decision-making contexts. It remains irrelevant. Choice-talk machinery seems to remain applicable to this one fork, despite crashing when both forks are taken together.
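The structure of the spoof is simple enough to put in a toy script (my rendering of the scenario above, not anyone’s formal model):

```python
# A toy rendering of the Frankfurt-style scenario (an illustration, not a
# formal model): the implanted device monitors the decision process and
# overrides exactly one undesired output, and nothing else.
def decide(preference: str) -> str:
    """The subject's own decision process -- a black box to the subject."""
    return preference  # stand-in for astronomically complicated neurophysiology

def decide_with_device(preference: str) -> str:
    """The same process with the intervener wired in."""
    choice = decide(preference)
    return "dog" if choice == "cat" else choice  # fires only on 'cat'

# On the path actually taken, the device is causally idle -- dog-buying looks
# like an untouched 'choice':
assert decide("dog") == decide_with_device("dog") == "dog"
# Yet the counterfactual fork is closed -- the subject could not do otherwise:
assert decide_with_device("cat") == "dog"
```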

For some philosophers, this suggests that choice isn’t a matter of being able to do otherwise, but of arising out of the proper process—a question of appropriate ‘sourcing.’ They presume that choice-talk and the corresponding intuitions still apply. If the capacity to do otherwise isn’t definitive of choice, then provenance must be: choice is entirely compatible with precursors, they argue, so long as those precursors are the proper ones. Crash. Down another interpretative rabbit-hole they go. Short any inkling of the limits imposed by the heuristic tools at their disposal—blind to their own cognitive capacities—all they can do is pursue the intuitions falling out of the misapplications of those tools. They remain trapped, in effect, downstream of the heuristic confounds described above.

Here we can see the way philosophical parsing lets us map the boundaries of reliable choice-talk application. Frankfurt-style counterexamples, on this account, are best seen as cognitive versions of visual illusions, instances where we trip over the ecological limits of our cognitive capacities.

As with visual illusions, they reveal the fractionate, heuristic nature of the capacities employed. Unlike visual illusions, however, they are too low-dimensional to be readily identified as such. To make matters worse, the breakdown is socio-cognitive: perpetual disputation between individuals is the breakdown. This means that its status as a crash space is only visible by taking an ecological perspective. For interpretative partisans, however, the breakdown always belongs to the ‘other guy.’ Understanding the ecology of the breakdown becomes impossible.

The stark lesson here is that ‘free-will’ is a deliberative confound, what you get when you ponder the nature of choice-talk without accounting for heuristic neglect. Choice-talk itself is very real. With the interactional system it belongs to—intentional cognition more generally—it facilitates cooperative miracles on the communicative back of fewer than fifteen bits per second. Impressive. Gobsmacking, actually. We would be fools not to trust our socio-cognitive reflexes where they are applicable, which is to say, where neglecting sources solves more problems than it causes.

So, yah, sure, we make choices all the bloody time. At the same time, though, ‘What is the nature of choice?’ is a question that can only be answered ecologically, which is to say, via source-sensitive cognition. The nature of choice involves the systematic neglect of systems that must be manipulated nevertheless. Cues and correlations are compulsory. The nature of choice, in other words, obliterates our intellectual and phenomenological intuitions regarding choice. There’s just no such thing.

And this, I think it’s fair to say, is as disastrous as a natural fact can be. But should we be surprised? The thing to appreciate, I think, is the degree to which we should expect to find ourselves in precisely such a dilemma. The hard fact is that biocomplexity forced us to evolve source-insensitive ways to troubleshoot all organisms, ourselves included. The progressive nature of science, however, ensures that biocomplexity will eventually succumb to source-sensitive cognition. So, what are the chances that two drastically different, evolutionarily segregated cognitive modes would be easily harmonized?

Perhaps this is a growing pain every intelligent, interstellar species suffers, the point where their ancestral socio-cognitive toolset begins to fail them. Maybe science strips exceptionalism from every advanced civilization in roughly the same way: first our exceptional position, then our exceptional origin, and lastly, our exceptional being.

Perhaps choice dies with the same inevitability as suns, choking on knowledge instead of iron.

Flies, Frogs, and Fishhooks*

by rsbakker

[Revisited this the other day after reading Gallagher’s account of lizard catching in Enactivist Interventions (recommended to me by Dirk a ways back) and it struck me as worth reposting. But where Gallagher thinks the neglect characteristic of lizard catching implies only the inapplicability of neurobiology to the question of free-will, I think that neglect can be used to resolve a great number of mysteries regarding intentionality and cognition. I hope he finds this piece.]

 

So, me and my buddies occasionally went frog hunting when we were kids. We’d knot a string on a fishhook, swing the line over the pond’s edge, and bam! frogs would strike at them. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size. Then they were dumped in a bucket.

We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.

Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook-sensitive from the more fishhook-sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.
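The cheapness is the point. A cartoon of the Anuran strategy (my gloss, not actual frog neuroscience) makes the insensitivity explicit—nothing in the heuristic so much as represents the fly/fishhook difference:

```python
# A cartoon of cheap Anuran 'bug detection' (my gloss, not frog neuroscience):
# strike at anything small, dark, and moving. The cue is reliable only so long
# as the pond contains no fishhooks.
def strike(stimulus: dict) -> bool:
    """Source-insensitive heuristic: consumes cues, neglects whatever casts them."""
    return stimulus["small"] and stimulus["dark"] and stimulus["moving"]

fly = {"small": True, "dark": True, "moving": True, "kind": "fly"}
hook = {"small": True, "dark": True, "moving": True, "kind": "baited fishhook"}

# The heuristic cannot distinguish what it was never selected to distinguish:
assert strike(fly) and strike(hook)
```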

Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.

Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.

The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since belaboured teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).

Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.

Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application, to be sure, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behaviour and the communication of animal behaviour, on the other hand, possessed a tremendous impact—they were the flies.

Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.

The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.

Which is why kids can use them.

Small wonder, then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of mistaking fishhooks for flies! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviours via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.

‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source-neglecting yet source-compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.

The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of ‘should’ facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.

*Originally posted January 23rd, 2018

On the Death of Meaning

by rsbakker

My copy of New Directions In Philosophy and Literature arrived yesterday…


The anthology features an introduction by Claire Colebrook, as well as papers by Graham Harman, Graham Priest, Charlie Blake, and more. A prepub version of my contribution, “On the Death of Meaning,” can be found here.

Exploding the Manifest and Scientific Images of Man*

by rsbakker

 

This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History

 

What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.

 

Cognitive Images

Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).

The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which, given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so, the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.

For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).

The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).

Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.

 

Cognitive Ecologies

The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.

A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only select fractions of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.

Given the complexity of biology, the structure of our cognitive ecology lies outside the capacity of our cognitive ecology. Human cognitive ecology cannot but neglect the high dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.

Human cognition possesses two basic modes, one that is source-insensitive, or heuristic, relying on cues to predict behaviour, and one that is source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.

This provides a parsimonious alternative for understanding the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves as far as they needed to resolve practical issues. They understood argument as far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.

Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.

Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.

The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.

Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology insured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.

 

Ecology versus Image

As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to salvage our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’

Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here, we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream equivocation of his manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.

Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars understands the difficulty here persists “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…

Which is to say, lost to crash space.

Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general, and the problem ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates them, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.

But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’

He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining, our cognitive complexity. Thus, his confounding reliance on what might be called manifest postulation, deep environmental explanation via shallow ecological (intentional) posits.

And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow. Information consists in those differences, selected (neurally or evolutionarily) via the production of prior behaviours, that are prone to make select systematic differences, which is to say, feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.

The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to the deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.

The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their ability cue their application) all but doomed us to perpetual, discursive disarray.

Images. Games. Conceptual frameworks. None of these shallow ecological posits are required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them, if we hope to finally turn and face our future, gaze upon the universe with the universe’s own eyes.

 

*Originally posted, April 2nd, 2018.

Enlightenment How? Pinker’s Tutelary Natures*

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker characterizes the process of Enlightenment, the movement away from our tutelary natures, in terms of a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more and smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual catastrophe in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987), calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on the revolutionizing of our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed, nevertheless humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another, it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, evinced by solitary confinement and more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deirdre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue). So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011) that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.

.

References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deirdre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S., Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behavior’, in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield, J., Martin, N., Lewis, G. and McGuffin, P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 8, 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

*Originally posted 20/03/2018

Division By Zero

by rsbakker

 

If we want to know what truth consists in, perhaps we should ask what it is we are building up and tearing down when we make cases for and against the truth.

Like so many others, I found myself riveted by the Kavanaugh confirmation hearings. (My money is on Ford, not simply because I found her testimony compelling, but because her story implicates someone doomed to corroborate Kavanaugh—not the kind of detail you would expect to find in a partisan hit job). Aside from the unsettling realization that mainstream Senate Republicans—as well as Kavanaugh himself!—had adopted Trump’s ‘post-truth’ playbook, what struck me was the precarious way Rachel Mitchell’s questions were poised between ‘victim blaming’ and simple ‘fact finding.’ Had Brett Kavanaugh sexually assaulted her? Right from the beginning, Mitchell began asking questions regarding the provenance and circumstances of her accusation, the implication being that she had been coached by partisan handlers. (As it turns out, she wasn’t). But she was also careful to map the limits of Ford’s memory of the event, the insinuation being that her cognitive capacities could not be trusted. (The problem with this approach, as it turns out, was that Ford, as a psychologist, knows quite a bit about the cognitive capacities at issue, and so was able to identify those limits as precisely the kind of limits one should expect in cases such as hers).

Victim blaming is so instinctive, so common, that we often have difficulty recognizing it as such. Accusing our accusers is a go-to human strategy for managing interpersonal conflict. People are credulous. In the absence of information to the contrary, ‘warning flags,’ we simply take assertions for granted, we trust that everything neglected, everything from cognitive capacity to motivation to circumstances, is irrelevant to the reliability of the claim. Human cognitive reliability, it turns out, depends on a tremendous number of physical factors, which is why impugning the reliability of claims is so dreadfully easy. At one point, Mitchell even insinuates (citing Geiselman and Fisher) that Ford compromised her story by communicating it absent specially trained trauma interviewers. Mitchell goes so far, in other words, as to suggest the very format of the ongoing Senate hearing had impacted the reliability of her account. (This is where I thought her downright insidious (especially given her use of humour at this turn), but as it turns out, she was probably being too subtle given that many see this as Mitchell criticizing the Senate proceedings).

When the Republicans finally ditched Mitchell’s plane somewhere in the Atlantic, the attacks ranged the whole of constitutive and circumstantial relevance space (apropos the semantic apocalypse, we are fast approaching the point where crude topographies of this space can be mapped and algorithms developed to exploit it), a Quixotic charge of old white men that had to raise the hackles of even the most conservative women. Cognition requires we neglect countless constitutive and circumstantial factors. Neglect insures that more information is always required to flag potential constitutive and circumstantial confounds. Thus, the spectacle of old men competing for Fox News clips, each of them insisting on the relevance of something pertaining to the production of her claims. We’re not disputing something happened, but how do you know it was Brett? 36 years! Multiple denials!

From the outset, the Republicans had made a calculation: to cue moral outrage at the Democrats, and thus ingroup solidarity among conservatives, regardless of gender. From the outset they understood the peril of cuing outrage against male politicians and ingroup solidarity among women. Having Rachel Mitchell question her prevented cuing competing identifications, not to mention the politically disastrous scripts falling out of them. The Democratic strategy, of course, was to cue both channels, lending them, I think, an intrinsic advantage. (The Republican charge that the Democrats are engineering these accusations for the purposes of political advantage is false, but there’s little doubt that they are gaming them, and as the semantic apocalypse deepens, I think we should expect the production of reputation destroying realities to become big business). If ‘trust’ is understood as the degree to which we do not, blindly or otherwise, interrogate constitutive and circumstantial factors relevant to the claims of others, the enormous importance of group affiliation becomes obvious. Think of the amount of energy expended these past days, all bent on preventing or protecting the default: that Christine Blasey Ford speaks true. Group identity cues trust, which is to say, spares us the expense of such interrogations.

Think of truth as merely the degree to which we can take constitutive and circumstantial factors for granted relative to behavioural feedback. Truth is where neglect, brute insensitivity to otherwise relevant constitutive and circumstantial factors, does not matter. Christine Blasey Ford ‘speaks true,’ therefore, when she speaks as one who endured the violence described, nothing more or less. (The disquotational parallel is no coincidence here, I think: what disquotation captures is the primary function of truth talk, to troubleshoot issues involving constitutive and circumstantial factors). If we can take constitutive and circumstantial factors for granted, then third-party investigations of her claims should raise no flags. Our trust should be vindicated.

But there’s a catch. Even when we investigate constitutive and circumstantial factors, we continue to neglect a great many of them as such, relying instead on a variety of heuristic work-arounds. The inaccessibility of the constitutive and circumstantial means we have to troubleshoot constitutive and circumstantial problems absent any reference to their high-dimensional reality. The question of truth, far from a question regarding what can be taken for granted relative to behavioural feedback, becomes a question of whatever happens to be available for deliberative troubleshooting: typically, the claim-maker, the claim, and the world. As a result, we have no idea just what we’re doing when embroiled in spectacles such as Kavanaugh’s Senate confirmation hearing. Everyone is left guessing, groping. The nature of the breakdowns eludes us entirely.

If a claim regards something existent, an undiscovered species of possum, say, the easiest way to verify the truth of the claim is to simply go out and ‘see for yourself’: so far as our capacities and circumstances remain irrelevant and we see the possum, the claim is true. The absence of empirical discrepancies between cognitive systems allows those cognitive systems to continue neglecting their constitution and circumstances, to rely upon other brains the way we rely upon our own: blindly. Call this ‘default synchronization’: the constitutive and circumstantial coincidence required for cooperative behaviour regarding things like new species of possum. Seeing, as the saying goes, is believing.

This, as it turns out, is one of the few ways truth can overcome trust.

If, however, a claim regards something only indirectly accessible, an ‘alleged event’ or a ‘scientific theory,’ say, we have to rely on its consistency with whatever is relevant and accessible, ‘evidence.’ And when that evidence consists of reports, more claims, then the threat is always that our original problem will simply metastasize, and the interrogation of constitutive and circumstantial factors will be multiplied across more and more claims. Both sides frame the claims of the other side as artifacts, manipulations, while they view their own claims as windows, glimpses of truth (or failing that, self-defensive artifacts in service of that truth). The claims of both are equally artifactual, of course, both equally the product of biology and environment. The difference consists only in that behaviour can remain entirely insensitive to the artifactuality of the true claim without running aground. Just as with vision. The window works so well as a figure for truth because visual cognition likewise neglects its constitutive dimension. Visual cognition provides experience with a tremendous amount of information, going so far as to index its reliability (with blur, darkness, glare, and so on), while providing nary a whiff of the machinations responsible. (You could say the so-called ‘view from nowhere’ is literal to the extent ‘nowhere’ references neglect of the constitutive and circumstantial conditions of our view.)

To call attention to constitutive and circumstantial problems is to ‘muddy the waters,’ to scotch the illusion of transparency, and so conserve in-group solidarity. We evolved to manipulate the orientations of isomorphic systems, to husband and herd the constitutive and circumstantial coincidence of those we trust according to how far we trust them. (Representationalism merely adapts and schematizes this basic capacity, thus saddling the whole of cognition with, among other things, the problem of ‘transparency,’ which is to say, an ontologization of constitutive and circumstantial neglect). We reason with one another. Neglect assures that we do so blindly, without the least second-order inkling of what is actually going on. If ‘reason’ is a lesser tool, a neurolinguistic means of policing discrepancies—effecting ‘noise reduction’—within ingroups, as it pretty clearly seems to be in instances such as these, then the ‘rationality’ of something like the Kavanaugh confirmation hearings requires some minimal coincidence, some tendency to identify with as opposed to against, and so to either neglect or overlook the same things. A spontaneous ‘kumbaya’ moment, or something… something information technology is rendering all but impossible.

Either that or some kind of ‘transparency event,’ a Burning of the Reichstag, only in the context of Kavanaugh’s or Ford’s life, something powerful enough to cue trans-group identification.

Or what amounts to the same thing: a common truth.

We’re Fucked. So (Now) What?

by rsbakker

“Conscious self-creation.” This is the nostrum Roy Scranton offers at the end of his now notorious piece, “We’re Doomed. Now What?” Conscious self-creation is the ‘now what,’ the imperative that we must carry across the threshold of apocalypse. After spending several weeks in the company of children, I very nearly wept reading this in his latest collection of essays. I laughed instead.

I understand the logic well enough. Social coordination turns on trust, which turns on shared values, which turns on shared narratives. As Scranton writes, “Humans have survived and thrived in some of the most inhospitable environments on Earth, from the deserts of Arabia to the ice fields of the Arctic, because of this ability to organize collective life around symbolic constellations of meaning.” If our imminent self-destruction is the consequence of our traditional narratives, then we, quite obviously, need to come up with better narratives. “We need to work together to transform a global order of meaning focused on accumulation into a new order of meaning that knows the value of limits, transience, and restraint.”

If I laughed, it was because Scranton’s thesis is nowhere near so radical as his title might imply. It consists, on the one hand, in the truism that human survival depends on engineering an environmentally responsible culture, and on the other, the pessimistic claim that this engineering can only happen after our present (obviously irresponsible) culture has self-destructed. The ‘now what,’ in other words, amounts to the same-old same-old, only après le deluge. Just another goddamn narrative.

Scranton would, of course, take issue with my ‘just another goddamn’ modifier. As far as he’s concerned, the narrative he outlines is not just any narrative, it’s THE narrative. And, as the owner of a sophisticated philosophical position, he could endlessly argue its moral and ecological superiority… the same as any other theoretician. And therein lies the fundamental problem. Traditional philosophy is littered with bids to theorize and repair meaning. The very plasticity allowing for its rehabilitation also attests to its instability, which is to say, our prodigious ability to cook narratives up and our congenital inability to make them stick.

Thus, my sorrow, and my fear for children. Scranton, like nearly every soul writing on these topics, presumes our problem lies in the content of our narratives rather than their nature.

Why, for instance, presume meaning will survive the apocalypse? Even though he rhetorically stresses the continuity of nature and meaning, Scranton nevertheless assumes the independence of the latter. But why? If meaning is fundamentally natural, then what in its nature renders it immune to ecological degradation and collapse?

Think about the instability referenced above, the difficulty we have making our narratives collectively compelling. This wasn’t always the case. For the vast bulk of human history, our narratives were simply given. Our preliterate ancestors evolved the plasticity required to adapt their coordinating stories (over the course of generations) to the demands of countless different environments—nothing more or less. The possibility of alternative narratives, let alone ‘conscious self-creation,’ simply did not exist given the metacognitive resources at their disposal. They could change their narrative, to be sure, but incrementally, unconsciously, not so much convinced it was the only game in town as unable to report otherwise.

Despite their plasticity, our narratives provided the occluded (and therefore immovable) frame of reference for all our sociocognitive determinations. We quite simply did not evolve to systematically question the meaning of our lives. The capacity to do so seems to have required literacy, which is to say, a radical transformation of our sociocognitive environment. Writing allowed our ancestors to transcend the limits of memory, to aggregate insights, to record alternatives, to regiment and to interrogate claims. Combined with narrative plasticity, literacy begat a semantic explosion, a proliferation of communicative alternatives that continues to accelerate to the present day.

This is biologically unprecedented. Literacy, it seems safe to say, irrevocably domesticated our ancestral cognitive habitat, allowing us to farm what we once gathered. The plasticity of meaning, our basic ability to adapt our narratives, is the evolutionary product of a particular cognitive ecology, one absent writing. Literacy, you could say, constitutes a form of pollution, something that disrupts preexisting adaptive equilibria. Aside from the cognitive bounty it provides, it has the long-term effect of destabilizing narratives—all narratives.

The reason we find such a characterization jarring is that we subscribe to a narrative (Scranton’s eminently Western narrative) that values literacy as a means of generating new meaning. What fool would argue for illiteracy (and in writing no less!)? No one I know. But the fact remains that with literacy, certain ancestral functions of narrative were doomed to crash. Where once there was blind trust in our meanings, we find ourselves afflicted with questions, forced to troubleshoot what our ancestors took for granted. (This is the contradiction dwelling in the heart of all post-modernisms: the valuation of the very process devaluing meaning, crying ‘More is better!’ as those unable or unwilling to tread water drown).

The biological origins of narrative lie in shallow information cognitive ecologies, circumstances characterized by profound ignorance. What we cannot grasp we poke with sticks. Hitherto we’ve been able to exapt these capacities to great effect, raising a civilization that would make our story-telling ancestors weep, and for wonder far more than horror. But as with all heuristic systems, something must be taken for granted. Only so much can be changed before an ecology collapses altogether. And now we stand on the cusp of a communicative revolution even more profound than literacy, a proliferation, not simply of alternate narratives, but of alternate narrators.

If you sweep the workbench clean, cease looking at meaning as something somehow ‘anomalous’ or ‘transcendent,’ narrative becomes a matter of super-complicated systems, things that can be cut short by a heart attack or stroke. If you refuse to relinquish the meat (which is to say nature), then narratives, like any other biological system, require that particular background conditions obtain. Scranton’s error, in effect, is a more egregious version of the error Harari makes in Homo Deus, the default presumption that meaning somehow lies outside the circuit of ecology. Harari, recall, realizes that humanism, the ‘man-the-meaning-maker’ narrative of Western civilization, is doomed, but his low-dimensional characterization of the ‘intersubjective web of meaning’ as an ‘intermediate level of reality’ convinces him that some other collective narrative must evolve to take its place. He fails to see how the technologies he describes are actively replacing the ancestral social coordinating functions of narrative.

Scranton, perhaps hobbled by the faux-naturalism of Speculative Realism, cannot even concede the wholesale collapse of humanism, only those elements antithetical to environmental sustainability. His philosophical commitments effectively blind him to the intimate connection between the environmental crises he considers throughout the collection, and the semantic collapses he so eloquently describes in the final essay, “What is Thinking Good For?” Log onto the web, he writes, “and you’ll soon find yourself either nauseated by the vertigo that comes from drifting awash in endless waves of repetitive, clickbaity, amnesiac drek, or so benumbed and bedazzled by the sheer volume of ersatz cognition on display that you wind up giving in to the flow and welcoming your own stupefaction as a kind of relief.” Throughout this essay he hovers about, without quite touching, the idea of noise, how the technologically mediated ease of meaning production and consumption has somehow compromised our ability to reliably signal. Our capacity to arbitrate and select signals is an ecological artifact, historically dependent on the ancestral bottleneck of physical presence. Once a precious resource, like-minded commiseration has become cheap as dirt.

But since he frames the problem in the traditional register of ‘thought,’ an entity he acknowledges he cannot definitively define, he has no way of explaining what precisely is going wrong, and so finds himself succumbing to analogue nostalgia, Kantian shades. What is thinking good for? The interruption of cognitive reflex, which is to say, freedom from ‘tutelary natures.’ Thinking, genuine thinking, is a koan.

The problem, of course, is that we now know that it’s tutelary natures all the way down: deliberative interruption is itself a reflex, sometimes instinctive, sometimes learned, but dependent on heuristic cues all the same. ‘Freedom’ is a shallow information ecological artifact, a tool requiring certain kinds of environmental ignorance (an ancestral neglect structure) to reliably discharge its communicative functions. The ‘free will debate’ simply illustrates the myriad ways in which the introduction of mechanical information, the very information human sociocognition has evolved to do without, inevitably crashes the problem-solving power of sociocognition.

The point being that nothing fundamental—and certainly nothing ontological—separates the crash of thought and freedom from the crash of any other environmental ecosystem. Quite without realizing, Scranton is describing the same process in both essays, the global dissolution of ancestral ecologies, cognitive and otherwise. What he and, frankly, the rest of the planet need to realize is that between the two, the prospect of semantic apocalypse is actually both more imminent and more dire. The heuristic scripts we use to cognize biological intelligences are about to face an onslaught of evolutionarily unprecedented intelligences, ever-improving systems designed to cue human sociocognitive reflexes out of school. How long before we’re overrun by billions of ‘junk intelligences’? One decade? Two?

What happens when genuine social interaction becomes optional?

The age of AI is upon us. And even though it is undoubtedly the case that social cognition is heuristic—ecological—our blindness to our nature convinces us that we possess no such nature and so remain, in some respect (because strokes still happen), immune. Our ‘symbolic spaces’ will be deluged with invasive species, each optimized to condition us, to cue social reflexes—to “nudge” or to “improve user experience.” We’ll scoff at them, declare them stupid, even as we dutifully run through scripts they have cued.

So long as the residue of traditional humanistic philosophy persists, so long as we presume meaning exceptional, this prospect cannot even be conceived, let alone explored. The “evacuation of interiority,” as Scranton calls it, is always the other guy’s—metacognitive neglect assures experience cannot but appear fathomless, immovable. Therein lies the heartbreaking genius of our cognitive predicament: given the intractability of our biomechanical nature, our sociocognitive and metacognitive systems behave as though no such nature exists. We just… are—the deliverance of something inexplicable.

An apparent interruption in thought, in nature, something necessarily observing the ruin, rather than (as Nietzsche understood) embodying it. And so enthusiastically tearing down the last ecological staple sustaining meaning: that humans cue one another ignorant of those cues as such.

All deep environmental knowledge constitutes an unprecedented attenuation of our ancestral cognitive ecologies. Up to this point, the utilities extracted have far exceeded the utilities lost. Pinker is right in this one regard: modernity has been a fantastic deal. We could plunder the ecologies about us, while largely ignoring the ecologies between us. But now that science and technology are becoming cognitive, we ourselves are becoming the resources ripe for plunder, the ecology doomed to fragment and implode.

We’re fucked. So now what? We fight, clutch for flotsam, like any other doomed beetle caught upon the flood, not for any ‘reason,’ but because this is what beetles do, drowning.

Fight.

The Crash of Truth: A Critical Review of Post-Truth by Lee C. Mcintyre

by rsbakker

Lee Mcintyre is a philosopher of science at Boston University, and author of Dark Ages: The Case for a Science of Human Behaviour. I read Post-truth on the basis of Fareed Zakaria’s enthusiastic endorsement on CNN’s GPS, so I fully expected to like it more than I ultimately did. It does an admirable job scouting the cognitive ecology of post-truth, but because it fails to understand that ecology in ecological terms, the dynamic itself remains obscured. The best Mcintyre can do is assemble and interrogate the usual suspects. As a result, his case ultimately devolves into what amounts to yet another ingroup appeal.

As perhaps we should expect, given the actual nature of the problem.

Mcintyre begins with a transcript of an interview where CNN’s Alisyn Camerota presses Newt Gingrich at the 2016 Republican convention on Trump’s assertions regarding crime:

GINGRICH: No, but what I said is equally true. People feel more threatened.

CAMEROTA: Feel it, yes. They feel it, but the facts don’t support it.

GINGRICH: As a political candidate, I’ll go with how people feel and let you go with the theoreticians.

There’s a terror you feel in days like these. I felt that terror most recently, I think, watching Sarah Huckabee Sanders insisting that the outgoing National Security Advisor, General H. R. McMaster, had declared that no one had been tougher on Russia than Trump after a journalist had quoted him saying almost exactly otherwise. I had been walking through the living room and the exchange stopped me in my tracks. Never in my life had I witnessed a White House official so fecklessly, so obviously, contradict what everyone in the room had just heard. It reminded me of the psychotic episodes I witnessed as a young man working tobacco with a friend who suffered schizophrenia—only this was a social psychosis. Nothing was wrong with Sarah Huckabee Sanders. Rather than lying in malfunctioning neural machinery, this discrepancy lay in malfunctioning social machinery. She could say what she said because she knew that statements appearing incoherent to those knowing what H. R. McMaster had actually said would not appear as such to those ignorant of or indifferent to what he had actually said. She knew, in other words, that even though the journalists in the room saw this:

[photo: Disney’s faux Manhattan skyline viewed from the side, the prop facades exposed]

given the information available to their perspective, the audience that really mattered would see this:

[photo: the same skyline viewed head-on, the illusion of a real city intact]

which is to say, something rendered coherent for neglecting that information.

The task Mcintyre sets himself in this brief treatise is to explain how such a thing could have come to pass, to explain, not how a sitting President could lie, but how he could lie without consequences. When Sarah Huckabee Sanders asserts that H. R. McMaster’s claim that the Administration is not doing enough is actually the claim that no Administration has done more, she’s relying on innumerable background facts that simply did not obtain a mere generation ago. The social machinery of truth-telling has fundamentally changed. If we look at the sideways picture of Disney’s faux New York skyline as the ‘deep information view,’ and the head-on picture as the ‘shallow information view,’ the question becomes one of how she could trust that her audience, despite the availability of deep information, would nevertheless affirm the illusion of coherence provided by the shallow information view. As Mcintyre writes, “what is striking about the idea of post-truth is not just that truth is being challenged, but that it is being challenged as a mechanism for asserting political dominance.” Sanders, you could say, is availing herself of new mechanisms, ones antagonistic to the traditional mechanisms of communicating the semantic authority of deep information. Somehow, someway, the communication of deep information has ceased to command the kinds of general assent it once did. It’s almost preposterous on the face of it: in attributing Trump’s claims to McMaster, Sanders is gambling that somehow, either by dint of corruption, delusion, or neglect, her false claim will discharge functions ideally belonging to truthful claims, such as informing subsequent behaviour. For whatever reason, the circumstances once preventing such mass dissociations of deep and shallow information ecologies have yielded to circumstances that no longer do.

Mcintyre provides a chapter by chapter account of those new circumstances. For reasons that will become apparent, I’ll skip his initial chapter, which he devotes to defining ‘post-truth,’ and return to it in the end.

Science Denial

He provides clear, pithy outlines of the history of the tobacco industry’s seminal decision to argue the science, to wage what amounts to an organized disinformation campaign. He describes the ways resource companies adapted these tactics to scramble the message and undermine the authority of climate science. And by ‘disinformation,’ he means this literally, given “that even while ExxonMobil was spending money to obfuscate the facts about climate change, they were making plans to explore new drilling opportunities in the Arctic once the polar ice cap had melted.” This part of the story is pretty well-known, I think, but Mcintyre tells the tale in a way that pricks the numbness of familiarity, reminding us of the boggling scale of what these campaigns achieved: generating a political/cultural alliance that is not simply bent on, but actively hastening, untold misery and global economic loss in the name of short-term parochial economic gain.

Cognitive Bias

He gives a curiously (given his background) two-dimensional sketch of the role cognitive bias plays in the problem, focusing primarily on cognitive dissonance, our need to minimize cognitive discrepancies, and the backfire effect, how counter-arguments actually strengthen, as opposed to mitigate, commitment to positions. (I would recommend Steven Sloman and Philip Fernbach’s The Knowledge Illusion for a more thorough consideration of the dynamics involved). He discusses research showing the profound ways that social identification, even when cued by things as flimsy as coloured wristbands, transforms our moral determinations. But he underestimates, I think, the profound nature of what Dan Kahan and his colleagues call the “Tragedy of the Risk-Perception Commons,” the individual rationality of espousing irrational collective claims. There’s so much research directly pertinent to his thesis that he passes over in silence, especially that belonging to ecological rationality.

Traditional versus Social Media

If Mcintyre’s consideration of the cognitive science left me dissatisfied, I thoroughly enjoyed his consideration of media’s contribution to the problem of post-truth. He reminds us that the existence of entities, like Fox News, disguising advocacy as disinterested reporting, is the historical norm, not the exception. Disinterested journalistic reporting was more the result of how AP, which served papers grinding different political axes, required stories expressing as little overt bias as possible. Rather than seize upon this ecological insight (more on this below), he narrates the gradual rise of television news from small, money-losing network endeavours, to money-making enterprises culminating in CNN, Fox, MSNBC, and the return of ‘yellow journalism.’

He provides a sobering assessment of the eclipse of traditional media, and the historically unprecedented rise of social media. Here, more than anywhere else, we find Mcintyre taking steps toward a genuine cognitive ecological understanding of the problem:

“In the past, perhaps our cognitive biases were ameliorated by our interactions with others. It is ironic to think that in today’s media deluge, we could perhaps be more isolated from contrary opinion than when our ancestors were forced to live and work among other members of their tribe, village, or community, who had to interact with one another to get information.”

Since his understanding of the problem is primarily normative, however, he fails to see how cognitive reflexes that misfire in experimental contexts, and so strike observers as normative breakdowns, actually facilitate problem-solving in ancestral contexts. What he notes as ‘ironic’ should strike him (and everyone else) as astounding, as one of the doors that any adequate explanation of post-truth must kick down. But it is heartening, I have to say, to see these ideas begin to penetrate more and more brainpans. Despite the insufficiency of his theoretical tools, Mcintyre glimpses something of the way cognitive technology has impacted human cognitive ecology: “Indeed,” he writes, “what a perfect storm for the exploitation of our ignorance and cognitive biases by those with an agenda to put forward.” But even if the ‘perfect storm’ metaphor captures the complex relational nature of what’s happened, it implies that we find ourselves suffering a spot of bad luck, and nothing more.

Postmodernism

At last he turns to the role postmodernism has played in all this: this is the only chapter where I smelled a ‘legacy effect,’ the sense that the author is trying to shoe-horn in some independently published material.

He acknowledges that ‘postmodernism’ is hopelessly overdetermined, but he thinks two theses consistently rise above the noise: the first is that “there is no such thing as objective truth,” and the second is “that any profession of truth is nothing more than a reflection of the political ideology of the person who is making it.”

To his credit, he’s quick to pile on the caveats, to acknowledge the need to critique both the possibility of absolute truth and the social power of scientific truth-claims. Because of this, it quickly becomes apparent that his target isn’t so much ‘postmodernism’ as it is social constructivism, the thesis that ‘truth-telling,’ far from connecting us to reality, bullies us into affirming interest-serving constructs. This, as it turns out, is the best way to think of post-truth “[i]n its purest form”: “when one thinks that the crowd’s reaction actually does change the facts about a lie.”

In other words, for Mcintyre, post-truth is the consequence of too many people believing in social constructivism, which is to say, presuming the wrong theory of truth. His approach to the question of post-truth is that of a traditional philosopher: if the failure is one of correspondence, then the blame has to lie with anti-correspondence theories of truth. The reason Sarah Huckabee Sanders could lie about McMaster’s final speech turns on (among other things) the widespread theoretical belief that ‘there is no such thing as objective truth,’ that it’s power plays all the way down.

Thus the (rather thick) irony of citing Daniel Dennett—an interpretivist!—stating that “what the postmodernists did was truly evil” so far as they bear responsibility “for the intellectual fad that made it respectable to be cynical about truth and facts.”

The sin of the postmodern left has very, very little to do with generating semantically irresponsible theories. Dennett’s own positions are actually a good deal more radical in this regard! When it comes to the competing narratives involving ‘meaning of’ questions and answers, Dennett knows we have no choice but to advert to the ‘dramatic idiom’ of intentionality. If the problem were one of providing theoretical ammunition, then Dennett is as much a part of the problem as Baudrillard.

And yet Mcintyre caps Dennett’s assertion by asking, “Is there more direct evidence than this?” Not a shining moment, dialectically speaking.

I agree with him that tools have been lifted from postmodernists, but they have been lifted from pragmatists (Dennett’s ilk) as well. Talk of ‘stances’ and ‘language games’ is also rife on the right! And I should know. What’s happening now is the consequence of a trend that I’ve been battling since the turn of the millennium. All my novels constitute self-conscious attempts to short-circuit the conditions responsible for ‘post-truth.’ And I’ve spent thousands of hours trolling the alt-Right (before they were called such) trying to figure out what was going on. The longest online debate I ever had was with a fundamentalist Christian who belonged to a group using Thomas Kuhn to justify their belief in the literal truth of Genesis.

Defining Post-truth

Which brings us, as promised, back to the book’s beginning, the chapter that I skipped, where, in the course of refining his definition of post-truth, Mcintyre acknowledges that no one knows what the hell truth is:

“It is important at this point to give at least a minimal definition of truth. Perhaps the most famous is that of Aristotle, who said: ‘to say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true.’ Naturally, philosophers have fought for centuries over whether this sort of “correspondence” view is correct, whereby we judge the truth of a statement only by how well it fits reality. Other prominent conceptions of truth (coherentist, pragmatist, semantic) reflect a diversity of opinion among philosophers about the proper theory of truth, even while—as a value—there seems little dispute that truth is important.”

He provides a minimal definition with one hand—truth as correspondence—which he immediately admits is merely speculative! Truth, he’s admitting, is both indispensable and inscrutable. And yet this inscrutability, he thinks, need not hobble the attempt to understand post-truth: “For now, however, the question at hand is not whether we have the proper theory of truth, but how to make sense of the different ways that people subvert truth.”

In other words, we don’t need to know what is being subverted to agree that it is being subverted. But this goes without saying; the question is whether we need to know what is being subverted to explain what Mcintyre is purporting to explain, namely, how truth is being subverted. How do we determine what’s gone wrong with truth when we don’t even know what truth is?

Mcintyre begins Post-truth, in other words, by admitting that no canonical formulation of his explanandum exists, that it remains a matter of mere speculation. Truth remains one of humanity’s confounding questions.

But if truth is in question, then shouldn’t the blame fall upon those who question truth? Perhaps the problem isn’t this or that philosophy so much as philosophy itself. We see as much at so many turns in Mcintyre’s account:

“Why not doubt the mainstream news or embrace a conspiracy theory? Indeed, if news is just political expression, why not make it up? Whose facts should be dominant? Whose perspective is the right one? Thus is postmodernism the godfather of post-truth.”

Certainly, the latter two questions belong to philosophy as a whole, and not postmodernism in particular. To that extent, the two former questions—so far as they follow from the latter—have to be seen as falling out of philosophy in general, and not just some ‘philosophical bad apples.’

But does it make sense to blame philosophy, to suggest we should have never questioned the nature of truth? Of course not.

The real question, the one that I think any serious attempt to understand post-truth needs to reckon with, is the one Mcintyre breezes by in the first chapter: Why do we find truth so difficult to understand?

On the one hand, truth seems to be crashing. On the other, we have yet to take a step beyond Aristotle when it comes to answering the question of the nature of truth. The latter is the primary obstacle, since the only way to truly understand the nature of the crash is to understand the nature of truth. Could the crash and the inscrutability of truth be related? Could post-truth somehow turn on our inability to explain truth?

Adaptive Anamorphosis

Truth lies murdered in the Calais Coach, and Mcintyre has assembled all the suspects: denialism, cognitive biases, traditional and social media, and (though he knows it not) philosophy. He knows all of them had some part to play, either directly, or as accessories, but the Calais Coach remains locked—his crime scene is a black box. He doesn’t even have a body!

For me, however, post-truth is a prediction come to pass—a manifestation of what I’ve long called the ‘semantic apocalypse.’ Far from a perfect storm of suspects coming together in unlikely ways to murder ‘all of factual reality,’ it is an inevitable consequence of our rapidly transforming cognitive ecologies.

Biologically speaking, human communication and cooperation represent astounding evolutionary achievements. Human cognition is the most complicated thing human cognition has ever encountered: only now are we beginning to reverse-engineer its nature, and to use that knowledge to engineer unprecedented cognitive artifacts. We know that cognition is structurally and dynamically composite, heavily reliant on heuristic specialization to solve its social and natural environments. The astronomical complexity of human cognition means that sociocognition and metacognition are especially reliant on composite, source-insensitive systems, devices turning on available cues that correlate, given that various hidden regularities obtain, with specific outcomes. Despite being legion, we manage to synchronize with our fellows and our environments without the least awareness of the cognitive machinery responsible.

We suffer medial neglect, a systematic insensitivity to our own nature—a nature that includes this insensitivity. Like every other organism on this planet we cognize without cognizing the concurrent act of cognition. Well, almost like every other organism. Where other species utterly depend on the reliability of their cognitive capacities, and have no way of repairing failures in various enabling—medial—systems, we do have recourse. Despite our blindness to the machinery of human cognition, we’ve developed a number of different ways to nudge that machinery—whack the TV set, you could say.

Truth-talk is one of those ways. Truth-talk allows us to minimize communicative discrepancies absent, once again, sensitivity to the complexities involved. Truth-talk provides a way to circumvent medial neglect, to resolve problems belonging to the enabling dimension of cognition despite our systematic insensitivity to the facts of that dimension. When medial issues—problems pertaining to cognitive function—arise, truth-talk allows for the metabolically inexpensive recovery of social and environmental synchronization. Incompatible claims can be sorted, at least so far as our ancestors required in prehistoric cognitive ecologies. The tribe can be healed, despite its profound ignorance of natures.

To say human cognition is heuristic is to say it is ecologically dependent, that it requires that the neglected regularities underwriting the utility of our cues remain intact. Overthrow those regularities, and you overthrow human cognition. So, where our ancestors could simply trust the systematic relationship between retinal signals and environments while hunting, we have to remove our VR goggles before raiding the fridge. Where our ancestors could simply trust the systematic relationship between the text on the page or the voice in our ear and the existence of a fellow human, we have to worry about chatbots and ‘conversational user interfaces.’ Where our ancestors could automatically depend on the systematic relationship between their ingroup peers and the environments they reported, we need to search Wikipedia—trust strangers. More generally, where our ancestors could trust the general reliability (and therefore general irrelevance) of their cognitive reflexes, we find ourselves confronted with an ever-growing and complicating set of circumstances where our reflexes can no longer be trusted to solve social problems.
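
The logic is simple enough to simulate. Here is a minimal sketch (Python; the ‘fluent text implies human’ reflex, the names, and the numbers are all hypothetical, mine rather than anything in Post-truth) of a source-insensitive heuristic whose cue stays fixed while the hidden regularity underwriting it, the share of humans among fluent senders, shifts beneath it:

```python
import random

# A toy, source-insensitive heuristic: treat fluent text as a cue for a
# human interlocutor. The cue itself never changes; only the hidden
# regularity it depends on (how often fluent senders are human) does.
# Hypothetical names and numbers throughout; a sketch, not a model.

def judged_human(sender_is_fluent: bool) -> bool:
    # The reflex itself: fluent -> human. It has no access to the source.
    return sender_is_fluent

def heuristic_accuracy(p_bot: float, n: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        is_bot = rng.random() < p_bot   # the cognitive ecology shifts here
        fluent = True                   # in the new ecology, bots are fluent too
        correct += judged_human(fluent) != is_bot
    return correct / n

for p_bot in (0.0, 0.1, 0.5, 0.9):
    print(f"bots = {p_bot:.0%} of senders -> heuristic accuracy {heuristic_accuracy(p_bot):.0%}")
```

Nothing about the reflex ‘breaks’; it returns exactly the answers it always did. Its accuracy collapses anyway, because accuracy was never a property of the reflex alone, but of the reflex plus the neglected ecology it depends upon.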

The tribe, it seems, cannot be healed.

And, unfortunately, this is the very problem we should expect given the technical (tactical and technological) radicalization of human cognitive ecology.* Philosophy, and now cognitive science, provide the communicative tactics required to neutralize (or ‘threshold’) truth-talk. Cognitive technologies, meanwhile, continually complicate the once direct systematic relationships between our suites of cognitive reflexes and our social and natural environments. The internet doesn’t simply render the sum of human knowledge available; it renders the sum of human rationalization available as well. The curious and the informed no longer need suffer the company of the incurious and the uninformed, and vice versa. The presumptive moral superiority of the former stands revealed, and in ever greater numbers the latter counter-identify, with a violence aggravated by phenomena such as the ‘online disinhibition effect.’ (One thing Mcintyre never pauses to consider is the degree to which he and his ilk are hated, despised, so much so as to see partners in traditional foreign adversaries, and to think lies and slander simply redress lies and slander). Populations begin spontaneously self-selecting. Big data identifies the vulnerable, who are showered with sociocognitive cues—atrocity tales to threaten, caricatures to amuse—engineered to provoke ingroup identification and outgroup alienation. In addition to ‘backfiring,’ counter-arguments are perceived as weapons, evidence of outgroup contempt for you and your own. And as the cognitive tactics become ever more adept at manipulating our biases, ever more scientifically informed, and as the cognitive technology becomes ever more sophisticated, ever more destructive of our ancestral cognitive habitat, the break between the two groups, we should expect, will only become more, not less, profound.

None of this is intuitive, of course. Medial neglect means reflection is source blind, and so inclined to conceive things in super-ecological terms. Thus the value of the prop building analogy I posed at the beginning.

Disney’s massive Manhattan anamorph depends on the viewer’s perspectival position within the installation to assure the occlusion of incompatible information. The degrees of cognitive freedom this position possesses—basically, how far one can wander this way and that—depend on the size and sophistication of the anamorph. The stability of the illusion, in other words, entirely depends on the viewer: the deeper one investigates, the less stable the anamorph becomes. Its dependence on cognitive ‘sweet spots’ is its signature vulnerability.
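
The geometry is concrete enough to sketch. The following toy example (Python with NumPy; the scene, the billboard plane, and every name in it are hypothetical choices of mine, not anything Disney or Mcintyre provides) ‘paints’ a flat facade so that it exactly reproduces a three-dimensional scene from a single sweet spot, then measures how quickly the two diverge as the viewer wanders:

```python
import numpy as np

def sightlines(points, eye):
    """Unit direction vectors from the eye to each 3D point."""
    d = points - eye
    return d / np.linalg.norm(d, axis=1, keepdims=True)

# A 'real' scene: the eight corners of a box roughly ten units away.
scene = np.array([[x, y, z] for x in (9.0, 11.0)
                            for y in (-1.0, 1.0)
                            for z in (-1.0, 1.0)])

sweet_spot = np.zeros(3)

# Paint the anamorph: slide each scene point along its sight-line from the
# sweet spot onto a flat 'billboard' plane at x = 8 (a facade with no depth).
rays = scene - sweet_spot
t = (8.0 - sweet_spot[0]) / rays[:, 0]
facade = sweet_spot + t[:, None] * rays

# From the sweet spot, facade and scene are indistinguishable; the further
# the viewer wanders, the further their sight-lines diverge.
for offset in (0.0, 0.5, 1.0, 2.0):
    eye = np.array([0.0, offset, 0.0])
    dots = np.sum(sightlines(scene, eye) * sightlines(facade, eye), axis=1)
    err = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))
    print(f"viewer offset {offset:3.1f} -> max angular mismatch {err.max():5.2f} deg")
```

The mismatch is zero at the sweet spot and grows as the viewer moves, which is all the ‘signature vulnerability’ amounts to. The three strategies listed in the next paragraph are just three ways of keeping that mismatch below the viewer’s threshold of detection.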

The cognitive fragility of the anamorph, however, resides in the fact that we can move, while it cannot. Overcoming this fragility, then, requires either 1) de-animating observation, 2) complicating the anamorph, or 3) animating the anamorph. The problem we face can be understood as the problem of adaptive cognitive anamorphosis, the way cognitive science, in combination with cognitive technology, enables the de-animation of information consumers by gaming sociocognitive cues, while both complicating and animating the artifactual anamorphic information they consume.

Once a certain threshold is crossed, Sarah Huckabee Sanders can lie without shame or apology on national television. We don’t know what we don’t know. Mcintyre references the notorious Dunning-Kruger effect, the way cognitive incompetence correlates with incompetent assessments of competence, but the underlying mechanism is more basic: cognitive systems lacking access to information function independent of that information. Medial neglect assures we take the sufficiency of our perspectives for granted absent information indicating insufficiency or ‘medial misalignment.’ Trusting our biology and community is automatic. Perhaps we refuse to move, to even consider the information belonging to:

[photo: the sideways, deep information view of the prop skyline]

But if we do move, the anamorph, thanks to cognitive technology, adapts, the prop facades grow prop sides, and the deep (globally synchronized) information presented above has to compete with ‘faux deep’ information. The question becomes one of who has been systematically deceived—a question that ingroup biases have already answered in illusion’s favour. We can return to our less inquisitive peers and assure them they were right all along.

What is ‘post-truth’? Insofar as it names anything it refers to the diminishing capacity of globally, versus locally, synchronized claims to drive public discourse. It’s almost as if, via technology, nature is retooling itself to conceal itself by creating adaptive ‘faux realities.’ It’s all artifactual, all biologically ‘constructed’: the question is whether our cognitive predicament facilitates global (or deep) synchronization geared to what happens to be the case, or facilitates local (or shallow) synchronization geared to ingroup expectations and hidden political and commercial interests.

There’s no contest between spooky correspondence and spooky construction. There’s no ‘assertion of ideological supremacy,’ just cognitive critters (us) stranded in a rapidly transforming cognitive ecology that has become too sophisticated to see, and too powerful to credit. Post-truth, in other words, is an inevitable consequence of scientific progress, particularly as it pertains to cognitive technologies.

Sarah Huckabee Sanders can lie without shame or apology on national television because Trump was able to lure millions of Americans across a radically transformed (and transforming) anamorphic threshold. And we should find this terrifying. Most doomed democracies elect their executioner. In his The Death of Democracy: Hitler’s Rise to Power, Benjamin Carter Hett blames the success of Nazism on the “reality deficit” suffered by the German people. “Hostility to reality,” he writes, “translated into contempt for politics, or, rather, desire for a politics that was somehow not political: a thing that can never be” (14). But where Germany in the 1930’s had every reason to despise the real, “a lost war that had cost the nation almost two million of her sons, a widely unpopular revolution, a seemingly unjust peace settlement, and economic chaos accompanied by huge social and technological change” (13), America finds itself suffering only the latter. The difference lies in the way the latter allows for the cultivation and exploitation of this hostility in an age of unparalleled peace and prosperity. In the German case, the reality itself drove the populace to embrace atavistic political fantasies. Thanks to technology, we can now achieve the same effect using only human cognitive shortcomings and corporate greed.

Buckle up. No matter what happens to Trump, the social dysfunction he expresses belongs to the very structure of our civilization. Competition for the market he’s identified is only going to intensify.