Three Pound Brain

No bells, just whistling in the dark…

The Eliminativistic Implicit (I): The Necker Cube of Everyday and Scientific Explanation

by rsbakker

Go back to what seems the most important bit, then ask the Intentionalist this question: What makes you think you have conscious access to the information you need? They’ll twist and turn, attempt to reverse the charges, but if you hold them to this question, it should be a show-stopper.

What follows, I fear, is far longer-winded.

Intentionalists, I’ve found, generally advert to one of two general strategies when dismissing eliminativism. The first is founded on what might be called the ‘Preposterous Complaint,’ the idea that eliminativism simply contradicts too many assumptions and intuitions to be considered plausible. As Uriah Kriegel puts it, “if eliminativism cannot be acceptable unless a relatively radical interpretation of cognitive science is adopted, then eliminativism is not in good shape” (“Non-phenomenal Intentionality,” 18). But where this criticism would be damning in other, more established sciences, it amounts to little more than an argumentum ad populum in the case of cognitive science, which as yet lacks any consensual definition of its domain. The very naturalistic inscrutability behind the perpetual controversy also motivates the Eliminativist’s radical interpretation. The idea that something very basic is wrong with our approach to questions of experience and intentionality is by no means a ‘preposterous’ one. You could say the reality and nature of intentionality is the question. The Preposterous Complaint, in other words, doesn’t so much impugn the position as insinuate career suicide.

The second turns on what might be called the ‘Presupposition Complaint,’ the idea that eliminativism implicitly presupposes the very intentionality that it claims to undermine. The tactic generally consists of scanning the eliminativist’s claims, picking out various intentional concepts, then claiming that use of such concepts implicitly affirms the existence of intentionality. The Eliminativist, in other words, commits ‘cognitive suicide’ (as Lycan, 2005, calls it). Insofar as the use of intentional concepts is unavoidable, and insofar as the use of intentional concepts implicitly affirms the existence of intentionality, intentionality is ineliminable. The Eliminativist is thus caught in an obvious contradiction, explicitly asserting not-A on the one hand, while implicitly asserting A on the other.

On BBT, intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect. Such appeals to ‘implicit intentionality,’ in other words, are appeals to the very thing BBT denies. They assume the sufficiency of the very metacognitive intuitions that positions such as my own call into question. The Intentionalist charge of performative contradiction simply begs the question. It amounts to nothing more than the bald assertion that intentionality cannot be eliminated because intentionality is ineliminable.

The ‘Presupposition Complaint’ is pretty clearly empty as an argumentative strategy. In dialogical terms, however, I think it remains the single biggest obstacle to the rational prosecution of the Intentionalist/Eliminativist debate—if only because of the way it allows so many theorists to summarily dismiss the threat of Eliminativism. Despite its circularity, the Presupposition Complaint remains the most persistent objection I encounter—in fact, many critics persist in making it even after its vicious circularity has been made clear. And this has led me to realize the almost spectacular importance the notion of the implicit plays in all such debates. For many thinkers, the intentional nature of the implicit is simply self-evident, somehow obvious to intuition. This is certainly how it struck me before I began asking the kinds of questions motivating the present piece. After all, what else could the implicit be, if not the intentional ‘ground’ of our intentional ‘practices’?

In what follows, I hope to show how this characterization of the implicit, far from obvious, actually depends, not only on ignorance, but on a profound ignorance of our ignorance. On the account I want to give here, the implicit, far from naming some spooky infraconceptual or transcendental ‘before’ of thought and cognition, simply refers to what we know is actually occluded from metacognitive appraisals of experience: namely, nature as described by science. To frame the issue in terms of a single question, what I want to ask in this post and its sequels is, What warrants the Intentionalist’s claims regarding implicit normativity, say, over an Eliminativist’s claims of implicit mechanicity?

So what is the implicit? Given the crucial role the concept plays in a variety of discourses, it’s actually remarkable how few theorists have bothered with the question of making the implicit qua implicit explicit (Stephen Turner and Eugene Gendlin are signature exceptions in this regard, of course). Etymologically, ‘implicit’ derives from the Latin, implicitus, the participle of implico, which means ‘to involve’ or ‘to entangle,’ meanings that seem to bear more on implicit’s perhaps equally mysterious relatives, ‘imply’ or ‘implicate.’ According to Wiktionary, uses that connote ‘entangled’ are now obsolete. Implicit, rather, is generally taken to mean, 1) “Implied indirectly, without being directly expressed,” 2) “Contained in the essential nature of something but not openly shown,” and 3) “Having no reservations or doubts; unquestioning or unconditional; usually said of faith or trust.” Implicit, in other words, is generally taken to mean unspoken, intrinsic, and unquestioned.

Prima facie, at least, these three senses are clearly related. Unless spoken about, the implicit cannot be questioned, and so must remain an intrinsic feature of our performances. The ‘implicit,’ in other words, refers to something operative within us that nonetheless remains hidden from our capacity to consciously report. Logical or material inferential implications, for instance, guide subsequent transitions within discourse, whether we are conscious of them or not. The same might be said of ‘emotional implications,’ or ‘political implications,’ or so on.

Let’s call this the Hidden Constraint Model of the implicit, the notion that something outside conscious experience somehow ‘contains’ organizing principles constraining conscious experience. The two central claims of the model can be recapitulated as:

1) The implicit lies in what conscious cognition neglects. The implicit is inscrutable.

2) The implicit somehow constrains conscious cognition. The implicit is effective.

From inscrutability and effectiveness, we can infer at least two additional features pertaining to the implicit:

3) The effective constraints on any given moment of conscious cognition require a subsequent moment of conscious cognition to be made explicit. We can only isolate the biases specific to a given claim subsequent to making that claim. The implicit, in other words, is only retrospectively accessible.

4) Effective constraints can only be consciously cognized indirectly via their effects on conscious experience. Referencing, say, the ‘implicit norms governing interpersonal conduct’ involves referencing something experienced only in effect. ‘Norms’ are not part of the catalogue of nature—at least as anything recognizable as such. The implicit, in other words, is only inferentially accessible.

So consider, as a test case, Hume’s famous meditations on causation and induction. In An Enquiry Concerning Human Understanding, Hume points out how reason, no matter how cunning, is powerless when it comes to matters of fact. Short of actual observation, we have no way of divining the causal connections between events. When we turn to experience, however, all we ever observe is the conjunction of events. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second? Hume’s ‘skeptical solution,’ of course, attributes the feeling to mere ‘custom or habit.’ As he writes, “[t]he appearance of a cause always conveys the mind, by a customary transition, to the idea of an effect” (ECHU, 51, italics my own).

All four of the features enumerated above are clearly visible in this passage. Hume makes no dispute of the fact that the repetition of successive events somehow produces the assumption of efficacy. “On this,” he writes, “are founded all our reasonings concerning matters of fact or existence” (51). Exposure to such repetitions fundamentally constrains our understanding of subsequent exposures, to the point where we cannot observe the one without assuming the other—to the point where the bulk of scientific knowledge is raised upon it. Efficacy is effective—to say the least!

But there’s nothing available to conscious cognition—nothing observable in these successive events—over and above their conjunction. “One event follows another,” Hume writes; “but we never can observe any tie between them. They seem conjoined, but never connected” (49). Efficacy, in other words, is inscrutable as well.

So then what explains our intuition of efficacy? The best we can do, it seems, is to pause and reflect upon the problem (as Hume does), to posit some X (as Hume does), reasoning from what information we can access. Efficacy, in other words, is only retrospectively and inferentially accessible.

We typically explain phenomena by plugging them into larger functional economies, by comprehending how their precursors constrain them and how they constrain their successors in turn. This, of course, is what made Hume’s discovery—that efficacy is inscrutable—so alarming. When it comes to environmental inquiries we can always assay more information via secondary investigation and instrumentation. As a result, we can generally solve for precursors in our environments. When it comes to metacognitive inquiries such as Hume’s, however, we very quickly stumble into our own incapacity. “And what stronger instance,” Hume asks, “can be produced of the surprising ignorance and weakness of the understanding, than the present?” (51). Efficacy, the very thing that binds phenomena to their precursors, is itself without precursors.

Not surprisingly, the comprehension of cognitive phenomena (such as efficacy) without apparent precursors poses a special kind of problem. Given efficacy, we can comprehend environmental nature. We simply revisit the phenomena and infer, over and over, accumulating the information we need to arbitrate between different posits. So how, then, are we supposed to comprehend efficacy? The empirical door is nailed shut. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits. Above, we see Hume rooting around with questions (our primary tool for making ignorance visible) and finding no trace of what grounds his intuitions of empirical efficacy. Thus the apparent dilemma: Either we acknowledge that we simply cannot understand these intuitions, “that we have no idea of connexion or power at all, and that these words are absolutely without any meaning” (49), or we elaborate some kind of theoretical precursor, some fund of hidden constraint, that generates, at the very least, the semblance of knowledge. We posit some X that ‘reveals’ or ‘expresses’ or ‘makes explicit’ the hidden constraint at issue.

These ‘X posits’ have been the bread and butter of philosophy for some time now. Given Hume’s example it’s easy to see why: the structure and dynamics of cognition, unlike the structure and dynamics of our environment, do not allow for the accumulation of data. The myriad observational opportunities provided by environmental phenomena simply do not exist for phenomena like efficacy. Since individual (and therefore idiosyncratic) metacognitive intuitions are all we have to go on, our makings explicit are pretty much doomed to remain perpetually underdetermined—to be ‘merely philosophical.’

I take this as uncontroversial. What makes philosophy philosophy as opposed to a science is its perennial inability to arbitrate between incompatible theoretical claims. This perennial inability to arbitrate between incompatible theoretical claims, like the temporary inability to arbitrate between incompatible theoretical claims in the sciences, is in some important respect an artifact of insufficient information. But where the sciences generally possess the resources to accumulate the information required, philosophy does not. Aside from metacognition or ‘theoretical reflection,’ philosophy has precious little in the way of informational resources.

And yet we soldier on. The bulk of traditional philosophy relies on what might be called the Accessibility Conceit: the notion that, despite more than two thousand years of failure, retrospective (reflective, metacognitive) interrogations of our activities somehow access enough information pertaining to their ‘intrinsic character’ to make the inferential ‘expression’ of our implicit precursors a viable possibility. Hope, as they say, springs eternal. Rather than blame their discipline’s manifest institutional incapacity on some more basic metacognitive incapacity, philosophers generally blame the problem on the various conceptual apparatuses used. If they could only get their concepts right, the information is there for the taking. And so they tweak and they overturn, posit this precursor and that, and the parade of ‘makings explicit’ grows and grows and grows. In a very real sense, the Accessibility Conceit, the assumption that the tools and material required to cognize the implicit are available, is the core commitment of the traditional philosopher. Why show up for work, otherwise?

The question of comprehending conscious experience is the question of comprehending the constitutive and dynamic constraints on conscious experience. Since those constraints don’t appear within conscious experience, we pay certain people called ‘philosophers’ to advance speculative theories of their nature. We are a rather self-obsessed species, after all.

Advancing speculative hypotheses regarding each other’s implicit nature is something we do all the time. According to Robin Dunbar, some two thirds of human communication is devoted to gossip. We are continually replaying, revisiting—even our anticipations yoke the neural engines of memory. In fact, we continually interrogate our emotionally charged interactions, concocting rationales, searching for the springs of others’ actions, declaring things like ‘She’s just jealous,’ or ‘He’s on to you.’ There is, you might say, an ‘Everyday Implicit’ implicit in our everyday discourse.

As there has to be. Conscious experience may be ‘as wide as the sky,’ as Dickinson says, but it is little more than a peephole. Conscious experience, whatever it turns out to be, seems to be primarily adapted to deliberative behaviour in complex environments. Among other things, it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial. When peephole, serial access to a complex environment is all you have, the kind of retrospective inferential capacity possessed by humans becomes invaluable. Our ability to ‘make things explicit’ is pretty clearly a central evolutionary design feature of human consciousness.

In a fundamental sense, then, making-explicit is just what we humans do. It makes sense that with time, especially once literacy allowed for the compiling of questions—an inventory of ignorance, you might say—we would find certain humans attempting to make making explicit itself explicit. And since making each other explicit was something that we seemed to do with some degree of reliability, it makes sense that the difficulty of this new task should confound these inquirers. The Everyday Implicit was something they used with instinctive ease, reliably attributing all manner of folk-intentional properties to individuals all the time. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.

No one could agree on any canonical explication. And yet, aside from the ancient skeptics, they all agreed on the possibility of such a canonical explication. They all hewed to the Accessibility Conceit. And since the skeptics’ mysterian posit was as underdetermined as any of their own claims, they were inclined to be skeptical of the skeptics. Otherwise, their Philosophical Implicit remained the only game in town when it came to things human and implicit. They need only look to the theologians for confirmation of their legitimacy. At least they placed their premises before their conclusions!

But things have changed. Over the past few decades, cognitive scientists have developed a number of ingenious experimental paradigms designed to reveal the implicit underbelly of what we think and do. In the now notorious Implicit Association Test, for instance, the time subjects require to pair concepts is thought to indicate the cognitive resources required, and thus provide an indirect measure of implicit attitudes. If it takes a white individual longer to pair stereotypically black names with positive attributes than it does white names, this is presumed to evidence an ‘implicit bias’ against blacks. Actions, as the old proverb has it, speak louder than words. It does seem intuitive to suppose that the racially skewed effort involved in value identifications tokens some kind of bias (a toy version of the scoring arithmetic follows the table below). Versions of this paradigm continue to proliferate. Once the exclusive purview of philosophers, the implicit has now become the conceptual centerpiece of a vast empirical domain. Cognitive science has now revealed myriad processes of implicit learning, interpretation, evaluation, and even goal-setting. Taken together, these processes form what is generally referred to as System 1 cognition (see table below), an assemblage of specialized cognitive capacities—heuristics—adapted to the ‘quick and dirty’ solution of domain specific ‘problem ecologies’ (Chow, 2011; Todd and Gigerenzer, 2012), and which operate in stark contrast to what is called System 2 cognition, the slow, serial, and deliberate problem solving related to conscious access (defined in Dehaene’s operationalized sense of reportability)—what we take ourselves to be doing this very moment, in effect.

DUAL PROCESS THEORIES IN PSYCHOLOGY

System 1 Cognition (Implicit)     System 2 Cognition (Explicit)
-------------------------------   -------------------------------
Not conscious                     Conscious
Not human specific                Human specific
Automatic                         Deliberative
Fast                              Slow
Parallel                          Sequential
Effortless                        Effortful
Intuitive                         Reflective
Domain specific                   Domain general
Pragmatic                         Logical
Associative                       Rulish
High capacity                     Low capacity
Evolutionarily old                Evolutionarily young

* Adapted from Frankish and Evans, “The duality of mind: A historical perspective.”
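
Here is the toy scoring arithmetic promised above (a minimal sketch of my own; the response times are invented, and the actual IAT algorithm also handles error penalties and latency cutoffs):

```python
# Toy IAT scoring (hypothetical data; my illustration). Longer mean
# latencies on 'incongruent' pairings are read as an indirect index of
# implicit association strength.

from statistics import mean, stdev

# Invented response times in milliseconds for a single subject.
congruent = [612, 598, 655, 630, 601]      # e.g., white names + positive
incongruent = [734, 760, 701, 748, 722]    # e.g., black names + positive

# The 'D score' of Greenwald et al. (2003) divides the latency difference
# by the standard deviation of all trials pooled together.
pooled_sd = stdev(congruent + incongruent)
d_score = (mean(incongruent) - mean(congruent)) / pooled_sd

print(f"D = {d_score:.2f}")  # a positive D is taken to token an implicit bias
```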

What are called ‘dual process’ or ‘dual system’ theories of cognition are essentially experimentally driven complications of the crude dichotomy between unconscious/implicit and conscious/explicit problem solving that has been pondered since ancient times. As granular as this emerging empirical picture remains, it already poses a grave threat to our traditional explicitations of the implicit. Our cognitive capacities, it turns out, are far more fractionate, contingent, and opaque than we ever imagined. Decisions can be tracked prior to a subject’s ability to report them (Haynes, 2008). The feeling of willing can be readily tricked, and thus stands revealed as interpretative (Wegner, 2002; Pronin, 2009). Memory turns out to be fractionate and nonveridical (see Bechtel, 2008, for review). Moral argumentation is self-promotional rather than truth-seeking (Haidt, 2012). Various attitudes appear to be introspectively inaccessible (see Carruthers, 2011, for extensive review). The feeling of certainty has a dubious connection to rational warrant (Burton, 2008). The list of such findings continually grows, revealing an ‘implicit’ that consistently undermines and contradicts our traditional and intuitive self-image—what Sellars famously termed our Manifest Image.

As Frankish and Evans (2009) write in their historical perspective on dual system theories:

“The idea that we have ‘two minds,’ only one of which corresponds to personal, volitional cognition, also has wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.” (25)

Given the rhetorical constraints imposed by their genre, this amounts to the strident claim that a genuine revolution in our understanding of the human is underway, one that could humble us out of existence. The simple question is, Where does that revolution end?

Consider what might be called the ‘Worst Case Scenario’ (WCS). What if it were the case that conscious experience and cognition have evolved in such a way that the higher dimensional, natural truth of the implicit utterly exceeds our capacity to effectively cognize conscious experience and cognition outside a narrow heuristic range? In other words, what if the philosophical Accessibility Conceit were almost entirely unwarranted, because metacognition, no matter how long it retrospects or how ingeniously it infers, only accesses information pertinent to a very narrow band of problem solving?

Now I have a number of arguments for why this is very likely the case, but in lieu of those arguments, it will serve to consider the eerie way our contemporary disarray regarding the implicit actually exemplifies WCS. People, of course, continue using the Everyday Implicit the way we always have. Philosophers continue positing their incompatible versions of the Philosophical Implicit the way they have for millennia. And scientists researching the Natural Implicit continue accumulating data, articulating a picture that seems to contradict more and more of our everyday and philosophical intuitions as it gains dimensionality.

Given WCS, we might expect the increasing dimensionality of our understanding would leave the functionality of the Everyday Implicit intact, that it would continue to do what it evolved to do, simply because it functions the way it does regardless of what we learn. At the same time, however, we might expect the growing fidelity of the Natural Implicit would slowly delegitimize our philosophical explications of that implicit, not only because those explications amount to little more than guesswork, but because of the fundamental incompatibility of the intentional and the causal conceptual registers.

Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise. And we should expect that those invested in the Accessibility Conceit would take the scientific operationalization of various intentional concepts as proof of 1) their objective existence, and 2) the fact that only more cognitive labour, conceptual, empirical, or both, is required.

If WCS were true, in other words, one might expect that cognitive sciences invested in the Everyday and Philosophical Implicit, like psychology, would find themselves inexorably gravitating about the Natural Implicit as its dimensionality increased. One might expect, in other words, that the Psychological Implicit would become a kind of decaying Necker Cube, an ‘unstable bi-stable concept,’ one that would alternately appear to correspond to the Everyday and Philosophical Implicit less and less, and to the Natural Implicit more and more.

Part Two considers this process in more detail.

Paradox as Cognitive Illusion

by rsbakker

Aphorism of the Day: A blog knows no greater enemy than Call of Duty. A blogger, no greater friend.

.

Paradoxes. I’ve been fascinated by them since my contradictory youth.

A paradox is typically defined as a conjunction of two true, yet logically incompatible statements – which those of you with a smattering of ancient Greek will recognize in the etymology of the word (para, ‘beside,’ doxa, ‘holdings’). So in a sense, it would be more accurate to say that I’m fascinated by paradoxicality, that sense of ethereal torsion that you get whenever baffled by self-reference, as in the classic verbalization of Russell’s Set Paradox,

The barber in town shaves all and only those who don’t shave themselves. Does the barber shave himself?

Or the granddaddy of them all, the Liar’s Paradox,

This sentence is false.
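
Both have compact formal skeletons. Here is a minimal rendering (my notation, strictly for illustration): Russell’s set R of all sets that are not members of themselves, and a Liar sentence L that asserts its own falsehood:

\[
R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; \big( R \in R \leftrightarrow R \notin R \big),
\qquad
L \leftrightarrow \neg L .
\]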

Pondering these while doing my philosophy PhD at Vanderbilt led me to posit something I called ‘performance-reference asymmetry,’ the strange way referring to the performance of the self-same performance seemed to cramp sense, whether the resulting formulation was paradoxical or not. As in, for instance,

This sentence is true.

This led me to the notion that paradoxes were, properly speaking, a subset of the kinds of problems generated by self-reference more generally. Now logicians and linguists like to argue away paradoxes by appeal to some interpretation of the ‘proper use’ of the terms you find in statements like the above. ‘This sentence is true,’ plainly abuses the indexical function of ‘this,’ as well as the veridical function of ‘true,’ creating a little verbal golem that, you could argue, merely lurches in the semblance of semantic life. But I’ve never been interested in the legalities of self-reference or paradox so much as the implications. The important fact, it seems to me, is that self-reference (and therefore paradox) is a defining characteristic of human life. Whatever else might distinguish us from our mammalian kin, we are the beasts that endlessly refer to the performance of our referring…

Which is to say, continually violate what seems to be a powerful bound of intelligibility.

Now I know that oh-so-many see this as an occasion for self-referential back-slapping, an example of ‘human transcendence’ or whatever. For many, the term ‘aporia’ (which means ‘difficult passage’ in ancient Greek) is a greased pipeline delivering all kinds of super-rational goodies. I’m more interested in the difficulty part. What is it about self-reference that is so damn difficult? Why should referring to the performance of our referring exhibit such peculiar effects?

Now if we were machines, we simply wouldn’t have this problem. It seems to be a brute fact of nature that an information processing mechanism cannot model its modelling as it models. Why? Simply because its resources are engaged. It can model its modelling (at the expense of fidelity) after it has modelled something else. But only after, never as.

Thus, thanks to the irreflexivity of nature, the closest a machine can come to a paradox is a loop. Well, actually, not even that, at least to the extent that ‘loops’ presuppose some kind of circularity. An information processing mechanism can only model the performance of its modelling subsequent to its modelling, which is just to say the circle is never closed, thanks to the crowbar of temporality. So rather, what we have looks more like a spiral than a loop.

Machines can only ‘refer’ to their past states simply because they need their present states to do the ‘referring.’
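
A toy sketch makes the spiral concrete (my illustration; the class and its names are invented): a system whose present state is consumed by the act of modelling can only ever model a state at least one step stale.

```python
# A toy 'information processing mechanism' (hypothetical; my sketch).
# Modelling occupies the present state, so any model of its own
# modelling necessarily describes a *past* state -- a spiral, not a loop.

class Processor:
    def __init__(self):
        self.t = 0
        self.history = []  # completed past states: all it can ever 'refer' to

    def step(self, observation):
        """Model the environment; this consumes the present moment."""
        state = (self.t, observation)
        self.history.append(state)
        self.t += 1
        return state

    def model_self(self):
        """Model 'its own modelling': post hoc, granular, incomplete."""
        if not self.history:
            return None
        last = self.history[-1]  # the most recent *completed* state
        return {"modelled_step": last[0], "lag": self.t - last[0]}

p = Processor()
p.step("cat")
print(p.model_self())  # {'modelled_step': 0, 'lag': 1} -- the lag never closes
```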

Can you see the creepy parallel building? Here we have all these ancient difficulties referring to the performance of our referring, and information processing machines, meanwhile, are out-and-out incapable of modelling the performance of their modelling as they model. Could these be related? Perhaps our difficulty stems from the fact that we are actually trying to do something that is, when all is said and done, mechanically impossible.

But as I said above, one of the things that distinguishes us humans from animals is our extravagant capacity for self-reference. The implicit assumption was that this is also what distinguishes us from machines.

But recall what I said above: information processing machines can only model their modelling – at the expense of fidelity – after they have modelled something else.  Any post hoc models an information processing machine generates of its modelling will necessarily be both granular and incomplete, granular because the mechanical complexity required to model its modelling necessarily outruns the complexity of the model, and incomplete because ‘omniscient access’ to information pertaining to its structures and functions is impossible.

Now, of course, the life sciences tell us that the mental turns on the biomechanical – that we are machines, in effect. The reason we need the life sciences to tell us this is that the mental appears to be anything but biomechanical – which is to say, anything but irreflexive.  The mental, in other words, would seem to be radically granular and incomplete. This raises the troubling but provocative possibility that our ‘difficulty with self-reference’ is simply the most our stymied cognitive systems can make of the mechanical impossibility of modelling our modelling simultaneous to our modelling.

Like any other mechanism, the brain can only model its past states, and only in a radically granular and incomplete manner, no less. Because it can only cognize itself after the fact, it can never cognize itself as it is, and so cannot cognize the interval between. In other words, even though it can model time (and so easily cognize the mechanicity of other brains), it cannot model the time of modelling, and so has to remain utterly oblivious to its own irreflexivity.

It perceives a false reflexivity, and so is afflicted by a welter of cognitive illusions, enough to make consciousness a near magical thing.

Structurally enforced myopia, simple informatic neglect, crushes the mechanical spiral that decompresses paradoxical self-reference flat. Put differently, what I called ‘paradox in the living sense’ above arises because a brain shaped like this:

[image missing]

which is to say, an irreflexive mechanism that spirals through temporally distinct states, can only access and model this,

[image missing]

from the standpoint of itself.

Meathooks: Dennett and the Death of Meaning

by rsbakker

Aphorism of the Day: God is myopia, personality mapped across the illusion of the a priori.

.

In Darwin’s Dangerous Idea, Daniel Dennett attempts to show how Darwinism possesses the explanatory resources “to unite and explain everything in one magnificent vision.” To assist him, he introduces the metaphors of the ‘crane’ and the ‘skyhook’ as a general means of understanding the Darwinian cognitive mode and that belonging to its traditional antagonist:

Let us understand that a skyhook is a “mind-first” force or power or process, an exception to the principle that all design, and apparent design, is ultimately the result of mindless, motiveless mechanicity. A crane, in contrast, is a subprocess or a special feature of a design process that can be demonstrated to permit the local speeding up of the basic, slow process of natural selection, and can be demonstrated to be itself the predictable (or retrospectively explicable) product of the basic process. Darwin’s Dangerous Idea, 76

The important thing to note in this passage is that Dennett is actually trying to find some middle-ground, here, between what might be called the ‘top-down’ intuitions, which suggest some kind of essential break between meaning and nature, and ‘bottom-up’ intuitions, which seem to suggest there is no such thing as meaning at all. What Dennett attempts to argue is that the incommensurability of these families of intuitions is apparent only, that one only needs to see the boom, the gantry, the cab, and the tracks, to understand how skyhooks are in reality cranes, the products of Darwinian evolution through and through.

The arch-skyhook in the evolutionary story, of course, is design. What Dennett wants to argue is that the problem has nothing to do with the concept design per se, but rather with a certain way of understanding it. Design is what Dennett calls a ‘Good Trick,’ a way of cognizing the world without delving into its intricacies, a powerful heuristic selected precisely because it is so effective. On Dennett’s account, then, design really looks like this:

[image missing]

And only apparently looks like this:

[image missing]

Design, in other words, is not the problem–design is a crane, something easily explicable in natural terms. The problem, rather, lies in our skyhook conception of design. This is a common strategy of Dennett’s. Even though he’s commonly accused of eliminativism (primarily for his rejection of ‘original intentionality’), a fair amount of his output is devoted to apologizing for the intentional status quo, and Darwin’s Dangerous Idea is filled with some of his most compelling arguments to this effect.

Now I actually think the situation is nowhere near so straightforward as Dennett seems to think. I also believe Dennett’s ‘redefinitional strategy,’ where we hang onto our ‘folk’ terms and simply redefine them in light of incoming scientific knowledge, is more than a little tendentious. But to see this, we need to understand why it is these metaphors of crane and skyhook capture as much of the issue of meaning and nature as they do. We need to take a closer look.

Darwin’s great insight, you could say, was simply to see the crane, to grasp the great, hidden mechanism that explains us all. As Dennett points out, if you find a ticking watch while walking in the woods, the most natural thing in the world is to assume that you’ve discovered an intentional artifact, a product of ‘intelligent design.’ Darwin’s world-historical insight was to see how natural processes lacking motive, intelligence, or foresight could accomplish the same thing.

But what made this insight so extraordinary? Why was the rest of the crane so difficult to see? Why, in other words, did it take a Darwin to show us something that, in hindsight at least, should have been so very obvious?

Perspective is the most obvious, most intuitive answer. We couldn’t see because we were in no position to see. We humans are ephemeral creatures, with imaginations that can be beggared by mere centuries, let alone the vast, epochal processes that created us. Given our frame of informatic reference, the universe is an engine that idles so low as to seem cold and dead–obviously so. In a sense, Darwin was asking his peers to believe, or at least consider, a rather preposterous thing: that their morphology only seemed fixed, that when viewed on the appropriate scale, it became wax, something that sloshed and spilled into environmental moulds.

A skyhook, on this interpretation, is simply what cranes look like in the fog of human ignorance, an artifact of myopia–blindness. Lacking information pertaining to our natural origins (and what is more, lacking information regarding that lack), we resorted to those intuitions that seemed most immediate, found ways, as we are prone to do, to spin virtue and flattery out of our ignorance. Waste not, want not.

All this should be clear enough, I think. As ‘brights’ we have an ingrained animus against the beliefs of our outgroup competitors. ‘Intelligent design,’ in our circles at least, is what psychologists call an ‘identity claim,’ a way to sort our fellows on perceived gradients of cognitive authority. As such, it’s very easy to redefine, as far as intentional concepts go. Contamination is contamination, terminological or no. And so we have grown used to using the intuitive, which is to say, skyhook, concept of design ‘under erasure,’ as continental philosophers might say–as a mere façon de parler.

But I fear the situation is nowhere quite so easy, that when we take a close look at the ‘skyhook’ structure of ‘design,’ when we take care to elucidate its informatic structure as a kind of perspectival artifact, we have good reason to be uncomfortable–very uncomfortable. Trading in our intuitive concept of design for a scientifically informed one, as Dennett recommends, actually delivers us to a potentially catastrophic implicature, one that only seems innocuous for the very reason our ancestors thought ‘design’ so obvious and innocuous: ignorance and informatic neglect.

On Dennett’s account, design is a kind of ‘stance’–literally, a cognitive perspective–a computationally parsimonious way of making sense of things. He has no problem with relying on intentional concepts because, as we have seen, he thinks them reliable, at least enough for his purposes. For my part, I prefer to eschew ‘stances’ and the like and talk exclusively in terms of heuristics. Why? For one, heuristics are entirely compatible with the mechanistic approach of the life sciences–unlike stances. As such, they do not share the liabilities of intentional concepts, which are much more prone to be applied out of school, and so carry an increased risk of generating conceptual confusion. Moreover, by skirting intentionality, heuristic talk obviates the threat of circularity. The holy grail of cognitive science, after all, is to find some natural (which is to say, nonintentional) way to explain intentionality. But most importantly, heuristics, unlike stances, make explicit the profound role played by informatic neglect. Heuristics are heuristics (as opposed to optimization devices) by virtue of the way they systematically ignore various kinds of information. And this, as we shall see, makes all the difference in the world.

Recall the question of why we needed Darwin to show us the crane of evolution. The crane was so hard to see, I suggested, because of our limited informatic frame of reference–our myopic perspective. So then why did we assume design was the appropriate model? Why, in the absence of information pertaining to natural selection, should design become the default explanation of our biological origins as opposed to, say, ‘spontaneity’? When On the Origin of Species was published in 1859, for instance, many naturalists actually accepted some notion of purposive evolution; it was natural selection they found offensive, the mindlessness of biological origins. One can cite many contributing factors in answering this question, of course, but looming large over all of them is the fact that design is a natural heuristic, one of many specialized cognitive tools developed by our oversexed, undernourished ancestors.

By rendering the role of informatic neglect implicit, Dennett’s approach equivocates between ‘circumstantial’ and ‘structural’ ignorance, or in other words, between the mere inability to see and blindness proper. Some skyhooks we can dissolve with the accumulation of information. Others we cannot. This is why merely seeing the crane of evolution is not enough, why we must also put the skyhook of intuitive design on notice, quarantine it: we may be born in ignorance of evolution, but we die with the informatic neglect constitutive of design.

Our ignorance of evolution was never a simple matter of ignorance, it was also a matter of human nature, an entrenched mode of understanding, one incompatible with the facts of Darwinian evolution. Design, it seemed, was obviously true, either outright or upon the merest reflection. We couldn’t see the crane of evolution, not simply because we were in no position to see (given our ephemeral nature), but also because we were in position to see something else, namely, the skyhook of design. Think about the two photos I provided above, the way the latter, the skyhook, was obviously an obfuscation of the former, the crane, not merely because you had the original photo to reference, but because you could see that something had been covered over–because you had access, in other words, to information pertaining to the lack of information. The first photo of the crane strikes us as complete, as informatically sufficient. The second photo of the skyhook, however, strikes us as obviously incomplete.

We couldn’t see the crane of evolution, in other words, not just because we were in position to see something else, the skyhook of design, but because we were in position to see something else and nothing else. The second photo, in other words, should have looked more like this:

[image missing]

Enter the Blind Brain Theory. BBT analyzes problems pertaining to intentionality and consciousness in terms of informatic availability and cognitive applicability, in terms of what information we can reasonably expect conscious deliberation to access, and the kinds of heuristic limitations we can reasonably expect it to face. Two of the most important concepts arising from this analysis are apparent sufficiency and asymptotic limitation. Since differentiation is always a matter of more information, informatic sufficiency is always the default. We always need to know more, you could say, to know that we know less. This is why intentionality and consciousness, on the BBT account, confront philosophy and science with so many apparent conundrums: what we think we see when we pause to reflect is limned and fissured by numerous varieties of informatic neglect, deficits we cannot intuit. Thus asymptosis and the corresponding illusion of informatic sufficiency, the default sense that we have all the information we need simply because we lack information pertaining to the limits of that information.
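
The sufficiency-as-default point can be made concrete with a toy sketch (mine; the ‘schema’ stands in for the extra information): nothing in a body of information flags its own insufficiency, so detecting a gap always requires further information about what should have been there.

```python
# Toy illustration (my construction) of default sufficiency: a record
# cannot signal what it lacks; only *additional* information -- here a
# hypothetical schema -- reveals the neglect.

record = {"colour": "red", "shape": "round"}     # what cognition receives

def looks_complete(rec):
    """With no external standard, any record looks sufficient."""
    return all(value is not None for value in rec.values())

SCHEMA = {"colour", "shape", "mass", "provenance"}  # the fuller picture

print(looks_complete(record))   # True: sufficiency is the default
print(SCHEMA - set(record))     # the missing fields, visible only once
                                # we possess more information
```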

This is where I think all those years I spent reading continental philosophy have served me in good stead. This is also where those without any background in continental thought generally begin squinting and rolling their eyes. But the phenomenon is literally one we encounter every day–every waking moment in fact (although this would require a separate post to explain). In epistemological terms, it refers to ‘unknown-unknowns,’ or unk-unks as they are called in engineering. In fact, we encountered its cognitive dynamics just above when puzzling through the question of why natural selection, which seems so obvious to us in hindsight, could count as such a revelation prior to 1859. Natural selection, quite simply, was an unknown unknown. Lacking the least information regarding the crane, in other words, meant that design seemed the only option, the great big ‘it’s-gotta-be’ of early nineteenth century biology.

In a sense, all BBT does is import this cognitive dynamic–call it the ‘Only-game-in-town Effect’–into human cognition and consciousness proper. In continental philosophy you find this dynamic conceptualized in a variety of ways, as ‘presence’ or ‘identity thinking,’ for example, in its positive incarnation (sufficiency), or as ‘differance’ or ‘alterity’ in its negative (neglect). But as I say, we witness it everywhere in our collective cognitive endeavours. All you need do is think of the way the accumulation of alternatives has the effect of progressively weakening novel interpretations, such as Kant’s, say, in philosophy. Kant, who was by no means a stupid man, could actually believe in the power of transcendental deduction to deliver synthetic a priori truths simply because he was the first. Its interpretative nature only became evident as the variants, such as Fichte’s, began piling up. Or consider the way contextualizing claims, giving them speakers and histories and motives and so on, has the strange effect of relativizing them, somehow robbing them of veridical force. Back in my teaching days, I would illustrate the power of unk-unk via a series of recontextualizations. I would give the example of a young man stabbing an old man, and ask my students if it’s a crime. “Yes,” they would cry. “What could be more obvious!” Then I would start stacking contexts, such as a surrounding mob of other men stabbing one another, then a giant arena filled with screaming spectators watching it all, and so on.

The Only-game-in-town Effect (or the Invisibility of Ignorance), according to BBT, plays an even more profound role within us than it does between us. Conscious experience and cognition as we intuit them, it argues, are profoundly structured ‘by’ unk-unk–or informatic neglect.

This is all just to say that the skyhook of design always fills the screen, so to speak, that it always strikes us as sufficient, and can only be revealed as parochial through the accumulation of recalcitrant information. And this makes plain the astonishing nature of Darwin’s achievement, just how far he had to step out of the traditional conceptual box to grasp the importance of natural selection. At the same time, it also explains why, at least for some, the crane was in the ‘air,’ so to speak, why Darwin ultimately found himself in a race with Wallace. The life sciences, by the middle of the 19th century, had accumulated enough ‘recalcitrant information’ to reveal something of the heuristic parochialism of intuitive design and its inapplicability to the life sciences as a matter of fact, as opposed to mere philosophical reflection a la, for instance, Hume.

Intuitive design is a native cognitive heuristic that generates ‘sufficient understanding’ via the systematic neglect of ‘bottom-up’ causal information. The apparent ‘sufficiency’ of this understanding, however, is simply an artifact of this self-same neglect: as is the case with other intentional concepts, it is notoriously difficult to ‘get behind’ this understanding, to explain why it should count as cognition at all. To take Dennett’s example of finding a watch in the forest: certainly understanding that a watch is an intentional artifact, the product of design, tells you something very important, something that allows you to distinguish watches from rocks, for instance. It also tells you to be wary, that other agents such as yourself are about, perhaps looking for that watch. Watch out!

But what, exactly, is it you are understanding? Design seems to possess a profound ‘resolution’ constraint: unlike mechanism, which allows explanations at varying levels of functional complexity, organelles to cells, cells to tissues, tissues to organs, organs to animal organisms, etc., design seems stuck at the level of the ‘personal,’ you might say. Thus the appropriateness of the metaphor: skyhooks leave us hanging in a way that cranes do not.

And thus the importance of cranes. Precisely because of its variable degrees of resolution, you might say, mechanistic understanding allows us to ‘get behind’ our environments, not only to understand them ‘deeper,’ but to hack and reprogram them as well. And this is the sense in which cranes trump skyhooks, why it pays to see the latter as perspectival distortions of the former. Design, as it is intuitively understood, is a skyhook, which is to say, a cognitive illusion.

And here we can clearly see how the threat of tendentiousness hangs over Dennett’s apologetic redefinitional project. The design heuristic is effective precisely because it systematically neglects causal information. It allows us to understand what systems are doing and will do without understanding how they actually work. In other words, what makes design so computationally effective across a narrow scope of applications, causal neglect, seems to be the very thing that fools us into thinking it’s a skyhook–causal neglect.

Looked at in this way, it suddenly becomes very difficult to parse what it is Dennett is recommending. Replacing the old, intuitive, skyhook design-concept with a new, counterintuitive, crane design-concept means using a heuristic whose efficiencies turn on causal neglect in a manner amenable to causal explanation. Now it seems easy, I suppose, to say he’s simply drawing a distinction between informatic neglect as a virtue and informatic neglect as a vice, but can this be so? When an evolutionary psychologist says, ‘We are designed for persistence hunting,’ are we cognizing ‘designed for’ in a causal sense? If so, then what’s the bloody point of hanging onto the concept at all? Or are we cognizing ‘designed for’ in an intentional sense? If so, then aren’t we simply wrong? Or are we, as seems far more likely the case, cognizing ‘designed for’ in an intentional sense only ‘as if’ or ‘under erasure,’ which is to say, as a mere façon de parler?

Either way, the prospects for Dennett’s apologetic project, at least in the case of design, seem to look exceedingly bleak. The fact that design cannot be the skyhook it seems to be, that it is actually a crane, does nothing to change the fact that it leverages computational efficiencies via causal neglect, which is to say, by looking at the world through skyhook glasses. The theory behind his cranes is impeccable. The very notion of crane-design as a deployable concept, however, is incoherent. And using concepts ‘under erasure,’ as one must do when using ‘design’ in evolutionary contexts, would seem to stand upon the very lip of an eliminativist abyss.

And this is simply an instance of what I’ve been ranting about all along here on Three Pound Brain, the calamitous disjunction of knowledge and experience, and the kinds of distortions it is even now imposing on culture and society. The Semantic Apocalypse.

.

But Dennett is interested in far more than simply providing a new Darwinian understanding of design, he wants to mint a new crane-coinage for all intentional concepts. So the question becomes: To what extent do the considerations above apply to intentionality as a whole? What if it were the case that all the peculiarities, the interminable debates, the inability to ‘get behind’ intentionality in any remotely convincing way–what if all this were more than simply coincidental? Insofar as all intentional concepts systematically neglect causal information, we have ample reason to worry. Like it or not, all intentional concepts are heuristic, not in any old manner, but in the very manner characteristic of design.

Brentano, not surprisingly, provides the classic account of the problem in Psychology From an Empirical Standpoint, some fifteen years after the publication of On the Origin of Species:

Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. 68

No physical phenomenon exhibits intentionality, and likewise, no intentional phenomenon exhibits anything like causality, at least not obviously so. The reason for this, on the BBT account, is as clear as can be. Most are inclined to blame the computational intractability of cognizing and tracking the causal complexities of our relationships. The idea (and it is a beguiling one) is that aboutness is a kind of evolved discovery, that the exigencies of natural selection cobbled together a brain capable of exploiting a preexisting logical space–what we call the ‘a priori.’ Meaning, or intentionality more generally, on this account is literally ‘found money.’ The vexing question, as always, is one of divining how this logical level is related to the causal.

On the BBT account, the computational intractability of cognizing and tracking the causal complexities of our environmental relationships is also to blame, but aboutness, far from being found money, is rather a kind of ‘frame heuristic,’ a way for the brain to relate itself to its environments absent causal information pertaining to this relation. It presumes that consciousness is a distributed, dynamic artifact of some subsystem of the brain and that, as such, faces severe constraints on its access to information generally, and almost no access to information regarding its own neurofunctionality whatsoever:

It presumes, in other words, that the information available for deliberative or conscious cognition must be, for developmental as well as structural reasons, drastically attenuated. And it’s easy to see how this simply has to be the case, simply given the dramatic granularity of consciousness compared to the boggling complexities of our peta-flopping brains.

The real question–the million dollar question, you might say–turns on the character of this informatic attenuation. At the subpersonal level, ‘pondering the mental’ consists (we like to suppose anyway) in the recursive uptake of ‘information regarding the mental’ by ‘System 2,’ or conscious, deliberative cognition. The question immediately becomes: 1) Is this information adequate for cognition? and 2) Are the heuristic systems employed even applicable to this kind of problem, namely, the ‘problem of the mental’? Is the information (as Dennett seems to assume throughout his corpus) ‘merely compressed,’ which is to say, merely stripped to its essentials to maximize computational efficiencies? Or is it a far, far messier affair? Given that the human cognitive ‘toolkit,’ as they call it in ecological rationality circles, is heuristically adapted to troubleshoot external environments, can we assume that mental phenomena actually lie within their scope of application? Could the famed and hoary conundrums afflicting philosophy of mind and consciousness research be symptoms of heuristic overreach, the application of specialized cognitive tools to a problem set they are simply not adapted to solve?

Let’s call the issue expressed in this nest of questions the ‘Attenuation Problem.’
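
The stakes of the Attenuation Problem can be put in computational terms (a toy contrast of my own, using Python’s standard zlib): if the information reaching deliberative cognition is merely compressed, the essentials survive and can be unpacked; if it is lossy, nothing downstream can recover, or even flag, what was dropped.

```python
# Toy contrast (my illustration) between 'merely compressed' and
# genuinely attenuated information.

import zlib

state = b"neurofunctional detail " * 100   # stand-in for the full picture

compressed = zlib.compress(state)          # reversible: essentials preserved
assert zlib.decompress(compressed) == state

attenuated = state[:8]                     # lossy: a fragment posing as a whole
# No operation on the fragment alone recovers the original -- and nothing
# in the fragment marks it *as* a fragment.
print(len(state), len(compressed), len(attenuated))
```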

It’s worth noting at this juncture that although Dennett is entirely on board with the notion that ‘the information available for deliberative or conscious cognition must be drastically attenuated’ (see, for instance, “Real Patterns”), he inexplicably shies from any detailed consideration of the nature of this attenuation. Well, perhaps not so inexplicably. For Dennett, the Good Tricks are good because they are efficacious and because they are winners of the evolutionary sweepstakes. He assumes, in other words, that the Attenuation Problem is no problem at all, simply because it has been resolved in advance. Thus, his apologetic, redefinitional programme. Thus his endless attempts to disabuse his fellow travellers of the perceived need to make skyhooks real:

I know that others find this vision so shocking that they turn with renewed eagerness to the conviction that somewhere, somehow, there just has to be a blockade against Darwinism and AI. I have tried to show that Darwin’s dangerous idea carries the implication that there is no such blockade. It follows from the truth of Darwinism that you and I are Mother Nature’s artefacts, but our intentionality is none the less real for being an effect of millions of years of mindless, algorithmic R and D instead of a gift from on high. Darwin’s Dangerous Idea, 426-7

Cranes are all we have, he argues, and as it turns out, they are more than good enough.

But, as we’ve seen in the case of design, the crane version forces us to check our heuristic intuitions at the door. Given that the naturalization of design requires adducing the very causal information that intuitive design neglects to leverage heuristic efficiencies, there cannot be, in effect, any coherent, naturalized concept of design, as opposed to the employment of intuitive design ‘under erasure.’ Real or not, the skyhook comes first, leaving us to append the rest of the crane as an afterthought. Apologetic redefinition is simply not enough.

And this suggests that something might be wrong with Dennett’s arguments from efficacy and evolution for the ‘good enough’ status of derived intentionality. As it turns out, this is precisely the case. Despite their prima facie appeal, neither the apparent efficacy nor the evolutionary pedigree of our intentional concepts provides Dennett with what he needs.

To see how this could be the case, we need to reconsider the two conceptual dividends of BBT considered above, sufficiency and neglect. Since more information is required to flag the insufficiency of the information (be it ‘sensory’ or ‘cognitive’) broadcast through or integrated into consciousness, sufficiency is the perennial default. This is the experiential version of what I called the ‘Only-game-in-town Effect’ above. This means that insufficiency will generally have to be inferred against the grain of a prior sense of intuitive sufficiency. Thus, one might suppose, evolutionary theory’s continued difficulties with intuitive design, and science’s battle against anthropomorphic worldviews more generally: not only does science force us to reason around elements of our own cognitive apparatus, it forces us to overcome the intuition that these elements are good enough to tell us what’s what on their own.

Dennett, in this instance at least, is arguing with the intuitive grain!

Intentionality, once again, systematically neglects causal information. As Chalmers puts it, echoing Leibniz and his problem of the Mill:

The basic problem has already been mentioned. First: Physical descriptions of the world characterize the world in terms of structure and dynamics. Second: From truths about structure and dynamics, one can deduce only further truths about structure and dynamics. And third: truths about consciousness are not truths about structure and dynamics. “Consciousness and Its Place in Nature”

Informatic neglect simply means that conscious experience tells us nothing about the neurofunctional details of conscious experience. Rather, we seem to find ourselves stranded with an eerily empty version of what the life sciences tell us we in fact are, the asymptotic (finite but unbounded) clearing called ‘consciousness’ or ‘mind’ containing, as Brentano puts it, ‘objects within itself.’ What is a mere fractional slice of the neuro-environmental circuit sketched above literally fills the screen of conscious experience, as it were, appearing something like this:

[Figure: a lone arrow suspended in a field of blank white space]

Which is to say, something like a first-person perspective, where environmental relations appear within a ‘transparent frame’ of experience. Thus all the blank white space around the arrow: I wanted to convey the strange sense in which you are the ‘occluded frame,’ here, a background where the brain drops out, not just partially, not even entirely, but utterly. Floridi refers to this as the ‘one-dimensionality of experience,’ the way “experience is experience, only experience, and nothing but experience” (The Philosophy of Information, 296). Experience utterly fills the screen, relegating the mechanisms that make it possible to oblivion. As I’ve quipped many times: Consciousness is a fragment that constitutively confuses itself for a whole, a cog systematically deluded into thinking it’s the entire machine. Sufficiency and neglect, taken together, mean we really have no way short of a mature neuroscience of determining the character of the informatic attenuation (how compressed, depleted, fragmentary, distorted, etc.) of intentional phenomena.

So consider the evolutionary argument, the contention that evolution assures us that intentional attenuations are generally happy adaptations: Why else would they have been selected?

To this, we need only reply, Sure, but adapted for what? Say subreption was the best way for evolution to proceed: We have sex because we lust, not because we want to replicate our genetic information, generally speaking. We pair-bond because we love, not because we want to successfully raise offspring to the age of sexual maturation, generally speaking. When it comes to evolution, we find more than a few ‘ulterior motives.’ One need only consider the kinds of evolutionary debates you find in cognitive psychology, for instance, to realize that our intuitive sense of our myriad capacities need not line up with their adaptive functions in any way at all, let alone those we might consider ‘happy.’

Or say evolution was only concerned with providing what might be called ‘exigency information’ for deliberative cognition, the barest details required for a limited subset of cognitive activities. One could cobble together a kind of neuro-Wittgensteinian argument, suggesting that we do what we do just fine, but that as soon as we pause to theorize about what we do, we find ourselves limited to mere informatic rumour and innuendo that, thanks to sufficiency, we promptly confuse for apodictic knowledge. It literally could be the case that what we call philosophy amounts to little more than submitting the same ‘mangled’ information to various deliberative systems again and again and again, hoping against hope for a different result. In fact, you could argue that this is precisely what we should expect to be the case, given that we almost certainly didn’t evolve to ‘philosophize.’

In other words, how does Dennett know the ‘intentionality’ he and others are ‘making explicit’ accurately describes the mechanisms, the Good Tricks, that evolution actually selected? He doesn’t. He can’t.

But if the evolutionary argument holds no water, what about Dennett’s argument regarding the out-and-out efficacy of intentional concepts? Unlike the evolutionary complaint, this argument is, I think, genuinely powerful. After all, we seem to use intentional concepts to understand, predict, and manipulate each other all the time. And perhaps even more impressively, we use them (albeit in stripped-down form) in formal semantics and all its astounding applications. Fodor, for instance, famously argues that the use of representations in computation provides an all-important ‘compatibility proof.’ Formalization links semantics to syntax, and computation links syntax to causation. It’s hard to imagine a better demonstration of the way skyhooks could be linked to cranes.
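Fodor’s point is easy to make concrete. In the sketch below (the formula encoding and function names are my own illustrative choices, not anything Fodor specifies), a routine that manipulates symbols purely by their shape nonetheless preserves truth across every valuation:

from itertools import product

def eval_formula(f, valuation):
    # Pure structural recursion: the machine consults shape, never 'meaning'.
    if isinstance(f, str):  # atomic proposition
        return valuation[f]
    op, *args = f
    if op == "not":
        return not eval_formula(args[0], valuation)
    if op == "and":
        return eval_formula(args[0], valuation) and eval_formula(args[1], valuation)
    if op == "or":
        return eval_formula(args[0], valuation) or eval_formula(args[1], valuation)
    raise ValueError(op)

def de_morgan(f):
    # A purely syntactic rewrite: not(A and B) becomes (not A) or (not B).
    if isinstance(f, tuple) and f[0] == "not" \
            and isinstance(f[1], tuple) and f[1][0] == "and":
        a, b = f[1][1], f[1][2]
        return ("or", ("not", a), ("not", b))
    return f

original = ("not", ("and", "p", "q"))
rewritten = de_morgan(original)

# The rewrite never consulted truth, yet truth is preserved on all valuations:
for p, q in product([True, False], repeat=2):
    v = {"p": p, "q": q}
    assert eval_formula(original, v) == eval_formula(rewritten, v)
print("shape-driven rewrite preserved truth everywhere")

Note that de_morgan inspects only tuple shapes; truth-preservation falls out of the formal correspondence, which is all a ‘compatibility proof’ seems to require.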

Except that, like fitting the belly of Africa into the gut of the Caribbean, it never quite seems to work when you actually try. Thus Searle’s famous Chinese Room Argument and Harnad’s generalization of it into the Symbol Grounding Problem. But the intuition persists that it has to work somehow: After all, what else could account for all that efficacy?

Plenty, it turns out. Intentional concepts, no matter how attenuated, will be efficacious to the degree that the brain is efficacious, simply by virtue of being systematically related to the activity of the brain. The upshot of sufficiency and neglect, recall, is that we are prone to confuse what little information we have available for nearly all the information available. The greater neuro-environmental circuit revealed by third-person science simply does not exist for the first-person, not even as an absence. This generates the problem of metonymicry, or the tendency for consciousness to take credit for the whole cognitive show regardless of what actually happens neurocomputationally backstage. No matter how mangled our metacognitive understanding, how insufficient the information broadcast or integrated, in the absence of contradicting information, it will count as our intuitive baseline for what works. It will seem to be the very rule.
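A crude simulation shows how little metonymicry requires (every number below is an arbitrary stand-in): sample a small fraction of a mechanism’s state, hide the remainder, and the sample will still co-vary with the mechanism’s success, which is exactly the profile that invites it to take the credit:

import random

random.seed(0)  # arbitrary seed, for repeatability

reports, outcomes = [], []
for _ in range(2000):
    hidden = [random.random() for _ in range(100)]  # the whole machine
    success = sum(hidden) > 50.0                    # success depends on ALL the factors
    reports.append(sum(hidden[:3]))                 # the 'conscious' sample: 3 of 100
    outcomes.append(success)

# The sample is a genuine (if tiny) part of the cause, so it tracks success;
# with no record of what was neglected, it looks like the whole story.
wins = [r for r, o in zip(reports, outcomes) if o]
losses = [r for r, o in zip(reports, outcomes) if not o]
print(sum(wins) / len(wins))      # reliably higher...
print(sum(losses) / len(losses))  # ...than this, despite 97 neglected factors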

And this, my view predicts, is what science will eventually make of the ‘a priori.’ It will show it to be of a piece with the soul, which is to say, more superstition, a cognitive illusion generated by sufficiency and informatic neglect. As a neural subsystem, the conscious brain has more than just the environment from which to learn; it also has the brain itself. Perhaps logic and mathematics as we intuitively conceive them are best thought of, from the life sciences perspective at least (that is, the perspective you hope will keep you alive every time you see your doctor), as kinds of depleted, truncated, informatic shadows cast by brains performatively exploring the most basic natural permutations of information processing, the combinatorial ensemble of nature’s most fundamental, hyper-applicable, interaction patterns.

On this view, ‘computer programming,’ for instance, looks something like:

[Figure: two machines conjoined in a single circuit]

where essentially, you have two machines conjoined, two ‘implementations’ with semantics arising as an artifact of the varieties of informatic neglect characterizing the position of the conscious subsystem on this circuit. On this account, our brains ‘program’ the computer, and our conscious subsystems, though they do participate, do so under a number of onerous informatic constraints. As a result, we program blind to all aetiologies save the ‘lateral,’ which is to say, those functionally independent mechanisms belonging to the computer and to our immediate environment more generally. In place of any thoroughgoing access to these ‘medial’ (functionally dependent) causal relations, conscious cognition is forced to rely on what little information it can glean, which is to say, the cartoon skyhooks we call semantics. Since this information is systematically related to what the brain is actually doing, and since informatic neglect renders it apparently sufficient, conscious cognition decides it’s the outboard engine driving the whole bloody boat. Neural interaction patterns author inference schemes that, thanks to sufficiency and neglect, conscious cognition deems the efficacious author of computer interaction patterns.

Semantics, in other words, can be explained away.

The very real problem of metonymicry allows us to see how Dennett’s famous ‘two black boxes’ thought-experiment (Darwin’s Dangerous Idea, 412-27), far from dramatically demonstrating the efficacy of intentionality, is simply an extended exercise in question-begging. Dennett tells the story of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, encoded in different programming languages. One box has two buttons labelled alpha and beta, while the second box has three lights coloured yellow, red, and green. A single wire connects them. Unbeknownst to the researchers, the button box simply transmits a true statement when the alpha button is pushed, which the bulb box acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
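The setup, as described, is simple enough to caricature in code; the ‘facts’ and the hex encoding below are invented stand-ins for Dennett’s supercomputers and their idiosyncratic machine languages:

import random

FACTS = {"snow is white", "whales are mammals", "2 + 2 = 4"}
FALSEHOODS = {"snow is green", "whales are fish", "2 + 2 = 5"}

def button_box(button):
    # Box 1: alpha transmits an encoded truth, beta an encoded falsehood.
    pool = FACTS if button == "alpha" else FALSEHOODS
    statement = random.choice(sorted(pool))
    return statement.encode().hex()   # an idiosyncratic 'machine code'

def bulb_box(transmission):
    # Box 2: decodes, checks its own database, lights a bulb.
    try:
        statement = bytes.fromhex(transmission).decode()
    except ValueError:
        return "yellow"               # can't parse the signal at all
    if statement in FACTS:
        return "red"                  # agreement
    if statement in FALSEHOODS:
        return "green"                # disagreement
    return "yellow"

# At the signal level the regularity is opaque (hex gibberish on a wire),
# but described intentionally it is child's play:
assert bulb_box(button_box("alpha")) == "red"
assert bulb_box(button_box("beta")) == "green"
assert bulb_box("deadbeef") == "yellow"  # the researchers' substituted signals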

What Dennett wants to show is how these box-to-box interactions would be impossible to decipher short of taking the intentional stance, in which case, as he points out, the communications become easy enough for a child to comprehend. But all he’s really saying is that the coded transmissions between our brains only make sense from the standpoint of our environmentally informed brains–that the communications between them are adapted to their idiosyncrasies as environmentally embedded, supercomplicated systems. He thinks he’s arguing the ineliminability of intentionality as we intuitively conceive it, as if it were the one wheel required to make the entire mechanism turn. But again, the spectre of metonymicry, the fact that, no matter where our intentional intuitions fit on the neurofunctional food chain, they will strike us as central and efficacious even when they are not, means that all this thought experiment shows–all that it can show, in fact–is that our brains communicate in idiosyncratic codes that conscious cognition seems to access via intentional intuitions. To assume that our assumptions regarding the ‘intentional’ capture that code without gross, even debilitating distortions simply begs the question.

The question we want answered is how intentionality as we understand it is related to the efficacy of our brains. We want to know how conscious experience and cognition fit into this far more sophisticated mechanistic picture. Another way of putting this, since it amounts to the same thing, is that we want to know whether it makes any sense doing philosophy as we have traditionally conceived it. How far can we trust our native intuitions regarding intentionality? The irony, of course, is that Dennett himself argues no, at least to the extent that skyhooks are both intuitive and illusory. Efficacy, understood via design, is ‘top-down,’ the artifact of agency, which is to say, another skyhook. The whole point of introducing the metaphor of cranes was to find some way of capturing our ‘skyhookish’ intuitions in a manner amenable to Darwinian evolution. And, as we saw in the case of design, above, this inexorably means using the concept ‘under erasure.’

The way cognitive science may force us to use all intentional concepts.

.

Consciousness, whatever it turns out to be, is informatically localized. We are just beginning the hard work of inventorying all the information, constitutive or otherwise, that slips through its meagre nets. Because it is localized, it lacks access to vast amounts of information regarding its locality. This means that it is locally conditioned in such a way that it assumes itself locally unconditioned–to be a skyhook as opposed to a crane.

A skyhook, of course, that happens to look something like this:

[Figure: the same lone arrow in a blank white field]

which is to say, what you are undergoing this very moment, reading these very words. On the BBT account, the shape of the first-person is cut from the third-person with the scissors of neglect. The best way to understand consciousness as we humans seem to generally conceive it, to unravel the knots of perplexity that seem to belong to it, is to conceive it in privative terms, as the result of numerous informatic subtractions.* Since those subtractions are a matter of neglect from the standpoint of conscious experience and cognition, they in no way exist for conscious experience and cognition, which means their character utterly escapes our ability to cognize, short of the information accumulating in the cognitive sciences. Experience provides us with innumerable assumptions regarding what we are and what we do, intuitions stamped in various cultural moulds, all conforming to the metaphorics of the skyhook. Dennett’s cranes are simply attempts to intellectually plug these skyhooks into the meat that makes them possible, allowing him to thus argue that intentionality is real enough.

Metonymicry shows that the crane metaphor not only fails to do the conceptual heavy lifting that Dennett’s apologetic redefinitional strategy demands, it also fails to capture the ‘position,’ if you will, of our intentional skyhooks relative to the neglected causality that makes them possible. Cranes may be ‘grounded,’ but they still have hooks: this is why the metaphor is so suggestive. Mundane cranes may be, but they can still do the work that skyhooks accomplish via magic. The presumption that intentional concepts do the work we think they do is, you could say, built right into the metaphoric frame of Dennett’s argument. But the problem is that skyhooks are not ‘cranes’; rather, they are cogs, mechanistic moments in a larger mechanism, rising from neglected processes to discharge neglected functions. They hang in the meat, and the question of where they hang, and of how closely their functional position matches or approximates their intuitive one, remains profoundly open and entirely empirical.

Thus, the pessimistic, quasi-eliminativist thrust of BBT: once metonymicry decouples intentionality from neural efficacy, it seems clear there are far more ways for our metacognitive intuitions to be deceived than otherwise.

Either way, the upshot is that efficacy, like evolution, guarantees nothing when it comes to intentionality. It really could be the case that we are simply ‘pre-Darwinian’ with reference to intentionality, in a manner resembling the various commitments to design held back in Darwin’s day. Representation could very well suffer the same fate vis-à-vis the life sciences–it literally could become a concept that we can only use ‘under erasure’ when speaking of human cognition.

Science overcoming the black box of the brain could be likened to a gang of starving thieves breaking into a treasure room they had obsessively pondered for the entirety of their unsavoury careers. They range about, kicking over cases filled with paper, fretting over the fact that they can’t find any gold, anything possessing intrinsic value. Dennett is the one who examines the paper, and holds it up declaring that it’s cash and so capable of providing all the wealth anyone could ever need.

I’m the one-eyed syphilitic, the runt of the evil litter, who points out Jefferson Davis staring up from each and every $50 bill.

.

Notes

* It’s worth pausing to consider the way BBT ‘pictures’ consciousness. First, BBT is agnostic on the issue of how the brain generates consciousness; it is concerned, rather, with the way consciousness appears. Taking a deflationary, ‘working conception’ of nonsemantic information, and assuming three things–that consciousness involves integration of differentiated elements, that it has no way of cognizing information related to its own neurofunctionality, and that it is a subsystematic artifact of the brain–it sees the first-person and all its perplexities as expressions of informatic neglect. Consider the asymptotic margins of visual attention–the way the limits of what you are seeing this very moment cannot themselves be seen. BBT argues that similar asymptotic margins, or ‘information horizons,’ characterize all the modalities of conscious experience–as they must, insofar as the information available to each is finite. The radical step in the picture is to see how this trivial fact can explain the apparent structure of the first-person as an asymptotic partitioning of a larger informatic environment. So it suggests that a first-person structural feature as significant and as perplexing as the Now, for instance, can be viewed as a kind of temporal analogue of our visual margin, always apparently the ‘same’ because timing can no more time itself than seeing can see itself, and always different because other cognitive systems (as in the case of vision again) can frame it as another moment within a larger (in this case temporal) environment. Most of the problems pertaining to consciousness, the paradoxicality, the incommensurability, the inexplicability, can be explained if we simply adopt a subsystematic perspective, and start asking what information we could realistically expect to be available for uptake by what kinds of cognitive systems. Thus the radical empirical stakes of BBT: the ‘consciousness’ that remains seems far, far easier to explain than the conundrum-riddled one we think we see.