Dennett’s Black Boxes (Or, Meaning Naturalized)

by rsbakker

“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations. ”

—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”

So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis a vis one another.

Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are apparent only.

Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from the one supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
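The setup can be caricatured in a few lines of code—a toy sketch only, with invented statement lists standing in for the supercomputers’ databases of ‘true facts’:

```python
import random

# Illustrative stand-ins for the two databases; the names and contents
# are invented, not drawn from Dennett's fable.
TRUE_FACTS = ["snow is white", "2 + 2 = 4"]
FALSEHOODS = ["snow is green", "2 + 2 = 5"]

def button_box(button):
    """Box A: alpha transmits a true statement, beta a false one."""
    if button == "alpha":
        return random.choice(TRUE_FACTS)
    if button == "beta":
        return random.choice(FALSEHOODS)
    raise ValueError("unknown button")

def bulb_box(signal):
    """Box B: red for agreement (true), green for disagreement (false),
    yellow when the signal cannot be parsed at all."""
    if signal in TRUE_FACTS:
        return "red"
    if signal in FALSEHOODS:
        return "green"
    return "yellow"  # the researchers' substituted signals land here

assert bulb_box(button_box("alpha")) == "red"
assert bulb_box(button_box("beta")) == "green"
assert bulb_box("researcher noise") == "yellow"
```

The point of the caricature is how little it captures: the input-output regularities are trivial to state, while everything that makes red mean ‘agreement’ lies outside the code.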

The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account fails to explain the significance of what is going on.

Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.

Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings are behind the communication the whole nature of the research enterprise is radically transformed, much as it is when they show up to explain their ‘philosophical toy.’

This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.

The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders catch fly balls by running so as to keep the ball fixed in their visual field. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
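A minimal sketch makes the economy vivid. The controller below (every name and number is invented for illustration) knows nothing of gravity or trajectories; it only nudges the fielder’s speed to hold the gaze angle to the ball constant:

```python
import math

def run_to_catch(ball_traj, start_x=0.0, gain=2.0):
    """Chase a fly ball via the gaze heuristic: speed up when the gaze
    angle to the ball rises, slow down when it falls. No ball physics
    is modelled -- only the visual cue (the angle).
    ball_traj: successive (horizontal position, height) pairs."""
    x, v = start_x, 0.0
    prev_angle = None
    for bx, bh in ball_traj:
        angle = math.atan2(bh, bx - x)  # angle of gaze up to the ball
        if prev_angle is not None:
            v += gain * (angle - prev_angle)  # nudge speed to hold the angle
        prev_angle = angle
        x += v  # run
    return x  # fielder's final position

# Invented parabolic fly ball: launched near x=5, landing at x=20.
traj = [(5 + 1.5 * t, 0.4 * t * (10 - t)) for t in range(11)]
final = run_to_catch(traj)
# The fielder closes most of the gap without ever computing the trajectory.
```

The controller exploits a single correlate of the landing point—the optical angle—exactly the sense in which heuristic cognition trades modelling for cues.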

This is why Al and Bo’s revelation has the effect of mooting almost all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.

The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.

Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.

“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.

What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis a vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.

Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:

“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357

The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensible to a certain crucial ‘high level’ kind of causal explanation. He continues:

“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357

Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.

With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:

“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358

Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.

Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers can only take us so far in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we build those limits into the theory itself. If we confuse that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to confuse heuristic limits for general ones—to run afoul of the ‘only-game-in-town-effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).

Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).

Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.

Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see, “On the Interpretation of Artificial Souls“). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)

Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.

Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? Certainly seems that way, given how we cobble the two modes together both in science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.

The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.

This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarcely had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see, “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).

The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’— information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.

In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.

And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.

Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.

But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.

Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.

What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.

Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The yellow light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two lights and given the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.

This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.

The degree to which meaning can be genuinely naturalized.

We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding how they work, where they work, and the ways they go wrong.
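One tool from that toolbox can be sketched in a few lines—Gigerenzer’s ‘take-the-best’ heuristic, which decides between two options by checking cues in order of validity and stopping at the first that discriminates. The cue table here is invented purely for illustration:

```python
def take_the_best(a, b, cues):
    """Pick between options a and b using an ordered list of cue
    functions (highest validity first). Decide on the first cue that
    discriminates; fall back to a default guess if none does."""
    for cue in cues:
        ca, cb = cue(a), cue(b)
        if ca and not cb:
            return a
        if cb and not ca:
            return b
    return a  # no cue discriminates: guess

# Toy problem: which city is larger? Cities and cues are made up.
CITY = {
    "metropolis": {"has_team": True,  "is_capital": False},
    "smallville": {"has_team": False, "is_capital": False},
}
cues = [
    lambda c: CITY[c]["has_team"],    # most valid cue, checked first
    lambda c: CITY[c]["is_capital"],  # only consulted if the first ties
]
assert take_the_best("metropolis", "smallville", cues) == "metropolis"
```

Note what the sketch shares with the gaze heuristic and the missile: no model of the underlying system (actual city populations), just an ordered sensitivity to correlated cues—and, correspondingly, a well-defined ecology of problems where it works and where it crashes.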