
Reading From Bacteria to Bach and Back III: Beyond Stances

by rsbakker


The problem with his user-illusion model of consciousness, Dennett realizes, lies in its Cartesian theatricalization, the reflex to assume the reality of the illusion, and to thence argue that it is in fact this… the dumbfounding fact, the inexplicable explanandum. We acknowledge that consciousness is a ‘user-illusion,’ then insist this ‘manifest image’ is the very thing requiring explanation. Dennett’s de-theatricalization, in other words, immediately invites re-theatricalization, intuitions so powerful he feels compelled to devote an entire chapter to resisting the invitation, only to have otherwise generally sympathetic readers, like Tom Clark, re-theatricalize everything once again. To deceive us at all, the illusion itself has to be something possessing, minimally it seems, the capacity to deceive. Faced with the question of what the illusion amounts to, he writes, “It is a representation of a red stripe in some neural system of representation” (358), allowing Clark and others to reply, ‘and so possesses content called qualia.’

One of the striking features of From Bacteria to Bach and Back is the degree to which his trademark Intentional Systems Theory (IST) fades into the background. Rather than speak of the physical stance, design stance, and intentional stance, he continually references Sellars’s tripartite nomenclature from “Philosophy and the Scientific Image of Man”: the ‘original image’ (which he only parenthetically mentions), the ‘manifest image,’ and the ‘scientific image.’ The manifest image in particular, far more than the intentional stance, becomes his primary theoretical term.

Why might this be?

Dennett has always seen himself threading a kind of theoretical needle, fending off the scientifically preposterous claims of intentionalism on the one hand, and the psychologically bankrupt claims of eliminativism on the other. Where intentionalism strands us with impossible explanatory vocabularies, tools that cause more problems than they solve, eliminativism strands us with impoverished explanatory vocabularies, purging tools that do real work from our theoretical kits without replacing them. It’s not simply that Dennett wants, as so many of his critics accuse him, ‘to have it both ways’; it’s that he recognizes that having it both ways is itself the only way, theoretically speaking. What we want is to square the circle of intentionality and consciousness without running afoul of either squircles or blank screens, which is to say, inexplicable intentionalisms or deaf-mute eliminativisms.

Seen in this light, Dennett’s apparent theoretical opportunism, rapping philosophical knuckles for some applications of intentional terms, shaking scientific hands for others, begins to look well motivated—at least from a distance. The global theoretical devil, of course, lies in the local details. Intentional Systems Theory constitutes Dennett’s attempt to render his ‘middle way’ (and so his entire project) a principled one. In From Bacteria to Bach and Back he explains it thus:

There are three different but closely related strategies or stances we can adopt when trying to understand, explain, and predict phenomena: the physical stance, the design stance, and the intentional stance. The physical stance is the least risky but also the most difficult; you treat the phenomenon in question as a physical phenomenon, obeying the laws of physics, and use your hard-won understanding of physics to predict what will happen next. The design stance works only for things that are designed, either artifacts or living things or their parts, and have functions or purposes. The intentional stance works primarily for things that are designed to use information to accomplish their functions. It works by treating the thing as a rational agent, attributing “beliefs” and “desires” and “rationality” to the thing, and predicting that it will act rationally. 37

The strategy is straightforward enough. There’s little doubt that the physical stance, design stance, and intentional stance assist in solving certain classes of phenomena in certain circumstances, so when confronted by those kinds of phenomena in those kinds of circumstances, taking the requisite stance is a good bet. If we have the tools, then why not use them?

But as I’ve been arguing for years here at Three Pound Brain, the problems stack up pretty quickly, problems which, I think, find glaring apotheosis in From Bacteria to Bach and Back. The first problem lies in the granularity of stances, the sense in which they don’t so much explain cognition as merely divvy it up into three families. This first problem arises from the second, their homuncularity, the fact that ‘stances’ amount to black-box cognitive comportments, ways to manipulate/explain/predict things that themselves resist understanding. The third, and (from the standpoint of his thesis) most devastating problem, also turns on the second: the fact that stances are the very thing requiring explanation.

The reason the intentional stance, Dennett’s most famed explanatory tool, so rarely surfaces in From Bacteria to Bach and Back is actually quite simple: it’s his primary explanandum. The intentional stance cannot explain comprehension simply because it is, ultimately, what comprehension amounts to…

Well, almost. And it’s this ‘almost,’ the ways in which the intentional stance defects from our traditional (cognitivist) understanding of comprehension, which has ensnared Dennett’s imagination—or so I hope to show.

What does this defection consist in? As we saw, the retasking of metacognition to solve theoretical questions was doomed to run afoul of sufficiency-effects secondary to frame and medial neglect. The easiest way to redress these illusions lies in interrogating the conditions and the constitution of cognition. What the intentional stance provides Dennett is a granular appreciation of the performative, and therefore the social, fractionate, constructive, and circumstantial nature of comprehension. Like Wittgenstein’s ‘language games,’ or Kuhn’s ‘paradigms,’ or Davidson’s ‘charity,’ Dennett’s stances allow him to capture something of the occluded external and internal complexities that have for so long worried the ‘clear and distinct’ intuition of the ambiguous human cylinder.

The intentional stance thus plays a supporting role, popping up here and there in From Bacteria to Bach and Back insofar as it complicates comprehension. At every turn, however, we’re left with the question of just what it amounts to. Intentional phenomena such as representations, beliefs, rules, and so on are perspectival artifacts, gears in what (according to Dennett) is the manifest ontology we use to predict/explain/manipulate one another using only the most superficial facts. Given the appropriate perspective, he assures us, they’re every bit as ‘real’ as you and I need. But what is a perspective, let alone a perspectival artifact? How does it—or they—function? What are the limits of application? What constitutes the ‘order’ it tracks, and why is it ‘there’ as opposed to, say, here?

Dennett—and he’s entirely aware of this—really doesn’t have much more than suggestions and directions when it comes to these and other questions. As recently as Intuition Pumps, he explicitly described his toolset as “good at nibbling, at roughly locating a few ‘fixed’ points that will help us see the general shape of the problem” (79). He knows the intentional stance cannot explain comprehension, but he also knows it can inflect it, nudge it closer to a biological register, even as it logically prevents the very kind of biological understanding Dennett—and naturalists more generally—take as the primary desideratum. As he writes (once again in 2013):

I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to that question is—if it has a right answer—this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these and other areas, almost as well as it works in our daily lives as folk-psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and the extent of application of this good trick, the intentional stance. Intuition Pumps, 79

But that was then and this is now. From Bacteria to Bach and Back explicitly attempts to make good on this promissory note—to naturalize comprehension, which is to say, to cease merely exploring the scope and power of the intentional stance, and to provide us with a genuine naturalistic explanation. To explain, in the high-dimensional terms of nature, what the hell it is. And the only way to do this is to move beyond the intentional stance, to cease wielding it as a tool, to hoist it onto the work-bench, and to adduce the tools that will allow us to take it apart.

By Dennett’s own lights, then, he needs to reverse-engineer the intentional stance. Given his newfound appreciation for heuristic neglect, I understand why he senses the potential for doing this. A great deal of his argument for Cartesian gravity, as we’ve seen, turns on our implicit appreciation of the impact of ‘no information otherwise.’ But sensing the possibility of those tools, unfortunately, does not amount to grasping them. Short of explicit thematizations of neglect and sufficiency, he was doomed to remain trapped on the wrong side of the Cartesian event horizon.

On Dennett’s view, intentional stances are homuncular penlights more than homuncular projectors. What they see, ‘reasons,’ lies in the ‘eye of the beholder’ only so far as natural and neural selection provisions the beholder with the specialized competencies required to light them up.

The reasons tracked by evolution I have called ‘free-floating rationales,’ a term that has apparently jangled the nerves of some few thinkers, who suspect I am conjuring up ghosts of some sort. Not at all. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. Cubes had eight corners before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. Reasons existed long before there were reasoners. 50

To be more precise, the patterns revealed by the intentional stance exist independent of the intentional stance. For Dennett, the problematic philosophical step—his version of the original philosophical sin of intentionalism—is to think the cognitive bi-stability of these patterns, the fact they appear to be radically different when spied with a first-person penlight versus scientific floodlights, turns on some fundamental ontological difference.

And so, Dennett holds that a wide variety of intentional phenomena are real, just not in the way we have traditionally understood them to be real. This includes reasons, beliefs, functions, desires, rules, choices, purposes, and—pivotally, given critiques like Tom Clark’s—representations. So far as the denizens of this bestiary solve real-world problems, they have to grab hold of the world somehow, don’t they? The suggestion that intentional posits are no more problematic than formal or empirical posits (like numbers and centers of gravity) is something of a Dennettian refrain—as we shall see, it presumes the heuristics involved in intentional cognition possess the same structure as heuristics in other domains, which is simply not the case. Otherwise, so long as intentional phenomena actually facilitate cognition, it seems hard to deny that they broker some kind of high-dimensional relationship with the high-dimensional facts of our environment.

So what kind of relationship? Well, Dennett argues that it will be—has to be, given evolution—heuristic. So far as that relationship is heuristic, we can presume that it solves by taking the high-dimensional facts of the matter—what we might call the deep information environment—for granted. We can presume, in other words, that it will ignore the machinery, and focus on cues, available information systematically related to that machinery in ways that enable the prediction/explanation/manipulation of that machinery. In other words, rather than pick out the deep causal patterns responsible, it will exploit whatever available patterns possess some exploitable correlation to them.
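The structure of this claim is easy to make concrete. Below is a minimal sketch in Python (my own toy illustration, not anything from Dennett; every name and number is hypothetical): a ‘deep’ environment whose update rule stays hidden, and a heuristic that predicts outcomes by exploiting a shallow cue that merely correlates with them.

```python
import random

def deep_environment(state):
    """The hidden, 'deep' machinery; the heuristic never models this."""
    next_state = (3 * state + 7) % 101
    outcome = next_state % 2  # what actually happens next
    # The cue is shallow, available information that merely covaries
    # with the outcome; the correlation is reliable but imperfect.
    cue = outcome if random.random() < 0.9 else 1 - outcome
    return next_state, cue, outcome

def heuristic_predictor(cue):
    """Solves 'on the cheap': exploits the cue-outcome correlation
    while neglecting the machinery that produces both."""
    return cue

random.seed(1)
state, hits, trials = 42, 0, 10_000
for _ in range(trials):
    state, cue, outcome = deep_environment(state)
    hits += heuristic_predictor(cue) == outcome

print(f"accuracy: {hits / trials:.2f}")  # ~0.90, with zero causal insight
```

The predictor’s accuracy tracks the strength of the cue-outcome correlation, not any grip on the hidden rule; that is the sense in which heuristic cognition ‘solves’ systems it never models.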

So then where, one might ask, do the real patterns pertaining to ‘representation’ lie in this? What part or parts of this machine-solving machinery underwrites the ‘reality’ of representations? Just where do we find the ‘real patterns’ underwriting the content responsible for individuating our reports? It can’t be the cue, the available information happily correlated to the system or systems requiring solution, simply because the cue is often little more than a special-purpose trigger. The Heider-Simmel Illusion, for instance, provides a breathtaking example of just how little information it takes. So perhaps we need to look beyond the cue, to the adventitious correlations binding it to the neglected system or systems requiring solution. But if these are the ‘real patterns’ illuminated by the intentional stance, it’s hard to understand what makes them representational—more than hard, in fact, since these relationships consist in regularities, which, as whole philosophical traditions have discovered, are thoroughly incompatible with the distinctively cognitive properties of representation. Well, then, how about the high-dimensional machinery indirectly targeted for solution? After all, representations provide us a heuristic way to understand otherwise complex cognitive relationships. This is where Dennett (and most everyone else, for that matter) seems to think the real patterns lie, the ‘order which is there’: in the very machinery that heuristic systems are adapted—to avoid! Suddenly, we find ourselves stranded with regularities only indirectly correlated to the cues triggering different heuristic cognitive systems. How could the real patterns underwriting the reality of representations be the very patterns our heuristic systems are adapted to ignore?

But if we give up on the high-dimensional systems targeted for solution, perhaps we should be looking at the heuristic systems cognizing—perhaps this is where the real patterns underwriting the reality of representations lie, here, in our heads. But this is absurd, of course, since the whole point of saying representations are real (enough) is to say they’re out there (enough), independent of our determinations one way or another.

No matter how we play this discursive shell game, the structure of heuristic cognition guarantees that we’ll never discover the ‘real pattern pea,’ even with intentional phenomena so apparently manifest (because so useful in both everyday and scientific contexts) as representations. There are real systems, to be sure, systems that make ‘identifying representations’ as easy as directing attention to the television screen. But those systems are as much here as they are there, making that television screen simply another component in a greater whole. Without the here, there is no there, which is to say, no ‘representation.’ Medial neglect assures that the astronomical dimensionality of the here is flattened into near oblivion, stranding cognition with a powerful intuition of a representational there. Thanks to our ancestors, who discovered myriad ways to manipulate information to cue visual cognition out of school, to drape optical illusions across their cave walls, or to press them into lumps of clay, we’ve become so accustomed to imagery as to entirely forget the miraculousness of seeing absent things in things present. Those cues are more or less isomorphic to the actual systems comprising the ancestral problem ecologies visual cognition originally evolved to manage. This is why they work. They recapitulate certain real patterns of information in certain ways—as does your retina, your optic nerve, and every stage of visual cognition culminating in visual experience. The only thing ‘special’ about the recapitulations belonging to your television screen is their availability, not simply to visual cognition, but to our attempts to cognize/troubleshoot such instances of visual cognition. The recapitulations on the screen, unlike, say, the recapitulations captured by our retinas, are the one thing we can readily troubleshoot should they begin miscuing visual cognition. Neglect ensures the intuition of sufficiency, the conviction that the screen is the basis, as opposed to simply another component in a superordinate whole. So, we fetishize it, attribute efficacies belonging to the system to what is in fact just another component. All its enabling entanglements vanish into the apparent miracle of unmediated semantic relationships to whatever else happens to be available. Look! we cry. Representation!

Figure 1: This image of the Martian surface taken by Viking 1 in 1976 caused a furor on earth, for obvious reasons.

Figure 2: Images such as this one taken by the Mars Reconnaissance Orbiter reveal the former to be an example of facial pareidolia, an instance where information cues facial recognition where no faces are to be found. The “Face on Mars” seems to be an obvious instance of projection—mere illusion—as opposed to discovery. Until, that is, one realizes that both of these images consist of pixels cuing your visual systems ‘out of school’! Both, in other words, constitute instances of pareidolia: the difference lies in what they enable.

Some apparent squircles, it turns out, are dreadfully useful. So long as the deception is systematic, it can be instrumentalized any which way. Environmental interaction is the basis of neural selection (learning), and neural selection is the basis of environmental domination. What artificial visual cuing—‘representation’—provides is environmental interaction on the cheap, ways to learn from experience without having to risk or endure experience. A ‘good trick’ indeed!

This brings us to a great fault-line running through the entirety of Dennett’s corpus. The more instrumental a posit, the more inclined he is to say it’s ‘real.’ But when critics accuse him of instrumentalism, he adverts to the realities underwriting the instrumentalities, what enables them to work, to claim a certain (ambiguous, he admits) brand of realism. But as should now be clear, what he elides when he does this is nothing less than the structure of heuristic cognition, which blindly exploits the systematic correlations between information available and the systems involved to solve those systems as far as constraints on availability and capacity allow.

The reason he can elide the structure of heuristic cognition (and so find his real patterns argument convincing) lies, pretty clearly, I think, in the conflation of human intentional cognition (which is radically heuristic) with the intentional stance. In other words, he confuses what’s actually happening in instances of intentional cognition with what seems to be happening in instances of intentional cognition, given neglect. He runs afoul of Cartesian gravity. “We tend to underestimate the strength of the forces that distort our imaginations,” he writes, “especially when confronted by irreconcilable insights that are ‘undeniable’” (22). Given medial neglect, the inability to cognize our contemporaneous cognizing, we are bound to intuit the order as ‘there’ (as ‘lateral’) even when we, like Dennett, should know better. Environmentalization is, as Hume observed, the persistent reflex, the sufficiency effect explaining our default tendency to report medial artifacts, features belonging to the signal, as genuine environmental phenomena, or features belonging to the source.

As a heuristic device, an assumption circumventing the brute fact of medial neglect, the environmentalization heuristic possesses an adaptive problem ecology—or as Dennett would put it, ‘normal’ and ‘abnormal’ applications. The environmentalization heuristic, in other words, possesses adaptive application conditions. What Dennett would want to argue, I’m sure, is that ‘representations’ are no more or less heuristic than ‘centres of gravity,’ and that we are no more justified in impugning the reality of the one than the reality of the other. “I don’t see why my critics think their understanding about what really exists is superior to mine,” he complains at one point in From Bacteria to Bach and Back, “so I demur” (224). And he’s entirely right on this score: no one has a clue as to what attributing reality amounts to. As he writes regarding the reality of beliefs in “Real Patterns”:

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. 29

Heuristic Neglect Theory, however, actually puts us in a position to make a great deal of sense of ‘reality.’ We can see, rather plainly, I think, the disanalogy between ‘centres of gravity’ and ‘beliefs,’ the disanalogy that leaps out as soon as we consider how only the latter patterns require the intentional stance (or more accurately, intentional cognition) to become salient. Both are heuristic, certainly, but in quite different ways.

We can also see the environmentalization heuristic at work in the debate over whether ‘centres of gravity’ are real or merely instrumental, and Dennett’s claim that they lie somewhere in-between. Do ‘centres of gravity’ belong to the order which is there, or do we simply project them in useful ways? Are they discoveries, or impositions? Why do we find it so natural to assume either the one or the other, and so difficult to imagine Dennett’s in-between or ‘intermediate’ realism? Why is it so hard to conceive of something half-real, half-instrumental?

The fundamental answer lies in the combination of frame and medial neglect. Our blindness to the enabling dimension of cognition renders cognition, from the standpoint of metacognition, an all but ethereal exercise. ‘Transparency’ is but one way of thematizing the rank incapacity generally rendering environmentalization such a good trick. “Of course, centres of gravity lie out there!” We are more realists than instrumentalists. The more we focus on the machinery of cognition, however, the more dimensional the medial becomes, the more efficacious, and the more artifactual whatever we’re focusing on begins to seem. Given frame neglect, however, we fail to plug this higher-dimensional artifactuality into the superordinate systems encompassing all instances of cognition, thus transforming gears into tools, fetishizing those instances, in effect. “Of course, centres of gravity merely organize the out there!” We become instrumentalists.

If these incompatible intuitions are all that the theoretician has to go on, then Dennett’s middle way can only seem tendentious, an attempt to have it both ways. What makes Dennett’s ‘mild or intermediate’ realism so difficult to imagine is nothing less than Cartesian gravity, which is to say, the compelling nature of the cognitive illusions driving our metacognitive intuitions either way. Squares viewed on this angle become circles viewed on that. There’s no in-between! This is why Dennett, like so many revolutionary philosophical thinkers before him, is always quick to reference the importance of imagination, of envisioning how things might be otherwise. He’s always bumping against the limits of our shackles, calling attention to the rattle in the dark. Implicitly, he understands the peril that neglect, by way of sufficiency, poses to our attempts to puzzle through these problems.

But only implicitly, and as it turns out (given tools so blunt and so complicit as the intentional stance), imperfectly. On Heuristic Neglect Theory, the practical question of what’s real versus what’s not is simply one of where and when the environmentalization heuristic applies, and the theoretical question of what’s ‘really real’ and what’s ‘merely instrumental’ is simply an invitation to trip into what is obviously (given the millennial accumulation of linguistic wreckage) metacognitive crash space. When it comes to ‘centres of gravity,’ environmentalization—or the modifier ‘real’—applies because of the way the posit economizes otherwise available, as opposed to unavailable, information. Heuristic posits centres of gravity might be, but ones entirely compatible with the scientific examination of deep information environments.

Such is famously not the case with posits like ‘belief’ or ‘representation’—or for that matter, ‘real’! The heuristic mechanisms underwriting environmentalization are entirely real, as is the fact that these heuristics do not simply economize otherwise available information, but rather compensate for structurally unavailable information. To this extent, saying something is ‘real’—acknowledging the applicability of the environmentalization heuristic—involves the order here as much as the order there, so far as it compensates for structural neglect, rather than mere ignorance or contingent unavailability. ‘Reality’ (like ‘truth’) communicates our way of selecting and so sorting environmental interactions while remaining almost entirely blind to the nature of those environmental interactions, which is to say, neglecting our profound continuity with those environments.

At least as traditionally (intentionally) conceived, reality does not belong to the real, though reality-talk is quite real, and very useful. It pays to communicate the applicability of environmentalization, if only to avoid the dizzying cognitive challenges posed by the medial, enabling dimensions of cognition. Given the human circuit, truth-talk can save lives. The apparent paradox of such declarations—such as saying, for instance, that it’s true that truth does not exist—can be seen as a direct consequence of frame and medial neglect, one that, when thought carefully through step by empirically tractable step, was pretty much inevitable. We find ourselves dumbfounding for good reason!

The unremarkable fact is that the heuristic systems we resort to when communicating and trouble-shooting cognition are just that: heuristic systems we resort to when communicating and trouble-shooting cognition. And what’s more, they possess no real theoretical power. Intentional idioms are all adapted to shallow information ecologies. They comprise the communicative fraction of compensatory heuristic systems adapted not simply to solve astronomically complicated systems on the cheap, but to do so absent otherwise instrumental information belonging to our deep information environments. Applying those idioms to theoretical problems amounts to using shallow resources to solve the natural deeps. The history of philosophy screams underdetermination for good reason! There’s no ‘fundamental ontology’ beneath, no ‘transcendental functions’ above, and no ‘language-games’ or ‘intentional stances’ between, just the machinations of meat, which is why strokes and head injuries and drugs produce the boggling cognitive effects they do.

The point to always keep in mind is that every act of cognition amounts to a systematic meeting of at least two functionally distinct systems, the one cognized, the other cognizing. The cognitive facts of life entail that all cognition remains, in some fundamental respect, insensitive to the superordinate system explaining the whole, let alone to the structure and activity of cognition itself. This inability to cognize our position within superordinate systems (frame neglect) or to cognize our contemporaneous cognizing (medial neglect) is what renders the so-called first-person (intentional stance) homuncular, blind to its own structure and dynamics, which is to say, oblivious to the role here plays in ordering ‘there.’ This is what cognitive science needs to internalize, the way our intentional and phenomenal idioms steer us blindly, absent any high-dimensional input, toward solutions that, when finally mapped, will bear scant resemblance to the metacognitive shadows parading across our cave walls. And this is what philosophy needs to internalize as well, the way its endless descriptions and explanations, all the impossible figures—squircles—comprising the great bestiary of traditional reflection upon the nature of the soul, are little more than illusory artifacts of its inability to see its inability to see. To say something is ‘real’ or ‘true’ or ‘factual’ or ‘represents,’ or what have you, is to blindly cue blind orientations in your fellows, to lock them into real but otherwise occluded systems, practically and even experimentally efficacious circuits, not to invoke otherworldly functions or pick out obscure-but-real patterns like ‘qualia’ or ‘representations.’

The question of ‘reality’ is itself a heuristic question. As horribly counter-intuitive as all this must sound, we really have no way of cognizing the high-dimensional facts of our environmental orientation, and so no choice but to problem-solve those facts absent any inkling of them. The issue of ‘reality,’ for us, is a radically heuristic one. As with all heuristic matters, the question of application becomes paramount: where does environmentalization optimize, and where does it crash? It optimizes where the cues relied upon generalize, provide behavioural handles that can be reverse-engineered—‘reduced’—absent reverse-engineering us. It optimizes, in other words, wherever frame and medial neglect do not matter. It crashes, however, where the cues relied upon compensate, provide behavioural handles that can only be reverse-engineered by reverse-engineering ourselves.

And this explains the ‘gobsmacking fact’ with which we began, how we can source the universe all the way back to the first second, and yet remain utterly confounded by our ability to do so. Short of cognitive science, compensatory heuristics were all that we possessed when it came to the question of ourselves. Only now do we find ourselves in a position to unravel the nature of the soul.

The crazy thing to understand, here, the point Dennett continually throws himself toward in From Bacteria to Bach and Back only to be drawn back out on the Cartesian tide, is that there is no first-person. There is no original or manifest or even scientific ‘image’: these all court ‘imaginative distortion’ because they, like the intentional stance, are shallow ecological artifacts posturing as deep information truths. It is not the case that, “[w]e won’t have a complete science of consciousness until we can align our manifest-image identifications of mental states by their contents with scientific-image identifications of the subpersonal information structures and events that are causally responsible for generating the details of the user-illusion we take ourselves to operate in” (367)—and how could it be, given our abject inability to even formulate ‘our manifest-image identifications,’ to agree on the merest ‘detail of our user-illusion’? There’s a reason Tom Clark emphasizes this particular passage in his defense of qualia! If it’s the case that Dennett believes a ‘complete science of consciousness’ requires the ‘alignment’ of metacognitive reports with subpersonal mechanisms, then he is as much a closet mysterian as any other intentionalist. There are simply too many ways to get lost in the metacognitive labyrinth, as the history of intentional philosophy amply shows.

Dennett needs only continue following the heuristic tracks he’s started down in From Bacteria to Bach and Back—and perhaps recall his own exhortation to imagine—to see as much. Imagine how it was as a child, living blissfully unaware of philosophers and scientists and their countless confounding theoretical distinctions and determinations. Imagine the naïveté, not of dwelling within this or that ‘image,’ but within an ancestral shallow information ecology, culturally conditioned to be sure, but absent the metacognitive capacity required to run afoul of sufficiency effects. Imagine thinking without ‘having thoughts,’ knowing without ‘possessing knowledge,’ choosing without ‘exercising freedom.’ Imagine this orientation and how much blinkered metacognitive speculation and rationalization is required to transform it into something resembling our apparent ‘first-person perspective’—the one that commands scarcely any consensus beyond exceptionalist conceit.

Imagine how much blinkered metacognitive speculation and rationalization is required to transform it into the intentional stance.

So, what, then, is the intentional stance? An illusory artifact of intentional cognition, understood in the high-dimensional sense of actual biological mechanisms (both naturally and neurally selected), not the low-dimensional, contentious sense of an ‘attitude’ or ‘perspective.’ The intentional stance represents an attempt to use intentional cognition to fundamentally explain intentional cognition, and in this way, it is entirely consonant with the history of philosophy as a whole. It differs—perhaps radically so—in the manner it circumvents the metacognitive tendency to report intentional phenomena as intrinsic (self-sufficient), but it nevertheless remains a way to theorize cognition and experience via, as Dennett himself admits, resources adapted to their practical troubleshooting.

The ‘Cartesian wound’ is no more than theatrical paint, stage make-up, and so something to be wiped away, not healed. There is no explanatory gap because there is no first-person—there never has been, apart from the misapplication of radically heuristic, practical problem-solving systems to the theoretical question of the soul. Stripped of the first-person, consciousness becomes a natural phenomenon like any other, baffling only for its proximity, for overwriting the very page it attempts to read. Heuristic Neglect Theory, in other words, provides a way for us to grasp what we are, what we always have been: a high-dimensional physical system possessing selective sensitivities and capacities embedded in other high-dimensional physical systems. This is what you’re experiencing now, only so far as your sensitivities and capacities allow. This, in other words, is this… You are fundamentally inscrutable unto yourself outside practical problem-solving contexts. Everything else, everything apparently ‘intentional’ or ‘phenomenal,’ is simply ‘seems upon reflection.’ There is no ‘manifest image,’ only a gallery of competing cognitive illusions, reflexes to report leading to the crash space we call intentional philosophy. The only ‘alignment’ required is that between our shallow information ecology and our deep information environments: the ways we do much with little, both with reference to each other and with ourselves. This is what you reference when describing a concert to your buddies. This is what you draw on when you confess your secrets, your feelings, your fears and aspirations. Not a ‘mind,’ not a ‘self-model,’ nor even a ‘user illusion,’ but the shallow cognitive ecology underwriting your brain’s capacity to solve and report itself and others.

There’s a positively vast research project buried in this outlook, and as much would become plain, I think, if enough souls could bring themselves to see past the fact that it took an institutional outsider to discover it. The resolutely post-intentional empirical investigation of the human has scarcely begun.

The Knowledge Illusion Illusion

by rsbakker


When academics encounter a new idea that doesn’t conform to their preconceptions, there’s often a sequence of three reactions: first dismiss, then reject, then finally declare it obvious. Steven Sloman and Philip Fernbach, The Knowledge Illusion, 255


The best example illustrating the thesis put forward in Steven Sloman and Philip Fernbach’s excellent The Knowledge Illusion: Why We Never Think Alone is one I’ve belaboured before, the bereft ‘well-dressed man’ in Byron Haskin’s 1953 version of The War of the Worlds, dismayed at his malfunctioning pile of money, unable to comprehend why it couldn’t secure him passage out of Los Angeles. So keep this in mind: if all goes well, we shall return to the well-dressed man.

The Knowledge Illusion is about a great many things, everything from basic cognitive science to political polarization to educational reform, but it all comes back to how individuals are duped by the ways knowledge outruns individual human brains. The praise for this book has been nearly universal, and deservedly so, given the existential nature of the ‘knowledge problematic’ in the technological age. Because of this consensus, however, I’ll play the devil’s advocate and focus on what I think are core problems. For all the book’s virtues, I think Steven Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, and Philip Fernbach, Assistant Professor at the University of Colorado, find themselves wandering the same traditional dead ends afflicting all philosophical and psychological discourses on the nature of human knowledge. The sad fact is nobody knows what knowledge is. They only think they do.

Sloman and Fernbach begin with a consideration of our universal tendency to overestimate our understanding. In a wide variety of tests, individuals regularly fail to provide first-order evidence regarding second-order reports of what they know. So for instance, they say they understand how toilets or bicycles work, yet find themselves incapable of accurately drawing the mechanisms responsible. Thus the ‘knowledge illusion,’ or the ‘illusion of explanatory depth,’ the consistent tendency to think our understanding of various phenomena and devices is far more complete than it in fact is.

This calves into two interrelated questions: 1) Why are we so prone to think we know more than we do? and 2) How can we know so little yet achieve so much? Sloman and Fernbach think the answer to both these questions lies in the way human cognition is embodied, embedded, and enactive, which is to say, the myriad ways it turns on our physical and social environmental interactions. They also hold the far more controversial position that cognition is extended, that ‘mind,’ understood as a natural phenomenon, just ain’t in our heads. As they write:

The main lesson is that we should not think of the mind as an information processor that spends its time doing abstract computation in the brain. The brain and the body and the external environment all work together to remember, reason, and make decisions. The knowledge is spread through the system, beyond just the brain. Thought does not take place on a stage inside the brain. Thought uses knowledge in the brain, the body, and the world more generally to support intelligent action. In other words, the mind is not in the brain. Rather, the brain is in the mind. The mind uses the brain and other things to process information. 105

The Knowledge Illusion, in other words, lies astride the complicated fault-line between cognitivism, the tendency to construe cognition as largely representational and brain-bound, and post-cognitivism, the tendency to construe cognition as constitutively dependent on the community and environment. Since the book is not only aimed at a general audience but also about the ways humans are so prone to confuse partial for complete accounts, it is more than ironic that Sloman and Fernbach fail to contextualize the speculative, and therefore divisive, nature of their project. Charitably, you could say The Knowledge Illusion runs afoul of the very ‘curse of knowledge’ illusion it references throughout, the failure to appreciate the context of cognitive reception—the tendency to assume that others know what you know, and so will draw similar conclusions. Less charitably, the suspicion has to be that Sloman and Fernbach are actually relying on the reader’s ignorance to cement their case. My guess is that the answer lies somewhere in the middle, and that the authors, given their sensitivity to the foibles and biases built into human communication and cognition, would acknowledge as much.

But the problem runs deeper. The extended mind hypothesis is subject to a number of apparently decisive counter-arguments. One could argue à la Adams and Aizawa, for instance, and accuse Sloman and Fernbach of committing the so-called ‘causal-constitutive fallacy,’ mistaking causal influences on cognition for cognition proper. Even if we do accept that external factors are constitutive of cognition, the question becomes one of where cognition begins and ends. What is the ‘mark of the cognitive’? After all, ‘environment’ potentially includes the whole of the physical universe, and ‘community’ potentially reaches back to the origins of life. Should we take a page from Hegel and conclude that everything is cognitive? If our minds outrun our brains, then just where do they end?

So far, every attempt to overcome these and other challenges has only served to complicate the controversy. Cognitivism remains a going concern for good reason: it captures a series of powerful second-order intuitions regarding the nature of human cognition, intuitions that post-cognitivists like Sloman and Fernbach would have us set aside on the basis of incompatible second-order intuitions regarding that self-same nature. Where the intuitions milked by cognitivism paint an internalist portrait of knowledge, the intuitions milked by post-cognitivism sketch an externalist landscape. Back and forth the arguments go, each side hungry to recruit the latest scientific findings into their explanatory paradigms. At some point, the unspoken assumption seems to be, the abductive weight supporting either position will definitively tip in favour of one or the other. By the time we return to our well-dressed man and his heap of useless money, I hope to show how and why this will never happen.

For the nonce, however, the upshot is that either way you cut it, knowledge, as the subject of theoretical investigation, is positively awash in illusions, intuitions that seem compelling, but just ain’t so. For some profound reason, knowledge and other so-called ‘intentional phenomena’ baffle us in a way distinct from all other natural phenomena, with the exception of consciousness. This is the sense in which one can speak of the Knowledge Illusion Illusion.

Let’s begin with Sloman and Fernbach’s ultimate explanation for the Knowledge Illusion:

The Knowledge Illusion occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. This is as much a feature of cognition as it is a bug. The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there. 127-128.

The reason we presume knowledge sufficiency, in other words, is that we fail to draw a distinction between individual knowledge and collective knowledge, between our immediate know-how and know-how requiring environmental and social mediation. Put differently, we neglect various forms of what might be called cognitive dependency, and so assume cognitive independence, the ability to answer questions and solve problems absent environmental and social interactions. We are prone to forget, in other words, that our minds are actually extended.

This seems elegant and straightforward enough: as any parent (or spouse) can tell you, humans are nothing if not prone to take things for granted! We take the contributions of our fellows for granted, and so reliably overestimate our own epistemic wherewithal. But something peculiar has happened. Framed in these terms, the knowledge illusion suddenly bears a striking resemblance to the correspondence or attribution error, our tendency to put our fingers on our side of the scales when apportioning social credit. We generally take ourselves to have more epistemic virtue than we in fact possess for the same reason we generally take ourselves to have more virtue than we in fact possess: because ancestrally, confabulatory self-promotion paid greater reproductive dividends than accurate self-description. The fact that we are more prone to overestimate epistemic virtue given accessibility to external knowledge sources, on this account, amounts to no more than the awareness that we have resources to fall back on, should someone ‘call bullshit.’

There’s a great deal that could be unpacked here, not the least of which is the way changing demonstrations of knowledge into demonstrations of epistemic virtue radically impacts the case for the extended mind hypothesis. But it’s worth considering, I think, how this alternative explanation illuminates an earlier explanation they give of the illusion:

So one way to conceive of the illusion of explanatory depth is that our intuitive system overestimates what it can deliberate about. When I ask you how a toilet works, your intuitive system reports, “No problem, I’m very comfortable with toilets. They are part of my daily experience.” But when your deliberative system is probed by a request to explain how they work, it is at a loss because your intuitions are only superficial. The real knowledge lies elsewhere. 84

In the prior explanation, the illusion turns on confusing our individual with our collective resources. We presume that we possess knowledge that other people have. Here, however, the illusion turns on the superficiality of intuitive cognition. “The real knowledge lies elsewhere” plays no direct explanatory role whatsoever. The culprit here, if anything, lies with what Daniel Kahneman terms WYSIATI, or ‘What-You-See-Is-All-There-Is,’ effects, the way subpersonal cognitive systems automatically presume the cognitive sufficiency of whatever information/capacity they happen to have at their disposal.

So, the question is, do we confabulate cognitive independence because subpersonal cognitive processing lacks the metacognitive monitoring capacity to flag problematic results, or because such confabulations facilitated ancestral reproductive success, or because our blindness to the extended nature of knowledge renders us prone to this particular type of metacognitive error?

The first two explanations, at least, can be combined. Given the divide-and-conquer structure of neural problem-solving, the presumptive cognitive sufficiency (WYSIATI) of subpersonal processing is inescapable. Each phase of cognitive processing turns on the reliability of the phases preceding (which is why we experience sensory and cognitive illusions rather than error messages). If those illusions happen to facilitate reproduction, as they often do, then we end up with biological propensities to commit things like epistemic attribution errors. We both think and declare ourselves more knowledgeable than we in fact are.
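The divide-and-conquer point can be caricatured in a few lines of Python (a deliberately crude sketch of my own, not anything from the book; all stage names are hypothetical): each stage consumes its predecessor’s output as if it were sufficient, so distortion introduced upstream surfaces downstream as a confident report rather than an error message.

```python
def transduce(world):
    """Lossy, distorting first stage; nothing flags the loss."""
    return world * 0.5

def interpret(signal):
    """Presumes its input is reliable and complete (WYSIATI)."""
    return round(signal)

def report(judgment):
    """Presumes the judgment is reliable and complete."""
    return f"I clearly saw {judgment}"

# Upstream distortion arrives downstream as a confident percept,
# never as an error message.
print(report(interpret(transduce(7))))  # -> "I clearly saw 4"
```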

Blindness to the ‘extended nature of knowledge,’ on this account, doesn’t so much explain the knowledge illusion as follow from it.

The knowledge illusion is primarily a metacognitive and evolutionary artifact. This actually follows as an empirical consequence of the cornerstone commitment of Sloman and Fernbach’s own theory of cognition: the fact that cognition is fractionate and heuristic, which is to say, ecological. This becomes obvious, I think, but only once we see our way past the cardinal cognitive illusion afflicting post-cognitivism.

Sloman and Fernbach, like pretty much everyone writing popular accounts of embodied, embedded, and enactive approaches to cognitive science, provide the standard narrative of the rise and fall of GOFAI, standard computational approaches to cognition. Cognizing, on this approach, amounts to recapitulating environmental systems within universal computational systems, going through the enormous expense of doing in effigy in order to do in the world. Not only is such an approach expensive, it requires prior knowledge of what needs to be recapitulated and what can be ignored—tossing the project into the infamous jaws of the Frame Problem. A truly general cognitive system is omni-applicable, capable of solving any problem in any environment, given the requisite resources. The only way to assure that ecology doesn’t matter, however, is to have recapitulated that ecology in advance.

The question from a biological standpoint is simply one of why we need to go through all the bother of recapitulating a problem-solving ecology when that ecology is already there, challenging us, replete with regularities we can exploit without needing to know anything whatsoever. “This assumption that the world is behaving normally gives people a giant crutch,” as Sloman and Fernbach put it. “It means that we don’t have to remember everything because the information is stored in the world” (95). All cognition requires is reliable interactive systematicities—cognitive ecologies—to steer organisms through their environments. Heuristics are the product of cognitive systems adapted to the exploitation of the correlations between regularities available for processing and environmental regularities requiring solution. And since the regularities happened upon—cues—are secondary to the effects they enable, heuristic systems are always domain specific. They don’t travel well.

And herein lies the rub for Sloman and Fernbach: If the failure of cognitivism lies in its insensitivity to cognitive ecology, then the failure of post-cognitivism lies in its insensitivity to metacognitive ecology, the fact that intentional modes of theorizing cognition are themselves heuristic. Humans had need to troubleshoot claims, to distinguish guesswork from knowledge. But they possessed no access whatsoever to the high-dimensional facts of the matter, so they made do with what was available. Our basic cognitive intuitions facilitate this radically heuristic ‘making do,’ allowing us to debug any number of practical communicative problems. The big question is whether they facilitate anything theoretical. If intentional cognition turns on systems selected to solve practical problem ecologies absent information, why suppose it possesses any decisive theoretical power? Why presume, as post-cognitivists do, that the theoretical problem of intentional cognition lies within the heuristic purview of intentional cognition?

Its manifest inapplicability, I think, can be clearly discerned in The Knowledge Illusion. Consider Sloman and Fernbach’s contention that the power of heuristic problem-solving turns on the ‘deep’ and ‘abstract’ nature of the information exploited by heuristic cognitive systems. As they write:

Being smart is all about having the ability to extract deeper, more abstract information from the flood of data that comes into our senses. Instead of just reacting to the light, sounds, and smells that surround them, animals with sophisticated large brains respond to deep, abstract properties of the world that they are sensing. 46

But surely ‘being smart’ lies in the capacity to find, not abstracta, but tells, sensory features possessing reliable systematic relationships to deep environments. There’s nothing ‘deep’ or ‘abstract’ about the moonlight insects use to navigate at night—which is precisely why transverse orientation is so easily hijacked by bug-zappers and porch-lights. There’s nothing ‘deep’ or ‘abstract’ about the tastes triggering aversion in rats, which is why taste aversion is so easily circumvented by using chronic rodenticides. Animals with more complex brains, not surprisingly, can discover and exploit more tells, which can also be hijacked, cued ‘out of school.’ We bemoan the deceptive superficiality of political and commercial marketing for good reason! It’s unclear what ‘deeper’ or ‘more abstract’ add here, aside from millennial disputation. And yet Sloman and Fernbach continue, “[t]he reason that deeper, more abstract information is helpful is that it can be used to pick out what we’re interested in from an incredibly complex array of possibilities, regardless of how the focus of our interest presents itself” (46).

If a cue, or tell—be it a red beak or a prolonged stare or a scarlet letter—possesses some exploitable systematic relationship to some environmental problem, then nothing more is needed. Talk of ‘depth’ or ‘abstraction’ plays no real explanatory function, and invites no little theoretical mischief.

The term ‘depth’ is perhaps the biggest troublemaker here. Insofar as human cognition is heuristic, we dwell in shallow information environments, ancestral need-to-know ecologies, remaining (in all the myriad ways Sloman and Fernbach describe so well) entirely ignorant of the deeper environments and the super-complex systems comprising them. What renders tells so valuable is their availability, the fact that they are at once ‘superficial’ and systematically correlated to the neglected ‘deeps’ requiring solution. Tells possess no intrinsic mark of their depth or abstraction. It is not the case that “[a]s brains get more complex, they get better at responding to deeper, more abstract cues from the environment, and this makes them ever more adaptive to new situations” (48). What is the case is far more mundane: they get better at devising, combining, and collecting environmental tells.

And so, one finds Sloman and Fernbach at metaphoric war with themselves:

It is rare for us to directly perceive the mechanisms that create outcomes. We experience our actions and we experience the outcomes of those actions; only by peering inside the machine do we see the mechanism that makes it tick. We can peer inside when the components are visible. 73

As they go on to admit, “[r]easoning about social situations is like reasoning about physical objects: pretty shallow” (75).

The Knowledge Illusion is about nothing if not the superficiality of human cognition, and all the ways we remain oblivious to this fact because of this fact. “Normal human thought is just not engineered to figure out some things” (71), least of all the deep/fundamental abstracta undergirding our environment! Until the institutionalization of science, we were far more vulture than lion, information scavengers instead of predators. Only the scientific elucidation of our deep environments reveals how shallow and opportunistic we have always been, how reliant on ancestrally unfathomable machinations.

So then why do Sloman and Fernbach presume that heuristic cognition grasps things both abstract and deep?

The primary reason, I think, turns on the inevitably heuristic nature of our attempts to cognize cognition. We run afoul of these heuristic limits every time we look up at the night sky. Ancestrally, light belonged to those systems we could take for granted; we had no reason to intuit anything about its deeper nature. As a result, we had no reason to suppose we were plumbing different pockets of the ancient past whenever we paused to gaze into the night sky. Our ability to cognize the medium of visual cognition suffers from what might be called medial neglect. We have to remind ourselves we’re looking across gulfs of time because the ecological nature of visual cognition presumes the ‘transparency’ of light. It vanishes into what it reveals, generating a simultaneity illusion.

What applies to vision applies to all our cognitive apparatuses. Medial neglect, in other words, characterizes all of our intuitive ways of cognizing cognition. At virtually every turn, the enabling dimension of our cognitive systems is consigned to oblivion, generating, upon reflection, the metacognitive impression of ‘transparency,’ or ‘aboutness’—intentionality in Brentano’s sense. So when Sloman and Fernbach attempt to understand the cognitive nature of heuristic selectivity, they cue the heuristic systems we evolved to solve practical epistemic problems absent any sensitivity to the actual systems responsible, and so run afoul of a kind of ‘transparency illusion,’ the notion that heuristic cognition requires fastening onto something intrinsically simple and out there—a ‘truth’ of some description—when all our brain needs to do is identify some serendipitously correlated cue in its sensory streams.

This misapprehension is doubly attractive, I think, for the theoretical cover it provides their contention that all human cognition is causal cognition. As they write:

… the purpose of thinking is to choose the most effective action given the current situation. That requires discerning the deep properties that are constant across situations. What sets humans apart is our skill at figuring out what those deep, invariant properties are. It takes human genius to identify the key properties that indicate if someone has suffered a concussion or has a communicable disease, or that it’s time to pump up a car’s tires. 53

In fact, they go so far as to declare us “the world’s master causal thinkers” (52)—a claim they spend the rest of the book qualifying. As we’ve seen, humans are horrible at understanding how things work: “We may be better at causal reasoning than other kinds of reasoning, but the illusion of explanatory depth shows that we are still quite limited as individuals in how much of it we can do” (53).

So, what gives? How can we be both causal idiots and causal savants?

Once again, the answer lies in their own commitments. Time and again, they demonstrate the way the shallowness of human cognition prevents us from cognizing that shallowness as such. The ‘deep abstracta’ posited by Sloman and Fernbach constitute a metacognitive version of the very illusion of explanatory depth they’re attempting to solve. Oblivious to the heuristic nature of our metacognitive intuitions, they presume those intuitions provide deep, theoretically sufficient ways to cognize the structure of human cognition. Like the physics of light, the enabling networks of contingent correlations assuring the efficacy of various tells get flattened into oblivion—the mediating nature vanishes—and the connection between heuristic systems and the environments they solve becomes an apparently intentional one, with ‘knowing’ here, ‘known’ out there, and nothing in between. Rather than picking out strategically connected cues, heuristic cognition seems to isolate ‘deep causal truths.’

How can we be both idiots and savants when it comes to causality? The fact is, not all cognition is causal cognition. Some cognition is causal, while other cognition—the bulk of it—is correlative. Sloman and Fernbach systematically confuse the kind of cognitive efficacy belonging to the isolation of actual mechanisms with the kind belonging to the isolation of tells possessing unfathomable (‘deep’) correlations to those mechanisms. The latter cognition, if anything, turns on ignoring the actual causal regularities involved. This is what makes it both so cheap and so powerful (for both humans and AI): it relieves us of the need to understand the deeper nature of things, allowing us to focus on what happens next.

Although some predictions turn on identifying actual causes, those requiring the heuristic solution of complex systems turn on identifying tells, triggers that are systematically correlated precursors to various significant events. Given our metacognitive neglect of the intervening systems, we regularly fetishize the tells available, taking them to be the causes of the kinds of effects we require. Sloman and Fernbach’s insistence on the causal nature of human cognition commits this very error: it fetishizes heuristic cues. (Or, to use Klaus Fiedler’s terminology, it mistakes pseudocontingencies for genuine contingencies; or, to use Andrei Cimpian’s, it fails to recognize a kind of ‘inherence heuristic’ as heuristic.)

The power of predictive reasoning turns on the plenitude of potential tells, our outright immersion in environmental systematicities. No understanding of celestial mechanics is required to use the stars to anticipate seasonal changes and so organize agricultural activities. The cost of this immersion, on the other hand, is the inverse problem, the problem of isolating genuine causes as opposed to mere correlations on the basis of effects. In diagnostic reasoning, the sheer plenitude of correlations is the problem: finding causes amounts to finding needles in haystacks, sorting systematicities that support counterfactuals from those that do not. Given this difficulty, it should come as no surprise that problems designed to cue predictive deliberation tend to neglect the causal dimension altogether. Tells, even when fetishized and imbued with causal powers, stand entirely on their own.
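The difference between correlative and causal efficacy can be made concrete with a toy simulation, a minimal sketch in Python in which the ‘season,’ ‘tell,’ and ‘outcome’ variables and all numbers are purely illustrative assumptions of my own, not anything drawn from Sloman and Fernbach. A hidden cause drives both a tell and an outcome, so the tell predicts the outcome handsomely without any model of the cause; yet forcing the tell to a value accomplishes nothing, which is the fetishist’s surprise.

```python
import random

random.seed(1)

def world(intervene_on_tell=None):
    """Toy world: a hidden seasonal cause drives both a visible 'tell'
    (say, a star's position) and the outcome we care about."""
    season = random.random()                   # hidden cause, 0..1
    tell = season + random.gauss(0, 0.05)      # the tell tracks the season
    if intervene_on_tell is not None:
        tell = intervene_on_tell               # e.g., a planetarium projection
    outcome = season + random.gauss(0, 0.05)   # the outcome tracks the season too
    return tell, outcome

# Predictive (correlative) cognition: regress outcome on the tell.
# No model of 'season' is needed; the correlation alone suffices.
samples = [world() for _ in range(10_000)]
mean_t = sum(t for t, _ in samples) / len(samples)
mean_o = sum(o for _, o in samples) / len(samples)
cov = sum((t - mean_t) * (o - mean_o) for t, o in samples) / len(samples)
var = sum((t - mean_t) ** 2 for t, _ in samples) / len(samples)
print(f"tell->outcome slope: {cov / var:.2f}")   # close to 1: the tell predicts

# 'Fetishizing' the tell: treat it as the cause and intervene on it.
forced = [world(intervene_on_tell=0.9) for _ in range(10_000)]
mean_forced = sum(o for _, o in forced) / len(forced)
print(f"mean outcome after forcing the tell high: {mean_forced:.2f}")
# ~0.5 regardless: manipulating the tell leaves the outcome untouched,
# because the tell was never a cause, only a correlated precursor.
```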

Sloman and Fernbach’s explanation of ‘alternative cause neglect’ thoroughly illustrates, I think, the way cognitivism and post-cognitivism have snarled cognitive psychology in the barbed wire of incompatible intuitions. They also point out the comparative ease of predictive versus diagnostic reasoning. But where the above sketch explains this disparity in thoroughly ecological terms, their explanation is decidedly cognitivist: we recapitulate systems, they claim, running ‘mental simulations’ to explore the space of possible effects. Apparently, running these tapes backward to explore the space of possible causes is not something nature has equipped us to do, at least not easily. “People ignore alternative causes when reasoning from cause to effect,” they contend, “because their mental simulations have no room for them, and because we’re unable to run mental simulations backward in time from effect to cause” (61).

Even setting aside the extravagant metabolic expense their cognitivist tack presupposes, it’s hard to understand how this explains much of anything, let alone how the difference between these two modes figures in the ultimate moral of Sloman and Fernbach’s story: the social intransigence of the knowledge illusion.

Toward the end of the book, they provide a powerful and striking picture of the way false beliefs seem to have little, if anything, to do with access to scientific facts. The provision of reasons likewise has little or no effect. People believe what their group believes, thus binding generally narcissistic or otherwise fantastic worldviews to estimations of group membership and identity. For Sloman and Fernbach, this dovetails nicely with their commitment to extended minds, the fact that ‘knowing’ is fundamentally collective.

Beliefs are hard to change because they are wrapped up with our values and identities, and they are shared with our community. Moreover, what is actually in our own heads—our causal models—are sparse and often wrong. This explains why false beliefs are so hard to weed out. Sometimes communities get the science wrong, usually in ways supported by our causal models. And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking. 169

But it’s not simply the case that reports of belief signal group membership. One need only think of the ‘kooks’ or ‘eccentrics’ in one’s own social circles (and fair warning: if you can’t readily identify one, that likely means you’re it!) to bring home the cognitive heterogeneity found in every community, which makes room for people who demonstrate reliability in some other way (like my wife’s late uncle, who never once attended church, but who cut the church lawn every week all the same).

Like every other animal on this planet, we’ve evolved to thrive in shallow cognitive ecologies, to pick what we need when we need it from wherever we can, be it the world or one another. We are cooperative cognitive scavengers, which is to say, we live in communal shallow cognitive ecologies. The cognitive reports of ingroup members, in other words, are themselves powerful tells, correlations allowing us to predict what will happen next absent deep environmental access or understanding. As an outgroup commentator on these topics, I’m intimately acquainted with the powerful way the who trumps the what in claim-making. I could raise a pyramid with all the mud and straw I’ve accumulated! But this has nothing to do with the ‘intrinsically communal nature of knowledge,’ and everything to do with the way we are biologically primed to rely on our most powerful ancestral tools. It’s not simply that we ‘believe to belong’; it’s that believing to belong provided, ancestrally speaking, an extraordinarily metabolically cheap way to hack our natural and social environments.

So cheap and powerful, in fact, that we’ve developed linguistic mechanisms, ‘knowledge talk,’ to troubleshoot cognitive reports.

And this brings us back to the well-dressed man in The War of the Worlds, left stranded with his useless bills, dumbfounded by the sudden impotence of what had so reliably commanded the actions of others in the past. Paper currency requires vast systems of regularities to produce the local effects we all know and love and loathe. Since these local, or shallow, effects occur whether or not we possess any inkling of the superordinate, deep systems responsible, we can get along quite well simply supposing, like the well-dressed man, that money possesses this power on its own, or intrinsically. Pressed to explain this intrinsic power, to explain why this paper commands such extraordinary effects, we posit a special kind of property: value.

What the well-dressed man illustrates, in other words, is the way shallow cognitive ecologies generate illusions of local sufficiency. We have no access to the enormous amount of evolutionary, historical, social, and personal stage-setting involved when our doctor diagnoses us with depression, so we chalk it up to her knowledge, not because any such thing exists in nature, but because it provides us a way to communicate and troubleshoot an otherwise incomprehensible local effect. How did your doctor make you better? Obviously, she knows her stuff!

What could be more intuitive?

But then along comes science, and lo, we find ourselves every bit as dumbfounded when asked to causally explain knowledge as (to use Sloman and Fernbach’s examples) when asked to explain toilets or bicycles or vaccination or climate warming or why incest possessing positive consequences is morally wrong. Given our shallow metacognitive ecology, we presume that the heuristic systems applicable to troubleshooting practical cognitive problems can solve the theoretical problem of cognition as well. When we go looking for this or that intentional formulation of ‘knowledge’ (because we cannot even agree on what it is we want to explain) in the head, we find ourselves, like the well-dressed man, even more dumbfounded. Rather than finding anything sufficient, we discover more and more dependencies, evidence of the way our doctor’s ability to cure our depression relies on extrinsic environmental and social factors. But since we remain committed to our fetishization of knowledge, we conclude that knowledge, whatever it is, simply cannot be in the head. Knowledge, we insist, must be nonlocal, reliant on natural and social environments. But of course, this cuts against the very intuition of local sufficiency underwriting the attribution of knowledge in the first place. Sure, my doctor has a past, a library, and a community, but ultimately, it’s her knowledge that cures my depression.

And so, cognitivism and post-cognitivism find themselves at perpetual war, disputing theoretical vocabularies possessing local operational efficacy in everyday or specialized experimental contexts, while indefinitely deferring the possibility of any global, genuinely naturalistic understanding of human cognition. The strange fact of the matter is that there’s no such thing or function as ‘knowledge’ in nature, nothing deep to redeem our shallow intuitions, though knowledge talk (which is very real) goes a long way toward resolving a wide variety of practical problems. The trick isn’t to understand what knowledge ‘really is,’ but rather to understand the deep, supercomplicated systems underwriting the optimization of behaviour, and how they sustain our shallow intuitive and deliberative manipulations. Insofar as knowledge talk forms a component of those systems, we must content ourselves with studying ‘knowledge’ as a term rather than an entity, leaving intentional cognition to solve what problems it can where it can. The time has come to leave both cognitivism and post-cognitivism behind, and to embrace genuinely post-intentional approaches, such as the ecological eliminativism espoused here.

The Knowledge Illusion, in this sense, provides a wonderful example of crash space, the way in which the introduction of deep, scientific information into our shallow cognitive ecologies is prone to disrupt or delude or simply fall flat altogether. Intentional cognition provides a way for us to understand ourselves and each other while remaining oblivious to any of the deep machinations actually responsible. To suffer ‘medial neglect’ is to be blind to one’s actual sources, to comprehend and communicate human knowledge, experience, and action via linguistic fetishes, irreducible posits possessing inexplicable efficacies, entities fundamentally incompatible with the universe revealed by natural science.

For all the conceits Sloman and Fernbach reveal, they overlook, and so run afoul, perhaps the greatest, most astonishing conceit of them all: the notion that we should have evolved the basic capacity to intuit our own deepest nature, that hunches belonging to our shallow ecological past could show us the way into our deep nature, rather than lead us, on pain of systematic misapplication, into perplexity. The time has come to dismantle the glamour we have raised around traditional philosophical and psychological speculation, to stop spinning abject ignorance into evidence of glorious exception, and to see our millennial dumbfounding as a symptom, an artifact of a species that has stumbled into the trap of interrogating its heuristic predicament using shallow heuristic tools that have no hope of generating deep theoretical solutions. The knowledge illusion illusion.