Three Pound Brain

No bells, just whistling in the dark…


Reading From Bacteria to Bach and Back II: The Human Squircle

by rsbakker

The entry placing second (!!) in the 2016 Illusion of the Year competition, the Ambiguous Cylinder Illusion, blew up on Reddit for good reason. What you’re seeing below is an instance where visual guesswork arising from natural environmental frequencies has been cued ‘out of school.’ In this illusion, convex and concave curves trick the visual system into interpreting a ‘squircle’ as either a square or a circle—thus the dazzling images. Ambiguous cylinders provide dramatic illustration of a point Dennett makes many times in From Bacteria to Bach and Back: “One of the hallmarks of design by natural selection,” he writes, “is that it is full of bugs, in the computer programmer’s sense: design flaws that show up only under highly improbable conditions, conditions never encountered in the finite course of R&D that led to the design to date, and hence not yet patched or worked around by generations of tinkering” (83). The ‘bug’ exploited in this instance could be as much a matter of neural as natural selection, of course—perhaps, as with the Müller-Lyer illusion, individuals raised in certain environments are immune to this effect. But the upshot remains the same. By discovering ways to cue heuristic visual subsystems outside their adaptive problem ecologies, optical illusionists have developed a bona fide science bent on exploring what might be called ‘visual crash space.’

One of the ideas behind Three Pound Brain is to see traditional intentional philosophy as the unwitting exploration of metacognitive crash space. Philosophical reflection amounts to the application, to theoretical problems, of metacognitive capacities adapted to troubleshooting practical cognitive and communicative issues. What Dennett calls ‘Cartesian gravity,’ in other words, has been my obsession for quite some time, and I think I have a fair amount of wisdom to share, especially when it comes to philosophical squircles, things that seem undeniable, yet nevertheless contradict our natural scientific understanding. Free will is perhaps the most famous of these squircles, but there’s really no end to them. The most pernicious squircle of all, I’m convinced, is the notion of intentionality, be it ‘derived’ or ‘original.’

On Heuristic Neglect Theory, Cartesian gravity boils down to metacognitive reflexes, the application of heuristic systems to questions they have no hope of answering absent any inkling of as much. The root of the difficulty lies in neglect, the way insensitivity to the limits of felicitous application results in various kinds of systematic errors (what might be seen as generalized versions of the WYSIATI effects discovered by Daniel Kahneman).

The centrality of neglect (understood as an insensitivity that escapes our sensitivity) underwrites my reference to the ‘Grand Inversion’ in the previous installment. As an ecological artifact, human cognition trivially possesses what might be called a neglect structure: we are blind to the vast bulk of the electromagnetic spectrum, for instance, because sensing things like gamma radiation, infrared, or radio waves paid no ancestral dividends. In fact, one can look at the sum of scientific instrumentation as mapping out human ‘insensitivity space,’ providing ingress into all those places our ancestral sensitivities simply could not take us. Neglect, in other words, allows us to quite literally invert our reflexive ways of comprehending comprehension, not only in a wholesale manner, but in a way entirely compatible with what Dennett calls, following Sellars, the scientific image.

Simply flipping our orientation in this way allows us to radically recharacterize Dennett’s project in From Bacteria to Bach and Back as a matter of implicitly mapping our human neglect structure by filling in all the naturalistic blanks. I say implicit because his approach remains primarily focused on what is neglected, rather than neglect considered in its own right. Despite this, Dennett is quite cognizant of the fact that he’s discussing a single phenomenon, albeit one he characterizes (thanks to Cartesian gravity!) in positive terms:

Darwin’s “strange inversion of reasoning” and Turing’s equally revolutionary inversion form aspects of a single discovery: competence without comprehension. Comprehension, far from being a god-like talent from which all design must flow, is an emergent effect of systems of uncomprehending competence… (75)

The problem with this approach is one that Dennett knows well: no matter how high you build your tower of natural processes, all you’ve managed to do, in an important sense, is recapitulate the mystery you’ve set out to solve. No matter how long you build your ramp, talk of indefinite thresholds and ‘emergent effects’ very quickly reveals you’re jumping the same old explanatory shark. In a sense, everyone in the know knows at least the moral of the story Dennett tells: competences stack into comprehension on any Darwinian account. The million-dollar question is how ‘all that’ manages to culminate in this.

Personally speaking, I’ve never had an experience quite like the one I had reading this book. Elation, realizing that one of the most celebrated minds in philosophy had (finally!) picked up on the same trail. Urgency, knowing I had to write a commentary, like, now. And then, at a certain point, wonder at the sense of knowing, quite precisely, what it was that was tantalizing his intuitions: the profound connection between his Darwinian commitments and his metaphilosophical hunches regarding Cartesian gravitation.

Heuristic Neglect Theory not only allows us to economize Dennett’s bottom-up saga of stacking competences, it also provides a way to theorize his top-down diagnosis of comprehension. It provides, in other words, the common explanatory framework required to understand this… in terms of ‘all that.’ No jumps. No sharks. Just one continuous natural story folding comprehension into competence (or better, behaviour).

What applies to human cognition applies to human metacognition—understood as the deliberative derivation of endogenous or exogenous behaviour via secondary (functionally distinct) access to one’s own endogenous or exogenous behaviour. As an ecological artifact, human metacognition is fractionate and heuristic, and radically so, given the complexity of the systems it solves. As such, it possesses its own neglect structure. Understanding this allows us to ‘reverse-engineer’ far more than Dennett suspects, insofar as it lets us hypothesize the kinds of blind spots we should expect to plague our attempts to theorize ourselves given the deliverances of philosophical reflection. It provides the theoretical basis, I think, for understanding philosophy as the cognitive psychological phenomenon that it is.

It’s a truism to say that the ability to cognize any system crucially depends on a cognitive system’s position relative to that system. But things get very interesting once we begin picking at the how and why. The rationality of geocentrism, for instance, is generally attributed to the fact that from our terrestrial perspective, the sky does all the moving. We remain, as far as we can tell, motionless. Why is motionlessness the default? Why not assume ignorance? Why not assume that the absence of information warranted ‘orbital agnosticism’? Basically, because we lacked the information to determine our lack of information.

Figure 1: It is a truism to state that where we find ourselves within a system determines our ability to cognize that system. ‘Frame neglect’ refers to our cognitive insensitivity, not only to our position within unknown systems, but to this insensitivity.

Figure 2: Thus, the problem posed by sufficiency, the automatic presumption that what we see is all there is. The ancients saw the stars comprising Orion as equidistant simply because they lacked the information and theory required to understand their actual position—because they had no way of knowing otherwise.

Figure 3: It is also a truism to state that the constitution of our cognitive capacities determines our ability to cognize systems. ‘Medial neglect’ refers to our cognitive insensitivity, not only to the constitution of our cognitive capacities, but to this insensitivity. We see, but absent any sensitivity to the machinery enabling sight.

Figure 4: Thus, once again, the problem posed by sufficiency. Our brain interprets ambiguous cylinders as magical squircles because it possesses no sensitivity to the kinds of heuristic mechanisms involved in processing visual information.

Generally speaking, we find these ‘no information otherwise’ justifications so intuitive that we just move on. We never ask how or why the absence of sensible movement cues reports of motionlessness. Plato need only tell us that his prisoners have been chained before shadows their whole lives and we get it, we understand that for them, shadows are everything. By merely conjuring an image, Plato secures our acknowledgment that we suffer a congenital form of frame neglect, a cognitive insensitivity to the limits of cognition that can strand us with fantastic (and so destructive) worldviews—and without our permission, no less. Despite the risk entailed, we neglect this form of neglect. Though industry and science are becoming ever more sensitive to the problems posed by the ‘unknown unknown,’ it remains the case that each of us at once understands the peril and presumes we’re the exception, the system apart from the systems about us. The motionless one.

Frame neglect, our insensitivity to the superordinate systems encompassing us, blinds us to our position within those systems. As a result, we have no choice but to take those positions for granted. This renders our cognitive orientations implicit, immune to deliberative revision and so persistent (as well as vulnerable to manipulation). Frame neglect, in other words, explains why bent orientations stay bent, why we suffer the cognitive inertia we do. More importantly, it highlights what might be called default sufficiency, the congenital presumption of implicit cognitive adequacy. We were in no position to cognize our position relative to the heavens, and yet we nevertheless assumed that we were simply because we were in no position to cognize the inadequacy of our position.

Why is sufficiency the presumptive default? The stacking of ‘competences’ so brilliantly described by Dennett requires that every process ‘do its part’: sufficiency, you could say, is the default presumption of any biological system, so far as its component systems turn upon the iterative behaviour of other component systems. Dennett broaches the notion, albeit implicitly, via the example of asking someone to report on a nearby house via cell phone:

Seeing is believing, or something like that. We tacitly take the unknown pathways between his open eyes and speaking lips to be secure, just like the requisite activity in the pathways in the cell towers between his phone and ours. We’re not curious on the occasion about how telephones work; we take them for granted. We also don’t scratch our heads in bafflement over how he can just open his eyes and then answer questions with high reliability about what is positioned in front of him in the light, because we can all do it (those of us who are not blind). 348-349

Sufficiency is the default. We inherit our position, our basic cognitive orientation, because it sufficed to solve the kinds of high-frequency and/or high-impact problems faced by our ancestors. This explains why unprecedented circumstances generate the kinds of problems they do: it’s always an open question whether our basic cognitive orientation will suffice when confronted with a novel problem.

When it comes to vision, for instance, we possess a wide range of ways to estimate sufficiency and so can adapt our behaviour to a variety of lighting conditions, waving our hand in fog, peering against glares, and so on. Darkness in particular demonstrates how the lack of information requires information, lest it ‘fall off the radar’ in the profound sense entailed by neglect. So even though we possess myriad ways to vet visual information, squircles possess no precedent and so trigger no warning: the sufficiency of the information available is taken for granted, and we suffer the ambiguous cylinder illusion. Our cognitive ecology plays a functional role in the efficacy of our heuristic applications—all of them.

From this a great deal follows. Retasking some system of competences always runs the risk of systematic deception on the one hand, where unprecedented circumstances strand us with false solutions (as with the millennia-long ontological dualism of the terrestrial and the celestial), and dumbfounding on the other, where unprecedented circumstances crash some apparently sufficient application in subsequently detectable ways, such as ambiguous cylinders for human visual systems, or the problem of determinism for undergraduate students.

To the extent that ‘philosophical reflection’ turns on the novel application of preexisting metacognitive resources, it almost certainly runs afoul of instances of systematic deception and dumbfounding. Retasked metacognitive channels and resources, we can be assured, would report as sufficient, simply because our capacity to intuit insufficiency would be the product of ancestral, which is to say, practical, applications. How could information and capacity geared to catching our tongue in social situations, assessing what we think we saw, rehearsing how to explain some disaster, and so on hope to leverage theoretical insights into the fundamental nature of cognition and experience? It can’t, no more than auditory cognition, say, could hope to solve the origin of the universe. But even more problematically, it has no hope of intuiting this fundamental inability. Once removed from the vacuum of ecological ignorance, the unreliability of ‘philosophical reflection,’ its capacity to both dumbfound and to systematically deceive, becomes exactly what we should expect.

This follows, I think, on any plausible empirical account of human metacognition. I’ve been asking interlocutors to provide me a more plausible account for years now, but they always manage to lose sight of the question somehow.

On the availability side, we should expect the confusion of task-insufficient information with task-sufficient information. On the capacity side, we should expect the confusion of task-insufficient applications with task-sufficient applications. And this is basically what Dennett’s ‘Cartesian gravity’ amounts to, the reflexive deliberative metacognitive tendency to confuse scraps with banquets and hammers with Swiss Army knives.

But the subtleties secondary to these reflexes can be difficult to grasp, at least at first. Sufficiency means that decreases in dimensionality, the absence of kinds and quantities of information, simply cannot be cognized as such. Just over two years ago I suffered a retinal tear, which although successfully repaired, left me with a fair amount of debris in my right eye (‘floaters,’ as they call them, which can be quite distracting if you spend as much time staring at white screens as I do). Last autumn I noticed I had developed a ‘crimp’ in my right eye’s field of vision: apparently some debris had become attached to my fovea, a mass that accumulated as I was passed from doctor to doctor and thence to the surgeon. I found myself with my own, entirely private visual illusion: the occluded retinal cells were snipped out of my visual field altogether, mangling everything I tried to focus on with my right eye. The centre of every word I looked at would be pinched into oblivion, leaving only the beginning and ending characters mashed together. Faces became positively demonic—to the point where I began developing a Popeye squint for equanimity’s sake. The world had become a grand bi-stable image: things were fine when my left eye predominated, but then for whatever reason, click, my friends and family would be eyeless heads of hair. Human squircles.

My visual centres simply neglected the missing information, and muddled along assuming the sufficiency of the information that was available. I understood the insufficiency of what I was seeing. I knew the prisoners were there, chained in their particular neural cave with their own particular shadows, but I had no way of passing that information upstream—the best I could do was manage the downstream consequences.

But what happens when we have no way of intuiting information loss? What happens when our capacity to deliberate and report finds itself chained ‘with no information otherwise’? Well, given sufficiency, it stands to reason that what metacognition cannot distinguish we will report as same, that what it cannot vet we will report as accurate, that what it cannot swap we will report as inescapable, and that what it cannot source we will report as sourceless, and so on. The dimensions of information occluded, in other words, depend entirely on what we happen to be reporting. If we ponder the proximate sources of our experiences, they will strike us as sourceless. If we ponder the composition of our experiences, they will strike us as simple. Why? Because human metacognition not only failed to evolve the extraordinary ability to theoretically source or analyze human experience, it failed to evolve the ability to intuit this deficit. And so, we find ourselves stranded with squircles, our own personal paradox (illusion) of ourselves, of what it is fundamentally like to be ‘me.’

Dialectically, it’s important to note how this consequence of the Grand Inversion overturns the traditional explanatory burden when it comes to conscious experience. Since it takes more metacognitive access and capacity, not less, to discern things like disunity and provenance, the question Heuristic Neglect Theory asks of the phenomenologist is, “Yes, but how could you report otherwise?” Why think the intuition of apperceptive unity (just for instance) is anything more than a metacognitive cousin of the flicker-fusion you’re experiencing staring at the screen this very instant?

Given the wildly heuristic nature of our metacognitive capacities, we should expect to possess the capacity to discriminate only what our ancestors needed to discriminate, and precious little else. So, then, how could we intuit anything but apperceptive unity? Left with a choice between affirming a low-dimensional exception to nature on the basis of an empirically implausible metacognitive capacity, and a low-dimensional artifact of the very kind we might expect given an empirically plausible metacognitive account, there really is no contest.

And the list goes on and on. Why think intuitions of ‘self-identity’ possess anything more than the information required to resolve practical, ancestral issues involving identification?

One can think of countless philosophical accounts of the ‘first-person’ as the product of metacognitive ‘neglect origami,’ the way sufficiency precludes intuiting the radical insufficiency of the typically scant dimensions of information available. If geocentrism is the default simply for the way our peripheral position in the solar system precludes intuiting our position as peripheral, then ‘noocentrism’ is the default for the way our peripheral position vis-à-vis ourselves precludes intuiting our position as peripheral. The same way astrophysical ignorance renders the terrestrial the apparently immovable anchor of celestial motion, metacognitive neglect renders the first-person the apparently transcendent anchor of third-person nature. In this sense, I think, ‘gravity’ is a well-chosen metaphor to express the impact of metacognitive neglect upon the philosophical imagination: metacognitive neglect, like gravity, isn’t so much a discrete force as a structural feature, something internal to the architecture of philosophical reflection. Given it, humanity was all but doomed to wallow in self-congratulatory cartoons once literacy enabled regimented inquiry into its own nature. If we’re not the centres of the universe, then surely we’re the centre of our knowledge, our projects, our communities—ourselves.

Figure 5: The retasking of deliberative metacognition is not unlike discovering something practical—such as ‘self’ (or in this case, Brian’s sandal)—in apparently exceptional, because informationally impoverished, circumstances.

Figure 6: We attempt to interpret this practical deliverance in light of these exceptional circumstances.

Figure 7: Given neglect, we presume the practical deliverance theoretically sufficient, and so ascribe it singular significance.

Figure 8: We transform ‘self’ into a fetish, something both self-sustaining and exceptional. A squircle.

Of all the metacognitive misapplications confounding traditional interpretations of cognition and experience, Dennett homes in on the one responsible for perhaps the most theoretical mischief in the form of Hume’s ‘strange inversion of reasoning’ (354-358), where the problem, as we saw in the previous post, lies in mistaking the ‘intentional object’ of the red stripe illusion for the cause of the illusion. Hume, recall, notes our curious propensity to confuse mental determinations for environmental determinations, to impute something belonging to this… to ‘all that.’ Dennett notes that the problem lies in the application: normally, this ‘confusion’ works remarkably well; it’s only in abnormal circumstances, like those belonging to the red stripe illusion, where this otherwise efficacious cognitive reflex leads us astray.

The first thing to note about this cognitive reflex is the obvious way it allows us to neglect the actual machinery of our environmental relations. Hume’s inversion, in other words, calls attention to the radically heuristic nature of so-called intentional thinking. Given the general sufficiency of all the processes mediating our environmental relationships, we need not cognize them to cognize those relationships, we can take them for granted, which is a good thing, because their complexity (the complexity cognitive science is just now surmounting) necessitates they remain opaque. ‘Opaque,’ in this instance, means heuristically neglected, the fact that all the mad dimensionalities belonging to our actual cognitive relationships appear nowhere in cognition, not even as something missing. What does appear? Well, as Dennett himself would say, only what’s needed to resolve practical ancestral problems.

Reporting environments economically entails taking as much for granted as possible. So long as the machinery you and I use to supervise and revise our environmental orientations is similar enough, we can ignore each other’s actual relationships in communication, focusing instead on discrepancies and how to optimize them. This is why we narrate only those things most prone to vary—environmentally and neurally sourced information prone to facilitate reproduction—and remain utterly oblivious to all the things that go without saying, the deep information environment plumbed by cognitive science. The commonality of our communicative and cognitive apparatuses, not to mention their astronomical complexity, assures that we will suffer what might be called medial neglect: congenital blindness to the high-dimensional systems enabling communication and cognition. “All the subpersonal, neural-level activity is where the actual causal interactions happen that provide your cognitive powers, but all “you” have access to is the results” (348).

From Bacteria to Bach and Back is filled with implicit references to medial neglect. “Our access to our own thinking, and especially to the causation and dynamics of its subpersonal parts, is really no better than our access to our digestive processes,” Dennett writes; “we have to rely on the rather narrow and heavily edited channel that responds to our incessant curiosity with user-friendly deliverances, only one step closer to the real me than the access to the real me that is enjoyed by my family and friends” (346).

Given sufficiency, “[t]he relative accessibility and familiarity of the outer part of the process of telling people what we can see—we know our eyes have to be open, and focused, and we have to attend, and there has to be light—conceals from us the other, blank (from the perspective of introspection or simple self-examination) rest of the process” (349). The ‘outer part of the process,’ in other words, is all that we need.

Medial neglect may be both necessary and economical, but it remains an incredibly risky bet to make given the perversity of circumstance and the radical interdependency characterizing human communities. The most frequent and important discrepancies will be environmental discrepancies, those which, given otherwise convergent orientations (the same physiology, location, and training), can be communicated absent medial information, difference making differences geared to the enabling axis of communication and cognition. Such discrepancies can be resolved while remaining almost entirely ‘performance blind.’ All I need do is ‘trust’ your communication and cognition, build upon it the same blind way I build upon my own. You cry, ‘Wolf!’ and I run for the shotgun: our orientations converge.

But as my example implies, things are not always so simple. Say you and I report seeing two different birds, a vulture versus an albatross, in circumstances where such a determination potentially matters—looking for a lost hunting party, say. An endless number of medial confounds could possibly explain our sudden disagreement. Perhaps I have bad eyesight, or I think albatrosses are black, or I’m blinded by the glare of the sun, or I’m suffering schizophrenia, or I’m drunk, or I’m just sick and tired of you being right all the time, or I’m teasing you out of boredom, or more insidiously, I’m responsible for the loss of the hunting party, and want to prevent you from finding the scene of my crime.

There’s no question that, despite medial neglect, certain forms of access and capacity regarding the enabling dimension of cognition and communication could provide much in the way of problem resolution. Given the stupendous complexity of the systems involved, however, it follows that any capacity to accommodate medial factors will be heuristic in the extreme. This means that our cognitive capacity to flag/troubleshoot issues of sufficiency will be retail, fractionate, geared to different kinds of high-impact, high-frequency problems. And the simplest solution, the highest priority reflex, will be to ignore the medial altogether. If our search party includes a third soul who also reports seeing a vulture, for instance, I’ll just be ‘wrong’ for ‘reasons’ that may or may not be determined afterward.

The fact of medial neglect, in other words, underwrites what might be called an environmentalization heuristic, the reflexive tendency to ‘blame’ the environment first.

When you attempt to tell us about what is happening in your experience, you ineluctably slide into a metaphorical idiom simply because you have no deeper, truer, more accurate knowledge of what was going on inside you. You cushion your ignorance with a false—but deeply tempting—model: you simply reproduce, with some hand waving and apologies, your everyday model of how you know about what is going on outside you. 348

Because that’s typically all that you need. Dennett’s hierarchical mountain of competences is welded together by default sufficiency, the blind mechanical reliance of one system upon other systems. Communicative competences not only exploit this mechanical reliance, they extend it, opening entirely novel ecosystems leveraging convergent orientation, brute environmental parallels and physiological isomorphisms, to resolve discrepancies. So long as those discrepancies are resolved, medial factors potentially impinging on sufficiency can be entirely ignored, and so will be ignored. Communications will be ‘right’ or ‘wrong,’ ‘true’ or ‘false.’ We remain as blind to the sources of our cognitive capacities as circumstances allow us to be. And we remain blind to this blindness as well.

When I say from the peak of my particular competence mountain, “Albatross…” and you turn to me in perplexity, and say from the peak of your competence mountain, “What the hell are you talking about?” your instance of ‘about-talk’ is geared to the resolution of a discrepancy between our otherwise implicitly convergent systems. This is what it’s doing. The idea that it reveals an exceptional kind of relationship, ‘aboutness,’ spanning the void between ‘albatross’ here and albatrosses out there is a metacognitive artifact, a kind of squircle. For one, the apparent void is jam-packed with enabling competences—vast networks of competences welded together by sufficiency. Medial neglect merely dupes metacognition into presuming otherwise, into thinking the apparently miraculous covariance (the product of vast histories of natural and neural selection) between ‘sign’ (here) and ‘signified’ (out there) is indeed some kind of miracle.

Philosophers dwell among general descriptions and explanations: this is why they have difficulty appreciating that naïveté generally consists in having no ‘image,’ no ‘view,’ regarding this or that domain. They habitually overlook the oxymoronic implication of attaching any ‘ism’ to the term ‘naïve.’ Instances of ‘about-talk’ do not implicitly presume ‘intentionality’ even in some naïve, mistaken sense. We are not born ‘naïve intentionalists’ (any more than we’re ‘naïve realists’). We just use meaning talk to solve what kinds of problems we can where we can. Granted, our shared metacognitive shortcomings lead us, given different canons of interrogation, into asserting this or that interpretation of ‘intentionality,’ popular or scholastic. We’re all prone to see squircles when prompted to peer into our souls.

So, when someone asks, “Where does causality lie?” we just point to where we can see it, out there on the billiard table. After all, where the hell else would it be (given medial neglect)? This is why dogmatism comes first in the order of philosophical complication, why Kant comes after Descartes. It takes time and no little ingenuity to frame plausible alternatives of this ‘elsewhere.’ And this is the significance of Hume’s inversion to Cartesian gravity: the reflexive sufficiency of whatever happens to be available, a sufficiency that may or may not obtain given the kinds of problem posed. The issue has nothing to do with confusing normal versus abnormal attributions of causal efficacy to intentional objects, because, for one, there’s just no such thing as ‘intentional objects,’ and for another, ‘intentional object-talk’ generates far more problems than it solves.

Of course, it doesn’t seem that way to Dennett whilst attempting to solve for Cartesian gravity, but only because, short of theoretical thematizations of neglect and sufficiency, he lacks any real purchase on the problem of explaining the tendency to insist (as Tom Clark does) on the reality of the illusion. As a result, he finds himself in the strange position of embracing the sufficiency of intentionality in certain circumstances to counter the reflexive tendency to assume the sufficiency of phenomenality in other circumstances—of using one squircle, in effect, to overcome another. And this is what renders him eminently vulnerable to readings like Clark’s, which turns on Dennett’s avowal of intentional squircles to leverage, on pain of inconsistency, his commitment to phenomenal squircles. This problem vanishes once we recognize ourselves for the ambiguous cylinders we have always been. Showing as much, however, will require one final installment.


Reading From Bacteria to Bach and Back I: On Cartesian Gravity

by rsbakker


Problem resolution generally possesses a diagnostic component; sometimes we can find workarounds, but often we need to know what the problem consists in before we can have any real hope of advancing beyond it. This is what Daniel Dennett proposes to do in his recent From Bacteria to Bach and Back: not only to sketch a story of how human comprehension arose from the mindless mire of biological competences, but to provide a diagnostic account of why we find such developmental stories so difficult to credit. He hews to the slogan I’ve oft repeated here on Three Pound Brain: We are natural in such a way that we find it impossible to intuit ourselves as natural. It’s his account of this ‘in such a way’ that I want to consider here. As I’ve said many times before, I think Dennett has come as close as any philosopher in history to unravelling the conjoined problems of cognition and consciousness—and I am obliged to his acumen and creativity in more ways than I could possibly enumerate—but I’m convinced he remains entangled, both theoretically and dialectically, by several vestigial commitments to intentionalism. He remains a prisoner of ‘Cartesian gravity.’ Nowhere is this clearer than in his latest book, where he sets out to show how blind competences, by hook, crook, and sheer, mountainous aggregation, can actually explain comprehension, which is to say, understanding as it appears to the intentional stance.

Dennett offers two rationales for braving the question of comprehension, the first turning on the breathtaking advances made in the sciences of life and cognition, the second anchored in his “better sense of the undercurrents of resistance that shackle our imaginations” (16). He writes:

I’ve gradually come to be able to see that there are powerful forces at work, distorting imagination—my own imagination included—pulling us first one way and then another. If you learn to see these forces too, you will find that suddenly things begin falling into place in a new way. 16-17

The original force, the one begetting subsequent distortions, he calls Cartesian gravity. He likens the scientific attempt to explain cognition and consciousness to a planetary invasion, with the traditional defenders standing on the ground with their native, first-person orientation, and the empirical invaders finding their third-person orientation continually inverted the closer they draw to the surface. Cartesian gravity, most basically, refers to the tendency to fall into first-person modes of thinking cognition and consciousness. This is a problem because of the various, deep incompatibilities between the first-person and third-person views. Like a bi-stable image (Dennett provides the famous Duck-Rabbit as an example), one can only see the one at the expense of seeing the other.

Cartesian gravity, in other words, refers to the intuitions underwriting the first-person side of the famed Explanatory Gap, but Dennett warns against viewing it in these terms because of the tendency in the literature to view the divide as an ontological entity (a ‘chasm’) instead of an epistemological artifact (a ‘glitch’). He writes:

[Philosophers] may have discovered the “gap,” but they don’t see it for what it actually is because they haven’t asked “how it got that way.” By reconceiving of the gap as a dynamic imagination-distorter that has arisen for good reasons, we can learn to traverse it safely or—what may amount to the same thing—make it vanish. 20-21

It’s important, I think, to dwell on the significance of what he’s saying here. First of all, taking the gap as a given, as a fundamental feature of some kind, amounts to an explanatory dereliction. As I like to put it, the fact that we, as a species, can explain the origins of nature down to the first second and yet remain utterly mystified by the nature of this explanation is itself a gobsmacking fact requiring explanation. Any explanation of human cognition that fails to explain why humans find themselves so difficult to explain is woefully incomplete. Dennett recognizes this, though I sometimes think he fails to recognize the dialectical potential of this recognition. There are few better ways to isolate the sound of stomping feet from the speculative cacophony, I’ve found, than by relentlessly posing this question.

Secondly, the argumentative advantage of stressing our cognitive straits turns directly on its theoretical importance: to naturalistically diagnose the gap is to understand the problem it poses. To understand the problem it poses is to potentially resolve that problem, to find some way to overcome the explanatory gap. And overcoming the gap, of course, amounts to explaining the first-person in third-person terms—to seize upon what has become the Holy Grail of philosophical and scientific speculation.

The point being that the whole cognition/consciousness debate stands balanced upon some diagnosis of why we find ourselves so difficult to fathom. As the centerpiece of his diagnosis, Cartesian gravity is absolutely integral to Dennett’s own position, and yet surveying the reviews From Bacteria to Bach and Back has received (as of 9/12/2017, at least), you find the notion mentioned only in passing (as in Thomas Nagel’s piece in The New York Review of Books), dismissively (as in Peter Hankins’s review in Conscious Entities), or not at all.

Of course, it would probably help if anyone had any clue as to what ‘first-person’ or ‘third-person’ actually meant. A gap between gaps often feels like no gap at all.


“The idea of Cartesian gravity, as so far presented, is just a metaphor,” Dennett admits, “but the phenomenon I am calling by this metaphorical name is perfectly real, a disruptive force that bedevils (and sometimes aids) our imaginations, and unlike the gravity of physics, it is itself an evolved phenomenon. In order to understand it, we need to ask how and why it arose on the planet earth” (21). Part of the reason so many reviewers seem to have overlooked its significance, I think, turns on the sheer length of the story he proceeds to tell. Compositionally speaking, it’s rarely a good idea to go three hundred pages—wonderfully inventive, controversial pages, no less—without substantially revisiting your global explanandum. By the time Dennett tells us “[w]e are ready to confront Cartesian gravity head on” (335) it feels like little more than a rhetorical device—and understandably so.

The irony, of course, is that Dennett thinks that nothing less than Cartesian gravity has forced the circuitous nature of his route upon him. If he fails to regularly reference his metaphor, he continually adverts to its signature consequence: cognitive inversion, the way the sciences have taken our traditional, intuitive, ab initio, top-down presumptions regarding life and intelligence and turned them on their head. Where Darwin showed how blind, bottom-up processes can generate what appear to be amazing instances of design, Turing showed how blind, bottom-up processes can generate what appear to be astounding examples of intelligence, “natural selection on the one hand, and mindless computation on the other” (75). Despite some polemical and explanatory meandering (most all of it rewarding), he never fails to keep his dialectical target, Cartesian exceptionalism, firmly (if implicitly) in view.

A great number of the biological examples Dennett adduces in From Bacteria to Bach and Back will be familiar to those following Three Pound Brain. This is no coincidence, given that Dennett is both an info-junkie like myself and constantly on the lookout for examples of the same kinds of cognitive phenomena: in particular, those making plain the universally fractionate, heuristic nature of cognition, and those enabling organisms to neglect, and therefore build upon, pre-existing problem-solving systems. As he writes:

Here’s what we have figured out about the predicament of the organism: It is floating in an ocean of differences, a scant few of which might make a difference to it. Having been born to a long lineage of successful copers, it comes pre-equipped with gear and biases for filtering out and refining the most valuable differences, separating the semantic information from the noise. In other words, it is prepared to cope in some regards; it has built-in expectations that have served its ancestors well but may need revision at any time. To say that it has these expectations is to say that it comes equipped with partially predesigned appropriate responses all ready to fire. It doesn’t have to waste precious time figuring out from first principles what to do about an A or a B or a C. These are familiar, already solved problems of relating input to output, perception to action. These responses to incoming stimulation of its sensory systems may be external behaviors: a nipple affords sucking, limbs afford moving, a painful collision affords retreating. Or they may be entirely covert, internal responses, shaping up the neural armies into more effective teams for future tasks. 166

Natural environments consist of regularities, component physical processes systematically interrelated in ways that facilitate, transform, and extinguish other component physical processes. Although Dennett opts for the (I think) unfortunate terminology of ‘affordances’ and ‘Umwelts,’ what he’s really talking about are ecologies, the circuits of selective sensitivity and corresponding environmental frequency allowing for niches to be carved, eddies of life to congeal in the thermodynamic tide. With generational turnover, risk sculpts ever more morphological and behavioural complexity, and the life once encrusting rocks begins rolling them, then shaping and wielding them.

Now for Dennett, the crucial point is to see the facts of human comprehension in continuity with the histories that make it possible, all the while understanding why the appearance of human comprehension systematically neglects these self-same conditions. Since his accounts of language and cultural evolution (via memes) warrant entire posts in their own right, I’ll elide them here, pointing out only that each follows the same incremental, explanatory pattern of natural processes enabling the development of further natural processes, tangled hierarchies piling toward something recognizable as human cognition. For Dennett, the coincidental appearance of La Sagrada Familia (arguably a paradigmatic example of top-down thinking, given Gaudí’s reputed micro-managerial mania) and Australian termite castles expresses a profound continuity as well, one which, when grasped, allows for the demystification of comprehension, and inoculation against the pernicious effects of Cartesian gravity. The leap between the two processes, what seems to render the former miraculous in a way the latter does not, lies in the sheer plasticity of the processes responsible, the way the neurolinguistic mediation of effect feedback triggers the adaptive explosion we call ‘culture.’ Dennett writes:

Our ability to do this kind of thinking [abstract reasoning/planning] is not accomplished by any dedicated brain structure not found in other animals. There is no “explainer nucleus” for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smart phone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can’t know, and don’t need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can’t know, and don’t need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brain. 341

This is the Dennettian portrait of the first-person, or consciousness as it’s traditionally conceived: a radically heuristic point of contact and calibration between endogenous and exogenous systems, one resting on occluded stacks of individual, collective, and evolutionary competence. The overlap between what can be experienced and what can be reported is no cosmic coincidence: the two are (likely) coeval, part of a system dedicated to keeping both ourselves and our compatriots as well informed/misinformed—and as well armed with the latest competences available—as possible.

We can give this strange idea an almost paradoxical spin: it is like something to be you because you have been enabled to tell us—or refrain from telling us—what it’s like to be you!

When we evolved into us, a communicating community of organisms that can compare notes, we became the beneficiaries of a system of user-illusions that rendered versions of our cognitive processes—otherwise as imperceptible as our metabolic processes—accessible to us for purposes of communication. 344

Far from the phenomenological plenum the (Western) tradition has taken it to be, then, consciousness is a presidential brief prepared by unscrupulous lobbyists, a radically synoptic aid to specific, self-serving forms of individual and collective action.

our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery turning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all. That’s what it is like to be us. 345

Thus, the astounding problem posed by Cartesian gravity. As a socio-communicative interface possessing no access whatsoever to our actual sources, we can only be duped by our immediate intuitions. Referring to John Searle’s Cartesian injunction to insist upon a first-person solution of meaning and consciousness, Dennett writes:

The price you pay for following Searle’s advice is that you get all your phenomena, the events and things that have to be explained by your theory, through a channel designed not for scientific investigation but for handy, quick-and-dirty use in the rough and tumble of time-pressured life. You can learn a lot about how the brain does it—you can learn quite a lot about computers by always insisting on the desk-top point of view, after all—but only if you remind yourself that your channel is systematically oversimplified and metaphorical, not literal. That means you must resist the alluring temptation to postulate a panoply of special subjective properties (typically called qualia) to which you (alone) have access. Those are fine items for our manifest image, but they must be “bracketed,” as the phenomenologists say, when we turn to scientific explanation. Failure to appreciate this leads to an inflated list of things that need to be explained, featuring, preeminently, a Hard Problem that is nothing but an artifact of the failure to recognize that evolution has given us a gift that sacrifices literal truth for utility. 365-366

Sound familiar? Human metacognitive access and capacity is radically heuristic, geared to the solution of practical ancestral problems. As such, we should expect that tasking that access and capacity, ‘relying on the first-person,’ with solving theoretical questions regarding the nature of experience and cognition will prove fruitless.

It’s worth pausing here, I think, to emphasize just how much this particular argumentative tack represents a departure from Dennett’s prior attempts to clear intuitive ground for his views. Nothing he says here is unprecedented: heuristic neglect has always lurked in the background of his view, always found light of day in this or that corner of this or that argument. But at no point—not in Consciousness Explained, nor even in “Quining Qualia”—has it occupied the dialectical pride of place he concedes it in From Bacteria to Bach and Back. Prior to this book, Dennett’s primary strategy has been to exploit the kinds of ‘crashes’ brought about by heuristic misapplication (though he never explicitly characterizes them as such). Here, with Cartesian gravity, he takes a gigantic step toward theorizing the neurocognitive bases of the problematic ‘intuition pumps’ he has targeted over the years. This allows him to generalize his arguments against first-person theorizations of experience in a manner that had hitherto escaped him.

But he still hasn’t quite found his way entirely clear. As I hope to show, heuristic neglect is far more than simply another tool Dennett can safely store with his pre-existing commitments. The best way to see this, I think, is to consider one particular misreading of the new argument against qualia in Chapter 14.


In “Dennett and the Reality of Red,” Tom Clark presents a concise and elegant account of how Dennett’s argument against the reality of qualia in From Bacteria to Bach and Back turns upon a misplaced physicalist bias. The extraordinary thing about his argument—and the whole reason we’re considering it here—lies in the way he concedes so much of Dennett’s case, only to arrive at a version of the very conclusion Dennett takes himself to be arguing against:

I’d suggest that qualia, properly understood, are simply the discriminable contents of sensory experience – all the tastes, colors, sounds, textures, and smells in terms of which reality appears to us as conscious creatures. They are not, as Dan correctly says, located or rendered in any detectable mental medium. They’re not located anywhere, and we are not in an observational or epistemic relationship to them; rather they are the basic, not further decomposable, hence ineffable elements of the experiences we consist of as conscious subjects.

The fact that ‘Cartesian gravity’ appears nowhere in his critique, however, pretty clearly signals that something has gone amiss. Showing as much, however, requires I provide some missing context.

After introducing his user-illusion metaphor for consciousness, Dennett is quick to identify the fundamental dialectical problem Cartesian gravity poses his characterization:

if (as I have just said) your individual consciousness is rather like the user-illusion on your computer screen, doesn’t this imply that there is a Cartesian theatre after all, where this portrayal happens, where the show goes on, rather like the show you perceive on the desktop? No, but explaining what to put in place of the Cartesian theatre will take some stretching of the imagination. 347

This is the point where he introduces a third ‘strange inversion of reasoning,’ this one belonging to Hume. Hume’s inversion, curiously enough, lies in his phenomenological observation of the way we experience causation ‘out there,’ in the world, even though we know, given our propensity to get it wrong, that it belongs to the machinery of cognition. (This is a canny move on Dennett’s part, but I think it demonstrates the way in which the cognitive consequences of heuristic neglect remain, as yet, implicit for him.) What he wants is to ‘theatre-proof’ his account of conscious experience as a user-illusion. Hume’s inversion provides him a way to both thematize and problematize the automatic assumption that the illusion must itself be ‘real.’

The new argument for qualia eliminativism he offers, and that Clark critiques, is meant to “clarify [his] point, if not succeed in persuading everybody—as Hume says, the contrary notion is so riveted in our minds” (358). He gives the example of the red afterimage experienced in complementary colour illusions.

The phenomenon in you that is responsible for this is not a red stripe. It is a representation of a red stripe in some neural system of representation that we haven’t yet precisely located and don’t yet know how to decode, but we can be quite sure it is neither red nor a stripe. You don’t know exactly what causes you to seem to see a red stripe out in the world, so you are tempted to lapse into Humean misattribution: you misinterpret your sense (judgment, conviction, belief, inclination) that you are seeing a red stripe as arising from a subjective property (a quale, in the jargon of philosophy) that is the source of your judgment, when in fact, that is just about backward. It is your ability to describe “the red stripe,” your judgment, your willingness to make the assertions you just made, and your emotional reactions (if any) to “the red stripe” that is the source of your conviction that there is a subjective red stripe. 358-359

The problem, Dennett goes on to assert, lies in “mistaking the intentional object of a belief for its cause” (359). In normal circumstances, when we find ourselves in the presence of an apple, say, we’re entirely justified in declaring the apple the cause of our belief. In abnormal circumstances, however, this reflex dupes us into thinking that something extra-environmental—‘ineffable,’ supernatural—has to be the cause. And thus are inscrutable (and therefore perpetually underdetermined) theoretical posits like qualia born, giving rise to scholastic excesses beyond numbering.
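As an aside on the mechanism the Red Stripe illusion trades on: negative afterimages appear in roughly the complement of the adapting colour, and in additive RGB the complement is simple channel inversion. The following sketch is my own illustration, not anything from Dennett's text; the function name and values are merely illustrative.

```python
# Minimal sketch (my illustration, not Dennett's): a negative afterimage
# appears in roughly the complementary hue of the adapting colour.
# In additive 8-bit RGB, the complement is channel inversion.

def complement(rgb):
    """Return the complementary colour of an 8-bit RGB triple."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Stare at a cyan stripe, then look at a blank page: the fatigued
# cone responses yield a reddish afterimage -- cyan's complement.
cyan = (0, 255, 255)
print(complement(cyan))  # (255, 0, 0) -- red
```

The point of the argument survives the arithmetic, of course: the red the subject reports is a judgment prompted by this sort of opponent-process machinery, not a red thing located anywhere.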

Now the key to this argument lies in the distinction between normal and abnormal circumstances, which is to say the cognitive ecology occasioning the application of a certain heuristic regime—namely the one identified by Hume. For Clark, however, the salient point of Dennett’s argument is that the illusory red stripe lies nowhere.

Dan, a good, sophisticated physicalist, wants everything real to be locatable in the physical external world as vetted by science. What’s really real is what’s in the scientific image, right? But if you believe that we really have experiences, that experiences are specified in terms of content, and that color is among those contents, then the color of the experienced afterimage is as real as experiences. But it isn’t locatable, nor are any of the contents of experience: experiences are not observables. We don’t find them out there in spacetime or when poking around in the brain; we only find objects of various qualitative, quantitative and conceptual descriptions, including the brains with which experiences are associated. But since experiences and their contents are real, this means that not all of what’s real is locatable in the physical, external world.

Dennett never denies that we have experiences, and he even alludes to the representational basis of those experiences in the course of making his red stripe argument. A short time later, in his consideration of Cartesian gravity, he even admits that our ability to report our experiences turns on their content: “By taking for granted the content of your mental states, by picking them out by their content, you sweep under the rug all the problems of indeterminacy or vagueness of content” (367).

And yet, even though Clark is eager to seize on these and other instances of experience-talk, representation-talk, and content-talk, he completely elides the circumstances occasioning them, and thus the way Dennett sees all of these usages as profoundly circumstantial—‘normal’ or ‘abnormal.’ Sometimes they’re applicable, and sometimes they’re not. In a sense, the reality/unreality of qualia is actually beside the point; what’s truly at issue is the applicability of the heuristic tools philosophy has traditionally applied to experience. The question is, What does qualia-talk add to our ability to naturalistically explain colour, affect, sound, and so on? No one doubts our ability to correlate reportable metacognitive aspects of experience to various neural and environmental facts. No one doubts our sensory discriminatory abilities outrun our metacognitive discriminatory abilities—our ability to report. The empirical relationships are there regardless: the question is one of whether the theoretical paradigms we reflexively foist on these relationships lead anywhere other than endless disputation.

Clark not only breezes past the point of Dennett’s Red Stripe argument, he also overlooks the rather stark challenge it poses to his own position. Simply raising the spectre of heuristic metacognitive inadequacy, as Dennett does, obliges Clark to warrant his assumptive metacognitive claims. (Arguing, as Clark does, that we have no epistemic relation to our experiences simply defers the obligation to this second extraordinary claim: heaping speculation atop speculation generates more problems, not fewer.) Dennett spends hundreds of pages amassing empirical evidence for the fractionate, heuristic nature of cognition. Given that our ancestors required only the solution of practical problems, the chances that human metacognition furnishes the information and capacity required to intuit the nature of experience (that it consists of representations consisting of contents consisting of qualia) are vanishingly small. What we should expect is that our metacognitive reflexes will do what they’ve always done: apply routines adapted to practical cognitive and communicative problem resolution to what amounts to a radically unprecedented problem ecology. All things being equal, it’s almost certain that the so-called first-person can do little more than flounder before the theoretical question of itself.

The history of intentional philosophy and psychology, if nothing else, vividly illustrates as much.

In the case of content, it’s hard not to see Clark’s oversight as tendentious, insofar as Dennett is referring to the way content-talk exposes us to Cartesian gravity (“Reading your own mind is too easy” (367)) and the relative virtues of theorizing cognition via nonhuman species. But otherwise, I’m inclined to think Clark’s reading of Dennett is understandable. Clark misses the point of heuristic neglect entirely, but only because Dennett himself remains fuzzy on just how his newfound appreciation for the Grand Inversion—the one we’ve been exploring here on Three Pound Brain for years now—bears on his preexisting theoretical commitments. In particular, he has yet to see the hash it makes of his ‘stances’ and the ‘real patterns’ underwriting them. As soon as Dennett embraced heuristic neglect, opportunistic eliminativism ceased being an option. As goes the ‘reality’ of qualia, so goes the ‘reality’ supposedly underwriting the entire lexicon of traditional intentionalist philosophy. Showing as much, however, requires showing how Heuristic Neglect Theory arises out of the implications of Dennett’s own argument, and how it transforms Cartesian gravity into a proto-cognitive psychological explanation of intentional philosophy—an empirically tractable explanation for why humanity finds humanity so dumbfounding. But since I’m sure eyes are crossing and chins are nodding, I’ll save the way HNT can be drawn directly from the implicature of Dennett’s position for a second installment, then show how HNT denies the ‘reality’ of representation while explaining what makes representation-talk so useful in my third and final post on what has been one of the most exciting reading adventures of my life.

Bleak Theory (By Paul J. Ennis)

by rsbakker

In the beginning there was nothing and it has been getting steadily worse ever since. You might know this, and yet repress it. Why? Because you have a mind that is capable of generating useful illusions, that’s why. How is this possible? Because you are endowed with a brain that creates a self-model which has the capacity to hide things from ‘you.’ This works better for some than for others. Some of us are brain-sick and, for whatever perverse reasons, we chip away at our delusions. In such cases recourse is possible to philosophy, which offers consolation (or so I am told), or to mysticism, which intentionally offers nothing, or to aesthetics, which is a kind of self-externalizing that lets the mind’s eye drift elsewhere. All in all, however, the armor on offer is thin. Such are the options: to mirror (philosophy), to blacken (mysticism), or to embrace contingency (aesthetics). Let’s select the last of these for now. By embracing contingency I mean that aesthetics consists of deciding upon and pursuing something quite specific for intuitive rather than rational reasons. This is to try to come to know contingency in your very bones.

As a mirrorer by trade I have to abandon some beliefs to allow myself to proceed this way. My belief that truth comes first and everything else later will be bracketed. I replace this with a less demanding constraint: truth comes when you know why you believe what you believe. Oftentimes I quite simply believe things because they are austere and minimal and I have a soft spot for that kind of thing. When I allow myself to think in line with these bleak tones an unusual desire is generated: to outbleak black, to be bleaker than black. This desire comes from I know not where. It seemingly has no reason. It is an aesthetic impulse. That’s why I ask that you take from what follows what you will. It brings me no peace either way.

I cannot hope to satisfy anyone with a definition of aesthetic experience, but let me wager that those moments that let me identify with the world a-subjectively – but not objectively – are commonly associated in my mind with bleakness. My brain chemistry, my environment, and similar contingent influences have rendered me this way. So be it. Bleakness manifests most often when I am faced with what is most distinctly impersonal: with cloudscapes and dimmed, wet treescapes. Or better yet, any time I witness a stark material disfiguration of the real by our species. And flowering from this is a bleak outlook correlated with the immense, consistent, and mostly hidden, suffering that is our history – our being. The intensity arising from the global reach of suffering becomes impressive when dislocated from the personal and the particular because then you realize that it belongs to us. Whatever the instigator the result is the same: I am alerted not just to the depths of unknowing that I embody, to the fact that I will never know most of life, but also to the industrial-scale sorrow consistently operative in being. All that is, is a misstep away from ruin. Consciousness is the holocaust of happiness.

Not that I expect anything more. Whatever we may say of our cultural evolution there was nothing inscribed in reality suggesting our world should be a fit for us. I am, on this basis, not surprised by our bleak surroundings. The brain, model-creator that it is, does quite a job at systematizing the outside into a representation that allows you to function; assuming, that is, that you have been gifted with a working model. Some have not. Perhaps the real horror is to try to imagine what has been left out (even the most ardent realist surely knows you do not look at the world directly as it is). Thankfully there is no real reason for us to register most of the information out there and we were not designed to know most of it anyway. This is the minimal blessing our evolution has gifted us with. The maximal damage is that from the exaptation we call consciousness cultural evolution flowers and puts our self-model at the mercy of a bombardment of social complexity—our factical situation. It is impossible to know how our information age is toying with our brain; suffice it to say that the spike in depression, anxiety and self-loathing is surely some kind of signal. The brain though, like the body, can function even when maltreated. Whether this is truly to the good is difficult to say.

And yet we must be careful to remember that even in so-called eliminative materialism the space of reasons remains. The normative dimension is, as Brandom would put it, irreducible. It does not constitute the entire range of cognition, and is perhaps best deflated in light of empirical evidence, but that is beside the point. To some degree, perhaps minor, we are rational animals with the capacity for relatively free decision-making. My intuition is that ultimately the complexity of our structure means that we will never be free of certain troubles arising from what we are. Being embodied is to be torn between immense capacity and the constant threat of losing capacities. A stroke, striking as if from nowhere, can fundamentally alter anyone. This is not to suggest that progress does not occur. It can and it does, but it can also be, and often is, undone. It’s an unfortunate state of affairs, bleak even, but being attuned to the bleakness of reality does not result in passivity by necessity.

Today there are projects that explicitly register all this, and nonetheless intend to operate in line with the potentiality contained within the capacities of reason. What differentiates these projects, oftentimes rationalist in nature, is that they do not follow our various universalist legacies in conceiving of the general human as deserving of dignity simply because we all belong to the same class of suffering beings. This is not sufficient to make humans act well. The phenomenon of suffering is easily recognizable and most humans are acutely aware of it, and yet they continue to act in ways contrary to how we ‘ought’ to respond. In fact, it is clear that knowing the sheer scale of suffering may lead to hedonism, egoism or repression. Various functional delusions can be generated by our minds, and it is hardly beyond us to rationalize selfishness on the basis of the universal. We are versatile like that. For this reason, I find myself torn between two poles. I maintain a philosophical respect for various neo-rationalist projects under development, and I remain equally under no illusion that they will ever be put to much use. Nor do I blame people for falling short of these demands. I am so far from them that I only really take them seriously on the page. I find myself drawn, for these reasons, to the pessimist attitude, often considered a suspect stance.

One might suggest that we need only a minimal condition to be ethical. An appeal to the reality of pain in sentient and sapient creatures, perhaps. In that decision you might find solace – despite everything (or in spite of everything). It is a choice, however. Our attempts to assert an ethical universalism are bound up with a counter-logic: the bleak truth of contingency on the basis of the impersonal-in-the-personal. It is a logic quietly operative in the philosophical tradition and one I believe has been suppressed. Self-suppressed, because it flirts too much with a line leading us to the truth of our hallucination. It’s Nietzsche telling you about perspectivism hinging on the impersonal will-to-power, and then you maturing, and forgetting. Not knocking his arguments out of the water, mind. Simply preferring not to accept them. Nobody wants to circle back round to the merry lunatic truths that make a mockery of your life. You might find it hard to get out of bed…whereas now I am sure you leap up every morning, smile on your face…The inhuman, impersonal attachment to each human has many names, but let us look at some found right at the heart of the post-Kantian tradition: the transcendental subject, Dasein, the Notion. Don’t believe me? I don’t mind; it makes no difference to me.

Let’s start with the sheer impersonality involved in Heidegger’s sustained fascination with discussing the human without using the word. Dasein is not supposed to be anything or anyone in particular. Now, once you think about it, Dasein really does come across as extraordinarily peculiar. It spends a lot of its time being infested by language, since this is, Heidegger insists, the place where its connection to being can be expressed. Yet it is also an easily overrun fortress that has been successfully invaded by techno-scientific jargon. When you hook this thesis up with Heidegger’s epochal shifts, the impersonal forces operative in his schema start to look downright ominous. However, we can’t blame on Heidegger what we can blame on Kant. His transcendental field of sense also belongs to one and all, and so, like Dasein, to no one in particular. This aspect of the transcendental field remains contentious. The transcendental is, at once, housed in a human body but also, in its sense-making functions, to be considered somehow separate from it. It is not quite human, but not exactly inhuman either.

There is, then, some strange aspect, I can think of no other word for it, inhabiting our own flowing world of a coherent ego, or ‘I,’ that allows for the emergence of a pooled intersubjectivity. Kant’s account, of course, had two main aims: to constrain groundless metaphysical speculation and, in turn, to ground the sciences. Yet his readers did not always follow his path. Kant’s decision to make a distinction between the phenomena and the noumena is perhaps the most consequential one in our tradition, and is surely one of the greatest examples of opening up what you intended to close down. The nature of the noumenal realm has proven irresistible to philosophers, and it has recursive consequences for how we see ourselves. If the noumenal realm names a reality that is phenomenally clouded, then it surely precedes, ontologically, the ego-as-center; even if it is superseded by the ego’s modelling function for us. Seen within the wider context of the noumenal realm, it is legitimate to ask whether the ‘I’ is merely a densely concentrated, discrete packet amidst a wider flow; a locus amidst the chaos. The ontological generation of egos is then shorn back until all you have is Will (Schopenhauer), Will to Power (Nietzsche), or, in a less generative sense, ‘what gives,’ es gibt (Heidegger). This way of thinking belongs, when one takes the long view, to the slow-motion deconstruction of the Cartesian ego in post-Kantian philosophy, albeit with Husserl cutting a lonely revivalist figure here. Today the ego is trounced everywhere, but there is perhaps no better example than the ‘no-self-at-all’ argument of Metzinger, though even the one-object-amongst-many thesis of object-oriented ontology traces a similar line.

The destruction of the Cartesian ego may have its lineage in Kant, but the notion of the impersonal as force, process, or will owes much to Hegel. In his metaphysics Hegel presents us with a cosmic loop explicable through retroactive justification. At the beginning, the un-articulated Notion, naming what is at the heart-of-the-real, sets off without knowledge of itself, but with the emergence of thinking subjects the Notion is finally able to think itself. In this transition the gap between the un-articulated and articulated Notion is closed, and the entire thing sets off again in directions as yet unknown. Absolute knowing is, after all, not totalized knowing, but a constant, vigilant knowing navigating its way through contingency and recognizing the necessity below it all. But that’s just the thing: despite thinking subjects being important conduits to this process, and having a quite special and specific function, it’s the impersonal process that really counts. In the end, Kant’s attempt to close down discussion about the nature of the noumenal realm simply made it one of the most appealing themes for a philosopher to pursue. Censorship helps sales.

Speaking of sales, all kinds of new realism are being hawked on the various para-academic street-corners. All of them benefit from a tint of recognizability rooted, I would suggest, in the fact that ontological realism has always been hidden in plain sight; for any continentalist willing to look. What is different today is how the question of the impersonal attachments affecting the human comes not from inside philosophy, but from a number of external pressures. In what can only be described as a tragic situation for metaphysicians, truth now seeps into the discipline from the outside. We see thinking these days where philosophers promised there was none. The brilliance of continental realism lies in reminding us how this is an immense opportunity for philosophers to wake up from various self-induced slumbers, even if that means stepping outside the protected circle from time to time. It involves bringing this bubbling, left-over question of ontological realism right to the fore. This does not mean ontological realism will come to be accepted and then casually integrated into the tradition. If anything the backlash may eviscerate it, but the attempt will have been made. Or was, and quietly passed.

And the attempt should be made, because the impersonality infecting ontological realist excesses such as the transcendental subject (in-itself), the Notion, or Dasein is attuned to what we can now see as the (delayed) flowering of the Copernican revolution. The de-centering is now embedded enough that whatever defense of the human we posit must not be dishonest. We cannot hallucinate our way out of our ‘cold world’. If we know that our self-model is itself a hallucination, but a very real one, then what do we do? Is it enough to situate the real in our ontological flesh-and-blood being-there that is not captured by thinking? Or is it best to remain with thinking as a contingent error that, despite its aberrancy, nonetheless spews out the truth? These avenues are grounded in consciousness and in our bodies, and although both work wonders they can just as easily generate terrors. Truth qualified by these terrors is where one might go. No delusion can outflank these constraints forever. Bled of any delusional disavowal, one tries to think without hope. Hope is undignified anyway. Dignity involves resisting all provocation and remaining sane when you know it’s bleakness all the way down.

Some need hope, no? As I write this I feel the beautiful soul rising from his armchair, but I do not want to hear it. Bleak theory is addressed to your situation: a first-worlder inhabiting an accelerated malaise. The ethics to address poverty, inequality, and hardship will be different. Our own heads are disordered and we do not quite know how to respond to the field outside them. You will feel guilty for your myopia, and you deserve it, but you cannot elide it by endlessly pointing to the plank in the other’s eye. You can pray through your tears, and in doing so ironically demonstrate the disturbance left by the death of God, but what does this shore up? It builds upon cathedral ruins: those sites where being is doubled-up and bent-over-backwards trying to look inconspicuous as just another option. Do you want to write religion back into being? Why not, as Ayache suggests, just ruin yourself? I hope it is clear I don’t have any answers: all clarity is a lie these days. I can only offer bleak theory as a way of seeing and perhaps a way of operating. It ‘works’ as follows: begin with confusion and shear away at what you can. Whatever is left is likely the closest thing approximating what we name truth. It will be strictly negative. Elimination of errors is the best you can hope for.

I don’t know how to end this, so I am just going to end it.


The Knowledge Illusion Illusion

by rsbakker



When academics encounter a new idea that doesn’t conform to their preconceptions, there’s often a sequence of three reactions: first dismiss, then reject, then finally declare it obvious. Steven Sloman and Philip Fernbach, The Knowledge Illusion, 255


The best example illustrating the thesis put forward in Steven Sloman and Philip Fernbach’s excellent The Knowledge Illusion: Why We Never Think Alone is one I’ve belaboured before, the bereft ‘well-dressed man’ in Byron Haskin’s 1953 version of The War of the Worlds, dismayed at his malfunctioning pile of money, unable to comprehend why it couldn’t secure him passage out of Los Angeles. So keep this in mind: if all goes well, we shall return to the well-dressed man.

The Knowledge Illusion is about a great many things, everything from basic cognitive science to political polarization to educational reform, but it all comes back to how individuals are duped by the ways knowledge outruns individual human brains. The praise for this book has been nearly universal, and deservedly so, given the existential nature of the ‘knowledge problematic’ in the technological age. Because of this consensus, however, I’ll play the devil’s advocate and focus on what I think are core problems. For all the book’s virtues, I think Steven Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, and Philip Fernbach, Assistant Professor at the University of Colorado, find themselves wandering the same traditional dead ends afflicting all philosophical and psychological discourses on the nature of human knowledge. The sad fact is nobody knows what knowledge is. They only think they do.

Sloman and Fernbach begin with a consideration of our universal tendency to overestimate our understanding. In a wide variety of tests, individuals regularly fail to provide first order evidence regarding second order reports of what they know. So for instance, they say they understand how toilets or bicycles work, yet find themselves incapable of accurately drawing the mechanisms responsible. Thus the ‘knowledge illusion,’ or the ‘illusion of explanatory depth,’ the consistent tendency to think our understanding of various phenomena and devices is far more complete than it in fact is.

This calves into two interrelated questions: 1) Why are we so prone to think we know more than we do? and 2) How can we know so little yet achieve so much? Sloman and Fernbach think the answer to both these questions lies in the way human cognition is embodied, embedded, and enactive, which is to say, the myriad ways it turns on our physical and social environmental interactions. They also hold the far more controversial position that cognition is extended, that ‘mind,’ understood as a natural phenomenon, just ain’t in our heads. As they write:

The main lesson is that we should not think of the mind as an information processor that spends its time doing abstract computation in the brain. The brain and the body and the external environment all work together to remember, reason, and make decisions. The knowledge is spread through the system, beyond just the brain. Thought does not take place on a stage inside the brain. Thought uses knowledge in the brain, the body, and the world more generally to support intelligent action. In other words, the mind is not in the brain. Rather, the brain is in the mind. The mind uses the brain and other things to process information. 105

The Knowledge Illusion, in other words, lies astride the complicated fault-line between cognitivism, the tendency to construe cognition as largely representational and brain-bound, and post-cognitivism, the tendency to construe cognition as constitutively dependent on the community and environment. Since the book is not only aimed at a general audience but also about the ways humans are so prone to mistake partial accounts for complete ones, it is more than ironic that Sloman and Fernbach fail to contextualize the speculative, and therefore divisive, nature of their project. Charitably, you could say The Knowledge Illusion runs afoul the very ‘curse of knowledge’ illusion it references throughout, the failure to appreciate the context of cognitive reception—the tendency to assume that others know what you know, and so will draw similar conclusions. Less charitably, the suspicion has to be that Sloman and Fernbach are actually relying on the reader’s ignorance to cement their case. My guess is that the answer lies somewhere in the middle, and that the authors, given their sensitivity to the foibles and biases built into human communication and cognition, would acknowledge as much.

But the problem runs deeper. The extended mind hypothesis is subject to a number of apparently decisive counter-arguments. One could argue a la Adams and Aizawa, for instance, and accuse Sloman and Fernbach of committing the so-called ‘causal-constitutive fallacy,’ mistaking causal influences on cognition for cognition proper. Even if we do accept that external factors are constitutive of cognition, the question becomes one of where cognition begins and ends. What is the ‘mark of the cognitive’? After all, ‘environment’ potentially includes the whole of the physical universe, and ‘community’ potentially reaches back to the origins of life. Should we take a page from Hegel and conclude that everything is cognitive? If our minds outrun our brains, then just where do they end?

So far, every attempt to overcome these and other challenges has only served to complicate the controversy. Cognitivism remains a going concern for good reason: it captures a series of powerful second-order intuitions regarding the nature of human cognition, intuitions that post-cognitivists like Sloman and Fernbach would have us set aside on the basis of incompatible second-order intuitions regarding that self-same nature. Where the intuitions milked by cognitivism paint an internalist portrait of knowledge, the intuitions milked by post-cognitivism sketch an externalist landscape. Back and forth the arguments go, each side hungry to recruit the latest scientific findings into their explanatory paradigms. At some point, the unspoken assumption seems to be, the abductive weight supporting either position will definitively tip in favour of either one or the other. By time we return to our well-dressed man and his heap of useless money, I hope to show how and why this will never happen.

For the nonce, however, the upshot is that either way you cut it, knowledge, as the subject of theoretical investigation, is positively awash in illusions, intuitions that seem compelling, but just ain’t so. For some profound reason, knowledge and other so-called ‘intentional phenomena’ baffle us in a way distinct from all other natural phenomena, with the exception of consciousness. This is the sense in which one can speak of the Knowledge Illusion Illusion.

Let’s begin with Sloman and Fernbach’s ultimate explanation for the Knowledge Illusion:

The Knowledge Illusion occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. This is as much a feature of cognition as it is a bug. The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there. 127-128.

The reason we presume knowledge sufficiency, in other words, is that we fail to draw a distinction between individual knowledge and collective knowledge, between our immediate know-how and know-how requiring environmental and social mediation. Put differently, we neglect various forms of what might be called cognitive dependency, and so assume cognitive independence, the ability to answer questions and solve problems absent environmental and social interactions. We are prone to forget, in other words, that our minds are actually extended.

This seems elegant and straightforward enough: as any parent (or spouse) can tell you, humans are nothing if not prone to take things for granted! We take the contributions of our fellows for granted, and so reliably overestimate our own epistemic wherewithal. But something peculiar has happened. Framed in these terms, the knowledge illusion suddenly bears a striking resemblance to the correspondence or attribution error, our tendency to put our fingers on our side of the scales when apportioning social credit. We generally take ourselves to have more epistemic virtue than we in fact possess for the same reason we generally take ourselves to have more virtue than we in fact possess: because ancestrally, confabulatory self-promotion paid greater reproductive dividends than accurate self-description. The fact that we are more prone to overestimate epistemic virtue given accessibility to external knowledge sources, on this account, amounts to no more than the awareness that we have resources to fall back on, should someone ‘call bullshit.’

There’s a great deal that could be unpacked here, not the least of which is the way changing demonstrations of knowledge into demonstrations of epistemic virtue radically impacts the case for the extended mind hypothesis. But it’s worth considering, I think, how this alternative explanation illuminates an earlier explanation they give of the illusion:

So one way to conceive of the illusion of explanatory depth is that our intuitive system overestimates what it can deliberate about. When I ask you how a toilet works, your intuitive system reports, “No problem, I’m very comfortable with toilets. They are part of my daily experience.” But when your deliberative system is probed by a request to explain how they work, it is at a loss because your intuitions are only superficial. The real knowledge lies elsewhere. 84

In the prior explanation, the illusion turns on confusing our individual with our collective resources. We presume that we possess knowledge that other people have. Here, however, the illusion turns on the superficiality of intuitive cognition. “The real knowledge lies elsewhere” plays no direct explanatory role whatsoever. The culprit here, if anything, lies with what Daniel Kahneman terms WYSIATI, or ‘What-You-See-Is-All-There-Is,’ effects, the way subpersonal cognitive systems automatically presume the cognitive sufficiency of whatever information/capacity they happen to have at their disposal.

So, the question is, do we confabulate cognitive independence because subpersonal cognitive processing lacks the metacognitive monitoring capacity to flag problematic results, or because such confabulations facilitated ancestral reproductive success, or because our blindness to the extended nature of knowledge renders us prone to this particular type of metacognitive error?

The first two explanations, at least, can be combined. Given the divide and conquer structure of neural problem-solving, the presumptive cognitive sufficiency (WYSIATI) of subpersonal processing is inescapable. Each phase of cognitive processing turns on the reliability of the phases preceding (which is why we experience sensory and cognitive illusions rather than error messages). If those illusions happen to facilitate reproduction, as they often do, then we end up with biological propensities to commit things like epistemic attribution errors. We both think and declare ourselves more knowledgeable than we in fact are.

Blindness to the ‘extended nature of knowledge,’ on this account, doesn’t so much explain the knowledge illusion as follow from it.

The knowledge illusion is primarily a metacognitive and evolutionary artifact. This actually follows as an empirical consequence of the cornerstone commitment of Sloman and Fernbach’s own theory of cognition: the fact that cognition is fractionate and heuristic, which is to say, ecological. This becomes obvious, I think, but only once we see our way past the cardinal cognitive illusion afflicting post-cognitivism.

Sloman and Fernbach, like pretty much everyone writing popular accounts of embodied, embedded, and enactive approaches to cognitive science, provide the standard narrative of the rise and fall of GOFAI, standard computational approaches to cognition. Cognizing, on this approach, amounts to recapitulating environmental systems within universal computational systems, going through the enormous expense of doing in effigy in order to do in the world. Not only is such an approach expensive, it requires prior knowledge of what needs to be recapitulated and what can be ignored—tossing the project into the infamous jaws of the Frame Problem. A truly general cognitive system is omni-applicable, capable of solving any problem in any environment, given the requisite resources. The only way to assure that ecology doesn’t matter, however, is to have recapitulated that ecology in advance.

The question from a biological standpoint is simply one of why we need to go through all the bother of recapitulating a problem-solving ecology when that ecology is already there, challenging us, replete with regularities we can exploit without needing to know them whatsoever. “This assumption that the world is behaving normally gives people a giant crutch,” as Sloman and Fernbach put it. “It means that we don’t have to remember everything because the information is stored in the world” (95). All cognition requires are reliable interactive systematicities—cognitive ecologies—to steer organisms through their environments. Heuristics are the product of cognitive systems adapted to the exploitation of the correlations between regularities available for processing and environmental regularities requiring solution. And since the regularities happened upon, cues, are secondary to the effects they enable, heuristic systems are always domain specific. They don’t travel well.

And herein lies the rub for Sloman and Fernbach: If the failure of cognitivism lies in its insensitivity to cognitive ecology, then the failure of post-cognitivism lies in its insensitivity to metacognitive ecology, the fact that intentional modes of theorizing cognition are themselves heuristic. Humans had need to troubleshoot claims, to distinguish guesswork from knowledge. But they possessed no access whatsoever to the high-dimensional facts of the matter, so they made do with what was available. Our basic cognitive intuitions facilitate this radically heuristic ‘making do,’ allowing us to debug any number of practical communicative problems. The big question is whether they facilitate anything theoretical. If intentional cognition turns on systems selected to solve practical problem ecologies absent information, why suppose it possesses any decisive theoretical power? Why presume, as post-cognitivists do, that the theoretical problem of intentional cognition lies within the heuristic purview of intentional cognition?

Its manifest inapplicability, I think, can be clearly discerned in The Knowledge Illusion. Consider Sloman and Fernbach’s contention that the power of heuristic problem-solving turns on the ‘deep’ and ‘abstract’ nature of the information exploited by heuristic cognitive systems. As they write:

Being smart is all about having the ability to extract deeper, more abstract information from the flood of data that comes into our senses. Instead of just reacting to the light, sounds, and smells that surround them, animals with sophisticated large brains respond to deep, abstract properties of the world that they are sensing. 46

But surely ‘being smart’ lies in the capacity to find, not abstracta, but tells, sensory features possessing reliable systematic relationships to deep environments. There’s nothing ‘deep’ or ‘abstract’ about the moonlight insects use to navigate at night—which is precisely why transverse orientation is so easily hijacked by bug-zappers and porch-lights. There’s nothing ‘deep’ or ‘abstract’ about the tastes triggering aversion in rats, which is why taste aversion is so easily circumvented by using chronic rodenticides. Animals with more complex brains, not surprisingly, can discover and exploit more tells, which can also be hijacked, cued ‘out of school.’ We bemoan the deceptive superficiality of political and commercial marketing for good reason! It’s unclear what ‘deeper’ or ‘more abstract’ add here, aside from millennial disputation. And yet Sloman and Fernbach continue, “[t]he reason that deeper, more abstract information is helpful is that it can be used to pick out what we’re interested in from an incredibly complex array of possibilities, regardless of how the focus of our interest presents itself” (46).

If a cue, or tell—be it a red beak or a prolonged stare or a scarlet letter—possesses some exploitable systematic relationship to some environmental problem, then nothing more is needed. Talk of ‘depth’ or ‘abstraction’ plays no real explanatory function, and invites no little theoretical mischief.

The term ‘depth’ is perhaps the biggest troublemaker, here. Insofar as human cognition is heuristic, we dwell in shallow information environments, ancestral need-to-know ecologies, remaining (in all the myriad ways Sloman and Fernbach describe so well) entirely ignorant of the deeper environment, and the super-complex systems comprising them. What renders tells so valuable is their availability, the fact that they are at once ‘superficial’ and systematically correlated to the neglected ‘deeps’ requiring solution. Tells possess no intrinsic mark of their depth or abstraction. It is not the case that “[a]s brains get more complex, they get better at responding to deeper, more abstract cues from the environment, and this makes them ever more adaptive to new situations” (48). What is the case is far more mundane: they get better at devising, combining, and collecting environmental tells.

And so, one finds Sloman and Fernbach at metaphoric war with themselves:

It is rare for us to directly perceive the mechanisms that create outcomes. We experience our actions and we experience the outcomes of those actions; only by peering inside the machine do we see the mechanism that makes it tick. We can peer inside when the components are visible. 73

As they go on to admit, “[r]easoning about social situations is like reasoning about physical objects: pretty shallow” (75).

The Knowledge Illusion is about nothing if not the superficiality of human cognition, and all the ways we remain oblivious to this fact because of this fact. “Normal human thought is just not engineered to figure out some things” (71), least of all the deep/fundamental abstracta undergirding our environment! Until the institutionalization of science, we were far more vulture than lion, information scavengers instead of predators. Only the scientific elucidation of our deep environments reveals how shallow and opportunistic we have always been, how reliant on ancestrally unfathomable machinations.

So then why do Sloman and Fernbach presume that heuristic cognition grasps things both abstract and deep?

The primary reason, I think, turns on the inevitably heuristic nature of our attempts to cognize cognition. We run afoul these heuristic limits every time we look up at the night sky. Ancestrally, light belonged to those systems we could take for granted; we had no reason to intuit anything about its deeper nature. As a result, we had no reason to suppose we were plumbing different pockets of the ancient past whenever we paused to gaze into the night sky. Our ability to cognize the medium of visual cognition suffers from what might be called medial neglect. We have to remind ourselves we’re looking across gulfs of time because the ecological nature of visual cognition presumes the ‘transparency’ of light. It vanishes into what it reveals, generating a simultaneity illusion.

What applies to vision applies to all our cognitive apparatuses. Medial neglect, in other words, characterizes all of our intuitive ways of cognizing cognition. At nearly every turn, the enabling dimension of our cognitive systems is consigned to oblivion, generating, upon reflection, the metacognitive impression of ‘transparency,’ or ‘aboutness’—intentionality in Brentano’s sense. So when Sloman and Fernbach attempt to understand the cognitive nature of heuristic selectivity, they cue the heuristic systems we evolved to solve practical epistemic problems absent any sensitivity to the actual systems responsible, and so run afoul a kind of ‘transparency illusion,’ the notion that heuristic cognition requires fastening onto something intrinsically simple and out there—a ‘truth’ of some description—when all our brain needs to do is identify some serendipitously correlated cue in its sensory streams.

This misapprehension is doubly attractive, I think, for the theoretical cover it provides their contention that all human cognition is causal cognition. As they write:

… the purpose of thinking is to choose the most effective action given the current situation. That requires discerning the deep properties that are constant across situations. What sets humans apart is our skill at figuring out what those deep, invariant properties are. It takes human genius to identify the key properties that indicate if someone has suffered a concussion or has a communicable disease, or that it’s time to pump up a car’s tires. 53

In fact, they go so far as to declare us “the world’s master causal thinkers” (52)—a claim they spend the rest of the book qualifying. As we’ve seen, humans are horrible at understanding how things work: “We may be better at causal reasoning than other kinds of reasoning, but the illusion of explanatory depth shows that we are still quite limited as individuals in how much of it we can do” (53).

So, what gives? How can we be both causal idiots and causal savants?

Once again, the answer lies in their own commitments. Time and again, they demonstrate the way the shallowness of human cognition prevents us from cognizing that shallowness as such. The ‘deep abstracta’ posited by Sloman and Fernbach constitute a metacognitive version of the very illusion of explanatory depth they’re attempting to solve. Oblivious to the heuristic nature of our metacognitive intuitions, they presume those intuitions provide deep, theoretically sufficient ways to cognize the structure of human cognition. Like the physics of light, the enabling networks of contingent correlations assuring the efficacy of various tells get flattened into oblivion—the mediating nature vanishes—and the connection between heuristic systems and the environments they solve becomes an apparently intentional one, with ‘knowing’ here, ‘known’ out there, and nothing in between. Rather than picking out strategically connected cues, heuristic cognition isolates ‘deep causal truths.’

How can we be both idiots and savants when it comes to causality? The fact is, not all cognition is causal cognition. Some cognition is causal, while other cognition—the bulk of it—is correlative. What Sloman and Fernbach systematically confuse are the kinds of cognitive efficacy belonging to the isolation of actual mechanisms with the kinds of cognitive efficacy belonging to the isolation of tells possessing unfathomable (‘deep’) correlations to those mechanisms. The latter cognition, if anything, turns on ignoring the actual causal regularities involved. This is what makes it both so cheap and so powerful (for both humans and AI): it relieves us of the need to understand the deeper nature of things, allowing us to focus on what happens next.

Although some predictions turn on identifying actual causes, those requiring the heuristic solution of complex systems turn on identifying tells, triggers that are systematically correlated precursors to various significant events. Given our metacognitive neglect of the intervening systems, we regularly fetishize the tells available, taking them to be the causes of the kinds of effects we require. Sloman and Fernbach’s insistence on the causal nature of human cognition commits this very error: it fetishizes heuristic cues. (Or to use Klaus Fiedler’s terminology, it mistakes pseudocontingencies for genuine contingencies; or to use Andrei Cimpian’s, it fails to recognize a kind of ‘inherence heuristic’ as heuristic.)

The power of predictive reasoning turns on the plenitude of potential tells, our outright immersion in environmental systematicities. No understanding of celestial mechanics is required to use the stars to anticipate seasonal changes and so organize agricultural activities. The cost of this immersion, on the other hand, is the inverse problem, the problem of isolating genuine causes as opposed to mere correlations on the basis of effects. In diagnostic reasoning, the sheer plenitude of correlations is the problem: finding causes amounts to finding needles in haystacks, sorting systematicities that are genuinely counterfactual from those that are not. Given this difficulty, it should come as no surprise that problems designed to cue predictive deliberation tend to neglect the causal dimension altogether. Tells, even when imbued with causal powers and so fetishized, stand entirely on their own.
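The asymmetry here can be made concrete with a toy simulation of my own devising (a hypothetical sketch, not anything drawn from Sloman and Fernbach): an agent tracking nothing but a correlated cue predicts an outcome quite well with no model of the intervening mechanism, whereas reasoning backward from the outcome to its genuine cause founders on the plenitude of mere correlates—the inverse problem.

```python
import random

random.seed(0)

# Toy world: a hidden cause produces an effect; a 'tell' is merely a
# correlated precursor of the cause, not the cause itself.
def sample_world():
    cause = random.random() < 0.5
    tell = cause if random.random() < 0.9 else not cause    # correlated cue
    effect = cause if random.random() < 0.95 else not cause
    # Distractors: other events accidentally correlated with the effect.
    distractors = [effect if random.random() < 0.6 else not effect
                   for _ in range(5)]
    return tell, distractors, effect

trials = [sample_world() for _ in range(10_000)]

# Predictive reasoning: forecast the effect from the tell alone,
# with no understanding of the mechanism connecting them.
hits = sum(effect == tell for tell, _, effect in trials)
print(f"prediction accuracy from the tell alone: {hits / len(trials):.2f}")

# Diagnostic reasoning: given the effect, which precursor matters?
# Every distractor also co-varies with the effect, so correlation
# alone cannot sort genuine causes from mere correlates.
for i in range(5):
    agree = sum(d[i] == effect for _, d, effect in trials)
    print(f"distractor {i} agrees with effect: {agree / len(trials):.2f}")
```

The cue-only predictor succeeds roughly 86% of the time under these assumed probabilities, despite total ignorance of the cause, while the diagnostic pass yields a haystack of comparably correlated candidates.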

Sloman and Fernbach’s explanation of ‘alternative cause neglect’ thoroughly illustrates, I think, the way cognitivism and post-cognitivism have snarled cognitive psychology in the barbed wire of incompatible intuitions. They also point out the comparative ease of predictive versus diagnostic reasoning. But where the above sketch explains this disparity in thoroughly ecological terms, their explanation is decidedly cognitivist: we recapitulate systems, they claim, run ‘mental simulations’ to explore the space of possible effects. Apparently, running these tapes backward to explore the space of possible causes is not something nature has equipped us to do, at least not easily. “People ignore alternative causes when reasoning from cause to effect,” they contend, “because their mental simulations have no room for them, and because we’re unable to run mental simulations backward in time from effect to cause” (61).

Even setting aside the extravagant metabolic expense their cognitivist tack presupposes, it’s hard to understand how this explains much of anything, let alone how the difference between these two modes figures in the ultimate moral of Sloman and Fernbach’s story: the social intransigence of the knowledge illusion.

Toward the end of the book, they provide a powerful and striking picture of the way false beliefs seem to have little, if anything, to do with access to scientific facts. The provision of reasons likewise has little or no effect. People believe what their group believes, thus binding generally narcissistic or otherwise fantastic worldviews to estimations of group membership and identity. For Sloman and Fernbach, this dovetails nicely with their commitment to extended minds, the fact that ‘knowing’ is fundamentally collective.

Beliefs are hard to change because they are wrapped up with our values and identities, and they are shared with our community. Moreover, what is actually in our own heads—our causal models—are sparse and often wrong. This explains why false beliefs are so hard to weed out. Sometimes communities get the science wrong, usually in ways supported by our causal models. And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking. 169

But it’s not simply the case that reports of belief signal group membership. One need only think of the ‘kooks’ or ‘eccentrics’ in one’s own social circles (and fair warning, if you can’t readily identify one, that likely means you’re it!) to bring home the cognitive heterogeneity one finds in every community, people who demonstrate reliability in some other way (like my wife’s late uncle who never once attended church, but who cut the church lawn every week all the same).

Like every other animal on this planet, we’ve evolved to thrive in shallow cognitive ecologies, to pick what we need when we need it from wherever we can, be it the world or one another. We are cooperative cognitive scavengers, which is to say, we live in communal shallow cognitive ecologies. The cognitive reports of ingroup members, in other words, are themselves powerful tells, correlations allowing us to predict what will happen next absent deep environmental access or understanding. As an outgroup commentator on these topics, I’m intimately acquainted with the powerful way the who trumps the what in claim-making. I could raise a pyramid with all the mud and straw I’ve accumulated! But this has nothing to do with the ‘intrinsically communal nature of knowledge,’ and everything to do with the way we are biologically primed to rely on our most powerful ancestral tools. It’s not simply that we ‘believe to belong’; it’s that, ancestrally speaking, believing to belong provided an extraordinarily metabolically cheap way to hack our natural and social environments.

So cheap and powerful, in fact, we’ve developed linguistic mechanisms, ‘knowledge talk,’ to troubleshoot cognitive reports.

And this brings us back to the well-dressed man in The War of the Worlds, left stranded with his useless bills, dumbfounded by the sudden impotence of what had so reliably commanded the actions of others in the past. Paper currency requires vast systems of regularities to produce the local effects we all know and love and loathe. Since these local, or shallow, effects occur whether or not we possess any inkling of the superordinate, deep, systems responsible, we can get along quite well simply supposing, like the well-dressed man, that money possesses this power on its own, or intrinsically. Pressed to explain this intrinsic power, to explain why this paper commands such extraordinary effects, we posit a special kind of property, value.

What the well-dressed man illustrates, in other words, is the way shallow cognitive ecologies generate illusions of local sufficiency. We have no access to the enormous amount of evolutionary, historical, social, and personal stage-setting involved when our doctor diagnoses us with depression, so we chalk it up to her knowledge, not because any such thing exists in nature, but because it provides us a way to communicate and troubleshoot an otherwise incomprehensible local effect. How did your doctor make you better? Obviously, she knows her stuff!

What could be more intuitive?

But then along comes science, and lo, we find ourselves every bit as dumbfounded when asked to causally explain knowledge as (to use Sloman and Fernbach’s examples) when asked to explain toilets or bicycles or vaccination or climate warming or why incest possessing positive consequences is morally wrong. Given our shallow metacognitive ecology, we presume that the heuristic systems applicable to troubleshooting practical cognitive problems can solve the theoretical problem of cognition as well. When we go looking for this or that intentional formulation of ‘knowledge’ (because we cannot even agree on what it is we want to explain) in the head, we find ourselves, like the well-dressed man, even more dumbfounded. Rather than finding anything sufficient, we discover more and more dependencies, evidence of the way our doctor’s ability to cure our depression relies on extrinsic environmental and social factors. But since we remain committed to our fetishization of knowledge, we conclude that knowledge, whatever it is, simply cannot be in the head. Knowledge, we insist, must be nonlocal, reliant on natural and social environments. But of course, this cuts against the very intuition of local sufficiency underwriting the attribution of knowledge in the first place. Sure, my doctor has a past, a library, and a community, but ultimately, it’s her knowledge that cures my depression.

And so, cognitivism and post-cognitivism find themselves at perpetual war, disputing theoretical vocabularies possessing local operational efficacy in everyday or specialized experimental contexts, but perpetually deferring the possibility of any global, genuinely naturalistic understanding of human cognition. The strange fact of the matter is that there’s no such thing or function as ‘knowledge’ in nature, nothing deep to redeem our shallow intuitions, though knowledge talk (which is very real) takes us a long way to resolve a wide variety of practical problems. The trick isn’t to understand what knowledge ‘really is,’ but rather to understand the deep, supercomplicated systems underwriting the optimization of behaviour, and how they underwrite our shallow intuitive and deliberative manipulations. Insofar as knowledge talk forms a component of those systems, we must content ourselves with studying ‘knowledge’ as a term rather than an entity, leaving intentional cognition to solve what problems it can where it can. The time has come to leave both cognitivism and post-cognitivism behind, and to embrace genuinely post-intentional approaches, such as the ecological eliminativism espoused here.

The Knowledge Illusion, in this sense, provides a wonderful example of crash space, the way in which the introduction of deep, scientific information into our shallow cognitive ecologies is prone to disrupt or delude or simply fall flat altogether. Intentional cognition provides a way for us to understand ourselves and each other while remaining oblivious to any of the deep machinations actually responsible. To suffer ‘medial neglect’ is to be blind to one’s actual sources, to comprehend and communicate human knowledge, experience, and action via linguistic fetishes, irreducible posits possessing inexplicable efficacies, entities fundamentally incompatible with the universe revealed by natural science.

For all the conceits Sloman and Fernbach reveal, they overlook and so run afoul of perhaps the greatest, most astonishing conceit of them all: the notion that we should have evolved the basic capacity to intuit our own deepest nature, that hunches belonging to our shallow ecological past could show us the way into our deep nature, rather than lead us, on pain of systematic misapplication, into perplexity. The time has come to dismantle the glamour we have raised around traditional philosophical and psychological speculation, to stop spinning abject ignorance into evidence of glorious exception, and to see our millennial dumbfounding as a symptom, an artifact of a species that has stumbled into the trap of interrogating its heuristic predicament using shallow heuristic tools that have no hope of generating deep theoretical solutions. The knowledge illusion illusion.

Visions of the Semantic Apocalypse: James Andow and Dispositional Metasemantics

by rsbakker

The big problem faced by dispositionalist accounts of meaning lies in their inability to explain the apparent normativity of meaning. Claims that the meaning of X turns on the disposition to utter ‘X’ require some way to explain the pragmatic dimensions of meaning, the fact that ‘X’ can be both shared and misapplied. Every attempt to pin meaning to natural facts, even ones so low-grained as dispositions, runs aground on the external relationality of the natural, the fact that things in the world just do not stand in relations of rightness or wrongness relative to one another. No matter how many natural parameters you pile onto your dispositions, you will still have no way of determining the correctness of any given application of X.

This problem falls into the wheelhouse of heuristic neglect. If we understand that human cognition is fractionate, then the inability of dispositions to solve for correctness pretty clearly indicates a conflict between cognitive subsystems. But if we let metacognitive neglect, our matter-of-fact blindness to our own cognitive constitution, dupe us into thinking we possess one big happy cognition, this conflict is bound to seem deeply mysterious, a clash of black cows in the night. And as history shows us, mysterious problems beget mysterious answers.

So for normativists, this means that only intentional cognition, those systems adapted to solve problems via articulations of ‘right or wrong’ talk, can hope to solve the theoretical nature of meaning. For dispositionalists, however, this amounts to leaving whole domains of nature hostage to perpetual philosophical disputation. The only alternative, they think, is to collect and shuffle the cards yet again, in the hope that some articulation of natural facts will somehow lay correctness bare. The history of science, after all, is a history of uncovering hidden factors—a priori intuitions be damned. Even still, it remains very hard to understand how to stack external relations into normative relations. Ignorant of the structure of intentional cognition, and the differences between it and natural (mechanical) cognition, the dispositionalist assumes that meaning is real, and that since all real things are ultimately natural, meaning must have a natural locus and function. Both approaches find themselves stalled in different vestibules of the same crash space.

For me, the only way to naturalize meaning is to understand it not as something ‘real out there’ but as a component of intentional cognition, biologically understood. The trick lies in stacking external relations into the mirage of normative relations: laying out the heuristic misapplications generating traditional philosophical crash spaces. The actual functions of linguistic communication turn on the vast differential systems implementing it. We focus on the only things we apparently see. Given the intuition of sufficiency arising out of neglect, we assume these form autonomous systems. And so tools that allow conscious cognition to blindly mediate the function of vast differential systems—histories, both personal and evolutionary—become an ontological nightmare.

In “Zebras, Intransigence & Semantic Apocalypse: Problems for Dispositional Metasemantics,” James Andow considers the dispositionalist attempt to solve for normativity via the notion of ‘complete information.’ The title alone had me hooked (for obvious reasons), but the argument Andow lays out is a wry and fascinating one. Where dispositions to apply terms are neither right nor wrong, dispositions to apply terms given all relevant information seem to enable the discrimination of normative discrepancies between performances. The problem arises when one asks what counts as ‘all relevant information.’ Offloading determinacy onto relevant information simply raises the question of determinacy at the level of relevant information. What constrains ‘relevance’? What about future relevance? Andow chases this inability to delimit complete information to the most extreme case:

It seems pretty likely that there is information out there which would radically restructure the nature of human existence, make us abandon technologies, reconsider our values and place in nature, information that would lead us to restructure the political organization of our species, reconsider national boundaries, and the ‘artificial divisions’ which having distinct languages impose on us. The likely effect of complete information is semantic apocalypse. (Just to be clear—my claim here is not that it is likely we will undergo such a shift. Who is to say what volume of information humankind will become aware of before extinction? Rather, the claim is that the probable result of being exposed to all information which would alter one’s dispositions, i.e., complete information, would involve a radical overhaul in semantic dispositions).

This paragraph is brilliant, especially given the grand way it declares the semantic apocalypse only to parenthetically take it all back! For my money, though, Andow’s throwaway question, “Who is to say what volume of information humankind will become aware of before extinction?” is far and away the most pressing one. But then I see these issues in light of a far different theory of meaning.

What is the information threshold of semantic apocalypse?

Dispositionalism entails the possibility of semantic apocalypse to the degree the tendencies of biological systems are ecologically dependent, and so susceptible to gradual or catastrophic change. This draws out the importance of the semantic apocalypse as distinct from other forms of global catastrophe. A zombie apocalypse, for instance, might also count as a semantic apocalypse, but only if our dispositions to apply terms were radically transformed. It’s possible, in other words, to suffer a zombie apocalypse without suffering a semantic apocalypse. The physical systems underwriting meaning are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Meaning, in other words, can survive radical ecological destruction. (This is one of the reasons we remain, despite all our sophistication, largely blind to the issue of cognitive ecology: so far it’s been with us through thick and thin). The advantage of dispositionalist approaches, Andow thinks, lies in the way they anchor meaning in our nature. One may dispute how ‘meanings’ find themselves articulated in intentional cognition more generally, while agreeing that intentional cognition is biological: a suite of sensitivities attuned to very specific sets of cues, leveraging reliable predictions. One can be agnostic on the ontological status of ‘meaning,’ in other words, and still agree that meaning talk turns on intentional cognition, which turns on heuristic capacities whose development we can track through childhood. So long as a catastrophe leaves those cues and their predictive power intact, it will not precipitate a semantic apocalypse.

So the question of the threshold of the semantic apocalypse becomes the question of the stability of a certain biological system of specialized sensitivities and correlations. Whatever collapses this system engenders the semantic apocalypse (which for Andow means the global indeterminacy of meanings, and for me the global unreliability of intentional cognition more generally). The thing to note here, however, is the ease with which such systems do collapse once the correlations between sensitivities and outcomes cease to be reliable. Meaning talk, in other words, is ecological, which is to say it requires its environments be a certain way to discharge ancestral functions.

Suddenly the summary dismissal of the genuine possibility of a semantic apocalypse becomes ill-advised. Ecologies can collapse in a wide variety of ways. The form any such collapse takes turns on the ‘pollutants’ and the systems involved. We have no assurance that human cognitive ecology is robust in all respects. Meaning may be able to survive a zombie apocalypse, but as an ecological artifact, it is bound to be vulnerable somehow.

That vulnerability, on my account, is cognitive technology. We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travellers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts. The list goes on.

The semantic apocalypse isn’t simply possible: it’s happening.

No results found for “cognitive psychology of philosophy”.

by rsbakker

That is, until today.

The one thing I try to continuously remind people is that philosophy is itself a data point, a telling demonstration of what has to be one of the most remarkable facts of our species. We don’t know ourselves for shit. We have been stumped since the beginning. We’ve unlocked the mechanism for aging for Christ’s sake: there’s a chance we might become immortal without having the faintest clue as to what ‘we’ amounts to.

There has to be some natural explanation for that, some story explaining why it belongs to our nature to be theoretically mystified by our nature, to find ourselves unable to even agree on formulations of the explananda. So what is it? Why all the apparent paradoxes?

Why, for instance, the fascination with koans?

Take the famous, “What is the sound of one hand clapping?” Apparently, the point of pondering this lies in realizing the koan is at once the questioning and the questioned, and coming to see oneself as the sound. For many, the pedagogical function of koans lies in revealing one’s Buddha nature, breaking down the folk reasoning habits barring the apprehension of the identity of subject and object.

Strangely enough, the statement I gave you in the previous post could be called a koan, of sorts:

It is true there is no such thing as truth.

But the idea wasn’t so much to break folk reasoning habits as to alert readers to an imperceptible complication belonging to discursive cognition: a complication that breaks the reliability of our folk-reasoning habits. The way deliberative cognition unconsciously toggles between applications and ontologizations of truth talk can generate compelling cognitive illusions—illusions so compelling, in fact, as to hold the whole of humanity in their grip for millennia.

Wittgenstein and the pragmatists glimpsed the fractionate specialization of cognition, how it operated relative to various practical contexts. They understood the problem in terms of concrete application, which for them was pragmatic application, a domain generally navigated via normative cognition. Impressed by the inability of mechanical cognition to double as normative cognition, they decided that only normative cognition could explain cognition, and so tripped into a different version of the ancient trap: that of using intentional cognition to theoretically solve intentional cognition.

Understanding cognition in terms of heuristic neglect lets us frame the problem subpersonally, to look at what’s going on in statements like the above in terms of possible neurobiological systems recruited. The fact that human cognition is heuristic, fractionate, and combinatory means that we should expect koans, puzzles, paradoxes, apories, and the like. We should expect that different systems possessing overlapping domains will come into conflict. We should expect them in the same way and for the same reason we should expect to encounter visual, auditory, and other kinds of systematic illusions: the brain picks out only the correlations it needs to predict its environments, cues predicting the systems requiring solution in just the way they need to be predicted to be solved. Given this, we should begin looking at traditional philosophy as a rich, discursive reservoir of pathologies, breakdowns providing information regarding the systems and misapplications involved. Like all corpses, meaning will provide a feast for worms.

In a sense, then, a koan demonstrates what a great many seem to think it’s meant to demonstrate: a genuine limit to some cognitive modality, a point where our automatic applications fail us, alerting us both to their automaticity and their specialized nature. And this, the idea would be, draws more of the automaticity (and default universal application) of the subject/object (aboutness) heuristic into deliberative purview, leading to… Enlightenment?

Does Heuristic Neglect Theory suggest a path to the Absolute?

I suppose… so long as we keep in mind that ‘Absolute’ means ‘abject stupidity.’ I think we’re better served looking at these kinds of things as boundaries rather than destinations.

The Truth Behind the Myth of Correlationism

by rsbakker

A wrong turn lies hidden in the human cultural code, an error that has scuttled our every attempt to understand consciousness and cognition. So much philosophical activity reeks of dead ends: we try and we try, and yet we find ourselves mired in the same ancient patterns of disputation. The majority of thinkers believe the problem is local, that they need only tinker with the tools they’ve inherited. They soldier on, arguing that this or that innovative modification will overcome our confusion. Some, however, believe the problem lies deeper. I’m one of those thinkers, as is Meillassoux. I think the solution lies in speculation bound to the hip of modern science, in something I call ‘heuristic neglect.’ For me, the wrong turn lies in the application of intentional cognition to solve the theoretical problem of intentional cognition. Meillassoux thinks it lies in what he calls ‘correlationism.’

Since I’ve been accused of ‘correlationism’ on a couple of occasions now, I thought it worthwhile tackling the issue in more detail. This will not be an institutional critique a la Golumbia’s, who manages to identify endless problems with Meillassoux’s presentation, while somehow entirely missing his skeptical point: once cognition becomes artifactual, it becomes very… very difficult to understand. Cognitive science is itself fractured about Meillassoux’s issue.

What follows will be a constructive critique, an attempt to explain the actual problem underwriting what Meillassoux calls ‘correlationism,’ and why his attempt to escape that problem simply collapses into more interminable philosophy. The problem that artifactuality poses to the understanding of cognition is very real, and it also happens to fall into the wheelhouse of Heuristic Neglect Theory (HNT). For those souls growing disenchanted with Speculative Realism, but unwilling to fall back into the traditional bosom, I hope to show that HNT not only offers the radical break with tradition that Meillassoux promises, it remains inextricably bound to the details of this, the most remarkable age.

What is correlationism? The experts explain:

Correlation affirms the indissoluble primacy of the relation between thought and its correlate over the metaphysical hypostatization or representational reification of either term of the relation. Correlationism is subtle: it never denies that our thoughts or utterances aim at or intend mind-independent or language-independent realities; it merely stipulates that this apparently independent dimension remains internally related to thought and language. Thus contemporary correlationism dismisses the problematic of scepticism, and of epistemology more generally, as an antiquated Cartesian hang-up: there is supposedly no problem about how we are able to adequately represent reality; since we are ‘always already’ outside ourselves and immersed in or engaging with the world (and indeed, this particular platitude is constantly touted as the great Heideggerean-Wittgensteinian insight). Note that correlationism need not privilege “thinking” or “consciousness” as the key relation—it can just as easily replace it with “being-in-the-world,” “perception,” “sensibility,” “intuition,” “affect,” or even “flesh.” Ray Brassier, Nihil Unbound, 51

By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined. Consequently, it becomes possible to say that every philosophy which disavows naive realism has become a variant of correlationism. Quentin Meillassoux, After Finitude, 5

Correlationism rests on an argument as simple as it is powerful, and which can be formulated in the following way: No X without givenness of X, and no theory about X without a positing of X. If you speak about something, you speak about something that is given to you, and posited by you. Consequently, the sentence: ‘X is’, means: ‘X is the correlate of thinking’ in a Cartesian sense. That is: X is the correlate of an affection, or a perception, or a conception, or of any subjective act. To be is to be a correlate, a term of a correlation . . . That is why it is impossible to conceive an absolute X, i.e., an X which would be essentially separate from a subject. We can’t know what the reality of the object in itself is because we can’t distinguish between properties which are supposed to belong to the object and properties belonging to the subjective access to the object. Quentin Meillassoux, “Time without Becoming”

The claim of correlationism is the corollary of the slogan that ‘nothing is given’ to understanding: everything is mediated. Once knowing becomes an activity, then the objects insofar as they are known become artifacts in some manner: reception cannot be definitively sorted from projection and as a result no knowledge can be said to be absolute. We find ourselves trapped in the ‘correlationist circle,’ trapped in artifactual galleries, never able to explain the human-independent reality we damn well know exists. Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions. Any theory unable to decisively explain objectivity is a theory that cannot explain cognition. Ergo, correlationism names a failed (cognitivist) philosophical endeavour.

It’s a testament to the power of labels in philosophy, I think, that this sketch could pass for a discovery, because as Meillassoux himself acknowledges, there’s nothing really novel about it. Explaining the ‘cognitive difference’ was my dissertation project back in the ’90s, after all, and as smitten as I was with my bullshit solution back then, I didn’t think the problem itself was anything but ancient. Given that this whole website is dedicated to exploring and explaining consciousness and cognition, you could say it remains my project to this very day! One of the things I find so frustrating about the ‘critique of correlationism’ is that the real problem—the ongoing crisis—is the problem of meaning. If correlationism fails because it cannot explain cognition, then the problem of correlationism is an expression of a larger problem, the problem of cognition—or, in other words, the problem of intentionality.

Why is the problem of meaning an ongoing crisis? In the past six fiscal years, from 2012 to 2017, the National Institutes of Health will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. [1] And this is just one public institution in one nation funding health-related research. If you include the cognitive sciences more generally—research into everything from consumer behaviour to AI—you could say that solving the human soul commands more resources than any other domain in history. The reason all this money is being poured into the sciences rather than philosophy departments is that the former possesses real-world consequences: diseases cured, soap sold, politicians elected. As someone who tries to keep up with developments in Continental philosophy, I already find the disconnect stupendous: whole populations of thinkers continue discoursing as if nothing significant has changed, bitching about traditional cutlery in the shadow of the cognitive scientific tsunami.

Part of the popularity of the critique of correlationism derives from anxieties regarding the growing overlap of the sciences of the human and the humanities. All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science. What some neuroscientists term ‘internal models,’ which monopolize our access to ourselves and the world, is nothing if not a theoretical correlation of environments and cognition, trapping us in models of models. The very science that Meillassoux thinks argues against correlationism in one context explicitly turns on it in another. The mediation of knowledge is the domain of cognitive science—full stop. A naturalistic understanding of cognition is a biological understanding is an artifactual understanding: this is why the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit.

A kind of arche-fossil.

If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast stores of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy: the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.

Only more philosophy, Meillassoux thinks, can overcome the ‘scandal of philosophy.’ But how is mere opinion supposed to provide bona fide knowledge of knowledge? Speculation on mathematics does nothing to ameliorate this absurdity: even though paradigmatic of objectivity, mathematics remains as inscrutable as knowledge itself. Perhaps there is some sense to be found in the notion of interrogating/theorizing objects in a bid to understand objectivity (cognition), but given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.

I just don’t know how to make the ‘critique of correlationism’ workable, short of ignoring the very science it takes as its motivation, or, just as bad, subordinating empirical discoveries to some school of ‘fundamental ontological’ speculation. If you’re willing to take such a leap of theoretical faith, you can be assured that no one in the vicinity of cognitive science will take it with you—and that you will make no difference in the mad revolution presently crashing upon us.

We know that knowledge is somehow an artifact of neural function—full stop. Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?

After all, we’ve learned a tremendous amount about this how in the past decades: the idea of dismissing all this detail on the basis of a priori guesswork seems more than a little suspect. The track record would suggest extreme caution. As the boggling scale of the cognitive scientific project should make clear, everything turns on the biological details of cognition. We now know, for instance, that the brain employs legions of special purpose devices to navigate its environments. We know that cognition is thoroughly heuristic, that it turns on cues, bits of available information statistically correlated to systems requiring solution.

Almost all systems in our environment shed information enabling the prediction of their subsequent behaviours absent the mechanical particulars generating that information. The human brain is exquisitely tuned to identify and exploit the correlation between available information and subsequent behaviours. The artifactuality of biology is an evolutionary one, and as such geared to the thrifty solution of high-impact problems. To say that cognition (animal or human) is heuristic is to say it’s organized according to the kinds of problems our ancestors needed to solve, and not according to those belonging to academics. Human cognition consists of artifactualities, subsystems dedicated to certain kinds of problem ecologies. Moreover, it consists of artifactualities selected to answer questions quite different from those posed by philosophers.
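The cue-exploiting logic at issue can be caricatured in a few lines of code. What follows is a toy of my own devising, not anything drawn from cognitive science (the ‘looming’ cue and the 0.5 threshold are invented for illustration): a detector keyed to a statistical correlate of approach works perfectly within its problem ecology and fires identically, blindly, when the cue is triggered ‘out of school.’

```python
# Toy caricature of heuristic cognition: a detector keyed to a cue
# (rapid expansion of an object's image) statistically correlated with
# approach, while remaining entirely blind to the cue's mechanical source.

def looming_detector(expansion_rate: float) -> bool:
    """Flag 'collision imminent' on the cue alone, neglecting its source."""
    return expansion_rate > 0.5  # threshold 'tuned' by ancestral ecology

# The detector only ever sees the cue, never what produced it:
real_approach = {"source": "thrown baseball", "expansion_rate": 0.9}
screen_zoom = {"source": "video zoom effect", "expansion_rate": 0.9}

for event in (real_approach, screen_zoom):
    # Identical cue, identical verdict: the 'source' field plays no role,
    # so the out-of-ecology case is indistinguishable from the real one.
    print(event["source"], "->", looming_detector(event["expansion_rate"]))
```

The crash space is the second case: the same response with no object approaching at all, and no internal means of telling the difference.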

These two facts drastically alter the landscape of the apparent problem posed by ‘correlationism.’ We have ample theoretical and empirical reasons to believe that mechanistic cognition and intentional cognition comprise two quite different cognitive regimes, the one dedicated to explanation via high-dimensional (physical) sourcing, the other dedicated to explanation absent that sourcing. As an intentional phenomenon, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information? Since intentional cognition turns on specific cues to leverage solutions, and since those cues appear sufficient (to be the only game in town where that behaviour is concerned), the high-dimensional sourcing of that same behaviour generates a philosophical crash space—and a storied one at that! What seems sourceless and self-evident becomes patently impossible.

Short of magic, cognitive systems possess the environmental relationships they do thanks to super-complicated histories of natural and neural selection—evolution and learning. Let’s call this their orientation, understood as the nonintentional (‘zombie’) correlate of ‘perspective.’ The human brain is possibly the most complex thing we know of in the universe (a fact which should render suspect any theory of the human that neglects that complexity). Our cognitive systems, in other words, possess physically intractable orientations. How intractable? Enough that billions of dollars in research has merely scratched the surface.

Any capacity to cognize this relationship will perforce be radically heuristic, which is to say, provide a means to solve some critical range of problems—a problem ecology—absent natural historical information. The orientation heuristically cognized, of course, is the full-dimensional relationship we actually possess, only hacked in ways that generate solutions (repetitions of behaviour) while neglecting the physical details of that relationship.

Most significantly, orientation neglects the dimension of mediation: thought and perception (whatever they amount to) are thoroughly blind to their immediate sources. This cognitive blindness to the activity of cognition, or medial neglect, amounts to a gross insensitivity to our physical continuity with our environments, the fact that we break no thermodynamic laws. Our orientation, in other words, is characterized by a profound, structural insensitivity to its own constitution—its biological artifactuality, among other things. This auto-insensitivity, not surprisingly, includes insensitivity to the fact of this insensitivity, and thus the default presumption of sufficiency. Specialized sensitivities are required to flag insufficiencies, after all, and like all biological devices, they do not come for free. Not only are we blind to our position within the superordinate systems comprising nature, we’re blind to our blindness, and so, unable to distinguish table-scraps from a banquet, we are duped into affirming inexplicable spontaneities.

‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect. You could say it’s a way to advertise clockwork positioning (functional sufficiency) absent any inkling of the clock. ‘Objectivity,’ the term denoting the supposed general property of being true apart from individual perspectives, is a deliberative contrivance derived from practical applications of ‘truth’—the product of ‘philosophical reflection.’ The problem with objectivity as a phenomenon (as opposed to ‘objectivity’ as a component of some larger cognitive articulation) is that the sufficiency of iterable orientations within superordinate systems is always a contingent affair. Whether ‘truth’ occasions sufficiency is always an open question, since the system provides, at best, a rough and ready way to communicate and/or troubleshoot orientation. Unpredictable events regularly make liars of us all. The notion of facts ‘being true’ absent the mediation of human cognition, ‘objectivity,’ also provides a rough and ready way to communicate and/or troubleshoot orientation in certain circumstances. We regularly predict felicitous orientations without the least sensitivity to their artifactual nature, absent any inkling how their pins lie in intractable high-dimensional coincidences between buzzing brains. This insensitivity generates the illusion of absolute orientation, a position outside natural regularities—a ‘view from nowhere.’ We are a worm in the gut of nature convinced we possess disembodied eyes. And so long as the consequences of our orientations remain felicitous, our conceit need not be tested. Our orientations might as well ‘stand nowhere’ absent cognition of their limits.

Thus can ‘truth’ and ‘objectivity’ be naturalized and their peculiarities explained.

The primary cognitive moral here is that lacking information has positive cognitive consequences, especially when it comes to deliberative metacognition, our attempts to understand our nature via philosophical reflection alone. Correlationism evidences this in a number of ways.

As soon as the problem of cognition is characterized as the problem of thought and being, it becomes insoluble. Intentional cognition is heuristic: it neglects the nature of the systems involved, exploiting cues correlated to the systems requiring solution instead. The application of intentional cognition to theoretical explanation, therefore, amounts to the attempt to solve natures using a system adapted to neglect natures. A great deal of traditional philosophy is dedicated to the theoretical understanding of cognition via intentional idioms—via applications of intentional cognition. Thus the morass of disputation. We presume that specialized problem-solving systems possess general application. Lacking the capacity to cognize our inability to cognize the theoretical nature of cognition, we presume sufficiency. Orientation, the relation between neural systems and their proximal and distal environments—between two systems of objects—becomes perspective, the relation between subjects (or systems of subjects) and systems of objects (environments). If one mistakes the manifest artifactual nature of orientation for the artifactual nature of perspective (subjectivity), then objectivity itself becomes a subjective artifact, and therefore nothing objective at all. Since orientation characterizes our every attempt to solve for cognition, conflating it with perspective renders perspective inescapable, and objectivity all but inexplicable. Thus the crash space of traditional epistemology.

Now I know from hard experience that the typical response to the picture sketched above is to simply insist on the conflation of orientation and perspective, to assert that my position, despite its explanatory power, simply amounts to more of the same, another perspectival Klein Bottle distinctive only for its egregious ‘scientism.’ Only my intrinsically intentional perspective, I am told, allows me to claim that such perspectives are metacognitive artifacts, a consequence of medial neglect. But asserting perspective before orientation on the basis of metacognitive intuitions alone not only begs the question, it also beggars explanation, delivering the project of cognizing cognition to never-ending disputation—an inability to even formulate explananda, let alone explain anything. This is why I like asking intentionalists how many centuries of theoretical standstill we should expect before that oft-advertised and never-delivered breakthrough finally arrives. The sin Meillassoux attributes to correlationism, the inability to explain cognition, is really just the sin belonging to intentional philosophy as a whole. Thanks to medial neglect, metacognition, blind to both its sources and its source blindness, insists we stand outside nature. Tackling this intuition with intentional idioms leaves our every attempt to rationalize our connection underdetermined, a matter of interminable controversy. The Scandal dwells on, eternal.

I think orientation precedes perspective—and obviously so, having watched loved ones dismantled by brain disease. I think understanding the role of neglect in orientation explains the peculiarities of perspective, provides a parsimonious way to understand the apparent first-person in terms of the neglect structure belonging to the third. There’s no problem with escaping the dream tank and touching the world simply because there’s no ontological distinction between ourselves and the cosmos. We constitute a small region of a far greater territory, the proximal attuned to the distal. Understanding the heuristic nature of ‘truth’ and ‘objectivity,’ I restrict their application to adaptive problem-ecologies, and simply ask those who would turn them into something ontologically exceptional why they would trust low-dimensional intuitions over empirical data, especially when those intuitions pretty much guarantee perpetual theoretical underdetermination. Far better trust to our childhood presumptions of truth and reality, in the practical applications of these idioms, than in any one of the numberless theoretical misapplications ‘discovering’ this trust fundamentally (as opposed to situationally) ‘naïve.’

The cognitive difference, what separates the consequences of our claims, has never been about ‘subjectivity’ versus ‘objectivity,’ but rather intersystematicity, the integration of ever-more sensitive orientations possessing ever more effectiveness into the superordinate systems encompassing us all. Physically speaking, we’ve long known that this has to be the case. Short of actual difference-making differences, be they photons striking our retinas or compression waves striking our eardrums, no difference is made. Even Meillassoux acknowledges the necessity of physical contact. What we’ve lacked is a way of seeing how our apparently immediate intentional intuitions, be they phenomenological, ontological, or normative, fit into this high-dimensional—physical—picture.

Heuristic Neglect Theory not only provides this way, it also explains why it has proven so elusive over the centuries. HNT explains the wrong turn mentioned above. The question of orientation immediately cues the systems our ancestors developed to circumvent medial neglect. Solving for our behaviourally salient environmental relationships, in other words, automatically formats the problem in intentional terms. The automaticity of the application of intentional cognition renders it apparently ‘self-evident.’

The reason the critique of correlationism and speculative realism suffer all the problems of underdetermination their proponents attribute to correlationism is that they take this very same wrong turn. How is Meillassoux’s ‘hyper-chaos,’ yet another adventure in a priori speculation, anything more than another pebble tossed upon the heap of traditional disputation? Novelty alone recommends such positions. Otherwise they leave us every bit as mystified, every bit as unable to accommodate the torrent of relevant scientific findings, and therefore every bit as irrelevant to the breathtaking revolutions even now sweeping us and our traditions out to sea. Like the traditions they claim to supersede, they peddle cognitive abjection, discursive immobility, in the guise of fundamental insight.

Theoretical speculation is cheap, which is why it’s so frightfully easy to make any philosophical account look bad. All you need do is start worrying definitions, then let the conceptual games begin. This is why the warrant of any account is always a global affair, why the power of Evolutionary Theory, for example, doesn’t so much lie in the immunity of its formulations to philosophical critique, but in how much it explains on nature’s dime alone. The warrant of Heuristic Neglect Theory likewise turns on the combination of parsimony and explanatory power.

Anyone arguing that HNT necessarily presupposes some X, be it ontological or normative, is simply begging the question. Doesn’t HNT presuppose the reality of intentional objectivity? Not at all. HNT certainly presupposes applications of intentional cognition, which, given medial neglect, philosophers posit as functional or ontological realities. On HNT, a theory can be true even though, high-dimensionally speaking, there is no such thing as truth. Truth talk possesses efficacy in certain practical problem-ecologies, but because it participates in solving something otherwise neglected, namely the superordinate systematicity of orientations, it remains beyond the pale of intentional resolution.

Even though sophisticated critics of eliminativism acknowledge the incoherence of the tu quoque, I realize this remains a hard twist for many (if not most) to absorb, let alone accept. But this is exactly as it should be, both insofar as something has to explain why isolating the wrong turn has proven so stupendously difficult, and because this is precisely the kind of trap we should expect, given the heuristic and fractionate nature of human cognition. ‘Knowledge’ provides a handle on the intersection of vast, high-dimensional histories, a way to manage orientations without understanding the least thing about them. To know knowledge, we will come to realize, is to know there is no such thing, simply because ‘knowing’ is a resolutely practical affair, almost certainly inscrutable to intentional cognition. When you’re in the intentional mode, this statement simply sounds preposterous—I know it once struck me as such! It’s only when you appreciate how far your intuitions have strayed from those of your childhood, back when your only applications of intentional cognition were practical, that you can see the possibility of a more continuous, intersystematic way to orient ourselves to the cosmos. There was a time before you wandered into the ancient funhouse of heuristic misapplication, when you could not distinguish between your perspective and your orientation. HNT provides a theoretical way to recover that time and take a radically different path.

As a bona fide theory of cognition, HNT provides a way to understand our spectacular inability to understand ourselves. HNT can explain ‘aporia.’ The metacognitive resources recruited for the purposes of philosophical reflection possess alarm bells—sensitivities to their own limits—relevant only to their ancestral applications. The kinds of cognitive aporias (crash spaces) characterizing traditional philosophy are precisely those we might expect, given the sudden ability to exercise specialized metacognitive resources out of school, to apply, among other things, the problem-solving power of intentional cognition to the question of intentional cognition.

As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now.

As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional and contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself.

Cognition is biomechanical. The ‘correlation of thought and being,’ on my account, is the correlation of being and being. The ontology of HNT is resolutely flat. Once we understand that we only glimpse as much of our orientations as our ancestors required for reproduction, and nothing more, we can see that ‘thought,’ whatever it amounts to, is material through and through.

The evidence of this lies strewn throughout the cognitive wreckage of speculation, the alien crash site of philosophy.



[1] This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegenerative (10.183 billion). 21/01/2017


Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

Scripture become Philosophy become Fantasy

by rsbakker


Cosmos and History has published “From Scripture to Fantasy: Adrian Johnston and the Problem of Continental Fundamentalism” in their most recent edition, which can be found here. This is a virus that needs to infect as many continental philosophy graduate students as possible, lest the whole tradition be lost to irrelevance. The last millennium’s radicals have become this millennium’s Pharisees with frightening speed, and now only the breathless have any hope of keeping pace.

ABSTRACT: Only the rise of science allowed us to identify scriptural ontologies as fantastic conceits, as anthropomorphizations of an indifferent universe. Now that science is beginning to genuinely disenchant the human soul, history suggests that traditional humanistic discourses are about to be rendered fantastic as well. Via a critical reading of Adrian Johnston’s ‘transcendental materialism,’ I attempt to show both the shape and the dimensions of the sociocognitive dilemma presently facing Continental philosophers as they appear to their outgroup detractors. Trusting speculative a priori claims regarding the nature of processes and entities under scientific investigation already excludes Continental philosophers from serious discussion. Using such claims, as Johnston does, to assert the fundamentally intentional nature of the universe amounts to anthropomorphism. Continental philosophy needs to honestly appraise the nature of its relation to the scientific civilization it purports to decode and guide, lest it become mere fantasy, or worse yet, conceptual religion.

KEYWORDS: Intentionalism; Eliminativism; Humanities; Heuristics; Speculative Materialism

All transcendental indignation welcome! I was a believer once.

It Is What It Is (Until Notified Otherwise)

by rsbakker



The thing to always remember when one finds oneself in the middle of some historically intractable philosophical debate is that path-dependency is somehow to blame. This is simply to say that the problem is historical, in that squabbles regarding theoretical natures always arise from some background of relatively problem-free practical application. At some point, some turn is taken and things that seem trivially obvious suddenly seem stupendously mysterious. St. Augustine, in addition to giving us one of the most famous quotes in philosophy, gives us a wonderful example of this in The Confessions when he writes:

“What, then, is time? If no one asks of me, I know; if I wish to explain to him who asks, I know not.” XI, XIV, 17

But the rather sobering fact is that this is the case with a great number of the second-order questions we can pose. What is mathematics? What’s a rule? What’s meaning? What’s cause? And of course, what is phenomenal consciousness?

So what is it with second-order interrogations? Why is ‘time talk’ so easily and effortlessly used even though we find ourselves gobsmacked each and every time someone asks what time qua time is? It seems pretty clear that either we lack the information required or the capacity required or some nefarious combination of both. If framing the problem like this sounds like a no-brainer, that’s because it is a no-brainer. The remarkable thing lies in the way it recasts the issue at stake, because as it turns out, the question of the information and capacity we have available is a biological one, and this provides a cognitive ecological means of tackling the problem. Since practical solving for time (‘timing’) is obviously central to survival, it makes sense that we would possess the information access and cognitive capacity required to solve a wide variety of timing issues. Given that theoretical solving for time qua time isn’t central to survival (no species does it and only our species attempts it), it makes sense that we wouldn’t possess the information access and cognitive capacity required, that we would suffer time-qua-time blindness.

From a cognitive ecological perspective, in other words, St. Augustine’s perplexity should come as no surprise at all. Of course solving time-qua-time is mystifying: we evolved the access and capacity required for solving the practical problems of timing, and not the theoretical problem of time. Now I admit if the cognitive ecological approach ground to a halt here it wouldn’t be terribly illuminating, but there’s quite a bit more to be said: it turns out cognitive ecology is highly suggestive of the different ways we might expect our attempts to solve things like time-qua-time to break down.

What would it be like to reach the problem-solving limits of some practically oriented problem-solving mode? Well, we should expect our assumptions/intuitions to stop delivering answers. My daughter is presently going through a ‘cootie-catcher’ phase and is continually instructing me to ask questions, then upbraiding me when my queries don’t fit the matrix of possible ‘answers’ provided by the cootie-catcher (yes, no, and versions of maybe). Sometimes she catches these ill-posed questions immediately, and sometimes she doesn’t catch them until the cootie-catcher generates a nonsensical response.


Now imagine your child never revealed their cootie-catcher to you: you asked questions, then picked colours or numbers or animals, and it turned out some were intelligibly answered, and some were not. Very quickly you would suss out the kinds of questions that could be asked, and the kinds that could not. Now imagine, unbeknownst to you, that your child replaced their cootie-catcher with a computer running two separately tasked, distributed AlphaGo-type programs, the first trained to provide well-formed (if not necessarily true) answers to basic questions regarding causality and nothing else, the second trained to provide well-formed (if not necessarily true) answers to basic questions regarding goals and intent. What kind of conclusions would you draw, or more importantly, assume? Over time you would come to suss out the questions generating ill-formed answers versus questions generating well-formed ones. But you would have no way of knowing that two functionally distinct systems were responsible for the well-formed answers: causal and purposive modes would seem the product of one cognitive system. In the absence of distinctions, you would presume unity.

Think of the difference between Plato likening memory to an aviary in the Theaetetus and the fractionate, generative memory we now know to be the case. The fact that Plato assumed as much, unity and retrieval, shouts something incredibly important once placed in a cognitive ecological context. What it suggests is that purely deliberative attempts to solve second-order problems, to ask questions like what is memory-qua-memory, will almost certainly run afoul of the problem of default identity, the identification that comes about for want of distinctions. To return to our cootie-catcher example, it’s not simply that we would report unity regarding our child’s two AlphaGo-type programs the way Plato did with memory, it’s that information involving their dual structure would play no role in our cognitive economy whatsoever. Unity, you could say, is the assumption built into the system. (And this applies as much to AI as it does to human beings. The first ‘driverless fatality’ died because his Tesla Model S failed to distinguish a truck trailer from the sky.)

Default identity, I think, can play havoc with even the most careful philosophical interrogations—such as the one Eric Schwitzgebel gives in the course of rebutting Keith Frankish, both on his blog and in his response in The Journal of Consciousness Studies, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage.”

According to Eric, “Illusionism as a Theory of Consciousness” presents the phenomenal realist with a dilemma: either they commit to puzzling ontological features such as simplicity, ineffability, intrinsicality, and so on, or they commit to explaining those features away, which is to say, to some variety of Illusionism. Since Eric both believes that phenomenal consciousness is real and that the extraordinary properties attributed to it are likely not real, he proposes a third way: a formulation of phenomenal experience that neither inflates it into something untenable nor deflates it into something that is plainly not phenomenal experience. “The best way to meet Frankish’s challenge,” he writes, “is to provide something that the field of consciousness studies in any case needs: a clear definition of phenomenal consciousness, a definition that targets a phenomenon that is both substantively interesting in the way that phenomenal consciousness is widely thought to be interesting but also innocent of problematic metaphysical and epistemological assumptions” (2).

It’s worth noting the upshot of what Eric is saying here: the scientific study of phenomenal consciousness cannot, as yet, even formulate their primary explanandum. The trick, as he sees it, is to find some conceptual way to avoid the baggage, while holding onto some semblance of a wardrobe. And his solution, you might say, is to wear as many outfits as he possibly can. He proposes that definition by example is uniquely suited to anchor an ontologically and epistemologically innocent concept of phenomenal consciousness.

He has but one caveat: any adequate formulation of phenomenal consciousness has to account or allow for what Eric terms its ‘wonderfulness’:

If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition. (3)

He concedes that the traditional properties ascribed to phenomenal experience outrun naturalistic credulity, but the fact that they beggar belief itself remains to be explained. This is the part of Eric’s position to keep an eye on, because it means his key defense against eliminativism is abductive. Whatever phenomenal consciousness is, it seems safe to say it is not something easily solved. Any account purporting to solve phenomenal consciousness that leaves the wonderfulness condition unsatisfied is likely missing phenomenal consciousness altogether.

And so Eric provides a list of positive examples including sensory and somatic experiences, conscious imagery, emotional experience, thinking and desiring, dreams, and even other people, insofar as we continually attribute these very same kinds of experiences to them. By way of negative examples, he mentions a variety of intimate, yet obviously not phenomenally conscious processes, such as fingernail growth, intestinal lipid absorption, and so on.

He writes:

Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness. (8)

Intuition, the apparent obviousness of his examples, is what he stresses here. The beauty of definition by example is that offering instances of the phenomenon at issue allows you to remain agnostic regarding the properties possessed by that phenomenon. It actually seems to deliver the very metaphysical and epistemological innocence Eric needs to stave off the charge of inflation. It really does allow him to ditch the baggage and travel wearing all his clothes, or so it seems.

Meanwhile the wonderfulness condition, though determining the phenomenon, does so indirectly, via the obvious impact it has on human attempts to cognize experience-qua-experience. Whatever phenomenal consciousness is, contemplating it provokes wonder.

And so the argument is laid out, as spare and elegant as all of Eric’s arguments. It’s pretty clear these are examples of whatever it is we call phenomenal consciousness. Of course, there’s something about them that we find downright stupefying. Surely, he asks, we can be phenomenal realists in this austere respect?

For all its intuitive appeal, the problem with this approach is that it almost certainly presumes a simplicity that human cognition does not possess. Conceptually, we can bring this out with a single question: Is phenomenal consciousness the most folk psychologically obvious thing or feature the examples share, or is it obvious in some other respect? Eric’s claim amounts to saying the recognition of phenomenal consciousness as such belongs to everyday cognition. But is this the case? Typically, recognition of experience-qua-experience is thought to be an intellectual achievement of some kind, a first step toward the ‘philosophical’ or ‘reflective’ or ‘contemplative’ attitude. Shouldn’t we say, rather, that phenomenal consciousness is the most obvious thing or feature these examples share upon reflection, which is to say, philosophically?

This alternative need only be raised to drag Eric’s formulation back into the mire of conceptual definition, I think. But on a cognitive ecological picture, we can actually reframe this conceptual problematization in path-dependent terms, and so more forcefully insist on a distinction of modes and therefore a distinction in problem-solving ecologies. Recall Augustine, how we understand time without difficulty until we ask the question of time qua time. Our cognitive systems have no serious difficulty with timing, but then abruptly break down when we ask the question of time as such. Even though we had the information and capacity required to solve any number of practical issues involving time, as soon as we pose the question of time-qua-time that fluency evaporates and we find ourselves out-and-out mystified.

Eric’s definition by example, as an explicitly conceptual exercise, clearly involves something more than everyday applications of experience talk. The answer intuitively feels as natural as can be—there must be some property X these instances share or exclude, certainly!—but the question strikes most everyone as exceptional, at least until they grow accustomed to it. Raising the question, as Augustine shows us, is precisely where the problem begins, and as my daughter would be quick to remind Eric, cootie-catchers only work if we ask the right question. Human cognition is fractionate and heuristic, after all.


All organisms are immersed in potential information, difference-making differences that could spell the difference between life and death. Given the difficulties involved in the isolation of causes, they often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information organisms have: evolved and learned sensitivities to effects systematically correlated to those environmental systems relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers adapted to deep information environments, sensory cherry-pickers bent on deriving as much behaviour from as little information as possible.

We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems and the resources required to solve them are wildly disparate, not all access is equal.

Information access, I think, divides cognition into two distinct forms, two different families of ‘AlphaGo-type’ programs. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of those constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the necessity, the efficacy, and the limitations of source insensitive (cue correlative) cognition. Andrei Cimpian’s lab and the work of Klaus Fiedler (as well as that of the Adaptive Behaviour and Cognition Research Group more generally) are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)

So what are we to make of Eric’s attempt to innocently (folk psychologically) pose the question of experience-qua-experience in light of this rudimentary distinction?

If one takes the brain’s ability to cognize its own cognitive functions as a condition of ‘experience talk,’ it becomes very clear very quickly that experience talk belongs to a source insensitive cognitive regime, a system adapted to exploit correlations between the information consumed (cues) and the vastly complicated systems (oneself and others) requiring solution. This suggests that Eric’s definition by example is anything but theoretically innocent, assuming, as it does, that our source insensitive, experience-talk systems pick out something in the domain of source sensitive cognition… something ‘real.’ Defining by example cues our experience-talk system, which produces indubitable instances of recognition. Phenomenal consciousness becomes, apparently, an indubitable something. Given our inability to distinguish between our own cognitive systems (given ‘cognition-qua-cognition blindness’), default identity prevails; suddenly it seems obvious that phenomenal experience somehow, minimally, belongs to the order of the real. And once again, we find ourselves attempting to square ‘posits’ belonging to sourceless modes of cognition with a world where everything has a source.

We can now see how the wonderfulness condition, which Eric sees working in concert with his definition by example, actually cuts against it. Experience-qua-experience provokes wonder precisely because it delivers us to crash space, the point where heuristic misapplication leads our intuitions astray. Simply by asking this question, we have taken a component from a source insensitive cognitive system relying (qua heuristic) on strategic correlations to the systems requiring solution, and asked a completely different, source sensitive system to make sense of it. Philosophical reflection is a ‘cultural achievement’ precisely because it involves using our brains in new ways, applying ancient tools to novel questions. Doing so, however, inevitably leaves us stumbling around in a darkness we cannot see, running afoul of confounds we have no way of intuiting, simply because they impacted our ancestors not at all. Small wonder ‘phenomenal consciousness’ provokes wonder. How could the most obvious thing possess so few degrees of cognitive freedom? How could light itself deliver us to darkness?

I appreciate the counterintuitive nature of the view I’m presenting here, the way it requires seeing conceptual moves in terms of physical path-dependencies, as belonging to a heuristic gearbox where our numbness to the grinding perpetually convinces us that this time, at long last, we have slipped from neutral into drive. But recall the case of memory, the way blindness to its neurocognitive intricacies led Plato to assume it simple. Only now can we run our (exceedingly dim) metacognitive impressions of memory through the gamut of what we know, see it as a garden of forking paths. The suggestion here is that posing the question of experience-qua-experience poses a crucial fork in the consciousness studies road, the point where a component of source-insensitive cognition, ‘experience,’ finds itself dragged into the court of source sensitivity, and productive inquiry grinds to a general halt.

When I employ experience talk in a practical, first-order way, I have a great deal of confidence in that talk. But when I employ experience talk in a theoretical, second-order way, I have next to no confidence in it. Why would I? Why would anyone, given the near-certainty of chronic underdetermination? Even more, I can see no way (short of magic) for our brain to have anything other than radically opportunistic and heuristic contact with its own functions. Either specialized, simple heuristics comprise deliberative metacognition or deliberative metacognition does not exist. In other words, I see no way of avoiding experience-qua-experience blindness.

This flat out means that on a high dimensional view (one open to as much relevant physical information as possible), there is just no such thing as ‘phenomenal consciousness.’ I am forced to rely on experience related talk in theoretical contexts all the time, as do scientists in countless lines of research. There is no doubt whatsoever that experience-talk draws water from far more than just ‘folk psychological’ wells. But this just means that various forms of heuristic cognition can be adapted to various experimentally regimented cognitive ecologies—experience-talk can be operationalized. It would be strange if this weren’t the case, and it does nothing to alleviate the fact that solving for experience-qua-experience delivers us, time and again, to crash space.

One does not have to believe in the reality of phenomenal consciousness to believe in the reality of the systems employing experience-talk. As we are beginning to discover, the puzzle has never been one of figuring out what phenomenal experiences could possibly be, but rather one of figuring out the biological systems that employ such talk. The greater our understanding of this, the greater our understanding of the confounds characterizing that perennial crash space we call philosophy.