The Philosopher, the Drunk, and the Lamppost

A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417

As well as the degree to which we should accept the deliverances of philosophical reflection.

Philosophical reflection is a cultural achievement, an exaptation of pre-existing cognitive capacities. It is entirely possible that, as such an exaptation, philosophical reflection suffers any number of cognitive short-circuits. And this could very well explain why philosophy suffers the perennial problems it does.

In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.

The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we essentially are can be found nowhere else, because the light and the lamppost exhaust everything that exists within that order. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.

Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within philosophy’s traditional domains, we should expect intentional philosophy to be progressively delegitimized the way all prescientific discourses have been.

To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.

As Keith Frankish and Jonathan Evans write:

The idea that we have ‘two minds,’ only one of which corresponds to personal, volitional cognition, also has wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.  “The Duality of Mind: An Historical Perspective,” In Two Minds: Dual Processes and Beyond, 25

We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has only just hit the streets of the capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ though likely not for very long.

The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).

As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.

Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!

In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.

The fact is the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of) somehow, so whatever functions they posit, say ones realized in the scorekeeping attitudes of communities, have to track brain function somehow. This responsibility to cognitive scientific finding regarding their object is matched by a responsibility to cognitive scientific finding regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.

Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by still other swarms of nonconscious processes. As Dehaene writes:

The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105

Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (see Tim Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.
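To make the shape of this architecture concrete, here is a deliberately crude toy sketch in Python. Everything in it is my own invented scaffolding (the processor names, the 0.6 ‘ignition’ threshold, the random activation strengths), not Dehaene’s actual model; it only illustrates the cycle his description implies: parallel candidates, a single winner promoted through the serial bottleneck, and a global broadcast that seeds the next round of parallel processing.

    import random

    # Toy global-workspace cycle (a cartoon, not Dehaene's model): many
    # processors compute in parallel, at most one result crosses the
    # serial bottleneck, and only that winner is broadcast back to all.

    PROCESSORS = ["vision", "audition", "memory", "valuation"]
    THRESHOLD = 0.6  # hypothetical 'ignition' threshold: access is all or nothing

    def parallel_stage(inputs):
        """Each processor proposes a candidate with some activation strength."""
        return {p: (f"{p}:{inputs[p]}", random.random()) for p in PROCESSORS}

    def serial_stage(candidates):
        """Winner-take-all selection: at most one candidate is promoted."""
        label, strength = max(candidates.values(), key=lambda c: c[1])
        return label if strength >= THRESHOLD else None  # sub-threshold: no report

    def broadcast(symbol, inputs):
        """The 'conscious symbol' is fed back to every processor as new input."""
        return {p: symbol for p in inputs}

    inputs = {p: "stimulus" for p in PROCESSORS}
    for step in range(5):
        winner = serial_stage(parallel_stage(inputs))
        print(f"step {step}: conscious access -> {winner}")
        if winner is not None:
            inputs = broadcast(winner, inputs)  # losers leave no trace here

Note what even this cartoon makes obvious: nothing about the losing candidates survives into the next cycle. Whatever fails to cross the threshold makes no further difference downstream.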

One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., “A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period,” NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]

This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires the philosopher to use the very apparatus they’re attempting to understand. In a sense they’re wagering not only that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw, one that appears as wide as the sky thanks to neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.

But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:

A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC.  Italics added. “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417

As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [See Badre, et al., “Frontal cortex and the discovery of abstract action rules,” Neuron (2010) 66:315–326.], though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience.] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication with other brain regions vary as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011), 1–12.]

Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but the philosophical moral of the research should be clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.

The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise, allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour.) There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously, one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.

Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part), the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability not simply to use a cognitive tool, but to reliably determine what that cognitive tool is.

Even granting the intentionalist their spooky functional order, are we to suppose, all things considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) and the philosophy of mathematics (exaptation gone wrong). The oh-so-familiar criticisms of philosophy—that it is impractical, disconnected from reality, incapable of arbitrating its controversies, in short, that it does not decisively solve—are precisely the kinds of problems we might expect, were philosophical reflection an artifact of an exaptation gone wrong.

On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which, in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?

The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’

It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?

As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report far more pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
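Kahneman’s finding is easy enough to state as arithmetic. A minimal sketch, assuming made-up pain ratings sampled over an episode (the numbers and function names are mine, purely for illustration):

    # Peak-end rule (duration neglect): remembered pain tracks the average
    # of the worst moment and the final moment, not the total experienced.

    def remembered_pain(trace):
        """Average of the peak and the final ('coda') sample."""
        return (max(trace) + trace[-1]) / 2

    def experienced_pain(trace):
        """Total pain actually lived through over the whole episode."""
        return sum(trace)

    short = [2, 5, 8, 7]          # procedure that ends near its worst moment
    long_ = short + [5, 4, 3, 2]  # same procedure, prolonged but tapering off

    assert experienced_pain(long_) > experienced_pain(short)  # more total suffering...
    print(remembered_pain(short))  # (8 + 7) / 2 = 7.5
    print(remembered_pain(long_))  # (8 + 2) / 2 = 5.0 ...yet remembered as milder

Adding a gentler tail strictly increases the pain experienced while decreasing the pain remembered, which is just the band-aid point in miniature.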

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the physician in something of a bind. What should the physician’s goal be: the reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the physician serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381

Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? In the case of duration neglect, information regarding the duration of suffering makes no difference in the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85

Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.

Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader in Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:

You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201

You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side-to-side simply do not exist for the captives.

As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88

Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200) simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger, nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’

Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.

Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front. Hum. Neurosci. (2012) 6:88. doi:10.3389/fnhum.2012.00088, for a GNW-related review.] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?

We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.

Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.

Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.

All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!), but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it provides a very parsimonious means of dispatching a great number of ancient and new conundrums…

At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition while neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.

And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation, you find a drastic plunge in dimensionality and a corresponding spike in the specialization of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances, lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.

This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. Consider:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.

And compare to:

First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

An Empty Post on Empty Ideas

*

Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism

A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I made a distinction between ‘ontological’ and ‘functional’ exemptions from the natural, even though I regard them as differences of degree, because of the way the distinction maps onto stark differences in the kinds of commitments you find among various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?

.

Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.

.

The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? For me, this is one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.

Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena discharge some kind of function somehow seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rule/Norm, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?

Father’s Day, 2025

Sunday, June 15th, 2025, New York Bureau

Wal-Mart stock has enjoyed a surge in both algorithmic and human markets today, given greater than expected profits attributed to the company’s new visual fixation tracking systems. This novel system not only allows the retail giant to track the eye-movements of every customer in the store (a data collection system that promises immense downstream rewards in its own right), it also provides the informational underpinnings for a real-time version of what the industry has come to call the ‘Ping Market.’

You know those little twinkling lights on the shelves that make your child crow with delight? Well, each flicker is what is called a ‘ping,’ a peripheral stimulus designed to alert the attentional systems of the human brain, and so trigger what is called ‘gaze fixation,’ which in turn is a powerful predictor of consumer choice. Historically (if three years count as such!), retailers have auctioned pings in bulk on the basis of statistical data garnered over time. Wal-Mart’s new system, however, allows its suppliers to quite literally bid for customers as they walk down the aisle, using online data tracking customers’ saccades and fixations to calculate the probable effectiveness of individual pings (and, unfortunately, those endlessly annoying ‘shelfies’). Customer tracking is anonymized by law, of course, but the company reports that its most recent ’15 for 15’ promotion has managed to boost its privacy opt-out level to just over 83%. An enviable number to be sure.

“‘Go fish’ has long been a term of frustration in retail,” says the project director, Dr. Howard Singh. “With [their new ping system] the metaphor has become literal, and Wal-Mart, always the ocean of the industry, has transformed itself into an angler’s delight!”

For those who pay the fee, of course.

Enactivist Re-enactment

Adam Robbert has tidied up and reposted a debate he and I have been having on enactivism and Blind Brain Theory over at his Knowledge Ecology site. Dissenters are welcome to weigh in over here, there, or at any one of the several sites that have reblogged our exchange. All the lapses in decorum and diction are entirely my own.

By dint of sheer coincidence, Eric Schwitzgebel has included the BonJour post immediately below in his roundup of the Philosopher’s Carnival. Zombie Mary should be pleased! God and Jesus, not so much.

As for the novel, still no word on pub dates, but I did spend the past two days with my friend Madness going through the manuscript, so if you’re curious as to his impressions and appraisals, by all means visit the SECOND APOCALYPSE FORUM and plague him with questions…

 

Zombie Mary versus God and Jesus: Against Laurence BonJour’s “Against Materialism”

 

I should begin with a tip of my hat to Dirk Fellman, since this post is a direct consequence of the damn interesting links he sends. In this case, it was a link to The Waning of Materialism, a collection of articles inveighing against materialism under a number of different banners. For me, ‘material’ is simply a pole on a continuum, that which provides the most data. It’s whatever scientists seem to be able to endlessly mine for information, and to thus endlessly reconfigure into boggling demonstrations of power. Insofar as this is what scientists indeed do, mine and enable, I’m only interested in materialism in terms falling out of Blind Brain Theory, which is to say, in terms of dimensionality. Science is the premier data-mining institution on the planet. The question of what ‘matter’ might be apart from all the differences it makes does not strike me as a promising one. Nor does the question of whether matter monopolizes existence. BBT lets me sidestep these questions, since it sees the interminable controversies spinning out of the material and the ideal as a paradigmatic example of a heuristic run amok, and so elects to talk of high and low dimensionality instead.

For information to be (nonsemantic) information, some difference must be made: even the dualist is pinned to the information continuum in this sense. Since information generally enables cognition, the high-dimensional view generally trumps the low-dimensional, and it seems fair to say that BBT, in this respect, counts as a kind of materialism, albeit a peculiar one. I’ve already sketched what it makes of the Knowledge Argument in THE Something About Mary. What follows is an attempt to show how it fares against Laurence BonJour’s retooling of Frank Jackson’s famous thought experiment in his “Against Materialism,” the piece that the editors of Waning take as “an overview of the entire volume.”

BonJour is a property dualist. He holds that mental properties form a special class of nonphysical or nonmaterial properties distinct from those studied in the natural sciences more generally. He makes no secret of how weak he thinks materialism is; indeed, his whole paper is permeated with the sense that he can scarce believe he needs to make his argument at all. “I have always found this situation extremely puzzling,” he writes. “As far as I can see, materialism is a view that has no very compelling argument in its favor and that is confronted with very powerful objections to which nothing even approaching an adequate response has been offered” (5). Since the case is all but closed for BonJour, he proposes to simply review the ‘very powerful objections’ (as a matter of historical record, perhaps) to show the gentle reader why they need not worry about materialist bogeymen. The problem, he claims, is that materialism “offers no account at all of consciousness and seems incapable in principle of doing so” (5).

In a sense, I actually agree with BonJour on this point: traditional materialism cannot explain consciousness as it appears to reflection. Every attempt it makes leaves this ‘consciousness-as-metacognized’ untouched, and thus remains vulnerable to those, like BonJour, who find themselves compelled by what they think they so plainly intuit. But as the above should make clear, my own position, Blind Brain Theory, is no ordinary materialism. Where others work their way toward consciousness-as-metacognized only to find themselves stranded on the stoop, BBT actually possesses the resources to kick down the door. The key to untangling all the knots of phenomenality and intentionality, I hope to show, lies in understanding the kinds of illusions metacognitive neglect has foisted on all our historical attempts to understand them thus far, illusions that BonJour has been kind enough to illustrate in rather dramatic fashion.

In the argument I would like to focus on, BonJour proposes an extension of Frank Jackson’s original Knowledge Argument to the issue of the intentionality of consciousness, and to the question of internal content more specifically. As he writes:

The issue I want to raise here is whether a materialist view can account for the sort of conscious intentional content just characterized. Can it account for conscious thoughts being about various things in a way that can be grasped or understood by the person in question? In a way the answer has already been given. Since materialist views really take no account at all of consciousness, they obviously offer no account of this particular aspect of it. But investigating this narrower aspect of the issue can still help to deepen the basic objection to materialism. 17

To illustrate this incapacity, Bonjour bids us imagine a different Mary, one possessing complete physical knowledge of Bonjour as he entertains various thoughts. Given complete physical information, can she know “what I am consciously thinking about at a particular moment?” (17).

It seems clear that knowing all the physical facts regarding Bonjour’s brain is insufficient, given the relationality of Bonjour’s thoughts, the fact they are about things in the world. Bonjour continues:

A functionalist would no doubt say that it is no surprise that Mary could not do this. In order to know the complete causal or functional role of my internal states, Mary also needs to know about their external causal relations to various things. And it might be suggested that, if Mary knows all of the external causal relations in which my various states stand, she will in fact be able to figure out what I am consciously thinking about at a particular time. No doubt the details that pick out any particular object of thought will be very complicated, but there is, it might be claimed, no reason to doubt in principle she could do this. 18

Now Bonjour thinks that this is “another piece of materialist doctrine that again has the status very similar to that of a claim of theology” (18). One might respond that this is essentially the same assumption that informs skepticism regarding paranormal phenomena—that given enough information, some natural explanation can be found for apparently supernatural phenomena—but that would be beside the point since Bonjour thinks the materialist is in serious trouble even if we grant this particular conceit.

For, as already emphasized, it is an undeniable fact about conscious intentional content that I am able for the most part to consciously understand or be aware of what I am thinking about ‘from the inside.’ Clearly I do not in general do this on the basis of external causal knowledge: I do not have such knowledge and would not know what to do about it if I did. All that I normally have any sort of direct access to, if materialism is true, is my own internal physical and physiological states, and thus my conscious understanding of what I am thinking about at a particular moment must be somehow a feature or result of those internal states alone. 18

Bonjour is simply pointing out that even though he himself lacks access to any such information regarding his brain function and its causal environmental history, he nevertheless knows what he’s thinking about. Any metacognitive understanding he has of his thoughts, therefore, is proximally grounded, the product of his internal states. He continues:

Causal relations to external things may help to produce the relevant features of the internal states in question, but there is no apparent way in which such external relations can somehow be partly constitutive of the fact that my conscious thoughts are about various things in a way of which I can be immediately aware. But if these internal states are sufficient to fix the object of my thought in a way that is accessible to my understanding or awareness, then knowing about those internal states should be sufficient for Mary as well, without any knowledge of the external causal relations. And yet, as we have already seen, it is obvious that this is not the case. 18

If he can know what he’s thinking simply given his internal states, then why is it the case that Mary cannot? The argument grants her knowledge of those states: so why is it that she needs to know so much more to be able to determine what he’s thinking?

Thus we have the basis for an argument parallel to Jackson’s original argument against qualia: Mary knows all the relevant physical facts; she is not able on the basis of this knowledge to know what I am consciously thinking about at a particular moment; but what I am thinking about at that moment is as surely a fact about the world as anything else; therefore complete physical knowledge is not complete knowledge, and so materialism is false.  18-19.

This is about as clear an example of the way metacognitive neglect plays havoc with philosophical reflection as any I’ve encountered. What Bonjour is giving us here is a tale of two perspectives, one external and omniscient, another internal and sufficient. Since material omniscience isn’t sufficient, we can infer that there’s more to nature than meets the material eye, that some kind of supernaturalism is true.

Over the years I’ve come to the conclusion that all Mary type arguments boil down to versions of what might be called the ‘God-and-Jesus’ strategy. The marketing genius of Jesus, as Nietzsche so wryly observed, is the way his mortality transforms a fact-omniscient God into a God who also knows what it’s like. To be human is to be ignorant, to neglect everything save what enters this rare sliver we call life. God can only truly know humanity by becoming human as a result. He needs to exist within our ‘neglect structure,’ you could say.

So in Mary-type arguments Mary plays the third-person God, and some first-person experience plays Jesus. The upshot is always the same: We need Jesus because some knowledge necessarily lies outside God’s omniscience: knowledge of what it is like being blinkered and benighted—merely ‘human.’

The ease with which this argumentative form slips between theological and (allegedly) nontheological domains is worth keeping in mind, here. But the real takeaway is found in how God and Jesus highlight the pivotal role neglect plays in all its incarnations. With Jesus, God has to systematically divest Himself of cognitive capacities, consign more and more to neglect, the ‘unknown unknown,’ in order to know ‘what it’s like’ to be human. Jesus thus poses a limit, a kind of neglect structure, on the omniscience of God, and in this way becomes the skyhook exception to the infinite that links humanity and God via shared experience. (Thus the ‘horrible secret’ of ‘God on the cross,’ as Nietzsche calls it, the fact that “[all] of us are nailed to the cross, consequently we are divine” (Anti-Christ, 51)).

Now consider Bonjour’s version of this argument: What distinguishes his facts from natural or physical facts is that he need only access his internal states to know the content of his thought, whereas Mary needs to access both those internal states and their external causal relations. Where neglecting external causal relations precludes Mary knowing the content of Bonjour’s thoughts, it has no bearing whatsoever on his knowledge of his thoughts. The fact that he knows he’s thinking this or that is an environment-independent fact. This disqualifies Mary’s claim to omniscience because, for her, all such facts can only be environmentally cognized. That Mary requires added information regarding external systems to determine what he’s thinking about means that there’s something, an environment-independent fact, that not even God can know, and that Bonjour and every living person possesses.

So to repeat his question: “Can [materialism] account for conscious thoughts being about various things in a way that can be grasped or understood by the person in question?” Not even if it were God, he is saying. You have to have Jesus.

Bonjour not only openly acknowledges that metacognition systematically neglects external causal information, he makes it a centrepiece of his argument. Neglect of external causal relations is what sets his facts apart from Mary’s natural facts, what makes him Jesus, in effect. God can know our thoughts, but He cannot know our thoughts the way Jesus knows our thoughts. He cannot know what it’s like to be me, or Bonjour, as the case might be.

Of course, any such argument should give us pause. As keen as Bonjour is to leverage the distinction neglect affords him—the way it allows him to distinguish between modes of knowing, and thereby argue a distinction in modes of being (material versus nonmaterial)—calling attention to the neglect, as opposed to the distinction, raises the possibility that he’s simply spinning ignorance into an ontological virtue.

In strict causal terms, on a ‘zombie Mary’ account, say, the argument simply unravels. Here the question is one of one biomechanism attempting to systematically engage a second biomechanism that is systematically engaging some other kind of system, perhaps itself. What we want to know is how biomechanism 1 might come to occupy a relation with biomechanism 2 such that the behavioural possibilities of 1 are the same behavioural possibilities possessed by Mary coming to know what Bonjour is thinking about. So biomechanism 1, ‘zombie-Mary,’ would be able to do all the things, make all the sounds, that Mary could do knowing what Bonjour thought, only with biomechanism 2, or ‘zombie-Bonjour.’ And the same goes for zombie-Bonjour: it would be able to occupy a relation with itself that allowed it to do all the things Bonjour could do on the basis of knowing what he’s thinking.

One only need suppose this is possible (even though no one doubts that our brains possess very real, very physical, cognitive and metacognitive systems), since the point of this zombie analogue is to simply draw out a striking feature of the physical picture of Bonjour’s argument, the very picture he agrees with only up to a point.

Physically speaking, zombie Mary is comporting itself to a functionally independent, environmentally external system: cognizing zombie Bonjour’s brain processes. Zombie Bonjour, on the other hand, is comporting itself to a functionally entangled, environmentally internal system: metacognizing its own brain processes. It’s hard to imagine any two more radically different ‘biocognitive perspectives,’ the one solving a functionally distinct, distal system using all the ancient machinery of environmental cognition, the other solving a functionally entangled, proximal system using far more youthful metacognitive machinery, the former possessing high-dimensional, variable access to the processes involved, the latter possessing low-dimensional, fixed access to those self-same processes.

On zombie Mary, then, I think it’s pretty plain that no matter how one finesses zombie Mary’s physical comportment to zombie Bonjour, the radically different nature of their respective cognitive and metacognitive relationships means there is simply no way zombie Mary can possess the same comportment to zombie Bonjour that zombie Bonjour possesses to itself short of becoming zombie Bonjour.

Even on a zombie account, then, we find ourselves confronted with a version of the God and Jesus dilemma!

But now the upshot, which seems almost miraculous in theological and philosophical contexts (providing for the possibility of a special, human reality apart from nature or God), has become rather mundane. On zombie Mary, the difference is merely a matter of different systems possessing different resources and modes of access. The idea that zombie Mary’s mode is the only mode, that there could be an ‘omniscient’ zombie Mary, simply makes no sense, insofar as she’s simply another biomechanism stranded in its environment the same as any other zombie, capable of occupying only a finite number of comportments. The very notion that she could be ‘fact omniscient,’ in other words, attributes something supernatural to her, a hint of God, if you will. The notion that zombie Bonjour’s quite different biocognitive capacity evidences something supernatural, a little bit of Jesus, likewise has no place in this scenario. It’s natural all the way down.

Now of course Bonjour would balk at the very notion of zombie Mary and adduce any number of arguments against the very idea, I’m sure. Hints of God and bits of Jesus have a very real role to play in his metaphysical view, albeit dressed in a more respectable nomenclature. But what he can’t do is run away from all the questions that it raises. So now when he writes, for instance, “if these internal states are sufficient to fix the object of my thought in a way that is accessible to my understanding or awareness, then knowing about those internal states should be sufficient for Mary as well, without any knowledge of the external causal relations” (18), we can ask him whether he’s equivocating cognition with metacognition, the drastically different challenges of solving other people with solving oneself. Bonjour agrees that cognition and biology are intimately related somehow, that aphasiology is a very real branch of medical science. He accepts that we’re machines in some sense; he just wants, like so very many others, to think that we are something more as well. Nevertheless he agrees that the meat has a say. Likewise, he has to admit to the drastic biocognitive difference between Mary cognizing his thought and him metacognizing his own thought. So he has no way of avoiding the question of whether his argument is simply mixing cognitive apples with metacognitive oranges, why we should assume that his ability to know what he’s thinking without knowing external causal relations is indicative of anything other than the fact that very different systems are involved. Surely, given the rather obvious fact of that difference, we should be hesitant to accept supernatural conclusions that it could very well obviate.

Zombie Bonjour, for instance, need not have any secondary comportment to its environmental comportments to effectively intervene in those environments. This is a good thing, given the cognitive challenges the astronomical biomechanical complexity of zombie Bonjour poses any cognitive system, let alone one packed into the same skull (imagine a primatologist sewn into a sack with a chimpanzee troop). Systematic metacognitive neglect is a given when one considers the problem in biomechanical terms.

Is it merely a coincidence that the same goes for Bonjour proper? He too is astronomically complex. He also doesn’t need to metacognize his thoughts to think them. And he too suffers from massive metacognitive neglect. The high-dimensional picture of the brain that’s now emerging from the cognitive sciences is a picture of what we are almost entirely blind to. Whatever metacognitive capacity we possess is obviously both low-dimensional and specialized, consisting of heuristic systems adapted to troubleshoot specific first-order problem-ecologies. Since Bonjour is already physically comported to his environment in various cognitive and noncognitive ways, any capacity to metacognize this relation, to ‘know what he’s thinking about,’ say, need only build on this pre-existing comportment. Like so many other ‘quick and dirty’ cognitive systems, Bonjour’s capacity to metacognize has evolved to make do without, to solve problems using as little potentially relevant information as possible. This is arguably why he can know what he’s ‘thinking,’ ‘experiencing,’ ‘desiring,’ and so on without knowing anything about the astronomically complicated mechanical relations that make it possible. Metacognition is a ‘need-to-know’ capacity, a system or set of systems accessing only the information required to tackle certain problem-ecologies.

The problem, however, is that metacognition is not itself among those things that metacognition needs to know. Metacognition accesses low-dimensional, specialized information blind to the fact that it is such. This is no problem so long as we restrict its application to adaptive problem-ecologies. The capacity to ‘report our thoughts’ doubtlessly solved any number of problems for our ancestors. As soon as the philosopher repurposes this capacity to solve, say, the ‘problem of materialism,’ however, we should expect things to go awry—and, here’s the thing, in exactly the way they do. Why? Because philosophical reflection requires using information adapted to heuristically solve ‘What am I thinking?’ problems to solve the considerably more demanding question, ‘What is thinking?’ without any inkling whatsoever of the adequacy of that information. We should expect such attempts to endlessly run aground on controversy the way they do. Given that the adequacy of our intuitions is the assumptive default (as with what Kahneman and Tversky call ‘availability heuristics,’ for instance), one might expect that philosophers would systematically confuse their darkly glimpsed special-purpose metacognitive access for something whole and general-purpose, for an order of reality somehow beyond the high-dimensional physical reality revealed by natural science—for something supernatural.
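Since the claim here is ultimately structural, it can be caricatured in computational terms. The following is a deliberately crude toy sketch in Python—mine, not Bonjour’s, and not a model of any actual brain; every name, number, and label in it is invented—showing how a report can be massively compressive while containing no trace of the compression:

```python
# Toy sketch: a high-dimensional 'brain state' and a metacognitive
# channel that reports only a few coarse labels. All names invented.
import random

STATE_DIM = 10_000  # stand-in for astronomical neural complexity

def brain_state():
    """A high-dimensional physical state, opaque to the system itself."""
    return [random.gauss(0, 1) for _ in range(STATE_DIM)]

def metacognize(state):
    """Compress the state into three labels. Crucially, the report
    carries no marker of how much information was thrown away."""
    mean = sum(state) / len(state)
    return {
        "valence": "good" if mean > 0 else "bad",
        "arousal": "high" if max(state) > 3 else "low",
        "thinking_about": "lunch",  # a fixed, canned posit
    }

report = metacognize(brain_state())
print(report)
# Nothing in `report` signals that 10,000 dimensions became 3 labels;
# a reasoner given only `report` treats it, by default, as complete.
```

The point of the toy is simply that nothing inside the report flags its own poverty; a consumer with no independent access to the underlying state has, by default, no way to treat the report as anything but complete. That, in miniature, is metacognitive neglect.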

Neglect, then, plays a crucial role at three distinct junctures in Bonjour’s argument, at least. Neglect of the physical differences between environmental cognition and metacognition licenses his equivocation of Mary’s access and Bonjour’s own access to the content of his thoughts. Lacking access to any information regarding cognitive activity strands deliberative metacognition (reflection) with what is being cognized, which becomes a kind of ‘availability heuristic.’ Blind to our knowing (the machinery is indisposed, after all), we attribute the distinction to the known. Epistemic blindness generates the cognitive illusion of ontological distinction.

Neglect of the physical implementation of Mary’s environmental cognition licenses the plausibility of Mary’s omniscience, and thus renders her inability to cognize Bonjour’s thought fraught with ontological significance. The ‘view from nowhere,’ as it is sometimes called, is as clear-cut an example of metacognitive neglect as you can hope to find. Absence admits no distinctions, so knowledge seems (descending the ladder of ontological commitment) disembodied, transcendent, emergent, or virtual; ‘nowhere’ becomes indistinguishable from ‘everywhere,’ and the in principle possibility of omniscience simply seems to follow. There’s no limit to the number of ghosts you can pack into a room—or skull.

Neglect of the low-dimensional, domain-specific nature of metacognition generates the illusion that “what I am thinking about at that moment is as surely a fact about the world as anything else” (19), rather than what it almost certainly is: a special-purpose posit adapted to solving a specific problem-ecology. As a matter of empirical fact, Bonjour’s cognitive relationship to his own cognitive activity is radically different than his cognitive relationship to his environment. To think that this radical difference is irrelevant to the radical differences between first-person and third-person knowledge (not to mention the knots they have us twisted in!) is wildly implausible, to say the least. Far from a fact like any other, ‘What he is thinking’—the information Bonjour has available to report—is an incredibly low-dimensional communicative shorthand, one specifically tailored to solve the kinds of problems our preliterate—prephilosophical—ancestors faced. Blind to the heuristic nature of metacognition, Bonjour confuses special-purpose information with all-purpose information.

Why is Bonjour so convinced? For the same reason anyone suffering Anton’s Syndrome is convinced they can see: he is blind to his blindness, and so thinks he sees everything he needs to see.

The Eliminativistic Implicit II: Brandom in the Pool of Shiloam


In “The Eliminativistic Implicit I,” we saw how the implicit anchors the communicative solution of humans and their activities. Since comprehension consists in establishing connections between behaviours and their precursors, the inscrutability of those precursors requires we use explanatory posits, suppositional surrogate precursors, to comprehend ourselves and our fellows. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

We also saw how the implicit possessed a number of different incarnations:

1) The Everyday Implicit: The regime of folk posits adapted to solve various practical problems involving humans (and animals).

2) The Philosophical Implicit: The regime of intentional posits thought to solve aspects of the human in general.

3) The Psychological Implicit: The regime of functional posits thought to solve various aspects of the human in general.

4) The Mechanical Implicit: The regime of neurobiological posits thought to solve various aspects of the human in general.

The overarching argument I’m pressing is that only (4) holds the key to any genuine theoretical understanding of (1-3). On my account of (4), (1) is an adaptive component of socio-communicative cognition, (2) is largely an artifact of theoretical misapplications of those heuristic systems, and (3) represents an empirical attempt to approximate (4) on the basis of indirect behavioural evidence.

In this episode, the idea is to illustrate how both the problems and the apparent successes of the Philosophical Implicit can be parsimoniously explained in terms of neglect and heuristic misapplication via Robert Brandom’s magisterial Making It Explicit. We’ll consider what motivates Brandom’s normative pragmatism, why he thinks that only normative cognition can explain normative cognition. Without this motivation, the explanation of normative cognition defaults to natural cognition (epitomized by science), and Brandom quite simply has no subject matter. The cornerstone of his case is the Wittgensteinian gerrymandering argument against Regularism. As I hope to show, Blind Brain Theory dismantles this argument with surprising facility. And it does so, moreover, in a manner that explains why so many theorists (including myself at one time!) are so inclined to find the argument convincing. As it turns out, the intuitions that motivate Normativism turn on a cluster of quite inevitable metacognitive illusions.


Blind Agents

Making It Explicit: Reasoning, Representing, and Discursive Commitment is easily the most sustained and nuanced philosophical consideration of the implicit I’ve encountered. I was gobsmacked when I first read it in the late 90s. Stylistically, it had a combination of Heideggerean density and Analytic clarity that I found narcotic. Argumentatively, I was deeply impressed by the way Brandom’s interpretive functionalism seemed to actually pull intentional facts from natural hats, how his account of communal ‘taking as’ seemed to render normativity at once ‘natural’ and autonomous. For a time, I bought into a great deal of what Brandom had to say—I was particularly interested in working my ‘frame ontology’ into his normative framework. Making It Explicit had become a big part of my dissertation… ere epic fantasy saved my life!

I now think I was deluded.

In this work, Brandom takes nothing less than the explication of the ‘game of giving and asking for reasons’ as his task, “making explicit the implicit structure characteristic of discursive practice as such” (649). He wants to make explicit the role that making explicit plays in discursive cognition. It’s worth pausing to ponder the fact that we do so very many things with only the most hazy or granular second-order understanding. It might seem so platitudinal as to go without saying, but it’s worth noting in passing at least: Looming large in the implicature of all accounts such as Brandom’s is the claim that we somehow know the world without ever knowing how we know the world.

As we saw in the previous installment, the implicit designates a kind of profound cognitive incapacity, a lack of knowledge regarding our own activities. The implicit entails what might be called a Blind Agent Thesis, or BAT. Brandom, by his own admission, is attempting to generalize the behaviour of the most complicated biomechanical system known to science almost entirely blind to the functioning of that system. (He just thinks he’s operating at an ‘autonomous social functional’ level). He is, as we shall see, effectively arguing his own particular BAT.

Insofar as every theoretician, myself included, is trying to show ‘what everyone is missing,’ there’s a sense in which something like BAT is hard to deny. Why all the blather, otherwise? But this negative characterization clearly has a problem: How could we do anything without knowing how to do it? Obviously we have to ‘know how’ in some manner, otherwise we wouldn’t be able to do anything at all! This is the sense in which the implicit can be positively characterized as a species of knowing in its own right. And this leads us to the quasi-paradoxical understanding of the implicit as ‘knowing without knowing,’ a knowing how to do something without knowing how to discursively explain that doing.

Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation. He (unlike Kant, say) never offers any reason why this is the case, any consideration of what it is about making explicit in particular that should render it so resistant to explication—but then philosophers are generally prone to take the difficulty of their problems as a given. (I’m not the only one out there shouting the problem I happen to be working on is like, the most important problem ever!) I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.

You know, as blind and all.

What Brandom provides instead is an elegant reprise of the problem’s history, beginning with Kant’s fundamental ‘transformation of perspective,’ the way he made explicit the hitherto implicit normative dimension of making explicit, what allowed him “to talk about the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus” (10).

Kant, in effect, had discovered something that humanity had been all but oblivious to: the essentially prescriptive nature of making explicit. Of course, Brandom almost entirely eschews Kant’s metaphysical commitments: for him, normative constraint lies in the attributions of other agents and nowhere else. Kant, in other words, had not so much illuminated the darkness of the implicit (which he baroquely misconstrues as ‘transcendental’) as snatched one crucial glimpse of its nature.

Brandom attributes the next glimpse to Frege, with his insistence on “respecting and enforcing the distinction between the normative significance of applying concepts and the causal consequences of doing so” (11). What Frege made explicit about making explicit, in other words, was its systematic antipathy to causal explanation. As Brandom writes:

“Psychologism misunderstands the pragmatic significance of semantic contents. It cannot make intelligible the applicability of norms governing the acts that exhibit them. The force of those acts is a prescriptive rather than a descriptive affair; apart from their liability to assessments of judgments as true and inferences as correct, there is no such thing as judgment or inference. To try to analyze the conceptual contents of judgments in terms of habits or dispositions governing the sequences of brain states or mentalistically conceived ideas is to settle on the wrong sort of modality, on causal necessitation rather than rational or cognitive right.” (12)

Normativity is naturalistically inscrutable, and thanks to Kant (“the great re-enchanter,” as Turner (2010) calls him), we know that making explicit is normative. Any explication of the implicit of making explicit, therefore, cannot be causal—which is to say, mechanistic. Frege, in other words, makes explicit a crucial consequence of Kant’s watershed insight: the fact that making explicit can only be made explicit in normative, as opposed to natural, terms. Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do. Since we now know that explication is an intrinsically normative activity, making explicit the governing causal constraints has the effect of rendering the activity unintelligible. The only way to make explication theoretically explicit is to make explicit the implicit normative constraints that make it possible.

Which leads Brandom to the third main figure of his brief history, Wittgenstein. Thus far, we know only that explication is an intrinsically normative affair—our picture of making explicit is granular in the extreme. What are norms? Why do they have the curious ‘force’ that they do? What does that force consist in? Even if Kant is only credited with making explicit the normativity of making explicit, you could say the bulk of his project is devoted to exploring questions precisely like these. Consider, for instance, his explication of reason:

“But of reason one cannot say that before the state in which it determines the power of choice, another state precedes in which this state itself is determined. For since reason itself is not an appearance and is not subject at all to any conditions of sensibility, no temporal sequence takes place in it even as to its causality, and thus the dynamical law of nature, which determines the temporal sequence according to rules, cannot be applied to it.” Kant, The Critique of Pure Reason, 543

Reason, in other words, is transcendental, something literally outside nature as we experience it, outside time, outside space, and yet somehow fundamentally internal to what we are. The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!

He had his own version of BAT, what might be called a Transcendental Blind Agent Thesis, or T-BAT.

Brandom, however, far prefers the later Wittgenstein’s answers to the question of how the intrinsic normativity of making explicit should be understood. As he writes,

“Wittgenstein argues that proprieties of performance that are governed by explicit rules do not form an autonomous stratum of normative statuses, one that could exist though no other did. Rather, proprieties governed by explicit rules rest on proprieties governed by practice. Norms that are explicit in the form of rules presuppose norms implicit in practices.” (20)

Kant’s transcendental represents just such an ‘autonomous stratum of normative statuses.’ The problem with such a stratum, aside from the extravagant ontological commitments allegedly entailed, is that it seems incapable of dealing with a peculiar characteristic of normative assessment known since ancient times in the form of Agrippa’s trilemma or the ‘problem of the criterion.’ The appeal to explicit rules is habitual, perhaps even instinctive, when we find ourselves challenged on some point of communication. Given the regularity with which such appeals succeed, it seems natural to assume that the propriety of any given communicative act turns on the rules we are prone to cite when challenged. The obvious problem, however, is that rule citing is itself a communicative act that can be challenged. It stems from occluded precursors the same as anything else.

What Wittgenstein famously argues is that what we’re appealing to in these instances is the assent of our interlocutors. If our interlocutors happen to disagree with our interpretation of the rule, suddenly we find ourselves with two disputes, two improprieties, rather than one. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act. This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).

Since this account has become gospel in certain philosophical corners, it might pay to block out the precise way this Wittgensteinian explication of the implicit does and does not differ from the Kantian explication. One comforting thing about Wittgenstein’s move, from a naturalist’s standpoint at least, is that it adverts to the higher-dimensionality of actual practices—it’s pragmatism, in effect. Where Kant’s making explicit is governed from somewhere beyond the grave, Wittgenstein’s is governed by your friends, family, and neighbours. If you were to say there was a signature relationship between their views, you could cite this difference in dimensionality, the ‘solidity’ or ‘corporeality’ that Brandom appeals to in his bid to ground the causal efficacy of his elaborate architecture (631-2).

Put differently, the blindness on Wittgenstein’s account belongs to you and everyone you know. You could say he espouses a Communal Blind Agent Thesis, or C-BAT. The idea is that we’re continually communicating with one another while utterly oblivious as to how we’re communicating with one another. We’re so oblivious, in fact, we’re oblivious to the fact we are oblivious. Communication just happens. And when we reflect, it seems to be all that needs to happen—until, that is, the philosopher begins asking his damn questions.

It’s worth pointing out, while we’re steeping in this unnerving image of mass, communal blindness, that Wittgenstein, almost as much as Kant, was in a position analogous to empirical psychologists researching cognitive capacities back in the 1950s and 1960s. With reference to the latter, Piccinini and Craver have argued (“Integrating psychology and neuroscience: functional analyses as mechanism sketches,” 2011) that informatic penury was the mother of functional invention, that functional analysis was simply psychology’s means of making do, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information. Kant and Wittgenstein are pretty much in the same boat, only absent any experimental means to test and regiment their guesswork. The original edition of Philosophical Investigations, in case you were wondering, was published in 1953, which means Wittgenstein’s normative contextualism was cultured in the very same informatic vacuum as functional analysis. And the high-altitude moral, of course, amounts to the same: times have changed.

The cognitive sciences have provided a tremendous amount of information regarding our implicit, neurobiological precursors, so much so that the mechanical implicit is a given. The issue now isn’t one of whether the implicit is causal/mechanical in some respect, but whether it is causal/mechanical in every respect. The question, quite simply, is one of what we are blind to. Our biology? Our ‘mental’ programming? Our ‘normative’ programming? The more we learn about our biology, the more we fill in the black box with scientific facts, the more difficult it seems to become to make sense of the latter two.


Ineliminable Inscrutability Scrutinized and Eliminated

Though he comes nowhere near framing the problem in these explicitly informatic terms, Brandom is quite aware of this threat. American pragmatism has always maintained close contact with the natural sciences, and post-Quine, at least, it has possessed more than its fair share of eliminativist inclinations. This is why he goes to such lengths to argue the ineliminability of the normative. This is why he follows his account of Kant’s discovery of the normativity of the performative implicit with an account of Frege’s critique of psychologism, and his account of Wittgenstein’s regress argument against ‘Regulism’ with an account of his gerrymandering argument against ‘Regularism.’

Regularism proposes we solve the problem of rule-following with patterns of regularities. If a given performance conforms to some pre-existing pattern of performances, then we call that performance correct or competent. If it doesn’t so conform, then we call it incorrect or incompetent. “The progress promised by such a regularity account of proprieties of practice,” Brandom writes, “lies in the possibility of specifying the pattern or regularity in purely descriptive terms and then allowing the relation between regular and irregular performance to stand in for the normative distinction between what is correct and what is not” (MIE 28). The problem with Regularism, however, is “that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws” (27). Thus the challenge confronting any Regularist account of rule-following, as Brandom sees it, is to account for its normative character. Everything in nature ‘follows’ the ‘rules of nature,’ the regularities isolated by the natural sciences. So what does the normativity that distinguishes human rule-following consist in?

For a regularist account to weather this challenge, it must be able to fund a distinction between what is in fact done and what ought to be done. It must make room for the permanent possibility of mistakes, for what is done or taken to be correct nonetheless to turn out to be incorrect or inappropriate according to some rule or practice. 27

The ultimate moral, of course, is that there’s simply no way this can be done, there’s no way to capture the distinction between what happens and what ought to happen on the basis of what merely happens. No matter what regularity the Regularist adduces ‘to play the role of norms implicit in practice,’ we find ourselves confronted by the question of whether it’s the right regularity. The fact is any number of regularities could play that role, stranding us with the question of which regularity one should conform to—which is to say, the question of the very normative distinction the Regularist set out to solve in the first place. Adverting to dispositions to pick out the relevant regularity simply defers the problem, given that “[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).

For Brandom, as with Wittgenstein, the problem of Regularism is intimately connected to the problem of Regulism: “The problem that Wittgenstein sets up…” he writes, “is to make sense of a notion of norms implicit in practice that will not lose either the notion of the implicitness, as regulism does, or the notion of norms, as simple regularism does” (29). To see this connection, you need only consider one of Wittgenstein’s more famous passages from Philosophical Investigations:

§217. “How am I able to obey a rule?”–if this is not a question about causes, then it is about the justification for my following the rule in the way I do.

If I have exhausted the justifications, I have reached bedrock, and my spade is turned. Then I am inclined to say: “This is simply what I do.”

The idea, famously, is that rule-following is grounded, not in explicit rules, but in our actual activities, our practices. The idea, as we saw above, is that rule-following is blind. It is ‘simply what we do.’ “When I obey a rule, I do not choose,” Wittgenstein writes. “I obey the rule blindly” (§219). But if rule-following is blind, just what we find ourselves doing in certain contexts, then in what sense is it normative? Brandom quotes McDowell’s excellent (certainly from a BBT standpoint!) characterization of the problem in “Wittgenstein on Following a Rule”: “How can a performance be nothing but a ‘blind’ reaction to a situation, not an attempt to act on interpretation (thus avoiding Scylla); and be a case of going by a rule (avoiding Charybdis)?” (Mind, Value, and Reality, 242).

Wittgenstein’s challenge, in other words, is one of theorizing nonconscious rule-following in a manner that does not render normativity some inexplicable remainder. The challenge is to find some way to avoid Regulism without lapsing into Regularism. Of course, we’ve grown inured to the notion of ‘implicit norms’ as a theoretical explanatory posit, so much so as to think them almost self-evident—I know this was once the case for me. But the merest questioning quickly reveals just how odd implicit norms are. Nonconscious rule-following is automatic rule-following, after all, something mechanical, dispositional. Automaticity seems to preclude normativity, even as it remains amenable to regularities and dispositions. Although it seems obvious that evaluation and justification are things that we regularly do, that we regularly engage in normative cognition navigating our environments (natural and social), it is by no means clear that only normative posits can explain normative cognition. Given that normative cognition is another natural artifact, the product of evolution, and given the astounding explanatory successes of science, it stands to reason that natural, not supernatural, posits are likely what’s required.

All this brings us back to C-BAT, the fact that Wittgenstein’s problem, like Brandom’s, is the problem of neglect. ‘This is simply what I do,’ amounts to a confession of abject ignorance. Recall the ‘Hidden Constraint Model’ of the implicit from our previous discussion. Cognizing rule-following behaviour requires cognizing the precursors to rule-following behaviour, precursors that conscious cognition systematically neglects. Most everyone agrees on the biomechanical nature of those precursors, but Brandom (like intentionalists more generally) wants to argue that biomechanically specified regularities and dispositions are not enough, that something more is needed to understand the normative character of rule-following, given the mysterious way regularities and dispositions preclude normative cognition. The only way to avoid this outcome, he insists, is to posit some form of nonconscious normativity, a system of preconscious, pre-communicative ‘rules’ governing cognitive discourse. The upshot of Wittgenstein’s arguments against Regularism seems to be that only normative posits can adequately explain normative cognition.

But suddenly, the stakes are flipped. Just as the natural is difficult to understand in the context of the normative, so too is the normative difficult to understand in the context of the natural. For some critics, this is difficulty enough. In Explaining the Normative, for instance, Stephen Turner does an excellent job tracking, among other things, the way Normativism attempts to “take back ground lost to social science explanation” (5). He begins by providing a general overview of the Normativist approach, then shows how these self-same tactics characterized social science debates of the early twentieth-century, only to be abandoned as their shortcomings became manifest. “The history of the social sciences,” he writes, “is a history of emancipation from the intellectual propensity to intentionalize social phenomenon—this was very much part of the process that Weber called the disenchantment of the world” (147). His charge is unequivocal: “Brandom,” he writes, “proposes to re-enchant the world by re-instating the belief in normative powers, which is to say, powers in some sense outside of and distinct from the forces known to science” (4). But although this is difficult to deny in a broad stroke sense, he fails to consider (perhaps because his target is Normativism in general, and not Brandom, per se) the nuance and sensitivity Brandom brings to this very issue—enough, I think, to walk away theoretically intact.

In the next installment, I’ll consider the way Brandom achieves this via Dennett’s account of the Intentional Stance, but for the nonce, it’s important that we keep the problem of re-enchantment on the table. Brandom is arguing that the inability of natural posits to explain normative cognition warrants a form of theoretical supernaturalism, a normative metaphysics, albeit one he wants to make as naturalistically palatable as possible.

Even though neglect is absolutely essential to their analyses of Regulism and Regularism, neither Wittgenstein nor Brandom so much as pause to consider it. As astounding as it is, they simply take our utter innocence of our own natural and normative precursors as a given, an important feature of the problem ecology under consideration to be sure, but otherwise irrelevant to the normative explication of normative cognition. Any role neglect might play beyond anchoring the need for an account of implicit normativity is entirely neglected. The project of Making It Explicit is nothing other than the project of making the activity of making explicit explicit, which is to say, the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome.

Not surprisingly, this oversight proves catastrophic—for the whole of Normativism, and not simply Brandom.

Just consider, for instance, the way Brandom completely elides the question of the domain specificity of normative cognition. Normative cognition is a product of evolution, part of a suite of heuristic systems adapted to solve some range of social problems as effectively as possible given the resources available. It seems safe to surmise that normative cognition, as heuristic, possesses what Todd, Gigerenzer, and the ABC Research Group (2012) call an adaptive ‘problem-ecology,’ a set of environments possessing complementary information structures. Heuristics solve via the selective uptake of information, wedding them, in effect, to specific problem-solving domains. ‘Socio-cognition,’ which manages to predict, explain, even manipulate astronomically complex systems on the meagre basis of observable behaviour, is paradigmatic of a heuristic system. In the utter absence of causal information, it can draw a wide variety of reliable causal conclusions, but only within a certain family of problems. As anthropomorphism, the personification or animation of environments, shows, humans are predisposed to misapply socio-cognition to natural environments. Pseudo-solving natural environments via socio-cognition may have solved various social problems, but precious few natural ones. In fact, the process of ‘disenchantment’ can be understood as a kind of ‘rezoning’ of socio-cognition, a process of limiting its application to those problem-ecologies where it actually produces solutions.
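To make ‘ecology dependence’ concrete, here is a minimal sketch, loosely in the spirit of Gigerenzer-style fast-and-frugal heuristics; the cities, populations, and recognition sets are invented for illustration and are not drawn from Todd and Gigerenzer’s actual data:

```python
# A toy recognition heuristic: pick whichever of two cities you
# recognize. It works only where recognition covaries with size.
def recognition_heuristic(a, b, recognized):
    """Choose the recognized city; fall back arbitrarily on ties."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b
    return a  # arbitrary tie-break

populations = {"Metropolis": 9_000_000, "Smallville": 40_000,
               "Gotham": 8_000_000, "Oakdale": 25_000}

pairs = [("Metropolis", "Smallville"), ("Gotham", "Oakdale"),
         ("Metropolis", "Oakdale"), ("Gotham", "Smallville")]

def accuracy(recognized):
    """Fraction of pairs where the heuristic picks the bigger city."""
    hits = sum(populations[recognition_heuristic(a, b, recognized)] ==
               max(populations[a], populations[b]) for a, b in pairs)
    return hits / len(pairs)

# Ecology 1: recognition tracks size (big cities get media coverage).
print(accuracy({"Metropolis", "Gotham"}))    # 1.0 inside its ecology
# Ecology 2: recognition tracks something else (say, a vacation).
print(accuracy({"Smallville", "Oakdale"}))   # 0.0 once the link flips
```

The heuristic itself never changes; only the information structure of its environment does. Inside an ecology where recognition covaries with the criterion it is uncannily reliable; outside, it fails systematically—and nothing in the heuristic signals which situation it is in.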

Which leads us to the question: So what, then, is the adaptive problem ecology of normative cognition? More specifically, how do we know that the problem of normative cognition belongs to the problem ecology of normative cognition?

As we saw, Brandom’s argument against Regularism could itself be interpreted as a kind of ‘ecology argument,’ as a demonstration of how the problem of normative cognition does not belong to the problem ecology of natural cognition. Natural cognition cannot ‘fund the distinction between ought and is.’ Therefore the problem of normative cognition does not belong to the problem ecology of natural cognition. In the absence of any alternatives, we then have an abductive case for the necessity of using normative cognition to solve normative cognition.

But note how recognizing the heuristic, or ecology-dependent, nature of normative cognition has completely transformed the stakes of Brandom’s original argument. The problem for Regularism turns, recall, on the conspicuous way mere regularities fail to capture the normative dimension of rule-following. But if normative cognition were heuristic (as it almost certainly is), if what we’re prone to identify as the ‘normative dimension’ is something specific to the application of normative cognition, then this becomes the very problem we should expect. Of course the normative dimension disappears absent the application of normative cognition! Since Regularism involves solving normative cognition using the resources of natural cognition, it simply follows that it fails to engage resources specific to normative cognition. Consider Kripke’s formulation of the gerrymandering problem in terms of the ‘skeptical paradox’: “For the sceptic holds that no fact about my past history—nothing that was ever in my mind, or in my external behavior—establishes that I meant plus rather than quus” (Wittgenstein, 13). Even if we grant a rule-follower access to all factual information pertaining to rule-following, a kind of ‘natural omniscience,’ they will still be unable to isolate any regularity capable of playing ‘the role of norms implicit in practice.’ Again, this is precisely what we should expect given the domain specificity of normative cognition proposed here. If ‘normative understanding’ were the artifact of a cognitive system dedicated to the solution of a specific problem-ecology, then it simply follows that the application of different cognitive systems would fail to produce normative understanding, no matter how much information was available.
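For readers who prefer the skeptical paradox in concrete form, Kripke’s ‘quus’ can be written down directly. The bound of 57 is Kripke’s own toy value; the ‘history’ list is an invented stand-in for the finite record of past use:

```python
# Kripke's quus: identical to plus below some bound K, deviant above.
# In the argument, K is larger than any sum in my computational history.
K = 57  # Kripke's toy bound

def plus(x, y):
    return x + y

def quus(x, y):
    return x + y if x < K and y < K else 5

# Every fact about past usage is consistent with BOTH functions:
history = [(2, 3), (12, 7), (40, 16)]  # all arguments below K
assert all(plus(x, y) == quus(x, y) for x, y in history)
# So no record of past behaviour, however complete, picks out which
# rule was 'really' being followed -- the gerrymandering problem.
```

However long the history grows, it remains finite, and some quus-style function with a large enough bound will always agree with it; no amount of factual information about past performance, ‘natural omniscience’ included, settles which rule was followed.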

What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition. The ‘explanatory failure’ that Brandom and others use to impeach the applicability of natural cognition to normative cognition is nothing of the sort. It simply makes no sense to demand that one form of cognition solve another form of cognition as if it were that other form. We know that normative cognition belongs to social cognition more generally, and that social cognition—‘mindreading’—operates heuristically, that it has evolved to solve astronomically complicated biomechanical problems involving the prediction, understanding, and manipulation of other organisms absent detailed biomechanical information. Adapted to solve in the absence of this information, it stands to reason that the provision of that information, facts regarding biomechanical regularities, will render it ineffective—‘grind cognitive gears,’ you could say.

Since these ‘technical details’ are entirely invisible to ‘philosophical reflection’ (thanks to metacognitive neglect), the actual ecological distinction between these systems escapes Brandom, and he assumes, as all Normativists assume, that the inevitable failure of natural cognition to generate instances of normative cognition means that only normative cognition can solve normative cognition. Blind to our cognitive constitution, instances of normative cognition are all he or anyone else has available: our conscious experience of normative cognition consists of nothing but these instances. Explaining normative cognition is thus conflated with replacing normative cognition. ‘Competence’ becomes yet another ‘spooky explanandum,’ another metacognitive inkling, like ‘qualia,’ or ‘content,’ that seems to systematically elude the possibility of natural cognition (for suspiciously similar reasons).

This apparent order of supernatural explananda then provides the abductive warrant upon which Brandom’s entire project turns—all T-BAT and C-BAT approaches, in fact. If natural cognition is incapable, then obviously something else is required. Impressed by how our first-order social troubleshooting makes such good use of the Everyday Implicit, and oblivious to the ecological limits of the heuristic systems responsible, we effortlessly assume that making use of some Philosophical Implicit will likewise enable second-order social troubleshooting… that tomes like Making It Explicit actually solve something.

But as the foregoing should make clear, precisely the opposite is the case. As a system adapted to troubleshoot first-order social ecologies, normative cognition seems unlikely to theoretically solve normative cognition in any satisfying manner. The very theoretical problems that plague Normativism—supernaturalism, underdetermination, and practical inapplicability—are the very problems we should expect if normative cognition were not in fact among the problems that normative cognition can solve.

As an evolved, biological capacity, however, normative cognition clearly belongs to the problem ecology of natural cognition. Simply consider how much the above sketch has managed to ‘make explicit.’ In parsimonious fashion it explains: 1) the general incompatibility of natural and normative cognition; 2) the inability of Regularism to ‘play the role of norms implicit in practice’; 3) why this inability suggests the inapplicability of natural cognition to the problem of normative cognition; 4) why Normativism seems the only alternative as a result; and 5) why Normativism nonetheless suffers the debilitating theoretical problems it does. It solves the notorious Skeptical Paradox, and much else aside, using only the idiom of natural cognition, which is to say, in a manner not only compatible with the life sciences, but empirically tractable as well.

Brandom is the victim of a complex of illusions arising out of metacognitive neglect. Wittgenstein, who had his own notion of heuristics and problem ecologies (grammars and language games), was sensitive to the question of what kinds of problems could be solved given the language we find ourselves stranded with. As a result, he eschews the kind of systematic normative metaphysics that Brandom epitomizes. He takes neglect seriously insofar as ‘this is simply what I do’ demarcates, for him, the pale of credible theorization. Even so, he nevertheless succumbs to a perceived need to submit, however minimally or reluctantly, the problem of normative cognition (in terms of rule-following) to the determinations of normative cognition, and is thus compelled to express his insights in the self-same supernatural idiom as Brandom, who eschews what is most valuable in Wittgenstein, his skepticism, and seizes on what is most problematic, his normative metaphysics.

There is a far more parsimonious way. We all agree humans are physical systems nested within a system of such systems. What we need to recognize is how being so embedded poses profound constraints on what can and cannot be cognized. What can be readily cognized are other systems (within a certain range of complexity). What cannot be readily cognized is the apparatus of cognition itself. The facts we call ‘natural’ belong to the former, and the facts we call ‘intentional’ belong to the latter. Where the former commands an integrated suite of powerful environmental processors, the latter relies on a hodgepodge of specialized socio-cognitive and metacognitive hacks. Since we have no inkling of this, we have no inkling of their actual capacities, and so run afoul a number of metacognitive impasses. So for instance, intentional cognition has evolved to overcome neglect, to solve problems in the absence of causal information. This is why philosophical reflection convinces us we somehow stand outside the causal order via choice or reason or what have you. We quite simply confuse an incapacity, our inability to intuit our biomechanicity, with a special capacity, our ability to somehow transcend or outrun the natural order.

We are physical in such a way that we cannot intuit ourselves as wholly physical. To cognize nature is to be blind to the nature of cognizing. To be blind to that blindness is to think cognizing has no nature. So we assume that nature is partial, and that we are mysteriously whole, a system unto ourselves.

Reason be praised.

 
