Three Pound Brain

No bells, just whistling in the dark…


Reading From Bacteria to Bach and Back I: On Cartesian Gravity

by rsbakker

ABDUCTION AND DIAGNOSIS

Problem resolution generally possesses a diagnostic component; sometimes we can find workarounds, but often we need to know what the problem consists in before we can have any real hope of advancing beyond it. This is what Daniel Dennett proposes to do in his recent From Bacteria to Bach and Back, to not only sketch a story of how human comprehension arose from the mindless mire of biological competences, but to provide a diagnostic account of why we find such developmental stories so difficult to credit. He hews to the slogan I’ve oft repeated here on Three Pound Brain: We are natural in such a way that we find it impossible to intuit ourselves as natural. It’s his account of this ‘in such a way’ that I want to consider here. As I’ve said many times before, I think Dennett has come as close as any philosopher in history to unravelling the conjoined problems of cognition and consciousness—and I am obliged to his acumen and creativity in more ways than I could possibly enumerate—but I’m convinced he remains entangled, both theoretically and dialectically, by several vestigial commitments to intentionalism. He remains a prisoner of ‘Cartesian gravity.’ Nowhere is this clearer than in his latest book, where he sets out to show how blind competences, by hook, crook, and sheer, mountainous aggregation, can actually explain comprehension, which is to say, understanding as it appears to the intentional stance.

Dennett offers two rationales for braving the question of comprehension, the first turning on the breathtaking advances made in the sciences of life and cognition, the second anchored in his “better sense of the undercurrents of resistance that shackle our imaginations” (16). He writes:

I’ve gradually come to be able to see that there are powerful forces at work, distorting imagination—my own imagination included—pulling us first one way and then another. If you learn to see these forces too, you will find that suddenly things begin falling into place in a new way. 16-17

The original force, the one begetting subsequent distortions, he calls Cartesian gravity. He likens the scientific attempt to explain cognition and consciousness to a planetary invasion, with the traditional defenders standing on the ground with their native, first-person orientation, and the empirical invaders finding their third-person orientation continually inverted the closer they draw to the surface. Cartesian gravity, most basically, refers to the tendency to fall into first-person modes of thinking about cognition and consciousness. This is a problem because of the various, deep incompatibilities between the first-person and third-person views. Like a bi-stable image (Dennett provides the famous Duck-Rabbit as an example), one can only see the one at the expense of seeing the other.

Cartesian gravity, in other words, refers to the intuitions underwriting the first-person side of the famed Explanatory Gap, but Dennett warns against viewing it in these terms because of the tendency in the literature to view the divide as an ontological entity (a ‘chasm’) instead of an epistemological artifact (a ‘glitch’). He writes:

[Philosophers] may have discovered the “gap,” but they don’t see it for what it actually is because they haven’t asked “how it got that way.” By reconceiving of the gap as a dynamic imagination-distorter that has arisen for good reasons, we can learn to traverse it safely or—what may amount to the same thing—make it vanish. 20-21

It’s important, I think, to dwell on the significance of what he’s saying here. First of all, taking the gap as a given, as a fundamental feature of some kind, amounts to an explanatory dereliction. As I like to put it, the fact that we, as a species, can explain the origins of nature down to the first second and yet remain utterly mystified by the nature of this explanation is itself a gobsmacking fact requiring explanation. Any explanation of human cognition that fails to explain why humans find themselves so difficult to explain is woefully incomplete. Dennett recognizes this, though I sometimes think he fails to recognize the dialectical potential of this recognition. There are few better ways to isolate the sound of stomping feet from the speculative cacophony, I’ve found, than by relentlessly posing this question.

Secondly, the argumentative advantage of stressing our cognitive straits turns directly on its theoretical importance: to naturalistically diagnose the gap is to understand the problem it poses. To understand the problem it poses is to potentially resolve that problem, to find some way to overcome the explanatory gap. And overcoming the gap, of course, amounts to explaining the first-person in third-person terms—to seize upon what has become the Holy Grail of philosophical and scientific speculation.

The point being that the whole cognition/consciousness debate stands balanced upon some diagnosis of why we find ourselves so difficult to fathom. As the centerpiece of his diagnosis, Cartesian gravity is absolutely integral to Dennett’s own position, and yet surveying the reviews From Bacteria to Bach and Back has received (as of 9/12/2017, at least), you find the notion is mentioned either in passing (as in Thomas Nagel’s piece in The New York Review of Books), dismissively (as in Peter Hankins’s review in Conscious Entities), or not at all.

Of course, it would probably help if anyone had any clue as to what ‘first-person’ or ‘third-person’ actually meant. A gap between gaps often feels like no gap at all.

ACCUMULATING MASS

“The idea of Cartesian gravity, as so far presented, is just a metaphor,” Dennett admits, “but the phenomenon I am calling by this metaphorical name is perfectly real, a disruptive force that bedevils (and sometimes aids) our imaginations, and unlike the gravity of physics, it is itself an evolved phenomenon. In order to understand it, we need to ask how and why it arose on the planet earth” (21). Part of the reason so many reviewers seem to have overlooked its significance, I think, turns on the sheer length of the story he proceeds to tell. Compositionally speaking, it’s rarely a good idea to go three hundred pages—wonderfully inventive, controversial pages, no less—without substantially revisiting your global explanandum. By the time Dennett tells us “[w]e are ready to confront Cartesian gravity head on” (335) it feels like little more than a rhetorical device—and understandably so.

The irony, of course, is that Dennett thinks that nothing less than Cartesian gravity has forced the circuitous nature of his route upon him. If he fails to regularly reference his metaphor, he continually adverts to its signature consequence: cognitive inversion, the way the sciences have taken our traditional, intuitive, ab initio, top-down presumptions regarding life and intelligence and turned them on their head. Where Darwin showed how blind, bottom-up processes can generate what appear to be amazing instances of design, Turing showed how blind, bottom-up processes can generate what appear to be astounding examples of intelligence, “natural selection on the one hand, and mindless computation on the other” (75). Despite some polemical and explanatory meandering (most all of it rewarding), he never fails to keep his dialectical target, Cartesian exceptionalism, firmly (if implicitly) in view.

A great number of the biological examples Dennett adduces in From Bacteria to Bach and Back will be familiar to those following Three Pound Brain. This is no coincidence, given that Dennett is both an info-junkie like myself and constantly on the lookout for examples of the same kinds of cognitive phenomena: in particular, those making plain the universally fractionate, heuristic nature of cognition, and those enabling organisms to neglect, and therefore build upon, pre-existing problem-solving systems. As he writes:

Here’s what we have figured out about the predicament of the organism: It is floating in an ocean of differences, a scant few of which might make a difference to it. Having been born to a long lineage of successful copers, it comes pre-equipped with gear and biases for filtering out and refining the most valuable differences, separating the semantic information from the noise. In other words, it is prepared to cope in some regards; it has built-in expectations that have served its ancestors well but may need revision at any time. To say that it has these expectations is to say that it comes equipped with partially predesigned appropriate responses all ready to fire. It doesn’t have to waste precious time figuring out from first principles what to do about an A or a B or a C. These are familiar, already solved problems of relating input to output, perception to action. These responses to incoming stimulation of its sensory systems may be external behaviors: a nipple affords sucking, limbs afford moving, a painful collision affords retreating. Or they may be entirely covert, internal responses, shaping up the neural armies into more effective teams for future tasks. 166

Natural environments consist of regularities, component physical processes systematically interrelated in ways that facilitate, transform, and extinguish other component physical processes. Although Dennett opts for the (I think) unfortunate terminology of ‘affordances’ and ‘Umwelts,’ what he’s really talking about are ecologies, the circuits of selective sensitivity and corresponding environmental frequency allowing for niches to be carved, eddies of life to congeal in the thermodynamic tide. With generational turnover, risk sculpts ever more morphological and behavioural complexity, and the life once encrusting rocks begins rolling them, then shaping and wielding them.

Now for Dennett, the crucial point is to see the facts of human comprehension in continuity with the histories that make it possible, all the while understanding why the appearance of human comprehension systematically neglects these self-same conditions. Since his accounts of language and cultural evolution (via memes) warrant entire posts in their own right, I’ll elide them here, pointing out that each follows this same incremental, explanatory pattern of natural processes enabling the development of further natural processes, tangled hierarchies piling toward something recognizable as human cognition. For Dennett, the coincidental appearance of La Sagrada Familia (arguably a paradigmatic example of top-down thinking given Gaudi’s reputed micro-managerial mania) and Australian termite castles expresses a profound continuity as well, one which, when grasped, allows for the demystification of comprehension, and inoculation against the pernicious effects of Cartesian gravity. The leap between the two processes, what seems to render the former miraculous in a way the latter does not, lies in the sheer plasticity of the processes responsible, the way the neurolinguistic mediation of effect feedback triggers the adaptive explosion we call ‘culture.’ Dennett writes:

Our ability to do this kind of thinking [abstract reasoning/planning] is not accomplished by any dedicated brain structure not found in other animals. There is no “explainer nucleus” for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smart phone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can’t know, and don’t need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can’t know, and don’t need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brain. 341

This is the Dennettian portrait of the first-person, or consciousness as it’s traditionally conceived: a radically heuristic point of contact and calibration between endogenous and exogenous systems, one resting on occluded stacks of individual, collective, and evolutionary competence. The overlap between what can be experienced and what can be reported is no cosmic coincidence: the two are (likely) coeval, part of a system dedicated to keeping both ourselves and our compatriots as well informed/misinformed—and as well armed with the latest competences available—as possible.

We can give this strange idea an almost paradoxical spin: it is like something to be you because you have been enabled to tell us—or refrain from telling us—what it’s like to be you!

When we evolved into us, a communicating community of organisms that can compare notes, we became the beneficiaries of a system of user-illusions that rendered versions of our cognitive processes—otherwise as imperceptible as our metabolic processes—accessible to us for purposes of communication. 344

Far from the phenomenological plenum the (Western) tradition has taken it to be, then, consciousness is a presidential brief prepared by unscrupulous lobbyists, a radically synoptic aid to specific, self-serving forms of individual and collective action.

our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all. That’s what it is like to be us. 345

Thus, the astounding problem posed by Cartesian gravity. As a socio-communicative interface possessing no access whatsoever to our actual sources, we can only be duped by our immediate intuitions. Referring to John Searle’s Cartesian injunction to insist upon a first-person solution of meaning and consciousness, Dennett writes:

The price you pay for following Searle’s advice is that you get all your phenomena, the events and things that have to be explained by your theory, through a channel designed not for scientific investigation but for handy, quick-and-dirty use in the rough and tumble of time-pressured life. You can learn a lot about how the brain works—you can learn quite a lot about computers by always insisting on the desk-top point of view, after all—but only if you remind yourself that your channel is systematically oversimplified and metaphorical, not literal. That means you must resist the alluring temptation to postulate a panoply of special subjective properties (typically called qualia) to which you (alone) have access. Those are fine items for our manifest image, but they must be “bracketed,” as the phenomenologists say, when we turn to scientific explanation. Failure to appreciate this leads to an inflated list of things that need to be explained, featuring, preeminently, a Hard Problem that is nothing but an artifact of the failure to recognize that evolution has given us a gift that sacrifices literal truth for utility. 365-366

Sound familiar? Human metacognitive access and capacity is radically heuristic, geared to the solution of practical ancestral problems. As such, we should expect that tasking that access and capacity, ‘relying on the first-person,’ with solving theoretical questions regarding the nature of experience and cognition will prove fruitless.

It’s worth pausing here, I think, to emphasize just how much this particular argumentative tack represents a departure from Dennett’s prior attempts to clear intuitive ground for his views. Nothing he says here is unprecedented: heuristic neglect has always lurked in the background of his view, always found light of day in this or that corner of this or that argument. But at no point—not in Consciousness Explained, nor even in “Quining Qualia”—has it occupied the dialectical pride of place he concedes it in From Bacteria to Bach and Back. Prior to this book, Dennett’s primary strategy has been to exploit the kinds of ‘crashes’ brought about by heuristic misapplication (though he never explicitly characterizes them as such). Here, with Cartesian gravity, he takes a gigantic step toward theorizing the neurocognitive bases of the problematic ‘intuition pumps’ he has targeted over the years. This allows him to generalize his arguments against first-person theorizations of experience in a manner that had hitherto escaped him.

But he still hasn’t quite found his way entirely clear. As I hope to show, heuristic neglect is far more than simply another tool Dennett can safely store with his pre-existing commitments. The best way to see this, I think, is to consider one particular misreading of the new argument against qualia in Chapter 14.

GRAVITY MEETS REALITY

In “Dennett and the Reality of Red,” Tom Clark presents a concise and elegant account of how Dennett’s argument against the reality of qualia in From Bacteria to Bach and Back turns upon a misplaced physicalist bias. The extraordinary thing about his argument—and the whole reason we’re considering it here—lies in the way he concedes so much of Dennett’s case, only to arrive at a version of the very conclusion Dennett takes himself to be arguing against:

I’d suggest that qualia, properly understood, are simply the discriminable contents of sensory experience – all the tastes, colors, sounds, textures, and smells in terms of which reality appears to us as conscious creatures. They are not, as Dan correctly says, located or rendered in any detectable mental medium. They’re not located anywhere, and we are not in an observational or epistemic relationship to them; rather they are the basic, not further decomposable, hence ineffable elements of the experiences we consist of as conscious subjects.

The fact that ‘Cartesian gravity’ appears nowhere in his critique, however, pretty clearly signals that something has gone amiss. Showing as much, however, requires I provide some missing context.

After introducing his user-illusion metaphor for consciousness, Dennett is quick to identify the fundamental dialectical problem Cartesian gravity poses his characterization:

if (as I have just said) your individual consciousness is rather like the user-illusion on your computer screen, doesn’t this imply that there is a Cartesian theatre after all, where this portrayal happens, where the show goes on, rather like the show you perceive on the desktop? No, but explaining what to put in place of the Cartesian theatre will take some stretching of the imagination. 347

This is the point where he introduces a third ‘strange inversion of reasoning,’ this one belonging to Hume. Hume’s inversion, curiously enough, lies in his phenomenological observation of the way we experience causation ‘out there,’ in the world, even though we know, given our propensity to get it wrong, that it belongs to the machinery of cognition. (This is a canny move on Dennett’s part, but I think it demonstrates the way in which the cognitive consequences of heuristic neglect remain, as yet, implicit for him). What he wants is to ‘theatre-proof’ his account of conscious experience as a user-illusion. Hume’s inversion provides him a way to both thematize and problematize the automatic assumption that the illusion must itself be ‘real.’

The new argument for qualia eliminativism he offers, and that Clark critiques, is meant to “clarify [his] point, if not succeed in persuading everybody—as Hume says, the contrary notion is so riveted in our minds” (358). He gives the example of the red afterimage experienced in complementary colour illusions.

The phenomenon in you that is responsible for this is not a red stripe. It is a representation of a red stripe in some neural system of representation that we haven’t yet precisely located and don’t yet know how to decode, but we can be quite sure it is neither red nor a stripe. You don’t know exactly what causes you to seem to see a red stripe out in the world, so you are tempted to lapse into Humean misattribution: you misinterpret your sense (judgment, conviction, belief, inclination) that you are seeing a red stripe as arising from a subjective property (a quale, in the jargon of philosophy) that is the source of your judgment, when in fact, that is just about backward. It is your ability to describe “the red stripe,” your judgment, your willingness to make the assertions you just made, and your emotional reactions (if any) to “the red stripe” that is the source of your conviction that there is a subjective red stripe. 358-359

The problem, Dennett goes on to assert, lies in “mistaking the intentional object of a belief for its cause” (359). In normal circumstances, when we find ourselves in the presence of an apple, say, we’re entirely justified in declaring the apple the cause of our belief. In abnormal circumstances, however, this reflex dupes us into thinking that something extra-environmental—‘ineffable,’ supernatural—has to be the cause. And thus are inscrutable (and therefore perpetually underdetermined) theoretical posits like qualia born, giving rise to scholastic excesses beyond numbering.

Now the key to this argument lies in the distinction between normal and abnormal circumstances, which is to say the cognitive ecology occasioning the application of a certain heuristic regime—namely the one identified by Hume. For Clark, however, the salient point of Dennett’s argument is that the illusory red stripe lies nowhere.

Dan, a good, sophisticated physicalist, wants everything real to be locatable in the physical external world as vetted by science. What’s really real is what’s in the scientific image, right? But if you believe that we really have experiences, that experiences are specified in terms of content, and that color is among those contents, then the color of the experienced afterimage is as real as experiences. But it isn’t locatable, nor are any of the contents of experience: experiences are not observables. We don’t find them out there in spacetime or when poking around in the brain; we only find objects of various qualitative, quantitative and conceptual descriptions, including the brains with which experiences are associated. But since experiences and their contents are real, this means that not all of what’s real is locatable in the physical, external world.

Dennett never denies that we have experiences, and he even alludes to the representational basis of those experiences in the course of making his red stripe argument. A short time later, in his consideration of Cartesian gravity, he even admits that our ability to report our experiences turns on their content: “By taking for granted the content of your mental states, by picking them out by their content, you sweep under the rug all the problems of indeterminacy or vagueness of content” (367).

And yet, even though Clark is eager to seize on these and other instances of experience-talk, representation-talk, and content-talk, he completely elides the circumstances occasioning them, and thus the way Dennett sees all of these usages as profoundly circumstantial—‘normal’ or ‘abnormal.’ Sometimes they’re applicable, and sometimes they’re not. In a sense, the reality/unreality of qualia is actually beside the point; what’s truly at issue is the applicability of the heuristic tools philosophy has traditionally applied to experience. The question is, What does qualia-talk add to our ability to naturalistically explain colour, affect, sound, and so on? No one doubts our ability to correlate reportable metacognitive aspects of experience to various neural and environmental facts. No one doubts our sensory discriminatory abilities outrun our metacognitive discriminatory abilities—our ability to report. The empirical relationships are there regardless: the question is one of whether the theoretical paradigms we reflexively foist on these relationships lead anywhere other than endless disputation.

Clark not only breezes past the point of Dennett’s Red Stripe argument, he also overlooks the rather stark challenge it poses to his own position. Simply raising the spectre of heuristic metacognitive inadequacy, as Dennett does, obliges Clark to warrant his assumptive metacognitive claims. (Arguing, as Clark does, that we have no epistemic relation to our experiences simply defers the obligation to this second extraordinary claim: heaping speculation atop speculation generates more problems, not fewer). Dennett spends hundreds of pages amassing empirical evidence for the fractionate, heuristic nature of cognition. Given that our ancestors required only the solution of practical problems, the chances that human metacognition furnishes the information and capacity required to intuit the nature of experience (that it consists of representations consisting of contents consisting of qualia) are vanishingly small. What we should expect is that our metacognitive reflexes will do what they’ve always done: apply routines adapted to practical cognitive and communicative problem resolution to what amounts to a radically unprecedented problem ecology. All things being equal, it’s almost certain that the so-called first-person can do little more than flounder before the theoretical question of itself.

The history of intentional philosophy and psychology, if nothing else, vividly illustrates as much.

In the case of content, it’s hard not to see Clark’s oversight as tendentious insofar as Dennett is referring to the way content talk exposes us to Cartesian gravity (“Reading your own mind is too easy” (367)) and the relative virtues of theorizing cognition via nonhuman species. But otherwise, I’m inclined to think Clark’s reading of Dennett is understandable. Clark misses the point of heuristic neglect entirely, but only because Dennett himself remains fuzzy on just how his newfound appreciation for the Grand Inversion—the one we’ve been exploring here on Three Pound Brain for years now—bears on his preexisting theoretical commitments. In particular, he has yet to see the hash it makes of his ‘stances’ and the ‘real patterns’ underwriting them. As soon as Dennett embraced heuristic neglect, opportunistic eliminativism ceased being an option. As goes the ‘reality’ of qualia, so goes the ‘reality’ supposedly underwriting the entire lexicon of traditional intentionalist philosophy. Showing as much, however, requires showing how Heuristic Neglect Theory arises out of the implications of Dennett’s own argument, and how it transforms Cartesian gravity into a proto-cognitive psychological explanation of intentional philosophy—an empirically tractable explanation for why humanity finds humanity so dumbfounding. But since I’m sure eyes are crossing and chins are nodding, I’ll save the way HNT can be directly drawn from the implicature of Dennett’s position for a second installment, then show how HNT denies representations ‘reality’ while explaining what makes representation talk so useful in my third and final post on what has been one of the most exciting reading adventures of my life.

The Second Room: Phenomenal Realism as Grammatical Violation

by rsbakker

Aphorism of the Day: Atheist or believer, we all get judged by God. The one that made us, or the one we make.


So just what the hell did Wittgenstein mean when he wrote this?

“And yet you again and again reach the conclusion that the sensation itself is a nothing.”—Not at all. It is not a something, but not a nothing either! The conclusion was only that a nothing would serve just as well as a something about which nothing could be said. (1953, §304)

I can remember attempting to get a handle on this section of Philosophical Investigations in a couple of graduate seminars, contributing nothing more than once stumping my professor with the question of fraudulent workplace injury claims. But now, at long last, I (inadvertently) find myself in a position to explain what Wittgenstein was onto, and perhaps where he went wrong.

My view is simply that the mental and the environmental are pretty much painted with the same informatic brush, and pretty much comprehended using the same cognitive tools, the difference being that the system as a whole is primarily evolved to track and exploit the environmental, and as a result has great difficulty attempting to track and leverage the ‘mental’ so-called.

If you accept the mechanistic model of the life sciences, then you accept that you are an environmentally situated, biomechanical, information processing system. Among the features that characterize you as such a system is what might be called ‘structural idiosyncrasy,’ the fact that the system is the result of innumerable path dependencies. As a bottom-up designer, evolution relies on the combination of preexisting capacities and happenstance to provide solutions, resulting in a vast array of ad hoc capacities (and incapacities). Certainly the rigours of selection will drive various functional convergences, but each of those functions will bear the imprimatur of the evolutionary twists that led it there.

Another feature that characterizes you as such a system is medial neglect. Given that the resources of the system are dedicated to modelling and exploiting your environments, the system itself constitutes a ‘structural blindspot’: it is the one part of your environment that you cannot readily include in your model of the environment. The ‘medial’ causality of the neural, you could say, must be yoked to the ‘lateral’ causality of the environmental to adequately track and respond to opportunities and threats. The system must be blind to itself to see the world.

A third feature that characterizes you as such a system is heuristic specificity. Given the combination of environmental complexity, structural limitations, and path dependency, cognition is situation-specific, fractionate, and non-optimal. The system solves environmental problems by neglecting forms of information that are either irrelevant or not accessible. So, to give what is perhaps the most dramatic example, one can suggest that intentionality, understood as aboutness, possesses a thoroughly heuristic structure. Given medial neglect, the system has no access to information pertaining to anything but the grossest details of its causal relationship to its environments. It is forced, therefore, to model that relationship in coarse-grained, acausal terms–or put differently, in terms that occlude the neurofunctionality that makes the relationship possible. As a result, you experience apples in your environment, oblivious to any of the machinery that makes this possible. This ‘occlusion of the neurofunctional’ generates efficiencies (enormous ones, given the system’s complexity) so long as the targets tracked are not themselves causally perturbed by (medial) tracking. Since the system is blind to the medial, any interference it produces will generate varying degrees of ‘lateral noise.’

A final feature that characterizes you as such a system might be called internal access invariability, the fact that cognitive subsystems receive information via fixed neural channels. All this means is that cognitive subsystems are ‘hardwired’ into the rest of the brain.

Given a handful of caveats, I don’t think any of the above should be all that controversial.
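For readers who think better in code, the division of labour can be given a cartoon rendering. The following Python sketch is purely illustrative–every name and number in it is my own invention, and nothing in it pretends to neural realism–but it makes the four features concrete: an agent that tracks its environment through a fixed channel, using idiosyncratic inherited parameters and a cheap special-purpose rule, while possessing no channel whatsoever onto the machinery doing the tracking.

import random

class Agent:
    def __init__(self):
        # Structural idiosyncrasy: parameters inherited from a happenstance
        # lineage, not derived from first principles.
        self._weights = [random.gauss(0, 1) for _ in range(8)]

    def _sense(self, world):
        # Internal access invariability: one fixed, hardwired channel; the
        # agent cannot choose to sample the world any other way.
        return [world[i] for i in range(0, len(world), 2)]

    def track(self, world):
        # Heuristic specificity: a cheap, special-purpose rule that neglects
        # most of the information actually available.
        signal = self._sense(world)
        return sum(w * s for w, s in zip(self._weights, signal))

    def introspect(self):
        # Medial neglect: no channel onto the weights or onto the occurrent
        # tracking; all the agent can deliver is output, never machinery.
        return "something is being tracked"

world = [random.random() for _ in range(16)]
agent = Agent()
print(agent.track(world))    # environmental tracking: works
print(agent.introspect())    # self-tracking: a synoptic gloss, nothing more

The point of the cartoon is simply that introspect() could not deliver more without adding machinery: self-blindness is the default condition of any system whose resources face outward.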

Now, the big charge against Wittgenstein regarding sensation is some version of crypto-behaviourism, the notion that he is impugning the reality of sensation simply because only pain behaviour is publicly observable, while the pain itself remains a ‘beetle in a box.’ The problem people have with this characterization is as clear as pain itself. One could say that nothing is more real than pain, and yet here’s this philosopher telling you that it is ‘neither a something nor a nothing.’

Now I also think nothing is more real than pain, but I also agree with Wittgenstein, at long last, that pain is ‘neither a something nor a nothing.’ The challenge I face is one of finding some way to explain this without sounding insane.

The thing to note about the four features listed above is how each, in its own way, compromises human cognition. This is no big news, of course, but my view takes the approach that the great philosophical conundrums can be seen as diagnostic clues to the way cognition is compromised, and that conversely, the proper theoretical account of our cognitive shortcomings will allow us to explain or explain away the great philosophical conundrums. And Wittgenstein’s position certainly counts as one of the most persistent puzzles confronting philosophers and cognitive scientists today: the question of the ontological status of our sensations.

Another way of putting my position is this: Everyone agrees you are a biomechanism possessing myriad relationships with your environment. What else would humans (qua natural) be? The idea that understanding the specifics of how human cognition fits into that supercomplicated causal picture will go a long way to clearing up our myriad, longstanding confusions is also something most everyone would agree with. What I’m proposing is a novel way of seeing how those confusions fall out of our cognitive limitations–the kinds of information and capacities that we lack, in effect.

So what I want to do, in a sense, is turn the problem of sensation in Wittgenstein upside down. The question I want to ask is this: How could the four limiting features described above, structural idiosyncrasy (the trivial fact that out of all the possible forms of cognition we evolved this one), medial neglect (the trivial fact that the brain is structurally blind to itself as a brain), heuristic specificity (the trivial fact that cognition relies on a conglomeration of special purpose tools), and access invariability (the trivial fact that cognition accesses information via internally fixed channels) possibly conspire to make Wittgenstein right?

Well, let’s take a look at what seems to be the most outrageous part of the claim: the fact that pain is ‘neither a something nor a nothing.’ This, I think, points rather directly at heuristic specificity. The idea here would be that the heuristic or heuristic systems we use to identify entities are simply misapplied with reference to sensations. As extraordinary as this claim might seem, it really is old hat scientifically speaking. Quantum Field Theory forced us quite some time ago to abandon the assumption that our native understanding of entities and existence extends beyond the level of apples and lions we evolved to survive in. That said, sensation most certainly belongs to the ‘level’ of apples and lions: eating apples causes pleasure as reliably as lion attacks cause pain.

We need some kind of account, in other words, of how construing sensations as extant things might count as a heuristic misapplication. This is where medial neglect enters the picture. First off, medial neglect explains why heuristic misapplications are inevitable. Not only can’t we intuit the proper scope of application for the various heuristic devices comprising cognition, we can’t even intuit the fact that cognition consists of multiple heuristic devices at all! In other words, cognition is blind to both its limits and its constitution. This explains why misapplications are both effortless and invisible–and most importantly, why we assume cognition to be universal, why quantum and cosmological violations of intuition come as a surprise. (This also motivates taking a diagnostic approach to classic philosophical problems: conundrums such as this indirectly reveal something of the limitations and constitution of cognition).

But medial neglect can explain more than just the possibility of such a misapplication; it also provides a way to explain why it constitutes a misapplication, as well as why the resulting conundrums take the forms they do. Consider the ‘aboutness heuristic’ introduced above. Given that the causal structure of the brain is dedicated to tracking the causal structure of its environment, that structure cannot itself be tracked, and so must be ‘assumed.’ Aboutness is forced upon the system. This occlusion of the causal intricacies of the system’s relation to its environment is inconsequential. So long as the medial tracking of targets in no way interferes with those targets, medial neglect simply relieves the system of an impossible computational load.

But despite its effectiveness, aboutness remains heuristic, remains a device (albeit a ‘master device’) that solves problems via information neglect. This simply means that aboutness possesses a scope of applicability, that it is not universal. It is adapted to a finite range of problems, namely, those involving functionally independent environmental entities and events. The causal structure of the system, again, is dedicated to modelling the causal structure of its environment (thus the split between medial (modelling) and lateral (modelled) functionality). This ensures the system will encounter tremendous difficulty whenever it attempts to model its own modelling. Why? I’ve considered a number of different reasons (such as neural complexity) in a number of different contexts, but the primary, heuristic culprit is that the targets to be tracked are all functionally entangled in these ‘metacognitive’ instances.

The basic structure of human cognition, in other words, is environmental, which is to say, adapted to things out there functioning independent of any neural tracking. It is not adapted to the ‘in here,’ to what we are prone to call the mental. This is why the introspective default assumption is to see the ‘mental’ as a ‘secondary environment,’ as a collection of functionally independent events and entities tracked by some kind of mysterious ‘inner eye.’ Cognition isn’t magical. To cognize something requires cognitive resources. Keeping in mind that the point of this exercise is to explain how Wittgenstein could be right, we could postulate (presuming evolutionary parsimony) that second-order reflection possesses no specially adapted ‘master device,’ no dedicated introspective cognitive system, but instead relies on its preexisting structure and tools. This is why the ‘in here’ is inevitably cognized as a ‘little out there,’ a kind of peculiar secondary environment.

A sensation–or quale, to use the philosophy of mind term–is the product of an occurrent medial circuit, and as such impossible to laterally model. This is what Wittgenstein means when he says pain is ‘neither a something nor a nothing.’ The information required to accurately cognize ‘pain’ is the very information systematically neglected by human cognition. Second-order deliberative cognition transforms it into something ‘thinglike,’ nevertheless, because it is designed to cognize functionally independent entities. The natural question then becomes, What is this thing? Given the meagre amount of information available and the distortions pertaining to cognitive misapplication, it necessarily becomes the most baffling thing we can imagine.

Given structural idiosyncrasy (again, the path dependence of our position in ‘design space’), it simply ‘is what it is,’ a kind of astronomically coarse-grained ‘random projection’ of higher dimensional neural space perhaps. Why is pain like pain? Because it dangles from all the same myriad path dependencies as our brains do. Given internal access invariability (again, the fact that cognition possesses fixed channels to other neural subsystems) it is also all that there is as well: cognition cannot inspect or manipulate a quale the way it can actual things in its environment via exploratory behaviours, so unlike other objects they necessarily appear to be ‘irreducible’ or ‘simple.’ On top of everything, qualia will also seem causally intractable given the utter occlusion of neurofunctionality that falls out of medial neglect, as well as the distortions pertaining to heuristic specificity.
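The ‘random projection’ image can actually be run, for what it’s worth. Here is a minimal sketch–the dimensions are arbitrary placeholders of my own choosing, with no claim about actual neural dimensionality intended–of a fixed random map squashing a high-dimensional ‘neural state’ into three numbers. Different states cast different shadows, so the projection is informative; but the map has an enormous null space, so countless distinct states share any given shadow, and nothing in the low-dimensional image suffices to reconstruct the state that cast it.

import random

HIGH, LOW = 10_000, 3  # arbitrary stand-ins for neural and phenomenal dimensionality

# One fixed random map, standing in for the idiosyncratic, path-dependent
# channel between the system and its self-model.
proj = [[random.gauss(0, 1) for _ in range(HIGH)] for _ in range(LOW)]

def project(state):
    # Each output coordinate is a random weighted sum over all 10,000
    # inputs: astronomically coarse-grained, and uninvertible.
    return [sum(p * s for p, s in zip(row, state)) for row in proj]

state = [random.gauss(0, 1) for _ in range(HIGH)]
print(project(state))  # three numbers standing in for ten thousand

Why is the shadow the way it is? Because it dangles from the same fixed, path-dependent map–and since that map is the only access there is, the shadow is also all there is to inspect.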

As things, therefore, qualia strike us as ineffable, intrinsic, and etiologically opaque. Strange ‘somethings’ indeed!

Given our four limiting features, then, we can clearly see that Wittgenstein’s hunch is grammatical and not behaviouristic. The problem with sensations isn’t so much epistemic privacy as it is information access and processing: when we see qualia as extant things requiring explanation like other things, we’re plugging them into a heuristic regime adapted to discharge functionally independent environmental challenges. Wittgenstein himself couldn’t see it as such, of course, which is perhaps why he takes as many runs at the problem as he does.

Okay, so much for Wittgenstein. The real question, at this point, is one of what it all means. After all, despite what might seem like fancy explanatory footwork, we still find ourselves stranded with a something that is neither a something nor a nothing! Given that absurd conclusions generally mean false premises, why shouldn’t we simply think Wittgenstein was off his rocker?

Well, for one, given the conundrums posed by ‘phenomenal realism,’ you could argue that the absurdity is mutual. For another, the explanatory paradigm I’ve used here (the Blind Brain Theory) is capable of explaining away a great number of such conundrums (at the cost of our basic default assumptions, typically).

The question then becomes whether a general gain in intelligibility warrants accepting one flagrant absurdity–a something that is neither a something nor a nothing.

The first thing to recall is that this situation isn’t new. Apparent absurdity is alive and well at the cosmological and quantum levels of physical explanation. The second thing to recall is that human cognition is the product of myriad evolutionary pressures. Much as we did not evolve to be ideal physicists, we did not evolve to be ideal philosophers. Structural idiosyncrasy, in other words, gives us good reason to expect cognitive incapacities generally. And indeed, cognitive psychology has spent several decades isolating and identifying numerous cognitive foibles. The only real thing that distinguishes this particular ‘foible’ is the interpretative centrality (not to mention cherished status) of its subject matter–us!

‘Us,’ indeed. Once again, if you accept the mechanistic model of the life sciences (if you’re inclined to heed your doctor before your priest), then you accept that you are an environmentally situated, biomechanical information processing system. Given this, perhaps we should add a fifth limiting feature that characterizes you: ‘informatic locality,’ the way your system has to make do with the information it can either store or sense. Your particular brain-environment system, in other words, is its own ‘informatic frame of reference.’

Once again, given the previous four limiting features, the system is bound to have difficulty modelling itself. Consider another famous head-scratcher from the history of philosophy, this one from William James:

“The physical and the mental operations form curiously incompatible groups. As a room, the experience has occupied that spot and had that environment for thirty years. As your field of consciousness it may never have existed until now. As a room, attention will go on to discover endless new details in it. As your mental state merely, few new ones will emerge under attention’s eye. As a room, it will take an earthquake, or a gang of men, and in any case a certain amount of time, to destroy it. As your subjective state, the closing of your eyes, or any instantaneous play of your fancy will suffice. In the real world, fire will consume it. In your mind, you can let fire play over it without effect. As an outer object, you must pay so much a month to inhabit it. As an inner content, you may occupy it for any length of time rent-free. If, in short, you follow it in the mental direction, taking it along with events of personal biography solely, all sorts of things are true of it which are false, and false of it which are true if you treat it as a real thing experienced, follow it in the physical direction, and relate it to associates in the outer world.” (“Does ‘Consciousness’ Exist?”)

The genius of this passage, as I take it, is the way it refuses to relinquish the profound connection between the third person and the first, rather alternating from the one to the other, as if it were a single, inexplicable lozenge that tasted radically different when held against the back or front of the tongue–the room as empirically indexed versus the room as phenomenologically indexed. Wittgenstein’s problem, expressed in these terms, is simply one of how the phenomenological room fits into the empirical. From a brute mechanistic perspective, the system is first modelling the room absent any model of its occurrent modelling, then modelling its modelling of the room–and here’s the thing, absent any model of its occurrent modelling. The aboutness heuristic, as we saw, turns on medial neglect. This is what renders the second target, ‘room-modelling,’ so difficult to square with the ‘grammar’ of the first, ‘room,’ perpetually forcing us to ask, What the hell is this second room?

The thing to realize at this juncture is that there is no way to answer this question so long as we allow the apparent universality of the aboutness heuristic to get the better of us. ‘Room-modelling’ will never fit the grammar of ‘room’ simply because it is–clearly, I would argue–the product of informatic privation (due to medial neglect) and heuristic misapplication (due to heuristic specificity).

On the contrary, the only way to solve this ‘problem’ (perhaps the only way to move beyond the conundrums that paralyze philosophy of mind and consciousness research as a whole) is to bracket aboutness, to finally openly acknowledge that our apparent baseline mode of conceptualizing truth and reality is in fact heuristic, which is to say, a mode of problem-solving that turns on information neglect and so possesses a limited scope of effective application. So long as we presume the dubious notion that cognitive subsystems adapted to trouble-shooting external environments absent various classes of information are adequate to the task of trouble-shooting the system of which they are a part, then we will find ourselves trapped in this grammatical (algorithmic) impasse.

In other words, we need to abandon our personal notion of the ‘knower’ as a kind of ‘anosognosiac fantasy,’ and begin explaining our inability to resolve these difficulties in subpersonal terms. We are an assemblage of special purpose cognitive tools, not whole, autonomous knowers attempting to apprehend the fundamental nature of things. We are machines attempting to model ourselves as such, and consistently failing because of a variety of subsystemic functional limitations.

You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy. I realize that this sounds like anathema to many–it certainly strikes me as such! But no matter what one thinks of the story above, I find it hard to fathom how philosophy can avoid this fate now that the black box of the brain has been cracked open. In other words, we need to see the inevitability of this picture or something like it. As a natural result of the kind of system that we happen to be, the perennial conundrums of consciousness (and perhaps philosophy more generally) are something that science will eventually explain. Only ignorance or hubris could convince us otherwise.

We affirm the cosmological and quantum ‘absurdities’ we do because of the way science allows us to transcend our heuristic limitations. Science, you could say, is a kind of ‘meta-heuristic,’ a way to organize systems such that their individual heuristic shortcomings can be overcome. The Blind Brain picture sketched above bets that science will recast the traditional metaphysical problem of consciousness in fundamentally mechanistic terms. It predicts that the traditional categorical bestiary of metaphysics will be supplanted by categories of information indexed according to their functions. It argues that the real difficulty of consciousness lies in the cognitive illusions secondary to informatic neglect.

One can conceive this in different ways, I think: You could keep your present scientifically informed understanding of the universe as your baseline, and ‘explain away’ the mental (and much of the lifeworld with it) as a series of cognitive illusions. Qualia can be conceived as ‘phenomemes,’ combinatorial constituents of conscious experience, but no more ‘existential’ than phonemes are ‘meaningful.’ This view takes the third-person brain revealed by science as canonical, and the first-person brain (you!) as a ‘skewed and truncated low-dimensional projection’ of that brain. The higher-order question as to the ontological status of that ‘skewed and truncated low-dimensional projection’ is diagnostically blocked as a ‘grammatical violation,’ by the recognition that such a move constitutes a clear heuristic misapplication.

Or one could envisage a new kind of scientific realism, where the institutions are themselves interpreted as heuristic devices, and we can get to the work of describing the nonsemantic nature of our relation to each other and the cosmos. This would require acknowledging the profundity of our individual theoretical straits, to embrace our epistemic dependence on the actual institutional apparati of science–to see ourselves as glitchy subsystems in larger social mechanisms of ‘knowing.’ On this version, we must be willing to detach our intellectual commitments from our commonsense intuitions wholesale, to see the apparent sufficiency and universality of aboutness as a cognitive illusion pertaining to heuristic neglect, first person or third.

Either way, consciousness, as we intuit it, can at best be viewed as virtual.