Lamps Instead of Ladies: the Hard Problem Explained

by rsbakker

Definition of the Day – Insight: what happens when a new blindness makes our old blindness its bitch.

.

The so-called “hard problem” is generally understood as the problem consciousness researchers face closing Joseph Levine’s “explanatory gap”: the question of how mere physical systems can generate conscious experience. The problem is that, as Descartes noted centuries ago, consciousness is so damned peculiar compared to the natural world it reveals. On the one hand you have qualia, or the raw feel, the ‘what-it-is-like’ of conscious experiences. How could meat generate such bizarre things? On the other hand you have intentionality, the aboutness of consciousness, as well as the related structural staples of the mental, the normative and the purposive.

In one sense, my position is a mainstream one: consciousness is another natural phenomenon that will be explained naturalistically. But it is not just another natural phenomenon: it is the natural phenomenon that is attempting to explain itself naturalistically. And this is where the problem becomes an epistemological nightmare – or very, very hard.

This is why I espouse what might be called a “Dual Explanation Account of Consciousness.” Any one of the myriad theories of consciousness out there could be entirely correct, but we will never know this because we disagree about just what must be explained for an explanation of consciousness to count as ‘adequate.’ The Blind Brain Theory explains the hardness of the hard problem in terms of the information we should expect the conscious systems of the brain to lack. The consciousness we think we cognize, I want to argue, is the product of a variety of ‘natural anosognosias.’ The reason everyone seems to be barking up the wrong explanatory tree is simply that we don’t have the consciousness we think we do.

Personally, I’m convinced this has to be the case to some degree. Let’s call the cognitive system involved in natural explanation the ‘NE system.’ The NE system, we might suppose, originally evolved to cognize external environments: this is what it does best. (We can think of scientific explanation as a ‘training up’ of this system, pressing it to its peak performance.) At some point, the human brain found it more and more reproductively efficacious to cognize onboard information – data from itself – as well. In addition to continually sampling and updating environmental information, it began doing the same with its own neural information.

Now if this marks the genesis of human self-consciousness, the confusions we collectively call the ‘hard problem’ become the very thing we should expect. We have an NE system exquisitely adapted over hundreds of millions of years to cognize environmental information, suddenly forced to cognize 1) the most complicated machinery we know of in the universe (itself); 2) from a fixed (hardwired) ‘perspective’; and 3) with scarcely more than a million years of evolutionary tuning.

Given this (and it seems fairly airtight to me), we should expect that the NE system would have enormous difficulty cognizing consciously available information. (1) suggests that the information gleaned will be drastically fractional. (2) suggests that the information accessed will be thoroughly parochial, but also entirely ‘sufficient,’ given the NE’s rank inability to ‘take another perspective’ relative to the gut brain the way it can relative to its external environments. (3) suggests the information provided will be haphazard and distorted, the product of kluge-type mutations.

In other words, (1) implies ‘depletion,’ (2) implies ‘truncation’ (since we can’t access the causal provenance of what we access), and (3) implies a motley of distortions. Your NE system is quite literally restricted to informatic scraps.

This is the point I keep hammering in my discussions with consciousness researchers: our attempts to cognize experience utilize the same machinery that we use to cognize our environments – evolution is too fond of ‘twofers’ to assume otherwise, too cheap. Given this, the “hard problem” begins to seem not only inevitable, but like something that probably every other biologically conscious species in the universe suffers. The million-dollar question is this: if information privation generates confusion and illusion regarding phenomena within consciousness, why should it not generate confusion and illusion regarding consciousness itself?

Think of the myriad mistakes the brain makes: just recently, while partying with my brother-in-law on the front porch, we became convinced that my neighbour from across the street was standing at her window glaring at us – I mean, convinced. It wasn’t until I walked up to her house to ask whether we were being too noisy (or noisome!) that I realized it was her lamp glaring at us (it never liked us anyway), that it was a kooky effect of light and curtains. What I’m saying is that peering at consciousness is no different than peering at my neighbour’s window, except that we are wired to the porch, and so have no way of seeing lamps instead of ladies. Whether we are deliberating over consciousness or deliberating over neighbours, we are limited to the same cognitive systems. As such, it simply follows that the kinds of distortions information privation causes in the one also pertain to the other. It only seems otherwise with consciousness because we are hardwired to the neural porch and have no way of taking a different informatic perspective. And so, for us, it just is the neighbour lady glaring at us through the window, even though it’s not.

Before we can begin explaining consciousness, we have to understand the severity of our informatic straits. We’re stranded: both with the patchy, parochial neural information provided, and with our ancient, environmentally oriented cognitive systems. The result is what we call ‘consciousness.’

The argument in sum is pretty damn strong: Consciousness (as it is) evolved on the back of existing, environmentally oriented cognitive systems. Therefore, we should assume that the kinds of information privation effects pertaining to environmental cognition also apply to our attempts to cognize consciousness. (1), (2), and (3) give us good reason to assume that consciousness suffers radical information privation. Therefore, odds are we’re mistaking a good number of lamps for ladies – that consciousness is literally not what we think it is.

Given the breathtaking explanatory successes of the natural sciences, then, it stands to reason that our gut antipathy to naturalistic explanations of consciousness is primarily an artifact of our ‘brain blindness.’

What we are trying to explain, in effect, is information that has to be depleted, truncated, and distorted – a lady that quite literally does not exist. And so when science rattles on about ‘lamps,’ we wave our hands and cry, “No-no-no! It’s the lady I’m talking about.”

Now I think this is a pretty novel, robust, and nifty dissection of the Hard Problem. Has anyone encountered anything similar anywhere? Does anyone see any obvious assumptive or inferential flaws?