Aphorism of the Day: Philosophers are beetles as much as the rest of us, aimless and slow-moving, seeing only what’s immediately before them. All that differs is the abstraction and relevance of the pattern on the floor.
So every few months I have this burst of philosophical insight/fabrication that lights up my nucleus accumbens and makes me traipse around thinking I’ve solved a bunch of the world’s deepest philosophical problems. Then the trickle of sober reflection begins, questions like cups of steaming hot Joe, slowly reviving the bitchier angels of my assumptive nature.
Now that the black box of the brain has been cracked open, we will begin to tinker, and we can expect that science will eventually renovate the inner world as radically as it has the outer. I still think this is probably an inevitability, though the counterarguments some of you raised have reminded me that any number of things could happen in the interim. I’ve also softened my hard stance against the technological optimist: even if technology is the rope that will likely hang us, it remains the only rope we’ve got. Maybe some intermediate stage of neurotechnology will give us the intelligence we need to find our way through our crazy future. I remain pessimistic, but it’s certainly a possibility.
The two acronyms I coined, UNNF (for Universal Natural Neurophysiological Frame) and IANF (for Idiosyncratic Artificial Neurophysiological Frame) got me thinking about ‘the boundary conditions’ of consciousness–again. Most of the information processed by the brain never reaches consciousness: out of three pounds, we’re stranded with a few ounces, the famous NCCs, the neural correlates of consciousness–or what some researchers call the thalamocortical system. Our UNNF is spread like luminous cobwebs through the filamentary gloom of our brain.
The ‘Extinction Argument’ I’ve been making is that human identity will not be conserved across substantial changes to our UNNF. What struck me is simply the force of this argument. Of course, radical changes to the neurophysiology of consciousness will translate into radically altered consciousnesses–structures and modalities of experience that could be beyond our ability to comprehend. And this got me thinking…
My argument for several years now–the Blind Brain Hypothesis–has been that the ‘information horizons’ of the thalamocortical system can actually explain some of the most baffling features of conscious experience. The conceptual centerpiece of this argument is something I call encapsulation, the way information horizons seem to pinch experience into self-contained ‘bubbles.’
My problem has always been one of making this argument as obvious to others as it seems to me. I’ve come to realize that the gestalt shift I’m advocating is by no means an easy one, and that absent any institutional authority I can only sound like yet another crackpot with another theory of consciousness. There is no escaping the noose of value attribution, believe you me! (I actually submitted a paper comparing consciousness to coin tricks to The Journal of Consciousness Studies around five years ago, one which the editor was quite enthusiastic about, but the peer reviews I received made me think the article had been dismissed on a quick skim of the abstract.)
So I started wondering if there was a way I could yoke the force of my Extinction Argument (EA) to the Blind Brain Hypothesis (BBH). The force of the former, I thought, turns on the differences between our UNNF and the multifarious IANFs to come–in other words, a kind of expansion of consciousness into inexplicable terrain. And this got me thinking about examples of ‘diminished consciousness.’ Neuropathology is the uranium mine of consciousness studies, the place where many researchers derive their fuel. Agnosia and neglect are among the most popular breakdowns considered.
In cases of agnosia and neglect, a boundary of consciousness that was once coterminous with the rest of humanity suddenly collapses, robbing the victim of basic experiences and competencies. These disorders have the effect of ‘shrinking consciousness,’ of rewriting the thalamocortical system’s information horizons. Not only do certain ‘boundaries of consciousness’ become clear, the functional roles played by various neural resources are also thrown into relief. The loss of neural circuitry packs an experiential wallop. The smallest of lesions can transform how we experience ourselves and the world in catastrophic and often baffling ways.
These cases of ‘shrunken consciousness’ demonstrate the profound role thalamocortical information horizons play in shaping and structuring conscious experience. To understand ENCAPSULATION, you have to appreciate the way the neural correlates of consciousness necessarily suffer a neurophysiologically fixed form of frame neglect. Unless you think information horizons only become efficacious once pathology renders them obvious, the kinds of local and idiosyncratic experiential scotomata (unperceived absences) resulting from neuropathology simply must have global and universal counterparts.
If so, then what are they? I think encapsulation answers this question.
The way some sufferers of unilateral neglect lose all sense of ‘extrapersonal space’ on their left or right, to the point of being unable to recognize they have lost that space, demonstrates what I call the ‘holophenomenal’ character of experience, the way it’s always ‘full’ (so that we require secondary systems to detect absences). For each neuromodular component of the thalamocortical system, the correlated experience has to be ‘full’ simply because those components cannot process information they cannot access: this is why our visual field has no proper ‘edge’ (the way it does when cinematically represented as a peephole). Only interrelated systems (the varieties of memory in particular) can generate the intimation of something more or something missing.
Now, consider how all the deep structural mysteries of consciousness–unity, transparency, nowness, self-identity–turn on the absence of distinctions.
Consciousness, for instance, seems ‘unified,’ holistic insofar as everything seems internally related to everything else: a change in this feature seems to bring about a change in that. This is one of the reasons the kinds of experiential distortions arising from brain injuries seem so counterintuitive: the thalamocortical system has no access to its componential structure, to the diverse, externally related subsystems that continually feed and absorb its information. When one of those subsystems is knocked out, the information feed vanishes, and the bandwidth of consciousness shrinks–you shrink–and in ways that seem impossible (if you can recognize the loss at all) because our thalamocortical system is a surface feeder, utterly unable to digest what goes on beyond its information horizons. All experience is ‘given,’ and absent any parallel experience of its information donors (of the kind we have of our perceptual systems, for instance: blindness makes sense to us), everything is pinched into a queer kind of absolute identity. Despite the constitutive differences in the information fed forward, a background of sameness haunts all of it–what I call ‘default identity.’
What is default identity? This is the Escher twist in the portrait. Differences in conscious experience reflect differences in neural information. There is no ‘experience of’ identity or difference absent that information–there is no experience at all–and this lack, I’m suggesting, leverages a kind of higher order, structural identity. In the same way unilateral neglect causes individuals to confuse half of their extrapersonal space with the whole, ‘temporal frame neglect’ (the ‘nonpathological neglect’ forced on consciousness by the information horizons of the thalamocortical system) causes individuals to confuse a moment of time with all the time there is. Each moment is at once ‘just another moment’ and the only moment, which is to say, the same. Thus the paradox of the Now.
I know this must sound like a kind of ‘neuro-existentialism,’ and therefore hinky to those hailing from more scientifically institutionalized backgrounds. But my thesis is empirical: Some of the most perplexing structural features of consciousness can be explained as the result of various kinds of neurophysiologically fixed ‘agnosias.’ I like to think its strangeness is simply a function of its novelty: I’m not sure anyone has explored this “What would it be like to be a thalamocortical system?” line of thinking before, and the ways lacking certain kinds of information can structure experience.
At the same time I find it exciting the way these speculations seem to bear on so many ‘continental philosophical’ problematics. I could see writing papers, for instance, reinterpreting Heidegger’s hypokeimenon, or Derrida’s différance, or Adorno’s nonidentical as various expressions of encapsulation; or how the intuitions that underwrite Kant’s notion of the transcendental, the way consciousness stands both inside and outside the world of experience, derive from encapsulation.
At the same time it seems to provide a single explanatory rubric for a whole array of more general philosophical problems, especially regarding intentionality. Once you understand that consciousness is cross-sectional, the result of a set of integrated systems that access only fractions of the brain’s information load, and as such has to make sideways sense of things (because these cross-sections are all it has to work with), then a whole shitload of apparent conundrums seem to evaporate, including the problem of why we find these things so problematic in the first place!
But none of this, of course, comes close to tackling the kernel of the ‘Hard Problem,’ which is simply why these tangles of electrified meat should be conscious at all. But it does offer tantalizing suggestions as to why we find this, the holy grail of philosophical questions, so difficult to answer. Consciousness could very well be a coin trick that the brain plays on itself.
Lastly, I should point out that as fascinating as I find it all, I actually can’t bring myself to believe any of this. Which is why I decided to make it the protagonist’s ‘fictional philosophy’ in Light, Time, and Gravity.
My ‘Philosophy X’ is his heartbreaking burden.