Three Pound Brain

No bells, just whistling in the dark…

Month: October, 2012

Another Goddamn Anti-Transcendentalist Manifesto

by rsbakker

Aphorism of the Day: If the eye is every bit as cracked as the mirror, then cracked reflects true, and true looks cracked.

Aphorism of the Day II: My larva hurts.

Definition of the Day – Phenomenology: A common, hysterical variant of Anton’s Syndrome; a form of philosophical anosognosia (secondary to reading Husserl and other forms of blunt-force trauma to the frontal cortex), involving blind subjects endlessly arguing things they cannot see they cannot see.


The Hard Problem. The Hard Problem breaks the back of Levi Bryant’s argument against continental transcendentalism. Otherwise I want to show you the lion that shares the litter box with his pussy cat.

Bryant’s brief challenge has spawned numerous critical responses across the web: Agent Swarm, An und fur sich, and one of my faves, Enemy Industry. People are unimpressed, and for good reason. Freud’s metaphor of the narcissistic wound is not a serious explanation of the kinds of generalizations Bryant makes, let alone one that would pass muster by his own criteria. Nevertheless I definitely like this particular story, or a social psychological variant of it, simply because it seems clear that we do the same groupish things we do, no matter how rarified the context. Everyone protects their interests by arguing what seems most obvious to them. It’s the frick’n program. How often do institutions collectively realize their futility?

Never. They huddle and pat asses. They waver and they rally. They grow old, wait for their apostate students to guide them with ink and condescension to the door.

You’re no different.

Let’s brush away a little straw in the interests of honesty and clarity. Embracing naturalism does not entail embracing reductionism. Science is a mess. Embracing naturalism need not entail embracing materialism, though many naturalists are materialists. Many are instrumentalists, inferentialists, or like Rorty, pragmatists more generally, just as inclined to load the horse and whip the cart as you. But it does entail embracing the mitigated skepticism that forces naturalism upon us in the first place. Humans are, as a rule, theoretically incompetent outside the institutional confines of the natural sciences. These very claims lie outside those institutional confines, and suffer the same incompetencies. Naturalism isn’t about playing the same old bullshit game with a different vocabulary. Things are upside down when it comes to all the most difficult questions. In naturalism, you play the concept game, and you inevitably over-commit on something, and you catch yourself, and you’re all like, ‘Fuck, maan, I overcommitted again.’ Embracing naturalism means embracing epistemic humility, appreciating just how, given the levels of abstraction we roll and smoke, we’re doomed to confabulate more than cogitate. That’s the way it works in philosophy now: What was an implicit inability to resolve or arbitrate disputes short of fashion or exhaustion has become explicit.

It’s hard to imagine what things should look like. Very hard. This is the beginning of the Great Conceptual Transition, the point where all our semantic intuitions are about to be stressed like never before, where everyone, no matter how deeply cemented in the academy, has heard the thundercrack through the stone. The smart money, I think, is that it’s all going to be swept away, that science, being science, is going to pick up the cup and empty us out–one final, errant libation.

We always forget. We always think our seeing is as big as the things we think we see, truth big, existence big, and so we confuse our own immobility for immovability. We forget the naturalist need only shove a knitting needle through our tear duct, rewrite us with a wiggle. We forget that our grand theories are smoke, and that science is the stack, the engine, and the screw propelling us toward the edge of the world–the void of the posthuman.

We always forget–and yet somehow we know. Imagine including all the ambient ambivalence you have regarding your work and profession in these packaged little proof-read pills you call ‘papers.’ Imagine including all the off-the-academic-record comments, the myriad petty condemnations, the she’s-strong-he’s-weak estimations, the between-sips-of-coffee conviction that it’s all bullshit, a game, another status rat race, only dressed in the world’s most voluminous robes. Imagine bottling that nagging sense of disproportion…

Think of the way ideas get you high.

It’s heady stuff, the sheer power of the natural–of theoretical knowledge. Given our incompetencies, it is perhaps inevitable that many will want to lay claim to it. It seems clear that as soon as people begin asserting that ‘social constructivism is a naturalism’ the concept has been stretched more than my sexy underwear. In his curious, ‘gotcha’ follow-up, Bryant introduces the crucial criterion of naturalism: Everything is natural. But this is meaningless if ‘natural’ is a barrel-wide thong, so let’s stipulate another criterion: Naturalism entails openness to the possibility that intentionality is illusory. If you cannot bring yourself to believe that this is a real, empirical possibility, then you are a transcendentalist plain and simple, one of those kids who dresses cool, but slips away as soon as some jock cracks the Jack.

Because the empirical possibility that intentionality is a kind of cognitive mirage, that meaning is merely an ‘informatic blur,’ is very real. Naturalism has to be as open as science is open to be naturalism. There’s no reason to assume that evolution did not saddle us with a profoundly deceptive self-understanding. We are need-to-know, and given the steep metabolic requirements of the brain, not to mention the structural infelicities incumbent upon any self-tracking information system, it is certainly possible, perhaps even probable, that we are fundamentally deceived about our own nature, that the counterintuitive gymnastics of the quantum has us as a qualitative counterpart. In naturalism, meaning is an open question, one that scientific research, not theoretical confabulation, will answer.

You continental philosophers suffer the same myriad cognitive biases as the rest of us, and what’s more, you’ve been trained to take astute advantage of them. You see science overthrowing the self, troubling the subject, and you see confirmation, when what you should worry about is the trend. You never pause to consider in your celebrations of fragmentation the possibility that everything is broken all the way down, that with the subject goes meaning and morality and so on. You need to realize that your noocentrism could be of a piece with biocentrism and geocentrism, that in essence, you’re simply stamping your feet, demanding that science leave, at the very least, this one last cherished conceit intact…

Man as meaning maker.

The Hard Problem is your crack in the door–your Messianic moment, as Adorno would say, summing up Nietzsche’s divine post-mortem. And my Messianic moment, too. The difference is that I think attacking it is the surest way to settle the matter. The world is filled with fucking apologists.

But in the end, it really doesn’t matter whether you rationalize some defence or not. Trends be trends, my friends. Call it the Big Fat Pessimistic Induction: Yours is a prescientific discourse, one whose domain is about to be overrun by the sciences. The black box of the brain has been cracked open, revealing more than enough to put your conceptual conceits on notice. Did you really think you would be the lone exception? That your discourse, out of all of them, would be the one to prevail, to hold back the empirical philistines that had conquered all corners of existence otherwise? It’s not quite at that point yet, but the longer you continue your discourses independent of the sciences, the more magical you become–the less cognitive. And with legitimacy goes institutional credibility. Like it or not, you have begun the perhaps not so long drift toward Oprah spots with Eckhart Tolle.

So sure, Bryant was wrong. But he’s also right. Regardless of any argument, any wank pro and con, the bonfire of the humanities has begun.

Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real

by rsbakker

Aphorism of the Day: Arguing the future in ignorance of the present is simply creationism turned upside down.  Instead of claiming we come from a God stamped in our image, you claim we will become a God stamped in our image.


So below is the printed version of the talk I gave yesterday at the 2012 Toronto Speculative Fiction Colloquium on “Beyond the Human.” It was the first time I had ever tried to incorporate visuals into my talk and it was – like many of my presentations – something of a bumbling mess. As always, I received many kudos for the content of my talk. The presentation–not so much!

Needless to say, my dream of becoming an inspirational speaker remains as elusive as ever.

The Chizine organizers did a spectacular job–this was easily one of the best run speaking gigs I’ve ever done, enough to make my own orgo-ineptitude seem rare and forgivable (I hope). I need to thank Dan Mellamphy and Nandita Biswas-Mellamphy for the gift of their company and conversation. I’d like to thank Peter Watts and Caitlin Sweet for being such gracious hosts on Saturday night, plying us with wine, wit, and pizza (and then more wit) until the wee hours (as middle-aged parents reckon them, anyway). And I need to thank you guys for filling the house. I’d also like to give a special shout to Jake, who flew all the way from California to attend the event. Crazy awesome, Dude!

For those of you who are virally minded, I’m interested in hearing back from some transhumanists. I’ve made different versions of this argument in a few different ways now, and I’m not sure I’ve received a single substantive response.


Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real


When alien archaeologists sift through the rubble of our society, which public message, out of all those they unearth, will be far and away the most common?

The answer to this question is painfully obvious–when you hear it, that is. Otherwise, it’s one of those things that is almost too obvious to be seen.

Sale… Sale–or some version of it. On sale. For sale. 10% off. 50% off. Bigger savings. Liquidation event!

Or, in other words, more for less.

Consumer society is far too complicated to be captured in any single phrase, but you could argue that no phrase better epitomizes its mangled essence. More for less. More for less. More for less.


Thus the intuitive resonance of “More Human than Human,” the infamous tagline of the Tyrell Corporation, or even ‘transhumanism’ more generally, which has been vigorously rebranding itself the past several months as ‘H+,’ an abbreviation of ‘Humanity plus.’

What I want to do is drop a few rocks into the hungry woodchipper of transhumanist enthusiasm. Transhumanism has no shortage of critics, but given a potent brand and some savvy marketing, it’s hard not to imagine the movement growing by leaps and bounds in the near future. And in all the argument back and forth, no one I know of (with the exception of David Roden, whose book I eagerly anticipate) has really paused to consider what I think is the most important issue of all. So what I want to do is isolate a single, straightforward question, one which the transhumanist has to be able to answer to anchor their claims in anything resembling rational discourse (exuberant discourse is a different story). The idea, quite simply, is to force them to hold the fingers they have crossed plain for everyone to see, because the fact is, the intelligibility of their entire program depends on research that is only just getting under way.

I think I can best sum up my position by quoting the philosopher Andy Clark, one of the world’s foremost theorists of consciousness and cognition, who after considering competing visions of our technological future, good and bad, writes, “Which vision will prove the most accurate depends, to some extent, on the technologies themselves, but it depends also–and crucially–upon a sensitive appreciation of our own nature” (Natural-Born Cyborgs, 173). It’s this latter condition, the ‘sensitive appreciation of our own nature,’ that is my concern, if only because this is precisely what I think Clark and just about everyone else fails to do.

First, we need to get clear on just how radical the human future has become. We can talk about the singularity, the transformative potential of nano-bio-info-technology, but it serves to look back as well, to consider what was arguably humanity’s last great break with its past, what I will here call the ‘Old Enlightenment.’ Even though no social historical moment so profound or complicated can be easily summarized, the following opening passage, taken from a 1784 essay called, “An Answer to the Question: ‘What is Enlightenment?’” by Immanuel Kant, is the one scholars are most inclined to cite:

Enlightenment is man’s emergence from his self-incurred immaturity. Immaturity is the inability to use one’s own reason without the guidance of another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolution and courage to use it without the guidance of another. The motto of the enlightenment is therefore: Sapere aude! Have courage to use your own understanding!” (“An Answer to the Question: ‘What is Enlightenment?’” 54)

Now how modern is this? For my own part, I can’t count all the sales pitches this resonates with, especially when it comes to that greatest of contradictions, the television commercial. What is Enlightenment? Freedom, Kant says. Autonomy, not from the political apparatus of the state (he was a subject of Frederick the Great, after all), but from the authority of traditional thought–from our ideological inheritance. More new. Less old. New good. Old bad. Or in other words, More better, less worse. The project of the Enlightenment, according to Kant, lies in the maximization of intellectual and moral freedom, which is to say, the repudiation of what we were and an openness to what we might become. Or, as we still habitually refer to it, ‘Progress.’ The Old Enlightenment effectively rebranded humanity as a work in progress, something that could be improved–enhanced–through various forms of social and personal investment. We even have a name for it, nowadays: ‘human capital.’

The transhumanists, in a sense, are offering nothing new in promising the new. And this is more than just ironic. Why? Because even though the Old Enlightenment was much less transformative socially and technologically than the New will almost certainly be, the transhumanists nevertheless assume that it was far more transformative ideologically. They assume, in other words, that the New Enlightenment will be more or less conceptually continuous with the Old. Where the Old Enlightenment offered freedom from our ideological inheritance, but left us trapped in our bodies, the New Enlightenment is offering freedom from our biological inheritance–while leaving our belief systems largely intact. They assume, quite literally, that technology will deliver more of what we want physically, not ideologically.

More better

Of course, everything hinges upon the ‘better,’ here. More is not a good in and of itself. Things like more flooding, more tequila, or more herpes, just for instance, plainly count as more worse (although, if the tequila is Patron, you might argue otherwise). What this means is that the concept of human value plays a profound role in any assessment of our posthuman future. So in the now canonical paper, “Transhumanist Values,” Nick Bostrom, the Director of the Future of Humanity Institute at Oxford University, enumerates the principal values of the transhumanist movement, and the reasons why they should be embraced. He even goes so far as to provide a wish list, an inventory of all the ways we can be ‘more human than human’–though he seems to prefer the term ‘enhanced.’ “The limitations of the human mode of being are so pervasive and familiar,” he writes, “that we often fail to notice them, and to question them requires manifesting an almost childlike naiveté.” And so he gives us a shopping list of our various incapacities: lifespan; intellectual capacity; body functionality; sensory modalities, special faculties and sensibilities; mood, energy, and self-control. He characterizes each of these categories as constraints, biological limits that effectively prevent us from reaching our true potential. He even provides a nifty little graph to visualize all that ‘more better’ out there, hanging like ripe fruit in the garden of our future, just waiting to be plucked, if only–as Kant would say–we possess the courage.

As a philosopher, he’s too sophisticated to assume that this biological emancipation will simply spring from the waxed loins of unfettered markets or any such nonsense. He fully expects humanity to be tested by this transformation–”[t]ranshumanism,” as he writes, “does not entail technological optimism”–so he offers transhumanism as a kind of moral beacon, a star that can safely lead us across the tumultuous waters of technological transformation to the land of More-most-better–or as he explicitly calls it elsewhere, Utopia.

And to his credit, he realizes that value itself is in play, such is the profundity of the transformation. But for reasons he never makes entirely clear, he doesn’t see this as a problem. “The conjecture,” he writes, “that there are greater values than we can currently fathom does not imply that values are not defined in terms of our current dispositions.” And so, armed with a mystically irrefutable blanket assertion, he goes on to characterize value itself as a commodity to be amassed: “Transhumanism,” he writes, “promotes the quest to develop further so that we can explore hitherto inaccessible realms of value.”

Now I’ve deliberately refrained from sarcasm up to this point, even though I think it is entirely deserved, given transhumanism’s troubling ideological tropes and explicit use of commercial advertising practices. You only need watch the OWN channel for five minutes to realize that hope sells. Heaven forbid I inject any anxiety into what is, on any account, an unavoidable, existential impasse. I mean, only the very fate of humanity lies in the balance. It’s not like your Netflix is going to be cancelled or anything.

For those unfortunates who’ve read my novel Neuropath, you know that I am nowhere near as sunny about the future as I sound. I think the future, to borrow an acronym from the Second World War, has to be–has to be–FUBAR. Totally and utterly, Fucked Up Beyond All Recognition. Now you could argue that transhumanism is at least aware of this possibility. You could even argue, as some Critical Posthumanists (as David Roden classifies them) do, that FUBAR is exactly what we need, given that the present is so incredibly FU. But I think none of these theorists really has a clear grasp of the stakes. (And how could they, when I so clearly do?)

Transhumanism may not, as Nick Bostrom says, entail ‘technological optimism,’ but as I hope to show you, it most definitely entails scientific optimism. Because you see, this is precisely what falls between the cracks in debates on the posthuman: everyone is so interested in what Techno-Santa has in his big fat bag of More-better, that they forget to take a hard look at Techno-Santa, himself, the science that makes all the goodies, from the cosmetic to the apocalyptic, possible. Santa decides what to put in the bag, and as I hope to show you, we have no reason whatsoever to trust the fat bastard. In fact, I think we have good reason to think he’s going to screw us but good.

As you might expect, the word ‘human’ gets bandied about quite a bit in these debates–we are, after all, our own favourite topic of conversation, and who doesn’t adore daydreaming about winning the lottery? And by and large, the term is presented as a kind of given: after all, we are human, and as such, obviously know pretty much all we need to know about what it means to be human–don’t we?

Don’t we?


This is essentially Andy Clark’s take in Natural-Born Cyborgs: Given what we now know about human nature, he argues, we should see that our nascent or impending union with our technology is as natural as can be, simply because, in an important sense, we have always been cyborgs, which is to say, at one with our technologies. Clark is a famous proponent of something called the Extended Mind Thesis, and for more than a decade he has argued forcefully that human consciousness is not something confined to our skull, but rather spills out and inheres in the environmental systems that embed the neural. He thinks consciousness is an interactionist phenomenon, something that can only be understood in terms of neuro-environmental loops. Since he genuinely believes this, he takes it as a given in his consideration of our cyborg future.

But of course, it is nowhere near a ‘given.’ It isn’t even a scientific controversy: it’s a speculative philosophical opinion. Fascinating, certainly. But worth gambling the future of humanity?

My opinion is equally speculative, equally philosophical–but unlike Clark, I don’t need to assume that it’s true to make my case, only that it’s a viable scientific possibility. Nick Bostrom, of all people, actually explains it best, even though he’s arrogant enough to think he’s arguing for his own emancipatory thesis!

“Further, our human brains may cap our ability to discover philosophical and scientific truths. It is possible that the failure of philosophical research to arrive at solid, generally accepted answers to many of the traditional big philosophical questions could be due to the fact that we are not smart enough to be successful in this kind of enquiry. Our cognitive limitations may be confining us in a Platonic cave, where the best we can do is theorize about “shadows”, that is, representations that are sufficiently oversimplified and dumbed-down to fit inside a human brain.” (“Transhumanist Values”)

Now this is precisely what I think, that our ‘cognitive limitations’ have forced us to make do with ‘shadows,’ ‘oversimplified and dumbed-down’ information, particularly regarding ourselves–which is to say, the human. Since I’ve already quoted the opening passage from Kant’s “What is Enlightenment?” it perhaps serves, at this point, to quote the closing passage. Speaking of the importance of civil freedom, Kant concludes: “Eventually it even influences the principles of governments, which find that they can themselves profit by treating man, who is more than a machine, in a manner appropriate to his dignity” (60). Kant, given the science of his day, could still assert a profound distinction between man, the possessor of values, and machine, the possessor of none. Nowadays, however, the black box of the human brain has been cracked open, and the secrets that have come tumbling out would have made Kant shake with terror or fury. Man, we now know, is a machine–that much is simple. The question, and I assure you it is very real, is one of how things like moral dignity–which is to say, things like value–arise from this machine, if at all.

It literally could be the case that value is another one of these ‘shadows,’ an ‘oversimplified’ and ‘dumbed-down’ way to make the complexities of evolutionary effectiveness ‘fit inside a human brain.’ It now seems pretty clear, for instance, that the ‘feeling of willing’ is a biological subreption, a cognitive illusion that turns on our utter blindness to the neural antecedents to our decisions and thoughts. The same seems to be the case with our feeling of certainty. It’s also becoming clear that we only think we have direct access to things like our beliefs and motivations, that, in point of fact, we use the same ‘best guess’ machinery that we use to interpret the behaviour of others to interpret ourselves as well.

The list goes on. But the only thing that’s clear at this point is that we humans are not what we thought we were. We’re something else. Perhaps something else entirely. The great irony of posthuman studies is that you find so many people puzzling and pondering the what, when, and how of our ceasing to be human in the future, when essentially that process is happening now, as we speak. Put in philosophical terms, the ‘posthuman’ could be an epistemological achievement rather than an ontological one. It could be that our descendants will look back and laugh their gearboxes off at the notion of a bunch of soulless robots worrying about the consequences of becoming a bunch of soulless robots.

So here’s the question I would ask Mr. Bostrom: Which human are you talking about? The one you hope that we are, or the one that science will show us to be?

Either way, transhumanism as praxis–as a social movement requiring real-world action like membership drives and market branding–is well and truly ‘forked,’ to use a chess analogy: ‘Better living through science’ cannot be your foundational assumption unless you are willing to seriously consider what science has to say. You don’t get to pick and choose which traditional illusion you get to cling to.

Transhumanism, if you think about it, should be renamed transconfusionism, and rebranded as X+.

In a sense what I’m saying is pretty straightforward: no posthumanism that fails to consider the problem of the human (which is just to say, the problem of meaning and value) is worthy of the name. Such posthumanisms, I think anyway, are little more than wishful thinking, fantasies that pretend otherwise. Why? Because at no time in human history has the nature of the human been more in doubt.

But there has to be more to the picture, doesn’t there? This argument is just too obvious, too straightforward, to have been ‘overlooked’ these past couple decades. Or maybe not.

The fact is, no matter how eloquently I argue, no matter how compelling the evidence I adduce, how striking or disturbing the examples, next to no one in this room is capable of slipping the intuitive noose of who and what they think they are. The seminal American philosopher Wilfrid Sellars calls this the Manifest Image, the sticky sense of subjectivity provided by our immediate intuitions–and here’s the thing, no matter what science has to say (let alone a fantasy geek with a morbid fascination with consciousness and cognition). To genuinely think the posthuman requires us to see past our apparent, or manifest, humanity–and this, it turns out, is difficult in the extreme. So, to make my argument stick, I want to leave you with a way of understanding both why my argument is so destructive of transhumanism, and why that destructiveness is nevertheless so difficult to conceive, let alone to believe.

Look at it this way. The explanatory paradigm of the life sciences is mechanistic. Either we humans are machines, or everything from the Krebs cycle to cell mitosis is magical. This puts the question of human morality and meaning in an explanatory pickle, because, for whatever reason, the concepts belonging to morality and meaning just don’t make sense in mechanistic terms. So either we need to understand how machines like us generate meaning and morality, or we need to understand how machines like us hallucinate meaning and morality.

The former is, without any doubt, the majority position. But the latter, the position that occupies my time, is slowly growing, as is the mountain of counterintuitive findings in the sciences of the mind and brain. I have, quite against my inclination, prepared a handful of images to help you visualize this latter possibility, what I call the Blind Brain Theory.

Imagine we had perfect introspective access, so that each time we reflected on ourselves we were confronted with something like this:

We would see it all, all the wheels and gears behind what William James famously called the “blooming, buzzing confusion” of conscious life. Would there be any ‘choice’ in this system? Obviously not, just neural mechanisms picking up where environmental mechanisms have left off. How about ‘desire’? Again, nothing we really could identify as such, given that we would know, in intimate detail, the particulars of the circuits that keep our organism in homeostatic equilibrium with our environments. Well, how about morals, the values that guide us this way and that? Once again, it’s hard to say what these might be, given that we could, at any moment, inspect the mechanistic regularities that in fact govern our behaviour. So no right or wrong? Well, what would these be? Of course, given the unpredictability of events, the mechanism would malfunction periodically, throw its wife’s work slacks into the dryer, maybe have a tooth or two knocked out of its gears. But this would only provide information regarding the reliability of its systems, not its ‘moral character.’

Now imagine dialling back the information available for introspective access, so that your ability to perfectly discriminate the workings of your brain becomes foggy:

Now imagine a cost-effectiveness expert (named ‘Evolution’) comes in, and tells you that even your foggy but complete access is far, far too expensive: computation costs calories, you know! So he goes through and begins blacking out whole regions of access according to arcane requirements only he is aware of. What’s worse, he’s drunk and stoned, and so there’s a haphazard, slap-dash element to the whole procedure, leaving you with something like this:

But of course, this foggy and fractional picture actually presumes that you have direct introspective access to information regarding the absence of information, when this is plainly not the case, and not required, given the rigours of your paleolithic existence. This means you can no longer intuit the fractional nature of your introspective intuitions, so that the far-flung fragments of access you possess actually seem like unified and sufficient wholes, leaving you with:

This impressionistic mess is your baseline. Your mind. But of course, it doesn’t intuitively seem like an impressionistic mess–quite the opposite, in fact. But this is simply because it is your baseline, your only yardstick. I know it seems impossible, but consider, if dreams lacked the contrast of waking life, they would be the baseline for lucidity, coherence, and truth. Likewise, there are degrees of introspective access–degrees of consciousness–that would make what you are experiencing this very moment seem like little more than a pageant of phantasmagorical absurdities.

The more the sciences of the brain discover, the more they are revealing that consciousness and its supposed verities–like value–are confused and fractional. This is the trend. If it persists, then meaning and morality could very well turn out to be artifacts of blindness and neglect–illusions precisely to the degree that they seem whole and sufficient. If meaning and morality are best thought of as hallucinations, then the human, as it has been understood down through the ages, from the construction of Khufu to the first performance of Hamlet to the launch of Sputnik, never existed, and, in a crazy sense, we have been posthuman all along. And the transhuman program as envisioned by the likes of Nick Bostrom becomes little more than a hope founded on a pipedream.

And our future becomes more radically alien than any of us could possibly conceive, let alone imagine.

Less Than ‘Zero Qualia’: Or Why Getting Rid of Qualia Allows us to Recover Experience (A Reply to Keith Frankish)

by rsbakker

Aphorism of the Day: Here, it turns out, is so bloody small that even experience finds itself evicted and housed over there.


From Philosophy TV:

Richard Brown: And you know there is a–I don’t want to say growing movement–but there is a disturbing undercurrent [laughs] of philosophers who are out and saying that they are in fact zombies. So I don’t know if you are aware of this or not but…

Keith Frankish: I’m… [laughs] Not phenomenally.

Richard Brown: Okay… [laughs]

Keith Frankish: [laughs] Yes, I might align myself with this ‘disturbing undercurrent.’


I think philosophy of mind–as an institution–is caught in a great dilemma: either it accepts the parochial, heuristic nature of intentional cognition, or it condemns itself to never understanding human consciousness. This was the basis of my interpretation of Frank Jackson’s Mary argument as a ‘heuristic scope of application detector,’ a way to make the limits of human environmental cognition known. Why does it seem possible for Mary to know everything about red without ever having experienced red? Why does the additional information provided by experiencing red not obviously count as ‘knowledge’? In other words, why the conflict of intuitions?

The problem, in a nutshell, has to do with informatic neglect (see my previous post for more detail). Heuristic cognition leverages computational efficiencies by ignoring information. Intentional cognition, in particular, systematically neglects all the neurofunctional information pertaining to our environmental tracking. In a sense, this is all that ‘transparency’ is: blindness to the mechanisms responsible for environmental cognition. Given the functional independence of our environments, neglecting this information pays real computational dividends. Given reliable tracking systems, information regarding those systems is not necessary to cognize systems tracked, but only so long as those systems tracked are not ‘functionally entangled’ with the systems tracking. You can puzzle through a small engine repair because the systems doing the tracking in no way interfere with the system tracked. What you might call the medial causal relations that enable you to repair small engines in no way impinge on the lateral causal relations that make engines break down or run.

This is why intentional cognition is almost environmentally universal, simply because the environmental systems tracked are almost universally functionally independent of our cognition. I say ‘almost,’ of course, because on the microscopic level this functional independence breaks down as the lateral systems tracked become sensitive to ‘interference’ from medial systems tracking: if photons leave small engines untouched, they have dramatic effects on subatomic particles. This is also why intentional cognition can only get consciousness wrong. When we attempt to cognize conscious experience, we have an instance of a cognitive system that systematically neglects medial causal relationships attempting to track a functionally entangled system as if it were independent. The lateral and the medial are one and the same in these instances of attempted cognition, which quite simply means that neither can be cognized or ‘intuited.’

And this, on the Blind Brain Theory (BBT), is the primary hook from which the ‘mind/body’ problem hangs. What we ‘cognize’ when we draw conscious experience into deliberative cognition is quite literally analogous to Anton’s Syndrome: we think we see everything there is to be seen, and yet we really don’t see anything at all. Consciousness, as it appears to us, is a kind of ‘forced perspective’ illusion. Given that we are brainbound, or functionally entangled, and given the environmental orientation of our cognitive systems, we have no way to ‘intuit’ consciousness absent gross distortions. As such, consciousness as it appears is literally inexplicable, period, let alone in natural terms. It can only be explained away, leaving a remainder, consciousness as it is, as the only thing science need concern itself with.

In this post, I want to consider a recent ‘radical position’ in the philosophy of mind, that belonging to Keith Frankish, and show 1) the facility with which his argument can be recapitulated, even explained, in BBT terms; and 2) how it is nowhere near radical enough.

In his “Quining Diet Qualia,” Frankish notes that defences of what he terms ‘classic qualia,’ understood as “introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective” (1-2) have largely vanished from the literature, primarily because ‘intrinsic properties’ resist explanation in either functional or representational terms. Instead, theorists have opted for a ‘watered-down conception’ of qualia in terms of “phenomenal character, subjective feel, raw feel, or ‘what-is-it-likeness’” (2), what Frankish calls ‘diet qualia.’ The idea is that talking about qualia in these terms makes them palatable to both dualists and physicalists, or ‘theory-neutral,’ as Frankish puts it, since everyone assumes that qualia, in this restricted sense, at least, are real.

But Frankish doubts that qualia make sense in even this minimal sense. To illustrate his suspicion, he introduces the concept of ‘zero qualia,’ which he defines as those “properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective” (4). His strategy will be to use zero qualia to show that diet qualia don’t differ from classic qualia in any meaningful sense.

Now, one of the things that caught my eye in this paper was the striking resemblance between zero qualia and my phenophage thought experiment from several weeks back:

Imagine a viscous, gelatinous alien species that crawls into human ear canals as they sleep, then over the course of the night infiltrates the conscious subsystems of the brain. Called phenophages, these creatures literally feed on the ‘what-likeness’ of conscious experience. They twine about the global broadcasting architecture of the thalamocortical system, shunting and devouring what would have been conscious phenomenal inputs. In order to escape detection, they disconnect any system that could alert its host to the absence of phenomenal experience. More insidiously still, they feed-forward any information the missing phenomenal experience would have provided the cognitive systems of its host, so that humans hosting phenophages comport themselves as if they possessed phenomenal experience in all ways. They drive through rush hour traffic, complain about the sun in their eyes, compliment their spouses’ choice of clothing, ponder the difference between perfumes, extol the gustatory virtues of their favourite restaurant, and so on. (TPB 21/09/2012)

By defining zero qualia in terms of their cognitive effects, Frankish has essentially generated a phenophagic concept of qualia–which is to say, qualia that aren’t qualitative at all. I-know-I-know, but before you let that squint get the better of you, consider the way this conceptualization recontextualizes the supposedly minimal commitment belonging to diet qualia. By detaching the supposed cognitive effects of phenomenality from phenomenality, zero qualia raise the question of just what this supposedly neutral ‘phenomenal character’ is. As Frankish puts it, “What could a phenomenal character be, if not a classical quale? How could a phenomenal residue remain when intrinsicality, ineffability, and subjectivity have been stripped away?” (4). Zero qualia, in other words, have the effect of showing that diet qualia, despite the label, are packed with classic calories:

The worry can be put another way. There are competing pressures on the concept of diet qualia. On the one hand, it needs to be weak enough to distinguish it from that of classic qualia, so that functional or representational theories of consciousness are not ruled out a priori. On the other hand, it needs to be strong enough to distinguish it from the concept of zero qualia, so that belief in diet qualia counts as realism about phenomenal consciousness. My suggestion is that there is no coherent concept that fits this bill. In short, I understand what classic qualia are, and I understand what zero qualia are, but I don’t understand what diet qualia are; I suspect the concept has no distinctive content. (4-5)

Frankish then continues to show why he thinks various attempts to save the concept are doomed to failure. The dilemma is structured so that either the proponent of diet qualia takes the further step of defining ‘phenomenal character,’ a conceptual banana peel that sends them skidding back into the arms of classic qualia, or they explain why dispositions aren’t what they really meant all along.

Now on the BBT account, qualia need to be rethought within a consciousness and cognition structured and fissured by informatic neglect. The heuristic nature of intentional cognition means that medial neurofunctionality is always neglected. And as I said above, this means deliberative reflection on conscious experience constitutes a clear cut ‘scope violation,’ an instance of using a heuristic to solve a problem it never evolved to tackle. Introspective intentional cognition, on this account, is akin to climbing trees with flippers.

Of course it doesn’t seem this way–quite the opposite in fact–and for reasons that BBT predicts. Like medial neurofunctionality, the limits of intentional cognition are also lost to neglect. Short of learning those limits–the scope of applicability of intentional cognition–universality is bound to be the default assumption. So our intentional cognitive systems make sense of what they can, oblivious of their incapacity. The ease with which they conjure worlds out of pixels and paint, for instance, demonstrates their power and automaticity. BBT suggests that something analogous happens when intentional cognition is fed metacognitive information: the information is organized in a manner amenable to intentional, environmental cognition.

As asserted above, the point of the intentional heuristic is to isolate and troubleshoot lateral environmental relations (normative or causal) against a horizon of variable information access. Thus it ‘lateralizes,’ you could say, the first-person, turns it into a little environment. The problem is that this ‘phenomenal environment’ literally possesses no horizon of variable access (cognition is functionally entangled, or ‘brainbound,’ with reference to experience) and, thanks to the interference of the medial neurofunctionality neglected, no lateral causal relationships. Like Plato’s cave-dwellers, intentional cognition is quite simply stuck with information it cannot cognize. ‘Phenomenal character’ becomes a round peg in a world of cognitive squares: as it has to be on the BBT account.

By making the move to ‘cognitive dispositions,’ zero qualia bank on our scientific knowledge of the otherwise neglected axis of medial neurofunctionality. The challenge, for the diet qualia advocate, is to explain how phenomenal character anchors this medial neurofunctionality (understood as cognitive dispositions), to explain, in other words, what role ‘phenomenal character’ plays–if any. But of course, thanks to the heuristic short-circuit described above, this is precisely what the diet qualia advocate cannot do. The question then becomes, of course, one of what ‘diet’ amounts to. Either one moves inside the black box and embraces classic qualia or one moves outside it and settles for zero qualia.

But of course, neither of these options is tenable either. Dispositional accounts, though epistemologically circumspect, have a tendency to be empirically inert: the job of science is to explain dispositions, which is to say, use theory to crack open black boxes. Epistemological modesty isn’t always a virtue. And besides, there remains the fact that we actually do have these experiences!

Frankish’s real point, of course, is that philosophy of mind has made no progress whatsoever in the move to diet qualia, that phenomenality remains as impervious as ever to functional or representational explanation and understanding. But he remains as mystified as everyone else about the origins and dynamics of the problem. I would append, ‘only more honestly so,’ were it not for claims like, “I think everyone agrees that zero qualia exist,” in the interview referenced above. I certainly don’t, and for reasons that I think should be quite clear.

For one, consider how his ‘cognitive dispositions’ only run one way, which is to say, from the black box of phenomenality, when the medial neurofunctionality occluded by metacognitive deliberation almost certainly runs back and forth, or in other words, is exceedingly tangled. And this underscores the artificiality of zero qualia, the way they can only do their intuitive work by submitting to what is a thoroughly distorted understanding of conscious experience in the first place. The very notion that phenomenal character can be ‘boxed,’ cleanly parsed from its cognitive consequences, is an obvious artifact of neurofunctional informatic neglect, the way intentional cognition automatically organizes information for troubleshooting.

On the BBT account, the problem lies in the assumption that intentional cognition is universal when it is clearly heuristic, which is to say, an information neglecting problem-solving device adapted to specific problem-solving contexts. The ‘qualia’ that everyone has been busily arguing about and pondering in consciousness research and the philosophy of mind are simply the artifacts of a clear (once you know what to look for) heuristic scope violation. There are no such things, be they classic, diet, or zero.

Now given that the universality of intentional cognition is the default assumption of nearly every soul reading this, I’m certain that what I’m about to say will sound thoroughly preposterous, but I assure you it possesses its own, counterintuitive yet compelling logic (once you grasp the gestalt, that is!). I want to suggest that it makes no more sense to speak of qualia ‘existing’ than it does to speak of individual letters ‘meaning.’ Qualia are subexistential in the same way that phonemes are ‘subsemantic.’

But they must be something! your intuitions cry–and so they must, given that intentional cognition is blind to its heuristic limits, to the very possibility that it might be parochial. It has no other choice but to treat the first-person as a variant of the third, to organize it for the kinds of environmental troubleshooting it is adapted to do. After all, it works everywhere else: Why not here? Well, as we have seen, because qualia are neurofunctionally integral to the effective functioning of intentional cognition, they are a medial phenomenon, and as such are utterly inaccessible to intentional cognition, given the structure of informatic neglect that characterizes it.

But this doesn’t mean we can’t understand them, that McGinn and the Mysterians are correct. McGinn, you could say, glimpsed the way phenomenality might exceed the reach of intentional cognition while still assuming that the latter was humanly universal, that we couldn’t gerrymander ways to see around our intuitions, as we have, for example, with general relativity or quantum mechanics.

Consciousness presents us with precisely the same dilemma: cling to heuristic intuitions that simply do not apply, or forge ahead and make what sense of these things as we can. If the concept ‘existence’ belongs to some heuristic apparatus, then the notion that qualia are subexistential is merely counterintuitive. Otherwise, relieved of the need to force them into a heuristic never designed to accommodate them, we can make very clear sense of them as phenomemes, the combinatorial building blocks of ‘existence,’ the way phonemes are the combinatorial building blocks of ‘meaning.’ They do not ‘exist’ the way apples, say, exist in intentional cognition, simply because they belong to a different format. ‘What is redness?’ makes no sense if we ask it in the same intuitive way we ask, ‘What are apples?’ The key, again, is to avoid tripping over our heuristics. Though redness eludes the gross, categorical granularity of intentional cognition, we can nevertheless talk apples and rednesses together in terms of nonsemantic information–which is just to say, in terms belonging to what the life sciences take us to be: evolved, environmentally-embedded, information processing systems.

Because of course, the flip side of all this confusion regarding qualia is the question of how a mere machine can presume to ‘know truth,’ as opposed to happening to stand in certain informatic relationships with its environments, some effective, others not. When it comes to conundrums involving intentionality, qualia are by no means lonely.

‘V’ is for Defeat: The Total and Utter Annihilation of Representational Theories of Mind

by rsbakker

Aphorism of the Day: The mere fact of cartoons shouts the environmental orientation of our cognitive heuristics. A handful of lines is all the brain needs to create a world. South Park, of all things, likely means we have no idea what we’re talking about when we purport to explain ‘consciousness.’


Some kind of pervasive and elusive incompatibility haunts the relation between our intuitive self-understanding, what Wilfrid Sellars famously referred to as the ‘Manifest Image,’ and our ever deepening natural self-understanding, the ‘Scientific Image.’ The question is really quite simple: How do we make intentionality consistent with causality? How do we make the intentional logic of the subject fit with the causal logic of the object? Most philosophers are what might be called semantic Hawks, thinkers bent on finding ways of overcoming this incompatibility, hoping against hope that the resolution will leap out of the conceptual or empirical details. Some are semantic Diplomats, thinkers who have thrown their hands up, arguing the cognitive autonomy of the two domains. And still others, the semantic Profiteers, simply want to translate the causal into an expression of the intentional, to make science one particularly powerful ‘language game’ among others.

I’m what you might call a semantic Defeatist, someone convinced the only real solution is to explain the whole thing away. I think the Hawks are fighting a battle they’ve literally evolved to lose, that the Diplomats, despite their best intentions, are negotiating with ghosts, and that the Profiteers have simply found a way to load the horse and whip the cart. Defeatists, of course, rarely prevail, but they do persist. And so the madness of arguing for the profound and troubling structural role blindness plays in human consciousness and cognition continues. Existence understood as the tissue of neglect. Yee. Hah.

Today, I want to discuss the semantic Hawks, provide a historical and conceptual cartoon of what makes them so warlike, and then sketch out, as best as I can, why I think they are doomed to lose their war.

Like their political counterparts, semantic Hawks are motivated by conviction, particularly regarding the nature of meaning, representation, and truth. Given the millennial philosophical miasma surrounding these concepts, one might wonder how anyone could muster any conviction of any kind regarding their ‘nature.’ I know back in my continental philosophical days it was one of those ‘other guy’ head-scratchers, the preposterous commitment that made so much so-called ‘analytic thought’ sound more like religion than philosophy. But that was bigotry on my part, plain and simple. The Hawks constitute the semantic majority for damn good reasons. They are eminently sensible, which, as we shall see, is precisely the problem.

Historically, you have the influence of Frege and Russell at the beginning of the 20th century. A hundred and fifty years previous, Hume’s examinations of human nature had dramatically disclosed the limits that subjectivity placed on our attempts to think objective truth. Toward the end of the 18th century, Kant thought he had seen a way through: if we could deduce the categorical nature of that subjectivity, then we could, at the very least, grasp the true-for-us. But this just led to Hegel and the delicious-but-not-so-nutritious absurdity of reducing everything to ‘objective subjectivity.’ What Frege and Russell offered was nothing less than a way to pop the suffocating bubble of subjectivity, theories of meaning that seemed to put language, and therefore language users, in clear contact with the in-itself.

Practically speaking, the development of formal semantics was like cracking open caulked-shut windows. Given a handful of rules, you could formalize what seem to be the truth preserving features of natural languages. Of course, it only captured a limited set of linguistic features, and even within this domain it was plagued with puzzles and explanatory conundrums. But it was extraordinarily powerful nonetheless, so much so that it seemed natural to assume that with a little ingenious conceptual work all those pesky wrinkles could be ironed out, and we could jam with a perfectly-pressed Frock of Ages.

The theories of meaning arising out of these considerations in the philosophy of language also seemed–and still seem–to nicely dovetail with parallel questions in the philosophy of mind. Like language, conscious experience clearly seems to put us in logical contact with the world. Experiences, like claims, can be true or false. Phenomenology, like phonology, seems to vanish in the presentation of something else. And this drops us square in the lap of representationalism’s power as an explanatory paradigm: intentionality, meaning, and normativity are not simply central to human cognition, they are the very things that must be explained.

Conscious experience is representational: the reason we see through experience is the same as the reason we see through paintings or television screens. What is presented–qualia or paint or pixelated light–re-presents something else from the world, the representational content. What could be more obvious?

With the development of computers toward the middle of the 20th century, theorists in philosophy and psychology suddenly found themselves with a conspicuously mechanistic model of how it might all work. Human cognition, both personal and subpersonal, could be understood in terms of computations performed on representations. The relation of the mental to the neural, on this account, was no more mysterious than the relation between software and hardware (which, as it turns out, is every bit as mysterious!). And so, given this combination of intuitive appeal and continuity with other ‘hard’ research programs, representational theories of mind proved well nigh irresistible, not only to Anglo-American philosophy, but to a psychological establishment keen to go to rehab after a prolonged bout of behaviourism.

The real problem, aside from deciding the best way to characterize the theoretical details of the representational picture, is one of ironing out the causal details. The brain, after all, is biomechanical, an object belonging to the domain of the life sciences more generally. If you want to avoid the hubristic and (from a scientific perspective) preposterous enterprise of positing supra-natural entities, you need to explain how all this representation business, well, actually works. Thus the decades-long project of theorizing causal accounts of content.

The big problem, it turns out, is one of providing a natural account of content determination that simultaneously makes sense of misrepresentation. Jerry Fodor famously frames the difficulty in terms of the ‘disjunction problem’: you can say that your representation ‘dog’ is causally triggered by sensing a dog in your environment, which seems well and fine. The problem is that your representation ‘dog’ is sometimes causally triggered by sensing a fox in your environment (perhaps in less than ideal observational conditions). So the question becomes what, causally, makes your representation ‘dog’ a representation of a dog as opposed to a representation of a dog or fox. What, in other words, causally explains the way representations can be wrong? This may seem innocuous at first glance, but the very intelligibility of the representational account depends on it. Without some natural way of sorting content determining causes (dogs) from non-content determining causes (foxes or anything else) you quite simply have no causal account of content.
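The disjunction problem can be made concrete with a toy sketch (the detector, the stimulus names, and the ‘bad light’ condition are all hypothetical, not anything from Fodor): a device whose tokenings are reliably caused by dogs, but in poor conditions also by foxes. Every causal fact about the device is on the table, yet nothing in those facts privileges ‘dog’ over ‘dog or fox’ as its content:

```python
# A toy 'dog detector' (all names hypothetical). Its tokenings are
# caused by dogs, but in bad light also by foxes.
def detector_fires(stimulus, good_light=True):
    if stimulus == "dog":
        return True
    if stimulus == "fox" and not good_light:
        return True
    return False

# The detector's complete causal profile, across all conditions:
causes = {s for s in ("dog", "fox", "cat")
          if detector_fires(s, good_light=False)}
assert causes == {"dog", "fox"}

# Reading content directly off causes yields the disjunction, never
# 'dog' alone -- so the causal facts by themselves cannot make the
# fox-triggered tokening a MISrepresentation.
content_from_causes = " or ".join(sorted(causes))
assert content_from_causes == "dog or fox"
```

The point of the sketch is only that the causal profile underdetermines the content: to call the fox case an error, you already need the content fixed, which is precisely what the causal account was supposed to deliver.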

After decades of devious ingenuity, critics (most notably Fodor himself) have always been able to show how purported solutions run afoul of some variant of this problem. So why not strike your colours and move on, as a Defeatist like me advocates? The thing to remember is that there are at least two explanatory devils in this particular philosophical room: for many, conscious experience, short of representational theories, seems so baffling that the difficulties pertaining to causal content determination are a bargain in comparison. And this is one big reason why anti-representational accounts have made only modest headway over the intervening years: they literally seem to throw the baby out with the bathwater.

For the Hawk, intentionality is a primary explanandum. Recall the power of formal semantics I alluded to above: not only do logic and mathematics work, not only do they make science itself possible, they seem to be intentional through and through (though BBT disputes even this!). Given that intentionality is every bit as ‘real’ as causality, the question becomes one of how they come together in our heads. The responsible thing, it would seem, is to chalk up their track record of theoretical failure to mere factual ignorance, to simply continue taking runs at the problem armed with more and more neuroscientific knowledge.

As a Defeatist, however, I think the problem is thoroughly paradigmatic. I don’t worry about throwing out the baby with the bathwater simply because I’m not convinced the baby ever existed (unlike the Profiteers, for instance, who think the baby was switched in the hospital). For the Hawk, however, this means I have nothing short of an extraordinary explanatory and argumentative burden to discharge: not only do I need to explain why there’s no intentional baby, I need to explain why so many are so convinced that there is. Even worse, it would seem that I need to also explain away formal semantics itself, or at least account for its myriad and quite dazzling achievements. Worst of all, I probably need to explain Truth on top of everything.

The Blind Brain Theory (BBT) has crazy things to say about all these things. But I lack the space to do much more than wedge my foot in the door here. None of these burdens will be discharged in what follows. If I manage to convince a soul or two that their ingenuity is better wasted elsewhere, so much the better. But all I really want to show is that BBT is worth the time and effort required to understand it on its own terms. And I hope to do this by using it to formulate two, interrelated questions that I think are so straightforward and so obviously destructive of the representationalist paradigm, they might actually merit the hyperbole of this post’s title.

The first point I want to make has to do with heuristics, particularly as they are conceived by the growing number of researchers studying what is called ‘ecological rationality.’ Any strategy that solves problems by ignoring available information is heuristic. ‘Rules of thumb’ work by means of granularity and neglect, by ignoring complexities or entire domains if need be. As a result, they are problem specific: they only work when applied to a limited set of specifically structured challenges. As Todd and Gigerenzer write,

“The concept of ecological rationality–of specific decision-making tools fit to particular environments–is intimately linked to that of the adaptive toolbox. Traditional theories of rationality that instead assume one single decision mechanism do not even ask when this universal tool works better or worse than any other, because it is the only one thought to exist. Yet the empirical evidence looks clear: Humans and other animals rely on multiple cognitive tools. And cognition in an uncertain world would be inferior, inflexible, and inefficient with a general purpose optimizing calculator…” (Ecological Rationality, 14)

Ecological rationality looks at cognition in thoroughly evolutionary terms, which is to say, as adaptations, as a ‘toolbox’ of myriad biomechanical responses to various environmental challenges. It turns out that optimization strategies, problem-solving approaches that seek to maximize information availability in an attempt to generate optimal solutions, are not only much more computationally cumbersome (and thus an evolutionary liability), they are also often less effective than far simpler, far cheaper, quicker, and more robust heuristic strategies.

Todd and Gigerenzer give the example of catching a baseball. Until recently the prevailing assumption was that fielders unconsciously used a complex algorithm to estimate distance, velocity, angle, resistance, wind, and so on, to calculate the ball’s trajectory and anticipate where it would land–all within a matter of seconds. As it turns out, they actually rely on rules of thumb like the gaze heuristic, where they fix their gaze on the ball high up and start running so that the image of the ball rises at a continuous rate relative to their gaze and position. Rather than calculate the ball’s trajectory, they let the trajectory steer them in.
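A minimal simulation (with invented numbers for launch speed, starting position, and running speed) shows why this works. The fielder below never computes a trajectory; it calibrates the rate at which the ball’s image is rising from a single early glance, then simply runs so that the rise stays steady, and ends up at the landing point anyway:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ball_position(t, v0, launch_deg):
    """Projectile position (downrange x, height) at time t, no air drag."""
    vx = v0 * math.cos(math.radians(launch_deg))
    vy = v0 * math.sin(math.radians(launch_deg))
    return vx * t, max(0.0, vy * t - 0.5 * G * t * t)

def chase_ball(v0=30.0, launch_deg=60.0, start_x=70.0, max_speed=9.0, dt=0.01):
    """Run so that the tangent of the gaze angle keeps rising at the
    constant rate observed in the first instant; never compute where
    the ball will land."""
    vy = v0 * math.sin(math.radians(launch_deg))
    flight = 2 * vy / G  # used only to end the loop and score the outcome
    landing_x = v0 * math.cos(math.radians(launch_deg)) * flight

    fielder, t = start_x, dt
    bx, by = ball_position(t, v0, launch_deg)
    rate = (by / (fielder - bx)) / t      # calibrated once, from one glance
    while t < flight:
        t += dt
        bx, by = ball_position(t, v0, launch_deg)
        desired = bx + by / (rate * t)    # spot that keeps the rise steady
        step = desired - fielder
        fielder += max(-max_speed * dt, min(max_speed * dt, step))
    return fielder, landing_x
```

Running `chase_ball()` leaves the fielder within arm’s reach of the landing point, despite the heuristic neglecting velocity, wind, drag, and the trajectory itself: the neglected information is simply not needed for this problem.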

For our purposes, the important aspects of heuristic troubleshooting are 1) informatic neglect, the strategic omission of information; and 2) ecological matching, the way heuristics are only effective for a certain set of problems.

As far as I know, no one in consciousness research and philosophy of mind circles has bothered to think through the more global implications of informatic neglect on cognition, let alone consciousness. Most everyone with a naturalistic bent accepts the heuristic, plural nature of human and animal cognition. But no one to my knowledge has thought through the fact that the ‘representational paradigm’ is itself a heuristic.

How can we know the ‘R-paradigm’ is heuristic? Well… Because of the need to provide a causal account of content-determination!

Causal information, in other words, is the information neglected, the very thing the R-paradigm elides. I think you could mount a strong argument that the R-paradigm has to be heuristic simply on evolutionary, developmental grounds. But the primary reason is structural: there is simply no way for the brain to track the causal complexities of its own cognitive systems, even if it paid evolutionary dividends to do so. This structural fact, you could suppose, finds expression in the paradigmatic absence of neurofunctional information in so-called representational cognition.

The R-paradigm is heuristic–full stop. It systematically neglects information. This means (or at the very least, strongly suggests) that the R-paradigm, like all other heuristics, is ecologically matched to a specific set of problems. The R-paradigm, in other words, is not a universal problem-solving device.
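Gigerenzer’s recognition heuristic gives a compact picture of what ecological matching means (the cities and populations below are invented for illustration): the very same information-neglecting rule is perfectly accurate in an environment where recognition happens to track the criterion, and perfectly wrong in one where it doesn’t:

```python
# 'Which of two cities is larger?' The recognition heuristic ignores
# all size information and picks whichever city it recognizes.
sizes = {"metro_a": 9_000_000, "metro_b": 7_000_000,
         "town_c": 20_000, "town_d": 10_000}
pairs = [("metro_a", "town_c"), ("metro_b", "town_d"), ("metro_a", "town_d")]

def accuracy(recognized):
    """Fraction of decidable pairs the heuristic gets right."""
    hits, total = 0, 0
    for a, b in pairs:
        if (a in recognized) == (b in recognized):
            continue  # heuristic is silent: both or neither recognized
        guess = a if a in recognized else b
        other = b if guess == a else a
        hits += sizes[guess] > sizes[other]
        total += 1
    return hits / total

# Matched ecology: recognition tracks size, so neglect pays off.
assert accuracy({"metro_a", "metro_b"}) == 1.0
# Mismatched ecology: same rule, same neglect, systematic failure.
assert accuracy({"town_c", "town_d"}) == 0.0
```

Nothing about the rule itself changed between the two runs; only the fit between the rule and its environment did, which is the sense in which a heuristic can be ‘misapplied.’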

And this means that the R-paradigm is something that can be applied out-of-school–that it can be misapplied. Understood in these terms, the tenacious nature of the content-determination problem (and the grounding problem more generally) takes on an entirely new significance: Is it merely coincidental that Hawkish philosophers cannot conceptually (let alone empirically) explain the R-paradigm in causal terms–which is to say, in terms of the very information the R-paradigm neglects?

Perhaps. But let’s take a closer look.

As a heuristic, the R-paradigm necessarily has a limited scope of applicability: it is a parochial problem-solver, and only appears universal thanks (once again) to informatic neglect. It seems relatively safe to assume that the R-paradigm is primarily adapted to environmental problem-solving or third-person cognition. If this were so, we might expect it to possess a certain facility for causal relations in our environments. And indeed, as the transparency that motivates the Hawks would suggest, it’s tailor made for causal explanations of things not itself. It neglects almost all information pertaining to our informatic relation to our environment, and delivers objects bouncing around in relation to one another–fodder for causal explanation.

Small wonder, then, that everything goes haywire when you take this heuristic to the question of consciousness and the brain. Neglecting your informatic relation to functionally independent systems in your environment is one thing; neglecting your informatic relation to functionally dependent systems in your own brain is something altogether different. The R-paradigm is quite literally a heuristic that neglects the very information required to cognize consciousness. How could it not misfire when faced with this problem? How could it come remotely close to accurately characterizing itself?

The problem of content determination, on the BBT account, is actually analogous to the problem of self-determination–which is to say, free will. In the latter, the problem is one of causally squaring the circle of ‘choice,’ whereas in the former the problem is one of causally squaring the circle of ‘meaning.’ Where cause flattens choice, it simply sails past meaning. And how could it be otherwise, when nothing less than truth is the long-sought-after ‘effect’?

Like choice, aboutness is a heuristic, a way of managing environmental relationships in the absence of constitutive causal information. It is a kluge–perhaps the most profound one. No conspiracy of causal factors can conjure representational content because the relationship sought is an exceedingly effective but nevertheless granular substitute for the lack of access to those selfsame factors.

Of course it doesn’t seem that way, intuitively speaking. Consider the example of the gaze heuristic, given above. Does it make sense to suppose the gaze heuristic is actually an optimization algorithm? Of course not: Informatic neglect is constitutive of heuristic problem-solving. So why did so many assume that some kind of optimization algorithm underwrote ball catching? Why, in other words, was the informatic neglect involved in ball-catching something that required experimental research to reveal? Well, because informatic neglect is just that: informatic neglect. Not only is information systematically elided, information regarding this elision is lacking as well. This effectively renders heuristics invisible to conscious experience. Not only do we lack direct awareness of which heuristic we are using, we generally have no idea that we are relying on heuristics at all. (Kahneman’s recent Thinking, Fast and Slow provides a wonderful crash course on this point. What he calls WYSIATI, or What-You-See-Is-All-There-Is, is a version of ‘informatic neglect’ as used here.)
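For the programmatically inclined, the contrast can be made concrete with a toy simulation of my own devising (every function name and number below is an illustrative assumption, not anything drawn from the ball-catching literature). The ‘optimization’ answer computes the landing point from the physics. The ‘heuristic’ fielder tracks one thing only–the rate at which the ball’s angle of elevation climbs–and repositions so that rate stays constant. He computes no trajectory, knows nothing of gravity or launch velocity, and still arrives where the ball lands.

```python
G = 9.81  # gravitational acceleration (m/s^2), drag-free toy physics

def landing_point(vx, vy):
    """The 'optimization' answer: solve for where the ball lands."""
    T = 2 * vy / G          # time aloft for a projectile launched from the origin
    return vx * T           # horizontal range

def gaze_heuristic_catch(vx, vy, fielder_start, dt=0.001, max_speed=12.0):
    """The heuristic answer: keep tan(elevation angle) to the ball rising
    at the constant rate c calibrated from the first glimpse. Everything
    else -- trajectory, gravity, launch velocity -- is neglected."""
    t = dt
    x = vx * t                          # ball position (the fielder never
    y = vy * t - 0.5 * G * t * t        # sees these numbers, only an angle)
    c = y / ((fielder_start - x) * t)   # first-glimpse rate of tan(angle)
    f = fielder_start
    while True:
        t += dt
        x = vx * t
        y = vy * t - 0.5 * G * t * t
        if y <= 0:                      # ball has landed
            return f
        target = x + y / (c * t)        # the one spot where tan(angle) = c*t
        step = max(-max_speed * dt, min(max_speed * dt, target - f))
        f += step                       # run toward it, clamped by leg speed
```

Run the two and they coincide: the heuristic fielder ends up at the landing point without ever representing the trajectory–which is precisely why, from the inside, nothing seems to be missing.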

Aboutness not only seems ‘sufficient,’ the only tool we need; it also seems to be universal, a tool for all problem-solving occasions. Moreover, given the profoundly structural nature of the informatic neglect involved, the fact that the brain is necessarily blind to its own neurofunctionality, there is a sense in which aboutness is unavoidable: if the gaze heuristic is one tool among many, then aboutness is our hand, a ‘tool’ we cannot but use (short of fumbling things with our elbows). More still, you can add to this list what might be called the ‘ease of worlding.’ One need only watch an episode of South Park to appreciate how primed our cognitive systems are, and how little information they require, to generate ‘external environments.’ It’s easy to forget that the ‘representational images’ that surround us are actually spectacular kinds of visual illusions. Structure a meagre amount of visual information the proper way, and we automatically cognize depth in flat surfaces populated with non-existent objects.

Aboutness provides the structural frame of our cognitive relation to our environments, conjuring worlds automatically at the least provocation. Given this, you could argue that representational theories of mind are a kind of ‘forced move,’ a theoretical step we had to take in our attempts to understand consciousness. But you can also see why it’s something a mature scientific account of consciousness and cognition requires we see our way past. As soon as you acknowledge the intimate, inextricable relationship between mind and brain, you acknowledge that the former somehow turns on neurofunctionality–which is to say, the very thing systematically neglected by aboutness.

Reflecting on conscious experience means feeding brain processes to a heuristic that spontaneously and systematically renders them causally inexplicable. In a sense, this explains the charges of ‘homunculism’ you find throughout the literature. The idea of a ‘little observer in the head’ that mistakenly ‘objectifies’ or ‘hypostatizes’ aspects of conscious experience is more than a little impressionistic. Framed in terms of heuristics and informatic neglect, the metaphoric problem of homunculism becomes a clear instance of heuristic misapplication: How can we trust a heuristic obviously designed to cognize our environments absent neurofunctional information to assist our attempts to cognize ourselves in terms of neurofunctional information?

If anything, one should expect that such a heuristic system would cognize the brain in non-neurofunctional terms, which is to say, as something quite apart from the brain. In other words, given something like an aboutness heuristic, one should expect dualistic interpretations of consciousness to be a kind of intuitive default. And what is more, given something like the aboutness heuristic, one should expect consciousness to be exceedingly difficult to understand in causal–which is to say, naturalistic–terms. Using the aboutness heuristic to cognize the brain environmentally, in the third-person, isn’t problematic, simply because isolating causal relations in functionally independent systems is its stock in trade. Neglecting all the enabling machinery between the cognizing brain and the brain cognized facilitates cognizing the latter because that machinery is irrelevant to its function. Blindness to its own enabling machinery literally facilitates seeing the enabling machinery of other brains. Using the aboutness heuristic to cognize the brain in the first-person, therefore, is bound to generate intuitions of profound difference, as well as drive an apparently radical cognitive wedge between the first-person and third-person. What is obvious in the latter becomes obscure in the former, and vice versa.

The route from the aboutness heuristic, the implicit device we are compelled to use given the structural inaccessibility of neurofunctional information, to the philosophically explicit R-paradigm described above should be obvious, at least in outline. Using the aboutness heuristic to cognize the brain in the first-person–in metacognitive applications–will tend to make an ‘environment’ of conscious experience, transforming it into a repertoire of discrete elements. Since these elements seem to automatically vanish like paint or pixels in the apparent process of presenting something else, and since the enabling machinery is nowhere to be found, the activity of the aboutness heuristic is mistaken for a property belonging to each element. They are dubbed ‘representations,’ discrete ‘vehicles’ that take the something-else-presented as their ‘content’ or ‘meaning.’

Since the informatic neglect of causality is also constitutive of this new, secondary aboutness relation between thing representing and thing represented, it must be conceived in granular, normative terms–which is to say, in terms belonging to still another heuristic adapted to the structural neglect of causal information. And this, of course, kicks the door open onto another domain of philosophical perplexity (and another longwinded bloghard).

But if we take the mechanistic paradigm of the life sciences as our cognitive baseline, as representational theories of mind purport to do, then it should be quite clear that there are no such things as representations (not even in the environmental sense of paintings and television screens). What we call ‘representations,’ what seems to be so obvious to basic intuition, is actually an artifact of that intuition, a ‘rule of thumb’ so profound that it seems to structure conscious experience itself, but really only provides an efficient shortcut for cognizing gross features of our environments absent any constitutive neurofunctional information.

We have no representations, not of dogs or foxes or anything else. Rather, we have nets bound into sensorimotor loops that endlessly trawl our environments for patterns of information, sometimes catching dogs, sometimes missing. Homomorphisms abound, yes. But speaking of homomorphic cogs within a mechanism is a far cry from speaking of representational mechanisms. The former, for one, is genuinely scientific!–at least to the extent it doesn’t require positing occult properties.

And perhaps this should come as no surprise. Science has been, if nothing else, the death-march of human conceit.

But I’m sure anyone with Hawkish sympathies is scowling, wondering exactly where I took a hard turn off the edge of the map. What could be more obvious than our intentional relation to the world? Not much–I agree. But then not so long ago one could say the same about the motionlessness of the Earth or the solidity of objects. As I mentioned, I have come nowhere near discharging the explanatory and argumentative burdens as likely perceived by proponents of representational theories of mind. But despite this, the following two questions, I think, are straightforward enough, obvious enough, to reflect some of that burden back onto the representationalist, and perhaps test some Hawkish backs:

1) What information does the R-paradigm neglect?

2) How does this impact its scope of applicability?

The difficulty these questions pose for representationalism, I would argue, is the difficulty a sustained consideration of informatic neglect and its myriad roles poses for consciousness research and cognitive science as a whole.

In the Shadow of Ishual

by rsbakker

Aphorism of the Day: The inability to distinguish ‘political’ from ‘nice’ has saved more lives than penicillin and taken at least as many as speeding.


Madness has been kind enough to post a teaser from the beginning of The Unholy Consult on the Second Apocalypse Forum, for those who are interested. The book is inching toward completion, and barring any revisionary madness (no relation), looks like it will be even more of a behemoth than The White-Luck Warrior.

Also a reminder for those of you in the Toronto area, I’m scheduled to give a talk entitled, “Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real,” at the 2012 Toronto SpecFic Colloquium this October 28th. Bring family, friends, pets and quirky strangers – just be sure to leave your souls behind…

If you can’t make it, I’m also scheduled to give a talk and reading at Laurier University sometime mid-November. I’ll post the details when I get them, perhaps on my new, fancy-pants author website, where I hope to post sundry observations on the nature of children, chocolate, and spectacular sunsets. Every three pound brain needs a skull and hair…

Or at the very least, a zipper.


Spinoza’s Sin and Leibniz’s Mill

by rsbakker

Aphorism of the Day: Every tyrannical system, to conserve itself as a system, will scapegoat even its king. So does drama masquerade as change.


So I’m reading and digging Paul Churchland’s most recent book, Plato’s Camera, while puzzling over David Chalmers’s latest at the same time, and I find myself thinking of Spinoza’s admonition against misconstruing the Condition in terms belonging to the Conditioned. In Part II of his Appendix Containing Metaphysical Thoughts he writes:

In this Chapter God’s existence is explained quite differently from the way in which men commonly understand it; for they confuse God’s existence with their own, so they imagine God as being somewhat like a Man and do not take note of the true idea of God which they have, or are completely ignorant of having it. As a result they can neither prove God a priori, i.e., from his true definition, or essence, nor prove it a posteriori, from the idea of him, insofar as it is in us. Nor can they conceive God’s existence. (The Collected Works of Spinoza, 315)

Given the analogical nature of human cognition, the reasons for this nearly universal error are quite clear: ‘men’ mined the information belonging to their own manifest image in their attempts to conceive God, simply because it was the most intuitive and readily available. Given this heuristic brush and informatic palette, they painted God in psychological terms, a being possessing their own features to the ‘nth degree.’ A personal God.

Spinoza catalogues and critiques the numerous expressions of this fundamental error in what follows, showing why the perplexities and contradictions that pertain to a personal God arise, and how these problems simply fall away if you subtract what is human from God. He was branded a heretic for his trouble, disowned by the Jewish community, and so reviled by Christians that some commentators believe that the following figure I want to consider intentionally expunged all traces of Spinoza’s influence from his own philosophy.

In philosophy of mind and consciousness research circles, Leibniz is typically mentioned with reference to his famous windmill example, which he uses to illustrate the now hoary conceptual gulf between doing and feeling. He writes:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception. (Monadology, §17)

In a sense, the problem of Leibniz’s Mill simply turns Spinoza’s Sin on its head. The Mill cannot be the Condition, Leibniz is arguing, because he cannot fathom how it could generate the Conditioned, manifest ‘perception.’ In a sense, it captures the Hard Problem in a nutshell: how could all this ramshackle machinery generate the exquisite smell of turkey dinner on a warm, autumn afternoon, or anything else that we experience for that matter?

What does this have to do with reading Churchland? Well, Churchland wants to argue that cognitive science is guilty of committing Spinoza’s Sin, that too many are too prone to construe the Condition, neural function, by analogy to the Conditioned, psychology and language. So, for instance, in The Cambridge Handbook of Cognitive Science, you find Barbara Von Eckardt explaining:

There is nothing even approximating a systematic semantics for even a fragment of [any mental representation system]. Nevertheless, there are ways to inductively infer to some global semantic features [any mental representation system], arguably, must have. One way is to extrapolate, via a form of ‘transcendental’ reasoning, from features of cognitive science’s explananda. (33)

In other words, Spinoza’s Sin is actually a Virtue: the explananda of cognitive science are nothing other than manifest features of cognition, what it is we generally think we’re doing (given what little we have to go on) whenever we cognize ourselves, others, and the world. So the idea, Von Eckardt is saying, is to reason from the Conditioned, our manifest informatic palette, to the Condition, whatever will be eventually described in a complete representational theory of mind. She thinks, quite sensibly, that our manifest experience and intuitions are what need to be explained.

Churchland argues otherwise–or well, almost. Not only does the ‘linguaformal’ approach look increasingly unlikely the more we learn about the brain, it renders the obvious cognitive continuity between humans and animals very, very difficult to understand. In Plato’s Camera he paints a picture of cognition where Kant’s simple frame of timeless transcendental categories is smashed into a myriad of nondiscursive, neural ‘maps’ understood according to the formation and weighting of synaptic connections among populations of neurons possessing various, mathematically tractable structural predispositions. “Simply replace,” he writes, “‘single complex predicate’ with ‘single prototype point in high-dimensional activation space,’ and you have the outlines of the view to be defended here” (23).

Churchland, in other words, isn’t so interested in overthrowing the old order as he is in electing a new government. As radical as his account often seems, he still clings to certain boilerplate semantic assumptions, still sees the Mill representationally, which is to say, as a kind of content machine. Meaning, for him, remains something requiring a positive explanation. He argues that “deploying a background map of some practically relevant feature space, a map sporting some form of dynamical place marker, is a common and highly effective technique for monitoring, modulating, and regulating many practically relevant behaviours” (Plato’s Camera, 131). But even in the examples he provides, the homomorphisms he points out are all simply parts of larger dynamic systems, begging the question of why maps should be accorded pride of place in his account of cognition, rather than being relegated to one kind of heuristic tool among many.

Put differently, he ultimately succumbs to temptation and commits Spinoza’s Sin. Rather than, as BBT suggests, demoting ‘traditional epistemology’–treating it as a signature example of the way informatic neglect leads us to universalize heuristics, informatic processes that selectively ignore information to better solve specific problem sets–Churchland wants to dress it in more scientifically fashionable clothes.

Grasping the abject wickedness of Spinoza’s Sin requires an appreciation of the abyssal nature of the gulf between the Condition and the Conditioned when it comes to the question of human consciousness and cognition. One needs to understand, in other words, why the Mill has such difficulty fathoming itself as a Mill. Churchland, after all, is more than just a very, very intelligent man. He also possesses the imaginative capacity and institutional courage to make the analogical leap beyond linguaformalism–and yet, even still, he cannot relinquish certain intuitions regarding content…


Imagine a Mill designed to cognize environmental information, whirring and clicking in the dark. If you could peer through the gloom you would see loosely packed machinery, literally unimaginable in complexity, clattering away, wheel spinning wheel, cog rotating cog–swiss-watch complexities extending through impenetrable gloom.

Now imagine a flashlight, shining down across and penetrating into this machinery, illuminating an eclectic multitude of surfaces, the crest of a spinning wheel here, a length of strut there, the handle of a lever, a corner of casing, on and on, a cobweb of fragmentary glimpses, becoming more and more fractional and dim the deeper the light probes the machine’s bowels. Peering, all you can see are shreds of machinery, a kind of inexplicable constellation in the black.

Now imagine that what’s illuminated represents the information accessible to conscious experience. Not only is information pertaining to the vast bulk of the machine inaccessible, information regarding the actual mechanical role of those parts somewhat illuminated is also out of reach–so much so, that even information pertaining to the lack of this information is missing. This means you need to cut out all those fragmentary, functionally distributed glimpses, then paste them into a singular Collage, transforming a mishmash of perspectival distortions into one ‘manifest’ image. The informatic cobweb fills the screen, you could say.

Not so different from what-you-are-experiencing-this-very-moment-here-now.

Feed this information back to the Mill (whose machinery, remember, is primarily designed to trouble-shoot environmental information). Utterly blind to the vast amounts of information neglected, it takes the Collage to be sufficient–all the information accessed becomes all the information required. Since information drives distinction, its absence leverages the cognitive illusion of sufficient wholes–as I have written elsewhere, consciousness can be seen as a kind of ‘flicker-fusion’ writ large. Short of neuroscience, it has no real recourse to information that hails from beyond the Collage in its attempts to cognize the Collage. It is informatically encapsulated.

The Collage, in other words, is the Conditioned, the well from which our cognitive systems draw water whenever tasked with troubleshooting the Condition. Given the reworked Mill analogy above, it’s easy to see the peril of Spinoza’s Sin: From the informatic vantage of the Collage, the neurofunctional axis can only be indirectly inferred, never directly intuited. This is why the functional findings of cognitive science so often strike those without any real exposure to the field as so counterintuitive. Not only are we ‘in the dark’ with reference to ourselves, we are, in a very real sense, congenitally and catastrophically misinformed.

Pending a mature neuroscientific understanding, we are, in effect, the hostage of our metacognitive intuitions, and for better or worse, representation looms large among them. Churchland yields unwarranted pride of place to the homomorphic components of our heuristic systems, endows them with bloated significance, simply because metacognitive intuition, and hence tradition, mistakenly accords representations a privileged role. Because, quite simply, it feels right. It ain’t called temptation for nothing!

The Blind Brain Theory, as I hope the above thumbnail makes clear, affords the resources required to throw off the analogical yoke of the Conditioned once and for all, to subtract the human, not from God, but from the human, thus showing that–beyond the scope of a certain parochial heuristic at least–we just never were what we took ourselves to be.

And perhaps more importantly, never will be.

Out-Danning Dennett

by rsbakker

The idea is this. What you take yourself to be at this very moment is actually a kind of informatic illusion.

For me, the picture has come to seem obvious, but I understand that this is the case for everyone with a theory to peddle. So the best I can do is explain why it seems obvious to me.

One of the things I have continually failed to do is present my take, Blind Brain Theory (BBT), in terms that systematically relate it to other well-known philosophical positions. The reason for this, I’m quite certain, is laziness on my part. As a nonacademic, I never have to exposit what I read for the purposes of teaching, and so the literature tends to fall into the impressionistic background of my theorization. I actually think this is liberating, insofar as it has insulated me from many habitual ways of thinking through problems. I’m not quite sure I would have been able to connect the dots the way I have chasing the institutional preoccupations of academe. But it has certainly made the task of communicating my views quite a bit harder than it perhaps should be.

So I’ve decided to bite the bullet and lay out the ways BBT overlaps and (I like to think!) outruns Daniel Dennett’s rather notorious and oft-misunderstood position on consciousness. For many, if not most, this will amount to using obscurity to clarify murk, but then you have to start somewhere.

First, we need to get one fact straight: consciousness possesses informatic boundaries. This is a fact Dennett ultimately accepts, no matter how his metaphors dance around it. Both of his theoretical figures, ‘multiple drafts’ and ‘fame in the brain,’ imply boundaries, a transition of processes from unconsciousness to consciousness. Some among a myriad of anonymous processes find neural celebrity, or as he puts it in “Escape from the Cartesian Theater,” “make the cut into the elite circle of conscious events.” Many subpersonal drafts become one. What Dennett wants to resist is the notion that this transition is localized, that it’s brought together for the benefit of some ‘neural observer’ in the brain–what he calls the ‘Cartesian Theatre.’ One of the reasons so many readers have trouble making sense of his view has to do, I think, with the way he fails to recognize the granularity of this critical metaphor, and so over-interprets its significance. In Consciousness Explained, for instance, he continually asserts there is no ‘finishing line in the brain,’ no point where consciousness comes together–‘no turnstile,’ as he puts it. Consciousness is not, he explicitly insists in his notorious piece (with Marcel Kinsbourne) “Time and the Observer” in Behavioural and Brain Sciences, a subsystem. And yet, at the same time you’ll find him deferring to Baars’ Global Workspace theory of consciousness, even though it was inspired by Jerry Fodor’s notion of some ‘horizontal’ integrative mechanism in the brain, an account that Dennett has roundly criticized as ‘Cartesian’ elsewhere.

The evidence that consciousness is localized (even if widely distributed) through the brain is piling up, which is a happy fact, since according to BBT consciousness can only be explained in subsystematic terms. Consciousness possesses dynamic informatic boundaries, both globally and internally, all of which are characterized, from the standpoint of consciousness, by various kinds of neglect.

In cognitive psychology and neurology, ‘neglect’ refers to an inability to detect or attend to some kind of deficit. Hemi-neglect, which is regularly mentioned in consciousness discussions, refers to the lateralized losses of awareness commonly suffered by stroke victims, who will sometimes go so far as to deny ownership of their own limbs. Cognitive psychology also uses the term to refer to our blindness to various kinds of information in various problem-solving contexts. So ‘scope neglect,’ for instance, involves our curious inability to ‘value’ problems according to their size. My view is that the neglect revealed in various cognitive biases and neuropathologies actually structures ‘apparent consciousness’ as a whole. I think this particular theoretical cornerstone counts as one of Dennett’s ‘lost insights.’ Although he periodically raises the issue of neglect and anosognosia, his disavowal of ‘finishing lines’ makes it impossible for him to systematically pursue their relation to consciousness. He overgeneralizes his allergy to metaphors of boundary and place.

So, to give a quick example, where BBT views Frank Jackson’s Mary argument as a kind of ‘neglect detector,’ a thought experiment that reveals the scope of applicability of the ‘epistemic heuristic’ (EH), Dennett thinks it constitutes a genuine first-order challenge, a circle that must be squared. BBT is more interested in diagnosing than disputing the intuition that physical knowledge could be complete in the absence of any experience of red. Why does an obvious informatic addition to our environmental relationship (the experience of red) not strike us as an obvious epistemic addition? Well, because our ‘epistemic heuristic,’ even in its philosophically ‘refined’ forms, is still a heuristic, and as such, not universally applicable. Qualia simply lie outside the EH scope of applicability on my view.

I take Dennett’s infamous ‘verificationism’ as an example of a ‘near miss’ on his part. What he wants to show is that the cognitive relationship to qualia is informatically fixed–or ‘brainbound’–in a way that the cognitive relationship to environments is not: With redness, you have no informatic recourse the way you do with an apple–what you see is what you get, period. On my view, this is exactly what we should expect, given the evolutionary premium on environmental cognition: qualia are best understood as ‘phenomemes,’ subexistential combinatorial elements that enable environmental cognition similar to the way phonemes are subsemantic combinatorial elements that enable linguistic meaning (I’ll get to the strange metaphysical implications of this shortly). Granting that qualia are ‘cognition constitutive,’ we should expect severe informatic access constraints when attempting to cognize them. On the BBT account, asking what qualia ‘are’ is simply an informatic confusion on par with asking what the letter ‘p’ means. The primary difference is that we have a much better grasp of the limits of linguistic heuristics (LH) than we do of EH. EH, thanks to neglect, strikes us as universal, as possessing an unlimited scope of applicability. Thus the value of Mary-type thought experiments.

Lacking the theoretical resources of BBT, Dennett can only form a granular notion of this problem. In one of his most famous essays, “Quining Qualia,” he takes the ‘informatic access’ problem and argues that ‘qualia’ are conceptually incoherent because we lack the informatic resources to distinguish changes in them (it could be our memory that has been transformed), and empirically irrelevant because those changes would seem to make no difference one way or another. Where he uses the ‘informatic access problem’ as an argumentative tool to make the concept of qualia ‘look bad,’ I take the informatic access problem to be an investigative clue. What Dennett shows via his ‘intuition pumps,’ I think, are simply the limits of applicability of EH.

But this difference does broach the most substantial area of overlap between my position and Dennett’s. In a sense, what I’m calling EH could be characterized as an ‘epistemological stance,’ akin to the variety of stances proposed by Dennett.

BBT takes two interrelated angles on ‘brain blindness’ or neglect. The one has to do with how the appearance of consciousness–what we think we are enjoying this very moment–is conditioned by informatic constraints or ‘blindnesses.’ The other has to do with the plural, heuristic nature of human cognition, how our various problem-solving capacities are matched to various problems (the way cognition is ‘ecological’), and how they leverage efficiencies via strategic forms of informatic neglect. What I’m calling EH, for instance, seems to be both informatically sufficient and universally applicable, thanks to neglect–the same neglect that rendered it invisible altogether to our ancestors. In fact, however, it elides enormous amounts of relevant information, including the brain functions that make it possible. So, remaining faithful to the intuitions provided by EH, we conceive knowledge in terms of relations between knowers and things known, and philosophy sets to work trying to find ways to fit ever greater accumulations of scientific information into this ‘intuitive picture’–to no avail. How do mere causal relations conspire to create epistemological relations, which is to say, normative aboutness relations? On my view, these relations are signature examples of informatic neglect: ‘aboutness’ is a shortcut, a way to relate devices in the absence of any causal information. ‘Normativity’ is also a shortcut, a way to model mechanism in the absence of any mechanistic information. (Likewise, ‘object’ is a shortcut, and even ‘existence’ is a shortcut–coarse-grained tools that get certain work done.) Is it simply a coincidence that syntax can be construed as mechanism bled of everything save the barest information? Even worse, BBT suggests it could be the case that both aboutness and normativity are little more than reflective artifacts, merely deliberative cartoons of what we think we are doing given our meagre second-order informatic access to our brain’s activity.

In one of his most lucid positional essays, “Real Patterns,” Dennett argues for the ‘realism’ of his stance approach vis-a-vis thinkers like Churchland, Davidson, and Rorty. In particular, he wants to explain how his ‘intentional stance’ and the corresponding denial of ‘original intentionality’ do not reduce intentionality to the status of a ‘useful fiction.’ Referencing Churchland’s observations regarding the astronomical amount of compression involved in the linguistic coding of neural states (in “Eliminative Materialism and the Propositional Attitudes“), he makes the point that I’ve made here very many times: the informatic asymmetry between what the brain is doing and what we think we’re doing is nothing short of abyssal. When we attribute desires and beliefs and goals and so on to another brain, our cognitive heuristics are, Dennett wants to insist, trading in very real patterns, only compressed to a drastic degree. It’s the reality of those patterns that renders the ‘intentional stance’ so useful. It’s the degree of compression that renders them incompatible with the patterns belonging to the ‘physical stance’–and thus, scientifically intractable.

The only real problem BBT has with this analysis is its granularity, a lack of resolution that leads Dennett to draw several erroneous conclusions. The problem, in a nutshell, is that far more than ‘compression’ is going on, as Dennett subsequently admits when discussing his differences with Davidson (the fact that two interpretative schemes can capture the same real pattern, and yet be incompatible with each other). Intentional idioms are heuristics in the full sense of the term: their effectiveness turns on informatic neglect as much as the algorithmic compression of informatic redundancies. To this extent, the famous ‘pixelated elephant’ Dennett provides to illustrate his argument is actually quite deceiving. The idea is to show the way two different schemes of dots can capture the same pattern–an elephant. What makes this example so deceptive is the simplistic account of informatic access it presupposes. It lends itself to the impression that ‘informatic depletion’ alone characterizes the relation between intentional idioms and the ‘real patterns’ they supposedly track. It entirely ignores the structural specifics of the informatic access at issue (the variety of bottlenecks posited by BBT), the fact that our Intentional Heuristic (IH), very much like EH, elides whole classes of information, such as the bottom-up causal provenance belonging to the patterns tracked. IH, in other words, suffers from informatic distortion and truncation as much as depletion.
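
The distinction can be put crudely in computational terms. The following toy sketch is my analogy, not anything from Dennett or the post itself: the ‘real pattern’ is just the numbers 0 through 99, and the three views of it correspond to depletion, truncation, and distortion.

```python
# Illustrative analogy (mine, not Dennett's): three ways a view can lose
# information about the same 'real pattern' -- here, just the numbers 0-99.

signal = list(range(100))      # the 'real pattern'

# Depletion: uniform downsampling. Coarse, but it samples the whole
# pattern -- roughly what the pixelated-elephant figure depicts.
depleted = signal[::10]        # [0, 10, 20, ..., 90]

# Truncation: access restricted to an initial fragment -- the trunk alone.
truncated = signal[:10]        # [0, 1, ..., 9]

# Distortion: the fragment taken as if it were the whole pattern,
# rescaled to fill the expected range.
distorted = [x * 10 for x in truncated]

# The depleted view still spans the pattern's full extent; the truncated
# view does not -- and, crucially, nothing *inside* the truncated view
# flags the missing 90 per cent.
print(min(depleted), max(depleted))    # 0 90 -- spans the full pattern
print(min(truncated), max(truncated))  # 0 9  -- a part mistaken for the whole
```

The point of the analogy is that only an outside comparison reveals the difference between the depleted and truncated views; from within either view, the information looks equally complete.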

His illustration would have been far more accurate if one of the pixelated figures showed only the elephant’s trunk. When our attentional systems turn to our ‘intentional intuitions’ (when we reflect on intentionality), deliberative cognition only has access to the stored trace of globally broadcast (or integrated) information. Information regarding the neurofunctional context of that information is nowhere to be found. So in a profound sense, IH can only access/track acausal fragments of Dennett’s ‘real patterns.’ Because these fragments are systematically linked to what it is our brains are actually doing, IH will seem to be every bit as effective as our brains at predicting, manipulating, and understanding the behavioural outputs of other brains. Because of neglect (the absence of information flagging the insufficiency of available information), IH will seem complete, unbounded, which is likely why our ancestors used it to theorize the whole of creation. IH constitutively confuses the trunk for the whole elephant.

In other words, Dennett fails to grasp several crucial specifics of his own account. This oversight (and to be clear, there are always oversights, always important details overlooked, even in my own theoretical comic strips) marks a clear parting of the ways between his position and my own. It’s the way developmental and structural constraints consistently distort and truncate the information available to IH that explains the consistent pattern of conceptual incompatibilities between the causal and intentional domains. And as I discuss below, it’s a primary reason why I, unlike Dennett, remain unwilling to take theoretical refuge in pragmatism. No matter what the ‘reality’ of intentionality, BBT shows that the informatic asymmetry between it and the ‘real patterns’ it tracks is severe enough to warrant suspending commitment to any theoretical extrapolation, even one as pseudo-deflationary as pragmatism, based upon it.

This oversight is also a big reason why I so often get that narcissistic ‘near miss’ feeling whenever I read Dennett–why he seems trapped using metaphors that can only capture the surface features of BBT. Consider the ‘skyhook’ and ‘crane’ concepts that he introduces in Darwin’s Dangerous Idea to explain the difference between free-floating, top-down religious and naturally grounded, bottom-up evolutionary approaches to explanation. On my reading, he might as well have used ‘trunk’ and ‘elephant’!

Moreover, because he overlooks the role played by neglect, he has no real way of explaining our conscious experience of cognition, the rather peculiar fact that we are utterly blind to the way our brains swap between heuristic cognitive modes. Instead, Dennett relies on the pragmatics of ‘perspective talk’–the commonsense way in which we say things like ‘in my view,’ ‘from his perspective,’ ‘from the standpoint of,’ and so on–to anchor our intuitions regarding the various ‘stances’ he discusses. Thus all the vague and (perhaps borderline) question-begging talk of ‘stances.’

BBT replaces this idiom with that of heuristics, thus avoiding the pitfalls of intentionality while availing itself of what we are learning about the practical advantages of specialized (which is to say, problem specific) cognitive systems, how ignoring information not only generates metabolic efficiencies, but computational ones as well. The reason for our ‘peculiar blindness’–the reason Dennett has had to go to such great lengths to make ‘Cartesian intuitions’ visible–is actually internal to the very notion of heuristics, which, in a curious sense, use blindness to leverage what they can see. From the BBT standpoint, Dennett consistently fails to recognize the role informatic neglect plays in all these phenomena. He understands the fractured, heuristic nature of cognition. He is acutely aware of the informatic limitations pertaining to thought on a variety of issues. But the pervasive, positive, structural role these limitations play in the appearance of consciousness largely eludes him. As a result, he can only argue that our traditional intuitions of consciousness are faulty. Because he has no principled means of explaining away ‘error consciousness,’ all he can do is plague it with problems and offer his own, alternative account. As a result, he finds himself arguing against intuitions he can only blame and never quite explain. BBT changes all of that. Given its resources, it can pinpoint the epistemic or intentional heuristics, enumerate all the information missing, then simply ask, ‘How should we determine the appropriate scope of applicability?’
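
The computational half of that claim–that ignoring information can itself be an economy–is easy enough to sketch. The following toy example is mine, not the post’s, and the function names are purely hypothetical: one ‘judgement’ inspects every feature of a stimulus, the other deliberately neglects all but the first few, trading accuracy at the margins for constant rather than linear cost.

```python
# Hedged toy sketch (my example): informatic neglect as computational economy.

def exhaustive_judgement(stimulus):
    """Inspect every feature -- expensive, but always tracks the pattern."""
    return sum(stimulus) > len(stimulus) // 2

def heuristic_judgement(stimulus, k=3):
    """Inspect only the first k features and neglect the rest."""
    return sum(stimulus[:k]) > k // 2

stimulus = [1, 1, 0] + [1] * 997    # 1000 features
print(exhaustive_judgement(stimulus))  # True, after reading all 1000 features
print(heuristic_judgement(stimulus))   # True, after reading only 3

# The economy has a price: outside its 'scope of applicability'
# the heuristic misfires.
print(heuristic_judgement([0, 0, 1] + [1] * 997))  # False, though nearly all features are 1
```

The last line is the moral: the heuristic is not merely a cheap approximation of the exhaustive procedure, it is a different procedure with a bounded domain of reliable application–which is just the question BBT wants us to ask of EH and IH.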

The answer, simply enough, is ‘Where EH works!’ Or alternately, ‘Where IH works!’ BBT allows us, in other words, to view our philosophical perplexities as investigative clues, as signs of where we have run afoul of informatic availability and/or cognitive applicability–where our ‘algorithms’ begin balking at the patterns provided. On my view, the myriad forms of neglect that characterize human cognition (and consciousness) can be glimpsed in the shadows they have cast across the whole history of philosophy.

But care must be taken to distinguish the pragmatism suggested by ‘where x works’ above from the philosophical pragmatism Dennett advocates. As I mentioned above, he accepts that intentional idiom is coarse-grained, but given its effectiveness, and given the mandatory nature of the manifest image, he thinks it’s in our ‘interests’ to simply redefine our folk-psychological understanding using science to lard in the missing information. So with regard to the will, he recommends (in Freedom Evolves) that we trade our incoherent traditional understanding in for a revised, scientifically informed understanding of free will as ‘behavioural versatility.’ Since, for Dennett, this is all ‘free will’ has ever been, redefinition along these lines is eminently reasonable. I remember once quipping in a graduate seminar that what Dennett was saying amounted to telling you, at your Grandma Mildred’s funeral, “Don’t worry. Just rename your dog Mildred.” After the laughter faded, one of the other students, I forget who, was quick to reply, “That only sounds bad if your dog wasn’t your Grandma Mildred all along.”

I’ve since come to think this exchange does a good job of illustrating the stakes of this particular turn of the debate.

You can raise the most obvious complaint against Dennett: that the inferential dimension of his redefinition makes usage of the concept ‘freedom’ tendentious. We would be doing nothing more than gaming all the ambiguities we can to interpret scientific ‘crane information’ into our preexisting folk-psychological conceptual scaffold–wilfully apologizing, assuming these scientific ‘cranes’ can be jammed into a ‘skyhook’ inferential infrastructure. Dennett himself admits that, given the information available to experience, ‘behavioural versatility’ is not what free will seems to be. Or put differently, that the feeling of willing is an illusion.

The ‘feeling of willing,’ according to BBT, turns on a structural artifact of informatic neglect. We are skyhooks–from the informatic perspective of ourselves. The manifest image is magical. Intentionality is magical. On my view, the ‘scientific explanations’ are far more likely to resemble ‘explanations away’ than ‘explanations of.’ The question really is one of how other folk-psychological staples will fare as cognitive neuroscience proceeds. Will they be more radically incompatible or less? Imagine experience and the skein of intuitive judgments that seem to bind it as a kind of lateral plane passing through an orthogonal, or ‘medial,’ neurofunctional space. Before science and philosophy, that lateral plane was continuous and flat, or maximally intuitive. It was just the way things were. With the accumulation of information through the raising of philosophical questions (which provide information regarding the insufficiency of the information available to conscious experience) through history, the intuitive topography of the plane became progressively more and more dimpled and knotted. With the institutionalization of science, the first real rips appear. And now, as more information regarding various neurofunctions becomes available, the skewing and shredding are becoming more and more severe. The question is, what will the final ‘plane of experiential intuition’ look like? How will our native intuitions fare?

How deceptive is consciousness?

Dennett’s answer: Enough to warrant considerable skepticism, but not enough to warrant abandoning existing folk-psychological concepts. The glass, in other words, is half full. My answer: Enough to warrant wondering if anyone has ever had a clue. The glass lies in pieces across the floor. The trend, at least, is foreboding. According to BBT, the informatic neglect that renders the ‘feeling of willing’ possible is a structural feature belonging to all intentional concepts. Given this, it predicts that very many folk-psychological concepts will suffer the fate the ‘feeling of willing’ seems to be undergoing as I write. From the standpoint of knowledge, experience is about to be cast into the neurofunctional wind.

Grandma Mildred isn’t your dog. She’s a ghost.

Either way, this is why I think pragmatic or inferentialist accounts are every bit as hopeless as traditional approaches. You can say, ‘There’s nothing but patterns, so let’s run with them!’ and I’ll say, ‘Where? To the playground? Back to Hegel?’ When knowledge and experience break in two, the philosopher, to be a philosopher, must break with them. The world never wants for apologists.

BBT allows us to frame the problem with a clarity that evaded Dennett. If our difficulties turn on the limited applicability of our heuristics, the question really should be one of finding the heuristic that possesses the most applicability. In my view, that heuristic is the one that allows us to comprehend heuristics in the first place: nonsemantic information. The problem with pragmatism as a heuristic lies in the way it actively, as opposed to structurally (which it also does), utilizes informatic neglect. Anything can be taken as anything, if you game the ambiguities right. You could say it makes a virtue out of stupidity.

In place of philosophical pragmatism, my view recommends a kind of philosophical akratism, a recognition of the heuristic structure of human cognition, an understanding of the structural role of informatic neglect, and a realization that conscious experience and cognition are drastically, perhaps catastrophically, distorted as a result.

Deliberative human cognition has only the information globally broadcast (or integrated) at its disposal. Likewise, the information globally broadcast only has human cognition. The first means that human cognition has no access whatsoever to vast amounts of constitutive processing–which is to say, no access to neurofunctional contexts. The second means that we likely cognize conscious experience as experience via heuristics matched to our natural and social environments, as something quite other than whatever it is.

Small wonder consciousness has proven to be such a knot!

And this, for me, is where the fireworks lie: critics of Dennett often complain about the difficulty of getting a coherent sense of what his theory of consciousness is, as opposed to what it is not. For better or worse, BBT paints a very distinct–if almost preposterously radical–picture of consciousness.

So what does that picture look like?

It purports, for instance, to explain how the apparent reflexivity of consciousness can arise from the irreflexivity of natural processes. For me, this constitutes the most troubling, and at the same time, most breathtaking, theoretical dividend of BBT: the parsimonious way it explains away conscious reflexivity. Dennett (working with Marcel Kinsbourne) sails across the insight’s wake in “Time and the Observer” where he argues, among other things, for the thoroughgoing dissociation of the experience of time from the time of experience, how the time constraints imposed by the actual physical distribution of consciousness in the brain means that we should expect our conscious experience of time to ‘break down’ in psychophysical experimental contexts at or below certain thresholds of temporal resolution.

The centerpiece of his argument is the deeply puzzling experimental variant of the well-known ‘phi phenomenon,’ how two closely separated spots projected in rapid sequence on a screen will seem to be a single spot moving from location to location. When experimenters use a different colour for each of the spots, not only do subjects report seeing the spot move, they claim to see it change colour, and here’s the thing, midway. What makes this so strange is the fact that they perceive the colour change before the second spot appears–before ‘seeing’ what the second colour is. Ruling out precognition, Dennett proposes two mechanisms to account for the illusion: either the subjects consciously see the spots as they are only to have the memory almost instantaneously revised for consistency, what he calls the ‘Orwellian’ explanation, or the subjects consciously see the product of some preconscious imposition of consistency, what he calls the ‘Stalinesque’ explanation. Given his quixotic allergy to neural boundaries, he argues that our inability to answer this question means there is no definite where and when of consciousness in the brain, at least at these levels of resolution.

Dennett’s insight here is absolutely pivotal: the brain ‘constructs,’ as opposed to perceives or measures, the passage of time, given the resources it has available. The time of temporal representation is not the time represented. But he misconstrues the insight, seeing in it a means to cement his critique of the Cartesian Theatre. The question of whether this process is Orwellian or Stalinesque, whether neural history is rewritten or staged, simply underscores the informatic constraints on our experience of time, our utter blindness to the neurofunctional context of the experience–which is to say, our utter blindness to the time of conscious experience. Dennett, in other words, is himself making a boundary argument, only this time from the inside out: the inability to arbitrate between the Orwellian and Stalinesque scenarios clearly demarcates the information horizon of temporal experience.

And this is where the theoretical resources of BBT come into play. Wherever it encounters apparent informatic constraints, it asks how they find themselves expressed in experience. Saying that temporal experience possesses informatic boundaries is platitudinal. All modalities of experience are finite: we can only see, hear, taste, think, and time so much in a given moment. Saying that the informatic boundaries of experience are themselves expressed in experience is somewhat more tricky, but you need only attend to your own visual margins to see a dramatic example of such an expression.

You could say vision is an exceptional example, given the volume of information it provides in comparison to other experiential modalities. Nevertheless, one could argue that such boundaries must find some kind of experiential expression, even if, as in the cases of clinical neglect, it evades deliberative cognition. BBT proposes that neglect is complete in many, if not most cases, and information regarding informatic boundaries is only indirectly available, typically via contexts (such as psychological experimentation) that foreground discrepancies between brute environmental availability and actual access. The phi phenomenon provides a vivid demonstration of this–as does, for that matter, psychophysical phenomena such as flicker-fusion. For some mysterious reason (perhaps the mysterious reason), what cannot be discriminated, such as the flashing of lights below a certain temporal threshold, is consciously experienced as unitary. It seems a fact of experience almost too trivial to note, but perhaps immensely important: Why, in the absence of information, is identity the default?
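
Flicker fusion lends itself to a simple toy model. The sketch below is my own illustration, not anything from the psychophysics literature: a hypothetical ‘observer’ with a fixed temporal resolution, for whom any two flashes separated by less than that resolution simply register as one event–identity as the default in the absence of discriminating information.

```python
# Toy sketch (my analogy): an observer whose temporal resolution fuses
# flashes it cannot discriminate into a single 'experienced' event.

def perceived_events(flash_times_ms, resolution_ms=50.0):
    """Collapse flashes closer together than the temporal resolution."""
    events = []
    for t in sorted(flash_times_ms):
        if events and t - events[-1] < resolution_ms:
            continue  # below threshold: fused with the previous event
        events.append(t)
    return events

# Two flashes 20 ms apart fuse into one experienced flash;
# two flashes 200 ms apart remain distinct.
print(len(perceived_events([0.0, 20.0])))   # 1
print(len(perceived_events([0.0, 200.0])))  # 2
```

Note what the model leaves out: nothing in the returned list marks the fused event as fused. The missing flash is not experienced as missing–which is the sense of ‘neglect’ at issue.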

If you think about it, a good number of the problems of consciousness can be formulated in terms of identity and information. BBT takes precisely this explanatory angle, interpreting things like the unity of consciousness, personal identity, and nowness or subjective time as products of various species of neglect–literally as kinds of ‘fusion.’

The issue of time as it is consciously experienced contains a cognitive impasse at least as old as Aristotle: the problem of the now. The problem, as Aristotle conceived it, lay in what might be called the persistence of identity in difference that seems to characterize the now, how the now somehow remains the same across the succession of now moments. As we have seen, whenever BBT encounters an apparent cognitive impasse, it asks what role informatic constraints play. The constraints, as identified by Dennett and Kinsbourne in their analyses in “Time and the Observer,” turn on the dissociation of the time of representation from the time represented. In a very profound sense, our conscious experience of time is utterly blind to the time of conscious experience, which is to say, information pertaining to the timing of conscious timing.

So what does this, the conscious neglect of the time of conscious timing, mean? The same thing all instances of informatic neglect mean: fusion. The fusing of flickering lights when their frequency exceeds a certain informatic threshold seems innocuous likely because the phenomenon is so isolated within experience. The kind of temporal fusion at issue here, however, is coextensive with experience: as many commentators have noted, the so-called ‘window of presence’ is just experience in a profound sense. The now always seems to be the same now because the information regarding the time of conscious timing, the information required to globally distinguish moment from moment, is simply not available. In a very profound sense, ‘flicker fusion’ is a local, experientially isolated version of what we are.

Thus BBT offers a resolution of the now paradox and an explanation of personal identity in a single conceptual stroke, as it were. It provides, in other words, a way of explaining how natural and irreflexive processes give rise to the apparent reflexivity that so distinguishes consciousness. And by doing so it drastically reduces the explanatory burden of consciousness, leaving only ‘default identity’ or ‘fusion’ as the mystery to be explained. Given this, it provides a principled means of ‘explaining away’ consciousness as we seem to experience it. Using informatic neglect as our conceptual spade, one need only excavate the kinds of information the conscious brain cannot access from our scientific understanding of the brain to unearth something that resembles–to a remarkable degree–the first-person perspective. Consciousness, as we (think we) experience it, is fundamentally structured by various patterns of informatic neglect.

And it does so using an austere set of concepts and relatively uncontroversial assumptions. Conscious episodes are informatically encapsulated. Deliberative cognition is plural and heuristic (though neglect means it appears otherwise). Combining the informatic neglect pertaining to the first–which Dennett has mistakenly eschewed–with the problems of ‘matching’ pertaining to the second, produces what I think could very well be the single most parsimonious and comprehensive theory of ‘consciousness’ in the field.

But I anticipate it will be a hard sell, with the philosophy of mind crowd most of all. Among the many invisible heuristics that enable and plague us are those primed to dismiss outgroup deviations from ingroup norms–and I am, sadly, merely a tourist in these conceptual climes. Then there’s the brute fact of Hebb’s Law: the intuitions underwriting BBT demand more than a little neural plasticity, especially given the degree to which they defect from any number of implicit and canonically explicit assumptions. I’m asking huge populations of old neurons to fire in unprecedented ways–never a good thing, especially when you happen to be an outgroup amateur!

And then there’s the problem of informatic neglect itself, especially with reference to what I earlier called the epistemic heuristic. I often find myself flabbergasted by how far out of step I’ve fallen with consensus opinion since the key insight behind BBT nixed my dissertation over a decade ago. Even the notion of content has come to seem alien to me: a preposterous artifact of philosophers blindly applying EH beyond its scope of application. On the BBT account, the most effective way to understand meaning is as an artifact of structured informatic neglect. In a real sense, it holds there is no such thing as meaning, so the wide-ranging debates on content and representation that form the assumptive baseline for so many debates you find in the philosophy of mind are little more than chimerical from its standpoint. Put simply, ‘truth’ and ‘reference’ (even ‘existence’!) are best understood as kinds of heuristics, cognitive adaptations that maximize effectiveness via forms of informatic neglect, and so possess limited scopes of applicability.

Even the classical metaphysical questions regarding materialism are best considered heuristic chimera on my view. Information, nonsemantically construed, allows the theorist to do an end run around all these dilemmas, as well as all the dichotomies and dualisms that fall out of them.

We are informatic subsystems attempting to extend our explanatory ‘algorithms’ as far into subordinate, parallel, and superordinate systems as we can, either by accumulating more information or by varying our algorithmic (cognitive) relation to the information already possessed. Whatever problem our system takes on, resolution depends upon this relation between information accumulation and algorithmic versatility. So as we saw with ‘qualia,’ our system is stranded: we cannot penetrate and interact with red the way we can with apples, and so the prospects of information accumulation are dim. Likewise, our algorithms are heuristic, possessing a neglect structure appropriate to environmental problem-solving (given various developmental and structural constraints), which is to say, a scope of applicability that simply does not (as one might expect) include qualia.

The ‘problem of consciousness,’ on the BBT account, is simply an artifact of literally being what science takes us to be: an informatic subsystem. What has been bewildering us all along is our blindness to our blindness, our inability to explicitly consider the prevalent and decisive role that informatic neglect plays in our understanding of human cognition. The problem of consciousness, in other words, is nothing less than a decisive demonstration of the heuristic nature of semantic/epistemic cognition–a fact that really, in the end, should come as no surprise. Why, when human and animal cognition is so obviously heuristic in so many ways, would we assume that a patron as stingy as evolution would flatter us with a universal problem-solving device, if not for simple blindness to the limitations of our brains?

The scientific problem of consciousness remains, of course. Default identity remains to be explained. But given BBT, the philosophical conundrums have for the most part been explained away…

As have we.