Three Pound Brain

No bells, just whistling in the dark…

How to Build a First Person (Using only Natural Materials)

by rsbakker

Aphorism of the Day: Birth is the only surrender to fate possible.

.

In film you have the famous ‘establishing shot,’ a brief visual survey, usually a long or medium shot, of the space the ensuing sequence will analyze along more intimate angles. Space, you could say, is the conclusion that comes first, the register that always precedes its analysis. Some directors play with this, continually force their audience into the analysis absent any spatial analysand. The viewer is thrown, disoriented as a result. Sometimes directors build outward, using the lure of established space as a kind of narrative instrument. Sometimes they shackle the eye to detail, mechanically denying events their place, and so inciting claustrophobia in the airy void of the theatre. They use the space represented to wage war against the space of representing.

If the same has happened here, it’s been entirely inadvertent. I’m not sure how I’ll look back at this year–this attempt to sketch out ‘post-intentional philosophy.’ It’s been a tremendously creative time, to be sure. A hundred thousand words for the beast that is The Unholy Consult, and easily as much written here. I’m not sure I’ve ever enjoyed such a period of intense creativity. These posts have simply been dropping in my head, one after another, some as long as journal articles, most all of them bristling with detail, jargon, and counterintuitive complexities. When I think about it, I’m blown away that Three Pound Brain has grown the way it has, half-again over last year…

For I wanketh.

Large.

Now I want to think the explanation is simple, that against all reason, I’ve managed to climb into a new space, an undiscovered country. But all I know for sure is that I’m arguing something genuinely new–something genuinely radical. So folly or not, I pursue, run down what seem to be the never-ending permutations of this murderous take on the human soul. We have yet to see what science will make of us. And we have very little reason to believe our hearts won’t be broken the way human hearts are almost always broken when they pitch traditional hope against scientific indifference. Who knows? Three Pound Brain could be the place, the cradle where our most epic delusion dies.

Either way, the time has come to pan back, crank up the depth of field, and finally provide some kind of establishing shot. This ain’t going to be easy–for me or you. At a certain level the formulations are almost preposterously simplistic (a ‘machinology’ as noir-realism, I think, termed it). I’m talking about the brain in exceedingly general terms, after all. I could delve into the (of course stochastic) mechanics in more detail, I suppose, go ‘neuroanatomical’ in an effort to add more empirical plumage. I still intend to write about the elegant way the Blind Brain Theory falls out of Bayesian predictive-coding models of the brain.

But for the nonce, I don’t need to. The apparently insuperable conundrums of the first person, the consciousness we think we have, can be explained using some quite granular structural and developmental assumptions. We just need to turn our normal way of looking at things upside down–to stop viewing our metacognitive image of meaning and agency as some kind of stupendous achievement. Why? Because doing so takes theoretical metacognition at its word, something that cognitive science has shown–quite decisively–to be the province of fools. If anything, the ‘stupendous achievement’ is the one possessing far and away the greatest evolutionary pedigree and utilizing the most neural resources: environmental cognition. Taking this as our baseline, we can begin diagnosing the ancient perplexities of the metacognitive image as the result of informatic occlusion and cognitive overreach.

We could be a kind of dream, you and I, one that isn’t even useful in any recognizable manner. This is where the difficulty lies: the way BBT requires we contravene our most fundamental intuitions.

It’s all about the worst case scenario. Philosophy, to paraphrase Brassier, is no sop to desire. If science stands poised to break us, then thought must submit to this breaking in advance. The world never wants for apologists: there will always be an army of Rosenthals and Badious. Someone needs to think these things, no matter how dehumanizing or alienating they seem to be. Besides, only those who dare think the post-intentional need fear ‘losing’ anything. If meaning and morality are the genuine emergent realities that the vast bulk of thinkers, analytic or continental, assume them to be, they should be able to withstand any sustained attempt to explain them away.

And if not? Well then, welcome to the future.

.

So, how do you build a first person?

Imagine the sum of information, understood in the deliberately vague sense of systematic differences making systematic differences, comprising you and your immediate environment. The holy grail of consciousness research is simply understanding how what you are experiencing this very moment fits into this ‘natural informatic field.’ The brass ring is one of understanding how you qua person reside in you qua organism–in other words, explaining how mechanism generates consciousness and intentionality.

Now until recently, science could only track natural processes up to your porch. You qua organism are a mansion of astronomical complexities, and even as modern medicine overran your outer defences, your brain remained an unconquerable citadel, the one place in nature where the old, prescientific games of giving-and-asking-for-reasons could flourish. This is why I continually talk about the ‘bonfire of the humanities,’ the impending collapse of the traditional discourses of the soul. This is why I continually speak of BBT in eschatological terms, pose it as a precursor of the posthuman: if scientifically confirmed, it means that Man-the-meaning-maker is of a piece with Man-the-image-of-God and Man-the-centre-of-the-universe, that noocentrism will join biocentrism and geocentrism in the reliquary of human intellectual conceit and folly. And this is why I mourn ‘Akratic Culture,’ society fissured by the scission of knowledge and experience, with managerial powers exploiting the mechanistic efficiencies of the former, and the client masses fleeing into the intentional opacities of the latter, seeking refuge in vacant affirmation and subreptive autonomy.

So how does the soul fit into the natural informatic field? BBT argues that the best way to conceive the difference between the first and third person is in terms of informatic neglect. Since the structure and function of the brain are dedicated to reliably modelling the structure and function of its environment, the brain remains that part of the environment that it cannot reliably model. BBT terms the modelling structure and function ‘medial’ and the modelled structure and function ‘lateral.’ It terms the brain’s inability to model its own modelling ‘medial neglect.’ Medial neglect simply means the brain cannot cognize itself as a brain, and so must cognize itself otherwise. This ‘otherwise’ is what we call the soul, mind, consciousness, the first-person, being-in-the-world, etc.

So consider a perspective on a brain:

[Diagram: brain]

Note that the target here is your perspective on the diagrammed brain, not the brain itself. Since the structure and function of your brain are dedicated to modelling the structure and function of your environment, the modelling nowhere appears within the modelled as anything resembling the modelled, even though we know the brain modelling is as much a brain as the brain modelled. The former, rather, provides the ‘occluded frame’ of the latter. At any given moment your perspective ‘hangs,’ as it were, outside of everything. You can pause and reflect on your perspective, of course, model your modelling, as say, something like this:

[Diagram: brain perspective 1]

but only from the standpoint of another ‘occluded frame,’ the oblivion of medial neglect. This second diagram, in other words, can only model the medial, neurofunctional information neglected in the first by once again neglecting that information. No matter how many times we stack these diagrams, how far we press the Rylean regress, we will still be stranded with medial neglect, the ‘unframed frame’ of the first person. The reason for this, it is important to note, is purely mechanical as opposed to semantic: the machinery of modelling simply cannot model itself as it models.
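
If it helps, the regress can be pictured as a toy computation. The sketch below is mine, not anything BBT specifies; the labels are illustrative stand-ins, nothing more:

    def model(target, depth=0, max_depth=4):
        """Return a model of `target`; the act of modelling never appears inside it."""
        m = {"modelled": target, "frame": "occluded"}  # the medial side stays dark
        if depth < max_depth:
            # Reflecting on the model just repeats the situation one level up:
            # a new model, with a new occluded frame.
            m["reflection"] = model(m, depth + 1, max_depth)
        return m

    nested = model("brain in its environment")

However deep the stack goes, the outermost act of modelling is always the one thing the structure cannot contain–a crude picture of the ‘unframed frame.’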

But even though medial neglect means thoroughgoing neurofunctional occlusion–brains only ever appear within the first person as modelled, never as modelling–these diagrams show it is by no means complete. As mentioned above, the brain’s inability to model itself as a brain (another natural mechanism in its environment) means it must model itself as a ‘perspective,’ something at once situated within its environment, and somehow mysteriously hanging outside of it–both local and nonlocal.

Many of the apparent peculiarities belonging to consciousness and intentionality as we intuit them, on the BBT account, turn on either medial neglect directly or one of a number of other structural and developmental confounds such as brain complexity, evolutionary caprice, and access invariance. The brain, unable to model itself as a brain, is forced to rely on what little metacognitive information its structure and evolutionary development afford.

This is where informatic neglect becomes a problem more generally, which is to say, over and above the problems posed by medial neglect in particular. We now know human cognition is fractionate, a collection of situation specific problem-solving devices, and yet we have no direct awareness of relying on anything save a singular, universal capacity for problem-solving. We regularly rely on dubious information, resort to the wrong device on the wrong occasion, entirely convinced of the justness of our cause, the truth of our theory, or what have you.

Mistakes like these and others reveal the profound and peculiar structural role informatic neglect plays in conscious experience. In the absence of information pertaining to our (medial) causal relation to our environment, we experience aboutness. In the absence of discriminations (in the absence of information) we experience wholes. In the absence of information regarding the insufficiency of information, we presume sufficiency.

But the most difficult-to-grasp structural quirk of informatic neglect has to be the ‘local nonlocality’ we encountered above, what I’ve been calling asymptosis, the fact that the various limits of cognitive and perceptual modalities cannot figure within those cognitive and perceptual modalities. As mechanical, no neural subsystem can model its modelling as it models. This is why, for instance, you cannot see the limits of your visual field–or why, in other words, the boundary of your visual field is asymptotic.

So in the diagrams above, you see a brain and none of the neural machinery responsible for that seeing primarily because of informatic neglect. It is you, a whole (and autonomous) person, seeing that brain and not a fractionate conglomerate of subpersonal cognitive mechanisms because of informatic neglect. Likewise, this metacognitive appraisal that it is ‘you’ looking at a brain is self-evident because of informatic neglect: you have no information to the contrary. And lastly, the ‘frame’ (the medial neurofunctionality) of what you see constitutively outruns what you see because, once again, of informatic neglect.

This is all just to say that the intentional, holistic, sufficient, and asymptotic structure of the first person simply follows from the fact that the brain is biomechanical.

This claim may seem innocuous, but it is big, I assure you, monstrously big. Why? Because, aside from at long last providing a parsimonious theoretical means of naturalizing consciousness and intentionality, it also argues that they (as intuitively conceived) are largely cognitive illusions, kinds of ‘natural anosognosias’ that we cannot but suffer given the constraints and confounds facing neural metacognition. It means that the very form of ‘subjectivity’ (and not merely the ‘self’) actually is a kind of dream.

Make no mistake, if the Blind Brain Theory (or something like it) turns out to be correct, it will be the last theory in the history of philosophy as traditionally conceived. Why? Because BBT is as much a translation manual as a theory, a potential way to transform the great intentional problems of philosophy into the mechanical subject matter of cognitive neuroscience.

Trust me, I know how out-and-out preposterous this sounds… But as I said above, the gates of the soul have been battered down.

Since the devil is in the details, it might pay to finesse this sketch with more information. So let’s return to what I termed the natural informatic field above: the sum of all the static and dynamic systematic differences that constitute you qua organism. How specifically does informatic neglect allow us to plug the phenomenal/intentional into the physical/mechanical?

From a life sciences perspective, the natural informatic field consists of externally-related structures and irreflexive processes. Our brain is that portion of the Field biologically adapted to model and interact with the rest of the Field (the environment) via information collected from the Field. The conscious subsystem of the brain is that portion of the Field biologically adapted to model and interact with the rest of the Field via information collected from the brain. All we need ask is what information is available to what cognitive resources as the conscious subsystem generates its model. In a sense, all we need do is subtract varieties and densities of information from the pot of overall information. I know the conceptual jargon makes this all seem dreadfully complicated, but it really is this simple.

So, what information can the conscious subsystem of the brain provide what cognitive resources in the course of generating its model? No causal information regarding its own neurofunctionality, as we have seen. The model, therefore, will have to be medially acausal. No temporal information regarding its own neurofunctionality either. The model, therefore, will have to be medially atemporal. Minimal information regarding its own structural complexity, given the constraints and confounds mentioned above. The model, therefore, will be structurally undifferentiated relative to environmental models. Minimal information regarding its own informatic and cognitive limitations, once again, given the aforementioned constraints and confounds. The model, therefore, will be both canonical (because of sufficiency) and intractable (because incompatible with existing, environmentally-oriented cognitive resources).
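
To make the bookkeeping concrete, here is a deliberately crude sketch of that subtraction in Python–my own toy illustration, with made-up channel names, not anything BBT commits to:

    # The 'natural informatic field', caricatured as a handful of channels the
    # brain could in principle draw on when modelling itself.
    field = {
        "medial_causality": "which mechanisms drove which",
        "medial_timing": "when the modelling happened",
        "structural_detail": "fractionate, heuristic subsystems",
        "cognitive_limits": "where reliable access runs out",
        "lateral_content": "apples, lions, rooms",
    }

    # Channels occluded by medial neglect and the other confounds listed above.
    neglected = {"medial_causality", "medial_timing", "structural_detail", "cognitive_limits"}

    # The metacognitive 'model' is simply the field minus the neglected channels.
    metacognitive_model = {k: v for k, v in field.items() if k not in neglected}
    print(metacognitive_model)  # {'lateral_content': 'apples, lions, rooms'}

What survives looks acausal, atemporal, and undifferentiated–and, crucially, sufficient, because nothing in the remainder flags what has gone missing.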

Now the key principle that seems to make this work is the way neglect leverages varieties of identity. BBT, in effect, interprets the appearance of consciousness as a kind of ‘flicker fusion writ large.’ In the absence of distinctions, the brain (for reasons that will fall out of any successful scientific theory of consciousness proper) conjures experiential continuities. Occlusion equals identity, according to BBT.

What makes the first person as it appears so peculiar from the standpoint of environmental cognition has to do with ‘informatic captivity’ or access invariance, our brain’s inability to vary its informatic relationship to itself the way it can its environments. So, on the BBT account, the ‘unity of consciousness’ that so impressed Descartes is simply of a piece with the way, in the absence of information, we confuse aggregates for individuals more generally, as when we confuse ants on the sidewalk with spilled paint, for instance. But where cognition can vary its access and so accumulate the information required to revise ‘spilled paint’ into ‘swarming ants’ in our environment, metacognition is trapped with the spilled paint of the ‘soul.’ The first person appears to be an internally-related ‘whole,’ in other words, simply because we lack the information to cognize it otherwise. The holistic consciousness we think we enjoy, in other words, is a kind of cartoon.
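
The ‘spilled paint’ confusion can be caricatured the same way. In the toy sketch below (again mine, purely illustrative), the bin width stands in for the grain of access; metacognition is the case where that grain cannot be varied:

    ants = [0.00, 0.01, 0.02, 0.03, 0.05, 0.06, 0.08]  # positions of distinct ants

    def what_gets_cognized(positions, resolution):
        """Report what a system with a given discriminative resolution registers."""
        bins = {round(p / resolution) for p in positions}
        return "one continuous patch" if len(bins) == 1 else f"{len(bins)} distinct things"

    print(what_gets_cognized(ants, resolution=0.005))  # '7 distinct things'
    print(what_gets_cognized(ants, resolution=1.0))    # 'one continuous patch'

Environmental cognition can walk closer and trade the second answer for the first; metacognition, stuck with a single fixed resolution, never can.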

(This underscores the way the external-relationality characteristic of our environment is an informatic and cognitive achievement, something the human brain has evolved to model and exploit. On the BBT account, internal-relationality is generally a symptom of missing information, a structurally and developmentally imposed loss of dimensionality.)

But what makes the first person so intractable, a hitherto inexhaustible source of perplexity, only becomes apparent when we consider the diachronic dimension of this ‘fusion in occlusion,’ the way neglect winnows the implacable irreflexivity of the natural into the labile reflexivity of the mental. The conscious system’s inability to model its modelling as it models applies to temporal modelling as well. The temporal system can no more ‘time its timing’ than the visual system can ‘see its seeing.’ This means that metacognition has no way to intuit the ‘time of timing,’ leading, once again, to default identity and all the paradoxes belonging to the ‘now.’ The temporal field is ‘locally nonlocal’ or asymptotic, muddy and fleeting yet apparently monolithic and self-identical.

So, in a manner similar to the way information privation collapses external-relationality into apparent internal-relationality, it also collapses irreflexivity into apparent reflexivity. Conscious cognition can track environmental irreflexivity readily enough, but it cannot track this tracking and so intuits otherwise. The first person cartoon suffers the diachronic hallucination of fundamental continuity in time. Once again metacognition mistakes oblivion (or less dramatically, incapacity) for identity.

To get a sense of how radical this is one need only consider the very paradigm of atemporal reflexivity in philosophy, the a priori. On the BBT account, what we call the a priori is what algorithmic nature looks like from the inside. No matter how much content you hollow out of your formalisms, you are still talking about something magical, still begging what Eugene Wigner famously called ‘the unreasonable effectiveness of mathematics,’ the question of why an externally-related, irreflexive nature should prove so amenable to an internally-related, reflexive mathematics. BBT answers: because mathematics is itself natural, its most systematically ‘viral’ expression. It collapses the disjunct, asserts continuity where the tradition perceives the inexplicable. Mathematics only seems ‘supra-natural’ because until recently it could only be explored performatively in the ‘laboratory’ of our own brains, and because of the way metacognition shears away its informatic dimensions. Given the illusion of sufficiency, the a priori cartoon strikes us as the efficacious source of a special, transcendental form of cognition. Only now, as computational complexities force mathematicians and physicists to rely more and more on machines, mechanical implementations that (by some cosmic coincidence) are entirely capable of performing ‘semantic’ operations without the least whiff of ‘understanding,’ are we in a position to entertain the possibility that ‘formal semantics’ are simply another ghost in the human machine.

And the list of radical reinterpretations goes on–after a year of manic exploration and elaboration I feel like I’ve scarcely scratched the surface. I could use some help, if anyone is so inclined!

So with that in ‘mind,’ I leave you with the following establishing shot: Consciousness as you conceive/perceive it this very moment now is the tissue of neglect, painted on the same informatic canvas with the same cognitive brushes as our environment, only blinkered and impressionistic in the extreme. Reflexivity, internal-relationality, sufficiency, and intentionality, can all be seen as hallucinatory artifacts of informatic closure and scarcity, the result of a brain forced to make the most with the least using only the resources it has at hand. This is a picture of the first person as an informatically integrated series of scraps of access, forced by structural bottlenecks to profoundly misrecognize itself as something somehow hooked upon the transcendental, self-sufficient and whole….

To see you.

The Second Room: Phenomenal Realism as Grammatical Violation

by rsbakker

Aphorism of the Day: Atheist or believer, we all get judged by God. The one that made us, or the one we make.

[Image: neuro skull]

So just what the hell did Wittgenstein mean when he wrote this?

“And yet you again and again reach the conclusion that the sensation itself is a nothing.” Not at all. It is not a something, but not a nothing either! The conclusion was only that a nothing would serve just as well as a something about which nothing could be said. (1953, 304)

I can remember attempting to get a handle on this section of Philosophical Investigations in a couple of graduate seminars, contributing nothing more than once stumping my professor with the question of fraudulent workplace injury claims. But now, at long last, I (inadvertently) find myself in a position to explain what Wittgenstein was onto, and perhaps where he went wrong.

My view is simply that the mental and the environmental are pretty much painted with the same informatic brush, and pretty much comprehended using the same cognitive tools, the difference being that the system as a whole is primarily evolved to track and exploit the environmental, and as a result has great difficulty attempting to track and leverage the ‘mental’ so-called.

If you accept the mechanistic model of the life sciences, then you accept that you are an environmentally situated, biomechanical, information processing system. Among the features that characterize you as such a system is what might be called ‘structural idiosyncrasy,’ the fact that the system is the result of innumerable path dependencies. As a bottom-up designer, evolution relies on the combination of preexisting capacities and happenstance to provide solutions, resulting in a vast array of ad hoc capacities (and incapacities). Certainly the rigours of selection will drive various functional convergences, but each of those functions will bear the imprimatur of the evolutionary twists that led it there.

Another feature that characterizes you as such a system is medial neglect. Given that the resources of the system are dedicated to modelling and exploiting your environments, the system itself constitutes a ‘structural blindspot’: it is the one part of your environment that you cannot readily include in your model of the environment. The ‘medial’ causality of the neural, you could say, must be yoked to the ‘lateral’ causality of the environmental to adequately track and respond to opportunities and threats. The system must be blind to itself to see the world.

A third feature that characterizes you as such a system is heuristic specificity. Given the combination of environmental complexity, structural limitations, and path dependency, cognition is situation-specific, fractionate, and non-optimal. The system solves environmental problems by neglecting forms of information that are either irrelevant or not accessible. So, to give what is perhaps the most dramatic example, one can suggest that intentionality, understood as aboutness, possesses a thoroughly heuristic structure. Given medial neglect, the system has no access to information pertaining to anything but the grossest details of its causal relationship to its environments. It is forced, therefore, to model that relationship in coarse-grained, acausal terms–or put differently, in terms that occlude the neurofunctionality that makes the relationship possible. As a result, you experience apples in your environment, oblivious to any of the machinery that makes this possible. This ‘occlusion of the neurofunctional’ generates efficiencies (enormous ones, given the system’s complexity) so long as the targets tracked are not themselves causally perturbed by (medial) tracking. Since the system is blind to the medial, any interference it produces will generate varying degrees of ‘lateral noise.’

A final feature that characterizes you as such a system might be called internal access invariability, the fact that cognitive subsystems receive information via fixed neural channels. All this means is that cognitive subsystems are ‘hardwired’ into the rest of the brain.

Given a handful of caveats, I don’t think any of the above should be all that controversial.

Now, the big charge against Wittgenstein regarding sensation is some version of crypto-behaviourism, the notion that he is impugning the reality of sensation simply because only pain behaviour is publicly observable, while the pain itself remains a ‘beetle in a box.’ The problem people have with this characterization is as clear as pain itself. One could say that nothing is more real than pain, and yet here’s this philosopher telling you that it is ‘neither a something nor a nothing.’

Now I also think nothing is more real than pain, but I also agree with Wittgenstein, at long last, that pain is ‘neither a something nor a nothing.’ The challenge I face is one of finding some way to explain this without sounding insane.

The thing to note about the four features listed above is how each, in its own way, compromises human cognition. This is no big news, of course, but my view takes the approach that the great philosophical conundrums can be seen as diagnostic clues to the way cognition is compromised, and that conversely, the proper theoretical account of our cognitive shortcomings will allow us to explain or explain away the great philosophical conundrums. And Wittgenstein’s position certainly counts as one of the most persistent puzzles confronting philosophers and cognitive scientists today: the question of the ontological status of our sensations.

Another way of putting my position is this: Everyone agrees you are a biomechanism possessing myriad relationships with your environment. What else would humans (qua natural) be? The idea that understanding the specifics of how human cognition fits into that supercomplicated causal picture will go a long way to clearing up our myriad, longstanding confusions is also something most everyone would agree with. What I’m proposing is a novel way of seeing how those confusions fall out of our cognitive limitations–the kinds of information and capacities that we lack, in effect.

So what I want to do, in a sense, is turn the problem of sensation in Wittgenstein upside down. The question I want to ask is this: How could the four limiting features described above, structural idiosyncrasy (the trivial fact that out of all the possible forms of cognition we evolved this one), medial neglect (the trivial fact that the brain is structurally blind to itself as a brain), heuristic specificity (the trivial fact that cognition relies on a conglomeration of special purpose tools), and access invariability (the trivial fact that cognition accesses information via internally fixed channels) possibly conspire to make Wittgenstein right?

Well, let’s take a look at what seems to be the most outrageous part of the claim: the fact that pain is ‘neither a something nor a nothing.’ This, I think, points rather directly at heuristic specificity. The idea here would be that the heuristic or heuristic systems we use to identify entities are simply misapplied with reference to sensations. As extraordinary as this claim might seem, it really is old hat scientifically speaking. Quantum Field Theory forced us quite some time ago to abandon the assumption that our native understanding of entities and existence extends beyond the level of apples and lions we evolved to survive in. That said, sensation most certainly belongs to the ‘level’ of apples and lions: eating apples causes pleasure as reliably as lion attacks cause pain.

We need some kind of account, in other words, of how construing sensations as extant things might count as a heuristic misapplication. This is where medial neglect enters the picture. First off, medial neglect explains why heuristic misapplications are inevitable. Not only can’t we intuit the proper scope of application for the various heuristic devices comprising cognition, we can’t even intuit the fact that cognition consists of multiple heuristic devices at all! In other words, cognition is blind to both its limits and its constitution. This explains why misapplications are both effortless and invisible–and most importantly, why we assume cognition to be universal, why quantum and cosmological violations of intuition come as a surprise. (This also motivates taking a diagnostic approach to classic philosophical problems: conundrums such as this indirectly reveal something of the limitations and constitution of cognition).

But medial neglect can explain more than just the possibility of such a misapplication; it also provides a way to explain why it constitutes a misapplication, as well as why the resulting conundrums take the forms they do. Consider the ‘aboutness heuristic’ discussed above. Given that the causal structure of the brain is dedicated to tracking the causal structure of its environment, that structure cannot itself be tracked, and so must be ‘assumed.’ Aboutness is forced upon the system. This occlusion of the causal intricacies of the system’s relation to its environment is inconsequential: so long as the medial tracking of targets in no way interferes with those targets, medial neglect simply relieves the system of an impossible computational load.

But despite its effectiveness, aboutness remains heuristic, remains a device (albeit a ‘master device’) that solves problems via information neglect. This simply means that aboutness possesses a scope of applicability, that it is not universal. It is adapted to a finite range of problems, namely, those involving functionally independent environmental entities and events. The causal structure of the system, again, is dedicated to modelling the causal structure of its environment (thus the split between medial (modelling) and lateral (modelled) functionality). This ensures the system will encounter tremendous difficulty whenever it attempts to model its own modelling. Why? I’ve considered a number of different reasons (such as neural complexity) in a number of different contexts, but the primary, heuristic culprit is that the targets to be tracked are all functionally entangled in these ‘metacognitive’ instances.

The basic structure of human cognition, in other words, is environmental, which is to say, adapted to things out there functioning independent of any neural tracking. It is not adapted to the ‘in here,’ to what we are prone to call the mental. This is why the introspective default assumption is to see the ‘mental’ as a ‘secondary environment,’ as a collection of functionally independent events and entities tracked by some kind of mysterious ‘inner eye.’ Cognition isn’t magical. To cognize something requires cognitive resources. Keeping in mind that the point of this exercise is to explain how Wittgenstein could be right, we could postulate (presuming evolutionary parsimony) that second-order reflection possesses no specially adapted ‘master device,’ no dedicated introspective cognitive system, but instead relies on its preexisting structure and tools. This is why the ‘in here’ is inevitably cognized as a ‘little out there,’ a kind of peculiar secondary environment.

A sensation–or quale, to use the philosophy of mind term–is the product of an occurrent medial circuit, and as such impossible to laterally model. This is what Wittgenstein means when he says pain is ‘neither a something nor a nothing.’ The information required to accurately cognize ‘pain’ is the very information systematically neglected by human cognition. Second-order deliberative cognition transforms it into something ‘thinglike,’ nevertheless, because it is designed to cognize functionally independent entities. The natural question then becomes, What is this thing? Given the meagre amount of information available and the distortions pertaining to cognitive misapplication, it necessarily becomes the most baffling thing we can imagine.

Given structural idiosyncrasy (again, the path dependence of our position in ‘design space’), it simply ‘is what it is,’ a kind of astronomically coarse-grained ‘random projection’ of higher dimensional neural space perhaps. Why is pain like pain? Because it dangles from all the same myriad path dependencies as our brains do. Given internal access invariability (again, the fact that cognition possesses fixed channels to other neural subsystems) it is also all that there is: cognition cannot inspect or manipulate a quale the way it can actual things in its environment via exploratory behaviours, so unlike other objects they necessarily appear to be ‘irreducible’ or ‘simple.’ On top of everything, qualia will also seem causally intractable given the utter occlusion of neurofunctionality that falls out of medial neglect, as well as the distortions pertaining to heuristic specificity.
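
For what it’s worth, the ‘random projection’ image can be made concrete with another toy sketch (mine, and nothing hangs on the particular numbers): a high-dimensional state pushed through a handful of fixed channels, standing in for access invariability:

    import random

    random.seed(0)
    N_NEURAL, N_ACCESS = 10_000, 3  # 'neural' dimensions versus what metacognition receives

    # A fixed projection: cognition cannot re-sample or vary these channels.
    channels = [[random.gauss(0, 1) for _ in range(N_NEURAL)] for _ in range(N_ACCESS)]

    def what_metacognition_gets(neural_state):
        """Collapse a high-dimensional state into a few fixed, structureless numbers."""
        return [sum(w * x for w, x in zip(row, neural_state)) for row in channels]

    occurrent_state = [random.random() for _ in range(N_NEURAL)]  # stand-in for a medial circuit
    print(what_metacognition_gets(occurrent_state))  # three numbers, no visible joints

The three numbers simply ‘are what they are’: the channels that produced them are fixed, cannot be inspected or varied, and nothing in them flags the dimensions that were discarded.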

As things, therefore, qualia strike us as ineffable, intrinsic, and etiologically opaque. Strange ‘somethings’ indeed!

Given our four limiting features, then, we can clearly see that Wittgenstein’s hunch is grammatical and not behaviouristic. The problem with sensations isn’t so much epistemic privacy as it is information access and processing: when we see qualia as extant things requiring explanation like other things we’re plugging them into a heuristic regime adapted to discharge functionally independent environmental challenges. Wittgenstein himself couldn’t see it as such, of course, which is perhaps why he takes as many runs at the problem as he does.

Okay, so much for Wittgenstein. The real question, at this point, is one of what it all means. After all, despite what might seem like fancy explanatory footwork, we still find ourselves stranded with a something that is neither a something nor a nothing! Given that absurd conclusions generally mean false premises, why shouldn’t we simply think Wittgenstein was off his rocker?

Well, for one, given the conundrums posed by ‘phenomenal realism,’ you could argue that the absurdity is mutual. For another, the explanatory paradigm I’ve used here (the Blind Brain Theory) is capable of explaining away a great number of such conundrums (at the cost of our basic default assumptions, typically).

The question then becomes whether a general gain in intelligibility warrants accepting one flagrant absurdity–a something that is neither a something nor a nothing.

The first thing to recall is that this situation isn’t new. Apparent absurdity is alive and well at the cosmological and quantum levels of physical explanation. The second thing to recall is that human cognition is the product of myriad evolutionary pressures. Much as we did not evolve to be ideal physicists, we did not evolve to be ideal philosophers. Structural idiosyncrasy, in other words, gives us good reason to expect cognitive incapacities generally. And indeed, cognitive psychology has spent several decades isolating and identifying numerous cognitive foibles. The only real thing that distinguishes this particular ‘foible’ is the interpretative centrality (not to mention cherished status) of its subject matter–us!

‘Us,’ indeed. Once again, if you accept the mechanistic model of the life sciences (if you’re inclined to heed your doctor before your priest), then you accept that you are an environmentally situated, biomechanical information processing system. Given this, perhaps we should add a fifth limiting feature that characterizes you: ‘informatic locality,’ the way your system has to make do with the information it can either store or sense. Your particular brain-environment system, in other words, is its own ‘informatic frame of reference.’

Once again, given the previous four limiting features, the system is bound to have difficulty modelling itself. Consider another famous head-scratcher from the history of philosophy, this one from William James:

“The physical and the mental operations form curiously incompatible groups. As a room, the experience has occupied that spot and had that environment for thirty years. As your field of consciousness it may never have existed until now. As a room, attention will go on to discover endless new details in it. As your mental state merely, few new ones will emerge under attention’s eye. As a room, it will take an earthquake, or a gang of men, and in any case a certain amount of time, to destroy it. As your subjective state, the closing of your eyes, or any instantaneous play of your fancy will suffice. In the real world, fire will consume it. In your mind, you can let fire play over it without effect. As an outer object, you must pay so much a month to inhabit it. As an inner content, you may occupy it for any length of time rent-free. If, in short, you follow it in the mental direction, taking it along with events of personal biography solely, all sorts of things are true of it which are false, and false of it which are true if you treat it as a real thing experienced, follow it in the physical direction, and relate it to associates in the outer world.” (“Does ‘Consciousness’ Exist?”)

The genius of this passage, as I take it, is the way it refuses to relinquish the profound connection between the third person and the first, rather alternating from the one to the other, as if it were a single, inexplicable lozenge that tasted radically different when held against the back or front of the tongue–the room as empirically indexed versus the room as phenomenologically indexed. Wittgenstein’s problem, expressed in these terms, is simply one of how the phenomenological room fits into the empirical. From a brute mechanistic perspective, the system is first modelling the room absent any model of its occurrent modelling, then modelling its modelling of the room–and here’s the thing, absent any model of its occurrent modelling. The aboutness heuristic, as we saw, turns on medial neglect. This is what renders the second target, ‘room-modelling,’ so difficult to square with the ‘grammar’ of the first, ‘room,’ perpetually forcing us to ask, What the hell is this second room?

The thing to realize at this juncture is that there is no way to answer this question so long as we allow the apparent universality of the aboutness heuristic to get the better of us. ‘Room-modelling’ will never fit the grammar of ‘room’ simply because it is–clearly, I would argue–the product of informatic privation (due to medial neglect) and heuristic misapplication (due to heuristic specificity).

On the contrary, the only way to solve this ‘problem’ (perhaps the only way to move beyond the conundrums that paralyze philosophy of mind and consciousness research as a whole) is to bracket aboutness, to finally openly acknowledge that our apparent baseline mode of conceptualizing truth and reality is in fact heuristic, which is to say, a mode of problem-solving that turns on information neglect and so possesses a limited scope of effective application. So long as we presume the dubious notion that cognitive subsystems adapted to trouble-shooting external environments absent various classes of information are adequate to the task of trouble-shooting the system of which they are a part, then we will find ourselves trapped in this grammatical (algorithmic) impasse.

In other words, we need to abandon our personal notion of the ‘knower’ as a kind of ‘anosognosiac fantasy,’ and begin explaining our inability to resolve these difficulties in subpersonal terms. We are an assemblage of special purpose cognitive tools, not whole, autonomous knowers attempting to apprehend the fundamental nature of things. We are machines attempting to model ourselves as such, and consistently failing because of a variety of subsystemic functional limitations.

You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy. I realize that this sounds like anathema to many–it certainly strikes me as such! But no matter what one thinks of the story above, I find it hard to fathom how philosophy can avoid this fate now that the black box of the brain has been cracked open. In other words, we need to see the inevitability of this picture or something like it. As a natural result of the kind of system that we happen to be, the perennial conundrums of consciousness (and perhaps philosophy more generally) are something that science will eventually explain. Only ignorance or hubris could convince us otherwise.

We affirm the cosmological and quantum ‘absurdities’ we do because of the way science allows us to transcend our heuristic limitations. Science, you could say, is a kind of ‘meta-heuristic,’ a way to organize systems such that their individual heuristic shortcomings can be overcome. The Blind Brain picture sketched above bets that science will sketch the traditional metaphysical problem of consciousness in fundamentally mechanistic terms. It predicts that the traditional categorical bestiary of metaphysics will be supplanted by categories of information indexed according to their functions. It argues that the real difficulty of consciousness lies in the cognitive illusions secondary to informatic neglect.

One can conceive this in different ways, I think: You could keep your present scientifically informed understanding of the universe as your baseline, and ‘explain away’ the mental (and much of the lifeworld with it) as a series of cognitive illusions. Qualia can be conceived as ‘phenomemes,’ combinatorial constituents of conscious experience, but no more ‘existential’ than phonemes are ‘meaningful.’ This view takes the third-person brain revealed by science as canonical, and the first-person brain (you!) as a ‘skewed and truncated low-dimensional projection’ of that brain. The higher-order question as to the ontological status of that ‘skewed and truncated low-dimensional projection’ is diagnostically blocked as a ‘grammatical violation,’ by the recognition that such a move constitutes a clear heuristic misapplication.

Or one could envisage a new kind of scientific realism, where the institutions are themselves interpreted as heuristic devices, and we can get to the work of describing the nonsemantic nature of our relation to each other and the cosmos. This would require acknowledging the profundity of our individual theoretical straits, to embrace our epistemic dependence on the actual institutional apparati of science–to see ourselves as glitchy subsystems in larger social mechanisms of ‘knowing.’ On this version, we must be willing to detach our intellectual commitments from our commonsense intuitions wholesale, to see the apparent sufficiency and universality of aboutness as a cognitive illusion pertaining to heuristic neglect, first person or third.

Either way, consciousness, as we intuit it, can at best be viewed as virtual.

Getting Subpersonal: Should Dennett Rethink the Intentional Stance?

by rsbakker

Don’t you look at my girlfriend,

She’s the only one I got.

Not much of a girlfriend,

Never seem to get a lot.

–Supertramp, “Breakfast in America”

.

This shows that there is no such thing as the soul–the subject, etc.–as it is conceived in the superficial psychology of the present day.

Indeed a composite soul would no longer be a soul.

–Wittgenstein, 5.5421, Tractatus Logico-Philosophicus

.

One way of conceptualizing the ‘problem of meaning’ presently confronting our society is in terms of the personal and the subpersonal. The distinction is one famously made by Wittgenstein (1974) in the Tractatus, where he notes the way psychological claims like ‘knows that,’ ‘believes that,’ ‘hopes that’ involve the individual taken as a whole (5.542). Here, as in so many other places, Daniel Dennett has been instrumental in setting out the terms of the debate. On his account, the personal refers to what Wittgenstein called the ‘soul’ above, the whole agent as opposed to its parts. The subpersonal, on the other hand, refers to the parts as opposed to the whole, the constitutive components of the whole. Where the personal figures in intentional explanations, enabling the prediction, understanding, and manipulation of our fellows, the subpersonal figures in functional explanations, enabling the prediction, understanding, and manipulation of the neural mechanisms that make us tick.

The personal and the subpersonal, in other words, provide a way of conceptualizing the vexing relation between intentional and functional conceptuality that pertains directly to you. Where the personal level of description pertains to you as an agent, a subject of belief, desire, and so on, the subpersonal level of description pertains to you as an organism, as a biomechanism consisting of numerous submechanisms. In a strange sense, you are your own doppelganger, one that apparently answers to two incommensurable rationalities. This is why your lawyer, when you finally get around to murdering that local television personality, will be inclined to defend the subpersonal you by blaming neural devils that made you do it, while the prosecutor will be hell bent on sending the personal you to the gas chamber. It’s hard to convict subpersonal mechanisms.

As Wittgenstein says, the ‘composite soul’ is no soul. The obvious question is why? Why is the person an indivisible whole? Dennett (2007) provides the following explanation:

The relative accessibility and familiarity of the outer part of the process of telling people what I can see–I know my eyes have to be open, and focused, and I have to attend, and there has to be light–conceals from us the utter blank (from the perspective of introspection or simple self-examination) of the rest of the process. How do you know there’s a tree beside the house? Well, there it is, and I can see that it looks just like a tree! How do you know it looks like a tree? Well, I just do! Do you compare what it looks like to many other things in the world before settling upon the idea that it’s a tree? Not consciously. Is it labeled “tree”? No, I don’t need to ‘see’ a label; besides, if there were a label I’d have to read it, and know that it labelled the thing it was on. I just know it’s a tree. Explanation has to stop somewhere, and at the personal level it stops here, with brute abilities couched in the familiar intentionalistic language of knowing and seeing, noticing and recognizing and the like. (9)

What Dennett is describing here is a kind of systematic neglect, and in terms, no less, that would have made Heidegger proud: What is concealed? An utter blank. This is a wonderful description of what I’ve been calling medial neglect, the way the brain, adapted and dedicated to tracking ‘lateral’ environments, must remain to a profound extent the blindspot in its environment. To paraphrase Heidegger (1949), what is nearest is most difficult to see. The human brain systematically neglects itself, generating, as a result, numerous confusions, particularly when it attempts to cognize itself. We just ‘know without knowing.’ And as Dennett says, this is where explanation has to stop.

“The recognition that there are two levels of explanation,” he  writes, “gives birth to the burden of relating them” (1969, 20). In “Mechanism and Responsibility” (1981) he attempts to discharge this burden by isolating and defeating the various ‘incompatibility intuitions’ that lead to stark appraisals of the intentional/mechanical divide. So for instance, if you idealize rational agency, then any mechanical consideration of the agent will seem to shatter the illusion. But, if you accept that humans are always and only imperfectly rational, and that the intentional and mechanical are two modes of making sense of complex systems, then this extreme incompatibility dissolves. “What are we to make of the hegemony of mechanical explanation over intentional explanation?” he writes. “Not that it doesn’t exist, but that it is misdescribed if we suppose that whenever the former are confirmed, they drive out the latter” (246). Passages like these, I think, highlight a perennial tension between Dennett’s pragmatic and realist inclinations. The ‘hegemony,’ he often seems to imply, is pragmatic: the mechanical merely allows us to go places the intentional cannot. In this case, the only compatibility that matters is the compatibility of our explanations with our purposes. But when he has his realist hat on, the hegemony becomes metaphysical, the product of the way things are. And this is where his compatibilism begins to wobble.

So for instance, adopting Dennett’s pragmatic scheme means that intentional explanations will be appropriate or inappropriate depending on the context. As our needs change, so will the utility of the intentional stance. “All that is the case,” he writes, “is that we, as persons, cannot adopt exclusive mechanism (by eliminating the intentional stance altogether)” (254). If we were, as he puts it, “turned into zombies next week” (254) all bets would be off. It’s arguments like these that wear so many scowls into the brows of so many readers of Dennett. All it means to be an intentional system, he argues, is to be successfully understood in intentional terms. There is no fact of the matter, no ‘original intentionality.’ But if this is the case, how could we be turned into (as opposed to ‘taken as’) zombies next week?

Dennett, remember, wants to  be simultaneously a realist about mechanism and a pragmatist about intentionality. So isn’t he really just saying we are zombies (mere mechanisms) all the time, and that ‘persons’ are simply an artifact of the way we zombies are prone (perhaps given informatic neglect) to interpret one another? This certainly seems to be the most straightforward explanation. If it were simply a matter of ‘taking as,’ why would the advance of the life sciences (and the mechanistic paradigm) constitute any sort of threat? In other words, why would the personal need fear the future? As Dennett writes:

All this says nothing about the impossibility of dire depersonalization in the future. Wholesale abandonment of the intentional is in any case a less pressing concern than partial erosion of the intentional domain, an eventuality against which there are no conceptual guarantees at all. If the growing area of success in mechanistic explanation of human behaviour does not in of itself rob us of responsibility, it does make it more pragmatic, more effective or efficient, for people on occasion to adopt less than the intentional stance toward others. Until fairly recently the only well-known generally effective method of getting people to do what you wanted them to was to treat them as persons. (255)

That was 1971 (when Dennett presented the first draft of “Mechanism and Responsibility” at Yale), and this is 2012, some 41 years later, a time when you could say this ‘dire’ process of incremental depersonalization has finally achieved ‘economies of scale.’ What I want to consider is the possibility that history has actually outrun Dennett’s arguments for the intentional stance.

Consider NeuroFocus, a neuromarketing corporation that I’ve critiqued in the past, and that now bills itself as the premier neuromarketer in the world. In a summary of the effectiveness of various ads televised over the 2008 Beijing Olympics, they describe their methodology thus:

NeuroFocus conducts brainwave-based research employing high density EEG (electroencephalographic) sensor technology, coupled with pixel-level eye movement tracking and GSR (galvanic skin response) measurements. The company captures brainwave activity across as many as 128 different sectors of the brain, at 2000 times a second for each of these locations. NeuroFocus’ patented brainwave monitoring technology produces results that are far more accurate, reliable and actionable than any other form of research.

The thing to note is that all three of these channels–brain waves, saccades, and skin conductance–are involuntary. None of these pertain, in other words, to you as a person. In fact, the person is actually the enemy in neuromarketing, in terms of both assessing and engineering ad effectiveness. Using these subpersonal indices, NeuroFocus measures what they call ‘Brand Perception Lift,’ the degree to which a given spot influences subconscious brand associations, and ‘Commercial Performance Lift,’ the degree to which it subconsciously induces consumers to make purchases. As the Advertising Research Foundation notes in a recent report:

The human mind is not well equipped to probe its own depths, to explain itself to itself, let alone to others. Many of the approaches used in traditional advertising research are focused on rational, conscious processes and are, therefore, not well suited to understanding emotion and the unconscious. Regardless of our comfort level, we have to explore approaches that are fundamentally different—indirect or passive approaches to measuring and understanding emotion and its impact.

‘You’ quite literally have no clear sense as to how ads affect your attitudes and behaviours. This disconnect between what a person self-reports and what a person actually does has always meant that marketing was as much art as science. But since Coca Cola began approaching brain researchers in the early 1990s, neuromarketing in America has ballooned into an industry consisting of a dozen companies and dozens more consultancies. This is just to say that no matter what one thinks of the effectiveness of neuromarketing techniques as they stand (the ARF report linked above details several ‘ROI’ efficiencies and predicts more as the technology and techniques improve), a formidable, and growing, array of resources have been deployed in the pursuit of the subpersonal consumer.

NeuroFocus is by no means alone, and neuromarketing is becoming more and more ubiquitous. Consider the show Intervention. Concerned that advertisers were avoiding the show due to its intense emotional content (because let’s face it, the trials and tribulations of addiction make the concerns motivating most consumer products, things like hemorrhoids or dandruff, almost tragically trivial), A&E contracted NeuroFocus to see how viewers were actually responding to ads on their show. Their results?

Because neurological testing probes the deep subconscious mind for this data, advertisers can rely on these findings with complete confidence. The results of this study provide scientific evidence that when a company decides to advertise in reality programming that contains the kind of powerful and gripping content that Intervention features, there is no automatic downside to that choice. Instead, there is an opportunity to engage viewers’ subconscious minds in equally, and often even more powerful and gripping ways.

In other words, extreme emotional content renders viewers more susceptible to commercial messaging, not less. Note the way the two kinds of communication, the personal and the subpersonal, seem to be blurred in this passage. The ‘powerful and gripping content’ of the show, one would like to assume, entails A&E taking a personal stance toward their viewers, whereas the ‘powerful and gripping content’ of the commercials entails advertisers taking a subpersonal stance toward their viewers. The problem, however, is that the question is the effectiveness of Intervention as a vehicle for commercial advertising, a question that NeuroFocus answers by targeting the subpersonal. A&E has hired them, in effect, to assess the subpersonal effectiveness of Intervention as a vehicle for subpersonal commercial messaging.

In other words, the best way to maximize ROI (‘return on investment’) is to treat viewers as mechanisms, as machines to be hacked via multiple messaging mechanisms, one overtly commercial (advertising), the other covertly (Intervention). The dismal irony here of course, is that the covert messaging mechanism features ‘real life’ narratives featuring addicts trying to recover–what else?–personhood!

Make no mistake, the ‘Age of the Subpersonal’ is upon us. Now a trickle, soon a deluge. Dennett, of course, is famous (or infamous) for his strategy of ‘interpretative minimization,’ his tendency to explain away apparent conflicts between the intentional and the mechanical, the personal and the subpersonal. But he is by no means so cavalier as to confuse the theoretical dilemmas manufactured by philosophers bent on “answering the ultimate ontological question” (2011) with the kind of practical dilemma posed by the likes of NeuroFocus. “There is a real crisis,” Dennett (2006) admits, “and it needs our attention now, before irreparable damage is done to the fragile environment of mutually shared beliefs and attitudes on which a precious conception of human dignity does indeed depend for its existence” (1).

The ‘solution’ he offers requires us to appreciate the way our actions will impact communal expectations. He gives the (not-so-congenial) example of the respect we extend to corpses:

Even people who believe in immortal immaterial souls don’t believe that human “remains” harbor a soul. They think that the soul has departed, and what is left behind is just a body, just unfeeling matter. A corpse can’t feel pain, can’t suffer, can’t be aware of any indignities–and yet still we feel a powerful obligation to handle a corpse with respect, and even with ceremony, and even when nobody else is watching. Why? Because we appreciate, whether acutely or dimly, that how we handle this corpse now has repercussions for how other people, still alive, will be able to imagine their own demise and its aftermath. Our capacity to imagine the future is both the source of our moral power and a condition of our vulnerability. (6)

To protect the fragility of the person from the zombie described by science, we need to recall–the corpse! (The problem, of course, is that we have possible subpersonal explanations for post-mortem care-taking rituals as well, such as those proposed, for instance, by Pascal Boyer (2001).) The idea he develops calls for us to begin managing our traditional ‘belief environments’ the way we manage any other natural environment threatened by science and its consequences. And the best way to do this, he suggests, is to begin encouraging a person-friendly, doxastic ecological mind-set: “If we want to maintain the momentousness of all decisions about life and death, and take the steps that elevate the decision beyond the practicalities of the moment, we need to secure the appreciation of this very fact, and enliven the imaginations of people so that they can recognize, and avoid wherever possible, and condemn, activities that would tend to erode the public trust in the presuppositions about what is–and should be–unthinkable.”

A slippery slope couched in moral indignation: the approach that failed when employed against evolution (against the mechanization of our origin), and will almost certainly fail against the corresponding mechanization of our soul. Surely any real solution to the problem of ‘getting too subpersonal’ has to turn on the reason why the subpersonal so threatens the personal. We’re simply tossing homilies to the wind, cluck-clucking in disapproval, otherwise. No. It’s clear the problem must be understood. And once again, the obvious explanation seems to be that the ‘hegemony of mechanistic explanation,’ as Dennett calls it, is real in a way intentionality is not. How, for instance, should one interpret the situation I describe above? As a grift, a collection of unscrupulous persons manipulating another collection of unwitting persons? This certainly has a role to play in the kinds of ‘moral intuitions’ violated. But couldn’t the executives plead obligation? They have been charged, after all, with maximizing their shareholders’ ROI, and if mechanistic messaging is more effective than intentional messaging, if no laws are broken and no individuals are harmed, then what on earth could be the problem? Does potential damage to the manifest or traditional ‘belief environment,’ as Dennett has it, trump that obligation? Good luck convincing the most powerful institutions on the planet of that.

Otherwise, if it is the case that the mechanistic trumps the intentional (as neuromarketing, let alone myriad other subpersonal approaches to the human are making vividly clear), why are we talking about morality at all? Morality presumes persons, and this situation would seem to suggest there are no such things, not really, not now, not ever. Giving Occam his due, why not say no persons were harmed in this (or any other) case because no persons existed outside the skewed, parochial assumptions of the zombies involved: the smart zombies on the corporate side hacking the stupid zombies on the audience side?

What the hell is going on here? Seriously. Have we really been reduced to honouring corpses?

The sad fact is, this situation looks an awful lot like a magic show, where the illusion ticks along seamlessly only so long as certain information remains occluded. Various lapses (or as Dennett (1978) calls them, ‘tropisms’) can be tolerated, odd glimpses behind the curtain, hands too lethargic to fool the eye, but at some point, the assumptive economy that makes the illusion possible falls apart, and we witness the dawning of a far less magical aspect–a more desolate yet far more robust ‘level of explanation.’ In this picture, the whole of the human race is hardwired relative to themselves, chained before the magician of their own brain, seeing only what every other human being can see, and so remaining convinced they see everything that needs to be seen. Since the first true Homo sapiens, the show has been seamless, save for the fact that none of it can be truly explained. But now that science has at last surmounted the complexities of the brain, more and more nefarious souls have begun sneaking peeks behind the curtain in the hope of transforming the personal show into a subpersonal scam…

Intentionality, in other words, depends on ignorance. This is what makes Dennett’s rapprochement between the personal and the subpersonal a matter of context, something dependent upon the future. Information accumulates given language and culture. The ‘compatibility’ he describes (accurately, I think, though more coyly than I would wish) is the compatibility of a magician watching his crosstown rival’s show, the compatibility of seeing the magic, because it is flawlessly performed, yet knowing the mechanics of the illusion all the same.

More importantly, intentionality depends on ignorance of mechanism, which is to say, the very ignorance science is designed to overcome. Only now are we seeing the breakdown in compatibility he feared in 1971. Why? Because mechanistic knowledge is progressive in a way that intentional knowledge is not, and so pays ever greater dividends. The sciences of the brain are allowing more and more people to leave the audience and climb onto the stage. The show is becoming more and more discordant, more difficult to square with the illusion of seeing everything there is to see.

The manifest image is becoming more and more inchoate. Neuromarketing is beginning to show, on a truly massive scale, how to see past the illusion of the person.

Why illusion? Throughout his corpus, Dennett adamantly insists on the objectivity of the intentional stance, that its predictive and explanatory power means that it picks out ‘real patterns’ (1991). Granting this is so (because one could argue that the only ‘intentional stance’ is the one belonging to philosophers attempting to cognize what amounts to scraps of metacognitive information), the patterns ‘picked out’ are both blinkered and idiosyncratic. Dennett acknowledges as much, but thinks this parochial objectivity licenses second-order, pragmatic justifications. He is honest enough to his pragmatism to historicize these justifications, to acknowledge that a day may come. Likewise, he is honest enough to the theoretical power of science to resist contextualism tout court, to avoid the hubris of transforming the natural into a subspecies of the cultural on the strength of something so unreliable as philosophical speculation.

But now that the inevitability of that ‘day’ seems to be clearly visible, it becomes more difficult to see how his second-order pragmatism isn’t tendentious, or even worse, question-begging. Dennett wants us to say we are mechanisms (what else would we be?) that take ourselves for persons for good reason. When arguing against ‘greedy reduction’ (Dennett, 1995), he leans hard on that last phrase, and only resorts to the predicate when he has to. He relentlessly emphasizes the pragmatic necessity of the personal. When arguing against original intentionality, he reverses emphasis, showing how the subpersonal grounds the personal, how the ‘skyhooks’ of tradition are actually ‘cranes’ (1995), or how explaining the ‘magic of consciousness’ amounts to explaining a certain evolutionary trick (2003, 2005).

This ‘reversal of emphasis’ strategy has served him, not to mention philosophy and cognitive science, well (see Elton, 2003) over some forty-plus years. But with the rise of industries like neuromarketing, I submit that the contextual grounds that warrant his intentional emphasis are dissolving beneath his feet simply because they are dissolving beneath everybody’s feet. Does he really think treating the intentional as an ‘endangered ecology’ will allow us to prevent, let alone resolve, problems like neuromarketing? The simple need to become proactive about our belief environment, to institute regimes of explicit and implicit ‘make-think,’ demonstrates–rather dramatically one would think–that we have crossed some kind of fundamental threshold, the very one, in fact, that he worried about early in his philosophical career.

Things are simply getting too subpersonal. Dennett wants us to say we are mechanisms that take ourselves for persons for good reason. What he really should say at this point, as a naturalist as opposed to a pragmatist, is that we are mechanisms that take ourselves for persons, for reasons science is only beginning to learn.

The more subpersonal information that becomes available, the more isolated and parochial the person will seem to become. Quine’s ‘dramatic idiom’ is set to become increasingly hysterical unless employed as a shorthand for the mechanical. Why? Because the sciences, for better or worse, monopolize theoretical cognition–it’s all mere philosophy otherwise. This is why Dennett referred to the prospect of depersonalization as ‘dire’ in 1971, and why his call to become stewards of our doxastic ecology rings so hollow in 2006. No matter how prone philosophers are to mistake rank speculation for knowledge, one can rely on science to show them otherwise. This is what I’ve elsewhere referred to as the ‘Big Fat Pessimistic Induction.’ The power of mechanism, the power of the subpersonal, will continue to grow as scientific knowledge progresses–period.

This is also the scenario I sketch in my novel Neuropath, a near-future where the social and cultural dissociation between knowledge and experience has become obviously catastrophic, an illustration of the Semantic Apocalypse and the kind of ‘Akratic Culture’ we might expect to rise in its wake. Dennett uses the corpse analogy above to impress upon us the importance of doxastic consequences, the idea that failing to honour corpses as para-persons undermines the ecology that demands we honour persons as well. But what if this particular ecological collapse is going to happen regardless? Throwing verbiage at science, no matter how eloquent, how incendiary, will not make it stop, which means intentional conservatism, no matter how well ‘intentioned,’ will only serve to drag out the inevitable.

Radicalism is the only way forward. Rather than squandering our critical resources on attempts to salvage the show, perhaps we need to shoo it from the stage, get down to the hard work of reinventing ourselves.

If intentionality is like a magic trick, then the accumulation of information regarding the neurofunctional specifics of consciousness will render it progressively more incoherent. Intentionality, in other words, requires the Only-game-in-town-effect at the level of praxis. When it becomes systematically rational for a person to treat others, even themselves, as mechanisms, Dennett lacks the ‘contextual closure’ he requires to convincingly promote compatibility. It is not always best to treat others as persons. Given the way the subpersonal trumps the personal, it pays to put ‘persons’ on notice even in the absence of lapses of rationality–perhaps especially in the absence of lapses. The other guy, after all, could be doing the same with you. There is a gaping difference, in other words, between the intentional stance we necessarily take and the intentional stance we conditionally take. Certainly we are forced to continue relying on intentional idioms, as I have throughout this very post, but we all possess some understanding of the cognitive limitations of that idiom, the fact that we, in some unnerving fashion, are speaking from a kind of conceptual dream. In Continental philosophical terms, you might say we’re speaking ‘under erasure.’ We communicate, understanding we are mechanisms that take ourselves to be persons for reasons we are only beginning to learn.

What might those reasons look like? I’ve placed my chips on the Blind Brain Theory. The ‘apparent wholeness’ of the person is a result of generalized informatic neglect–or ‘adaptive anosognosia.’ Our deliberative cognitive systems (themselves at some level ‘subpersonal’) are oblivious to the neural functions they discharge–they suffer a kind of WYSIATI (Kahneman, 2012) writ large. As a result, they confuse their parochial glimpse for the entire show. Call it the ‘Metonymic Error,’ or ‘ME,’ a sort of ‘mereological fallacy’ (Bennett and Hacker, 2003) in reverse, the cognitive illusion that leads fragmentary, subpersonal assemblages to mistake themselves for something singular and whole.

And as I hope should be clear, it is a mistake. ‘Apparent wholeness’ (sufficiency) is a cognitive illusion in the same manner that asymmetric insight is a cognitive illusion. The fact that both are adaptive doesn’t change this. Discharging subreptive functions doesn’t make misconceptions less illusory (any more than does the number of people labouring under them). The real difference is simply the degree to which our discourses seem to depend on the veracity of the former, the way my use of ‘mistake’ above, for instance, seems to beg the very intentionality I’m claiming is discredited. But, given that I’m deploying the term ‘under erasure,’ all this speaks to is the exhaustive nature of the illusion–which is to say, to our mutual cognitive anosognosia. Accusing me of performative contradiction not only begs the question, it makes the above examples regarding ‘subpersonalization’ very, very difficult to understand. I need only ask for an account of why mechanism trumps intentionality while leaving it intact.

But given that this is a form of nonpathological anosognosia we are talking about, which is to say, a cognitive deficit regarding cognitive deficits, people are bound to find it exceedingly difficult to recognize. As I’ve learned first hand, the reflex is to simply fall back on the manifest image, the way pretty much everyone in philosophy and cognitive science seems inclined to do, and to incessantly repeat the question: How could persons be illusions if they feature in so much ‘genuine understanding’?

The question no one wants to ask is, What else could they feature in?

Or to put the question differently: Imagine it were the case that we had a thoroughly fragmentary, distorted, depleted intentional understanding, but we possessed brains that had nevertheless evolved myriad ways to successfully anticipate and coordinate with others. What would our cognition look like?

Idiosyncratic. Baffling… And yet mysteriously effective all the same.

Some crazy shit, I know. All these years our biggest worry was that we were digging our grave with science, never suspecting we might find our own corpse before hitting bottom.

.

References

.

Bennett, M. R., and Hacker, P. (2003). Philosophical Foundations of Neuroscience.

Boyer, P. (2001). Religion Explained: The Evolutionary Origins of Religious Thought.

Dennett, D. C. (1969). Content and Consciousness.

Dennett, D. C. (1981). “Mechanism and Responsibility,” Brainstorms.

Dennett, D. C. (1991). “Real Patterns.”

Dennett, D. C. (1995). Darwin’s Dangerous Idea.

Dennett, D. C. (2003). “Explaining the ‘Magic’ of Consciousness.”

Dennett, D. C. (2005). Sweet Dreams: Philosophical Obstacles to a Science of Consciousness.

Dennett, D. C. (2006). “How to Protect Human Dignity from Science.”

Dennett, D. C. (2007). “Heterophenomenology Reconsidered.”

Dennett, D. C. (2011). “Kinds of Things–Towards a Bestiary of the Manifest Image.”

Elton, M. (2003). Daniel Dennett: Reconciling Science and Our Self-Conception.

Heidegger, M. (1949). “Letter on ‘Humanism.’”

Kahneman, D. (2012). Thinking, Fast and Slow.

Wittgenstein, L. (1974). Tractatus Logico-Philosophicus.

Facing, Chirping, Hawking my Wares

by rsbakker

Definition of the Day – Philosophy: 1) A form of ethereal waste known to clog the head. See, Pornography, Conceptual Forms of.

.

I actually have several announcements to make, but first I would like to thank Benjamin for a discussion every bit as cool and incisive as his post. By all means continue: the comments never close ’round these parts.

Otherwise, I want to officially announce that I’m now officially official. To wit:

The Official Website: The changes are in. Thanks one and all for your feedback. If all goes well, these very words should be glowing there at this very moment now.

The Official Facebook Page: Apparently, this is something that has to be done. Apparently, ‘word o’ mouth’ ain’t enough anymore, it’s gotta be word o’ face. Apparently, your attitudes regarding Facebook ‘say a lot’ about your attitudes to the human race as a whole. But I can’t help it. I can’t help looking at Facebook in neuroprosthetic terms, like an informatic tapeworm exploiting a variety of subpersonal systems, not the least of which being the occipital and fusiform face areas. If anything proves that I would be a wild-eyed hermit dressed in putrefying goatskins in any age other than this one, my totally irrational antipathy to Facebook has to be it. The World loves it – so of course it has to be poison! And then there’s the Book of Revelation. Maybe the number that had Jack of Patmos twisting in his goatskins was a Hamming number, the ugliest number of all.

The Devil’s Chirp: Okay, so the ‘Devil’s Tweet’ was already taken, but I’m actually glad in retrospect. Ambrose Bierce’s The Devil’s Dictionary is one of my favourite books, the near pitch-perfect combination of sarcasm and wisdom. My hope is to turn the Devil’s Chirp into a worthy homage to his assay into Satanic redefinition of the hypocritical human soul, but I’ll settle for a cheap knock-off. Now I just gotta figure out how it works. I have a hard time restricting myself to 140 characters in my novels, for Chrissakes. At the very least it’s proof I’ve sold my soul to the lowest bidder.

CBC Ideas: I’m due at the studio this morning.

And I felt wired already…

The Rant Within the Undead God – by Benjamin Cain

by rsbakker

Some centuries before the Common Era, in a sweltering outskirt of the ancient Roman Empire, a nameless wanderer, unkempt and covered in rags, climbed atop a boulder in the midst of a bustling market, cleared his throat and began shouting for no apparent reason:

“Mark my harangue, monstrous abode of the damned and you denizens of this godforsaken place! I have only my stern words to give you, though most of you don’t recognize the existential struggle you’re in; so I’ll cry foul, slink off into the approaching night, and we’ll see if my rant festers in your mind, clearing the way for alien flowers to bloom. How many poor outcasts, deranged victims of heredity, and forlorn drifters have shouted doom from the rooftops? In how many lands and ages have fools kept the faith from the sidelines of decadent courts, the aristocrats mocking us as we point our finger at a thousand vices and leave no stone unturned? And centuries from now, many more artists, outsiders, and mystics will make their chorus heard in barely imaginable ways, sending their subversive message, I foresee, from one land to the next in an instant, through a vast ethereal web called the internet. Those philosophers will look like me, unwashed and ill-fed, but they’ll rant from the privacy of their lairs or from public terminals linked by the invisible information highway. Instead of glaring at the accused in person, they’ll mock in secret, parasitically turning the technological power of a global empire against itself.

“But how else shall we resist in this world in which we’re thrown? No one was there to hurl us here where as a species we’re born, where we pass our days and lie down to die–not we, who might have been asked and might have refused the offer of incarnation, and not a personal God who might be blamed. Nevertheless, we’re thrown here, because the world isn’t idle; natural forces stir, they complexify and evolve; this mindless cosmos is neither living nor dead, but undead, a monstrous abomination that mocks the comforting myths we take for granted, about our supernatural inner essence. No spirit is needed to make a trillion worlds and creatures; the undead forces of the cosmos do so daily, creating and destroying with no rational plan, but still manifesting a natural pattern. What is this pattern, sewn into the fabric of reality? What is the simulated agenda of this headless horseman that drags us behind the mud-soaked hooves of its prancing beast? Just this: to create everything and then to destroy everything! Let that sink in, gentle folk. The universe opens up the book of all possibilities, has a glance at every page with its undead, glazed-over eyes, and assembles minuscule machines–atoms and molecules–to make each possibility an actuality somewhere in space and time, in this universe or the next, until each configuration is exhausted and then all will fly apart until not one iota of reality remains to carry out such blasphemous work. How many ways can a nonexistent God be shown up, I ask you? Everything a loving God might have made, the undead leviathan creates instead, demonstrating spirit’s superfluity, and then that monster, the magically animated carcass we inhabit, will finally reveal its headlessness, the void at the center of all things, and nothing shall be left after the Big Rip.

“I ask again, how else to resist the abominable inhumanity of our world, but to make a show of detaching from some natural processes of cosmic putrefaction, to register our denunciation in all existential authenticity, and yet to cling to the bowels of this beast like the parasites we nonetheless are? And how else to rebel against our false humanity, against our comforting delusions, other than by replacing old, worn-out myths with new ones? For ours is a war on two fronts: we’re faced with a horrifying natural reality, which causes us to flee like children into a world of make-believe, whereupon we outgrow some bedtime stories and need others to help us sleep.

“We, the conquered masses of what will one day be called the ancient world, have become disenchanted with Roman myths, as the cynicism of the elites who expect us to honour the self-serving Roman spin on local fables infects the whole Roman world. Now that Alexander the Great has opened the West to the East, we long for revitalization from the fountain of exotic Eastern mysticism, just as millennia from now I foresee that the wisdom of our time will inspire those who will call themselves modern, liberal, and progressive. And just as our experiments with Eastern ideas will afford our descendants a hiding place in Christian fantasies, which will distract Europeans from their Dark Age after the fall of Rome, so too the modern Renaissance will bear tainted fruit, as technoscientific optimism will give way to the postmodern malaise.

“Our wizards and craftsmen are dunces compared to the scientists and engineers to come. Romans believe they’ve mastered the forces of nature, and indeed their monuments and military power are staggering. But skeptics and rationalists will eventually peer into the heart of matter and into the furthest reaches of the universe, and so shall confirm once and for all the horrifying fact that nature is the undead, self-shaping god. The modernists will pretend to be unfazed by that revelation as they exploit natural processes to build wonders that will encourage the masses: diseases will be cured and food will be plentiful; all races, creeds, and sexes will be made legally equal; and–lowly mammals that they are–the future folk will personally venture into outer space! Alas, though, I discern another motif in reality’s weave, besides the undead behemoth’s implicit mockery of God: civilizations rise and fall according to the logic of the Iron Law of Oligarchy. Take any group of animals that need to live together to survive, and they will spontaneously form a power hierarchy, as the group is stabilized by a concentration of power that enables the weaker members to be most efficiently managed. Power corrupts, of course, and so leaders become decadent and their social hierarchy eventually implodes. The Roman elite that now rules most of the known world will overreach in their arrogance and will face the wrath of the hitherto conquered hordes. As above, so below: the universe actualizes each possibility only to extinguish it in favour of the next cosmic fad.

“And so likewise in the American civilization to come, plutocrats will reign from their golden toilets, but their vanity will undo their economic hegemony as they’ll take more and more of the nation’s wealth while the masses of consumers stagnate like neglected cattle, again laying the groundwork for social implosion. For a time, that future world I foresee will trust in the ideal of each person’s liberty, without appreciating the irony that when we remove the social constraints on freedom of expression, we clear the way for the more indifferent natural constraint of the Iron Law to take effect, and so we establish a more grotesque rule of the few over the many. Thus, American government will be structured to prevent an artificial tyranny, by establishing a conflict between its branches and by limiting the leader’s terms of office, but this hamstringing of government will create a power vacuum that will be filled by the selfish interests of the mightiest private citizens. In whichever time or place they’re found, those glorious, sociopathic few are avatars of undead nature, ruling without conscience or plan for the future; they build economic or military empires only to bring them crashing down as their animal instincts prove incapable of withstanding temptation. Conservatives excel at devising propaganda to rationalize oligarchy; modern liberals will experiment with progressive socialism only to inadvertently confirm the Iron Law, and so liberalism will give way to postmodern technocracy, to the dreary pragmatism of maintaining the oligarchic status quo while the hollow liberals pretend to offer a genuine political alternative to conservatism.

“What myths we live by to avoid facing the horror of our existential predicament! We personify the sun and the moon the way a child makes toys even out of rocks and twigs. The scientists of the far future, though, will investigate not just the outer mechanisms, but will master the workings of human thought. They’ll learn that our folk tales about the majesty of human nature are at best legends: we are not as conscious, rational, or free as we typically assume. Our ridiculous lust for sex proves this all by itself. We have contempt for older virgins who fail to attract a mate, even though almost everyone would be mortified to be caught in the sex act; at least no one remains to pity the throngs of copulating human animals, save the marginalized drifters who detach from the monstrous world. Psychologists will discover that while we can deliberate and attend to formal logic, we also make snap, holistic judgments, which is to say associative, emotional and intuitive leaps. Most of our mind is unconscious and reason is largely a means of manipulating others for social advantage. But even as modern rationalists will learn as much, rushing to exploit human weaknesses for profit, they will praise ultraconsciousness, ultrarationality and ultrafreedom. These secular humanists will worship their machines and a character named Spock, and they’ll assume that if only society were properly managed, progress would ensue. Thus, Reason shall render all premodern delusions obsolete, but that last, modern delusion of rationalism will be overcome only through postmodern weariness from all ideologies.

“The curse of reason is that thinking enough to discover the appalling truth of natural life prevents the thinker from being happy. That curse might be mitigated, though, if we recognize that the irrational part of our mind has its own standards. We crave stories to live by, models to admire, and artworks to inspire us. Our philosophical task as accursed animals is to assemble all that we learn into a coherent worldview, reconciling the world’s impersonality with our crude and short-sighted preferences. Happiness is for the ignorant or the deluded sleep-walkers; those who are kept awake by the ghost story of unpopular knowledge are too melancholy and disgusted by what they see to take much joy. When you face the facts that there is no God, no afterlife, no immortal soul, no transcendent human right, no perfect justice, no absolute morality, no nonhuman meaning of life, and no ultimate hope for the universe, you’ll understand that a happy life is the most farcical one. We sentient, intelligent mammals are cursed to be alienated from the impersonal world and from the myths we trust to personalize our thought processes. We are instinctive story-tellers: our inner voice narrates our deeds as we come to remember them, and we naturally gossip and anthropomorphize, evolved as we are to negotiate a social hierarchy. But how do we cope with the fact that the truest known narrative belongs to the horror genre? How shall we sleep at night, relative children that we all are, preoccupied with the urges of our illusory ego, when we’re destined to look askance at optimistic myths, inheriting the postmodern horror show?

“Shall I proceed to the final shocker of this woeful tale that enervates those with the treacherous luxury of freedom of thought? Given that nature is the undead self-creator of its forms, what is the last word, the climax of this rant within the undead god? While there’s no good reason to believe there is or ever was a transcendent, personal deity, we instinctively understand things by relating them to what’s most familiar, which is us; thus, we personify the unknown, fearing unseen monsters in the dark, and so even atheists are compelled to blame their misfortune on some deity, crying out to no one when they accidentally injure themselves. But if there’s no room in nature for this personal God whose possible existence we’re biologically compelled to contemplate, and there’s nothing for this God to do in the universe that shapes itself, the supreme theology is the most dire one, namely the speculation that Philipp Mainlander will one day formulate before promptly going insane and killing himself: God is literally dead. God committed elaborate suicide by transforming himself into something that could be perfectly destroyed, which is the material universe. God became corrupted by his omnipotence and insane by his alienation, and so the creativity of his ultimate act is an illusion: the world’s evolution is the process of God’s self-destruction, and we are vermin feeding off of God’s undying corpse. Sure, this is just a fiction, but it’s the most plausible way of fitting God–and so also our instinctive, irrational theistic inclination–into the rest of the ghastly postmodern worldview to come.

“Is there a third pattern manifesting throughout the cosmos, one of resistance and redemption? Do intelligent life forms evolve everywhere only to discover the tragedy of their existential situation, to succumb to madness or else to respond somehow with honour and grace? Perhaps we’ll learn to re-engineer ourselves by merging with our machines so that we no longer seek a higher purpose and we’ll reconcile ourselves to our role as agents of the universe’s decay and ultimate demise. Maybe an artistic genius will emerge who will enchant us with a stirring vision of how we might make the best of our predicament. From the skeptical, pessimistic viewpoint, which will be so easily justified in that sorrowful postmodern time, even our noblest effort to overcome our absurd plight will seem just another twist in the sickening melodrama, yet another stage of cosmic collapse; a cynic can afford to scoff at anything when his well of disgust is bottomless. But there’s a wide variety of human characters, as befits our position in a universe that tries out and discards all possibilities. I rant to the void until my throat aches and my eyes water. The undead god has no ears to hear, no eyes to behold its hideous reflection, and no voice with which to apologize or to instruct–unless you count the faculties of the stowaway creatures that are left alone to make sense of where they stand. So may some of you grow magnificent flowers from the soil of my words!”

The sun had set and most of the townsfolk had long since returned to their homes, having ignored or taken the opportunity to spit upon the doomsayer. A few remained until the end of his diatribe, their mouths hanging open in dismay, and when they glanced at each other, asking what should be done, they lost sight of the preacher, who had indeed scurried away as promised, homeless, into the dark.

www.rscottbakker.com

by rsbakker

Aphorism of the Day: To be oblivious is to be heroic for so long as your luck holds.

.

It’s supposed to be the other guy, that guy, the no-conscience shill who sees capital opportunities no matter where he turns. But no, it turns out I’m that bum. And now my name, the one I inherited from my grandfather, only to have it rescinded when that all fell through, stranding me with the second, and a prescient mother who insisted I sign the ‘R’ on everything I did, leading to innumerable sideways comments, mostly from educators (because then, as now, I was big for my age), now my name, the little crane that has plucked me from every crowd, hauled my soul up by the hair every time I have sinned, has become computer code, commercial coordinates, pinning me like a butterfly, or better yet a beetle, too ugly to be decorative, yet calling out my wares all the same. Makes me feel webby.

I’ve gone on and on about how I needed a skull for Three Pound Brain, or at the very least a toupee, something to disguise my cerebral excesses, to convince that steady stream of window shoppers that pass through these lobes (generally to flee), that I can actually write a ripping yarn as well. And now I’ve gone and done it. It’s the beta version, and I’m groping for quora, because this shit is like tear gas to me. It all feels obnoxious, like the real fifth element is greed. It all feels like I’m aping the moves of those far more graceful.

Forgive the semantic origami. Funny how tones come across you, how much defense you can pack into pixels on a screen. Art, like all great adaptations, fortifies.

On a different note, next Monday Three Pound Brain will feature an awesome post by Benjamin Cain, another soul bent on exploring the intersection between pulp culture and philosophical speculation on our incredible shrinking future. Think Spinoza, World War Z, and full-frontal Futurama. According to Ben, Nietzsche forgot to shoot God in the head…

The Philosopher and the Cuckoo’s Nest

by rsbakker

Definition of Day – Introspection: A popular method of inserting mental heads up neural asses.

.

Question: How do you get a philosopher to shut up?

Answer: Pay for your pizza and tell him to get the hell off your porch.

I’ve told this joke at public speaking engagements more times than I can count, and it works: the audience cracks up every single time. It works because it turns on a near-universal cultural presumption of philosophical impracticality and cognitive incompetence. This presumption, no matter how much it rankles, is pretty clearly justified. Whitehead’s famous remark that all European philosophy is “a series of footnotes to Plato” is accurate insofar as we remain as stumped regarding ourselves as were the ancient Greeks. Twenty-four centuries! Keeping in mind that I happen to be one of those cognitive incompetents, I want to provide a sketch of how we theorists of the soul could have found ourselves in these straits, as well as why the entire philosophical tradition as we know it is almost certainly about to be swept away.

In a New York Times piece entitled “Don’t Blink! The Hazards of Confidence,” Daniel Kahneman writes of his time in the Psychology Branch of the Israeli Army, where he was tasked with evaluating candidates for officer training by observing them in a variety of tests designed to isolate soldiers’ leadership skills. His evaluations, as it turned out, were almost entirely useless. But what surprised him was the way knowing this seemed to have little or no impact on the confidence with which he and his fellows submitted their subsequent evaluations, time and again. He was so struck by the phenomenon that he would go on to study it as the ‘illusion of validity,’ a specific instance of the general role the availability of information seems to play in human cognition–or as he would later term it, What-You-See-Is-All-There-Is, or WYSIATI.

The idea, quite simply, is that because you don’t know what you don’t know, you tend, in many contexts, to think you know all that you need to know. As he puts it in Thinking, Fast and Slow:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. (2011, 85)

As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency in certain contexts to be more certain about our interpretations the less information we have available. The idea is so simple as to be platitudinal: only the information available for cognition can be cognized. Other information, as Kahneman says, “might as well not exist” for the systems involved. Human cognition, it seems, abhors a vacuum.

The problem with platitudes, however, is that they are all too often overlooked, even when, as I shall argue in this case, their consequences are spectacularly profound. In the case of informatic availability, one need only look to clinical cases of anosognosia to see the impact of what might be called domain-specific informatic neglect, the neuropathological loss of specific forms of information. Given a certain, complex pattern of neural damage, many patients suffering deficits as profound as lateralized paralysis, deafness, even complete blindness, appear to be entirely unaware of the deficit. Perhaps because of the informatic bandwidth of vision, visual anosognosia, or ‘Anton’s Syndrome,’ is generally regarded as the most dramatic instance of the malady. Prigatano (2010) enumerates the essential features of the syndrome as follows:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. (456)

Obviously, the blindness stems from the occlusion of raw visual information. The second-order ‘blindness,’ the patient’s inability to ‘see’ that they cannot see, turns, one might suppose, on the unavailability of information regarding the unavailability of visual information. At some crucial juncture, the information required to process the lack of visual information has gone missing. As Kahneman might say, since our automatic cognitive system is dedicated to the construction of ‘the best possible story’ given only the information it has, the patient confabulates, utterly convinced they can see even though they are quite blind.

Anton’s Syndrome, in other words, can be seen as a neuropathological instance of WYSIATI. And WYSIATI, conversely, can be seen as a non-neuropathological version of anosognosia. What I want to suggest is that philosophers all the way back to the ancient Greeks have in fact suffered from their own version of Anton’s Syndrome–their own, non-neuropathological version of anosognosia. Specifically, I want to argue that philosophers have been systematically deluded into thinking their intuitions regarding the soul in any of its myriad incarnations–mind, consciousness, being-in-the-world, and so on–actually provide a reliable basis for second-order claim-making. The uncanny ease with which one can swap the cognitive situation of the Anton’s patient for that of the philosopher may be no coincidence:

First, the philosopher is introspectively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his introspective blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his inability to introspect. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

What philosophers call ‘introspection,’ I want to suggest, provides some combination of impoverished information, skewed information, or (what amounts to the same) information matched to cognitive systems other than those employed in deliberative cognition, without–and here’s the crucial twist–providing information to this effect. As a result, what we think we see becomes all there is to be seen, as per WYSIATI. If the informatic and cognitive limits of introspection are not available for introspection (and how could they be?), then introspection will seem, curiously, limitless, no matter how severe the actual limits may be.

Now the stakes of this claim are so far-reaching that I’m sure it will have to seem preposterous to anyone with the slightest sympathy for philosophers and their cognitive plight. Accusing philosophers of suffering introspective anosognosia is basically accusing them of suffering a cognitive disability (as opposed to mere incompetence). So, in the interests of making my claim somewhat more palatable, I will do what philosophers typically do when they get into trouble: offer an analogy.

The lowly cuckoo, I think, provides an effective, if peculiar, way to understand this claim. Cuckoos are ‘obligate brood parasites,’ which is to say, they exclusively lay their eggs in the nests of other birds, relying on them to raise their chick (which generally kills the host bird’s own offspring) to reproductive age. The entire species, in other words, relies on exploiting the cognitive limitations of birds like the reed warbler. They rely on the inability of the unwitting host to discriminate between the cuckoo’s offspring and its own offspring. From a reed warbler’s standpoint, the cuckoo chick just is its own chick. Lacking any ‘chick imposter detection device,’ it simply executes its chick rearing program utterly oblivious to the fact that it is perpetuating another species’ genes. The fact that it does lack such a device should come as no surprise: so long as the relative number of reed warblers thus duped remains small enough, there’s no evolutionary pressure to warrant the development of one.

What I’m basically saying here is that humans lack a corresponding ‘imposter detection device’ when it comes to introspection. There is no doubt that we developed the capacity to introspect to discharge any number of adaptive behaviours. But there is also no doubt that ‘philosophical reflection on the nature of the soul’ was not one of those adaptive behaviours. This means that it is entirely possible that our introspective capacity is capable of discharging its original adaptive function while duping ‘philosophical reflection’ through and through. And this possibility, I hope to show, puts more than a little heat on the traditional philosopher.

‘Metacognition’ refers to our ability to know our knowledge and our skills, or “cognition about cognitive phenomena,” as Flavell puts it. One can imagine that the ability of an organism to model certain details of its own neural functions and thus treat itself as another environmental problem requiring solution would provide any number of evolutionary benefits. It pays to assess and revise our approaches to problems, to ask what it is we’re doing wrong. It likewise pays to ‘watch what we say’ in any number of social contexts. (I’m sure everyone has that one friend or family member who seems to lack any kind of self-censor). It pays to be mindful of our moods. It pays to be mindful of our actions, particularly when trying to learn some new skill.

The issue here isn’t whether we possess the information access or the cognitive resources required to do these things: obviously we do. The question is whether the information and cognitive resources required to discharge these metacognitive functions come remotely close to providing us with what we need to answer theoretical questions regarding mind, consciousness, or being-in-the-world.

This is where the shadow cast by the mere possibility of introspective anosognosia becomes long indeed. Why? Because it demonstrates the utter insufficiency of our intuition of introspective sufficiency. It demonstrates that what we conceptualize as ‘mind’ or ‘consciousness’ or ‘being-in-the-world’ could very well be a ‘theoretical cuckoo,’ even if the information it accesses is ‘warbler enough’ for the type of metacognitive practices described above. Is a theoretically accurate conception of ‘consciousness’ required to assess and revise our approaches to problems, to self-censor, to track or communicate our moods, to learn some new skill?

Not at all. In fact, for all we know, the grossest of distortions will do.

So how might we be able to determine whether the consciousness we think we introspect is a theoretical cuckoo as opposed to a theoretical warbler? Since relying on introspection simply begs the question, we have to turn to indirect evidence. We might consider, for instance, the typical symptoms of insufficient information or cognitive misapplication. Certainly the perennial confusion, conundrum, and intractable debate that characterize traditional philosophical speculation on the soul suggest that something is missing. You have to admit the myriad explananda of philosophical reflection on the soul smack more than a little of Rorschach blots: everybody sees something different–astoundingly so, in some cases. And the few experiential staples that command any reasonable consensus, like intentionality or nowness, continue to resist analysis, let alone naturalization. One need only ask, What would the abject failure of transcendental philosophy look like? A different kind of perennial confusion, conundrum, and intractable debate? Sounds pretty fishy.

In other words, it’s painfully obvious that something has gone wrong. And yet, like the Anton’s patient, the philosopher insists they can still see! “What of the a priori?” they cry. “What of conditions of possibility?” Shrug. A kind of low-dimensional projection, neural interactions minus time and space? But then that’s the point: Who knows?

Meanwhile it seems very clear that something is rotten. The audience’s laughter is too canny to be merely ignorant. If you’re a philosopher, you feel it, I suspect. Somehow, somewhere… something…

But the truly decisive fact is that the spectre of introspective anosognosia need only be plausible to relieve traditional philosophy of its transcendental ambitions. This particular skeptical ‘How do you know?’ unlike those found in the tradition, is not a product of the philosopher’s discursive domain. It’s an empirical question. Like it or not, we have been relegated to the epistemological lobby: Only cognitive neuroscience can tell us whether the soul we think we see is a cuckoo or not.

For better or worse, this happens to be the time we live in. Post-transcendental. The empirical quiet before the posthuman storm.

In retrospect, it will seem obvious. It was only a matter of time before they hung us from hooks with everything else in the packing plant.

Fuck it. The pizza tastes just as good, either way.