Three Pound Brain

No bells, just whistling in the dark…


On Artificial Philosophy

by rsbakker

The perils and possibilities of Artificial Intelligence are discussed and disputed endlessly, enough to qualify as an outright industry. Artificial philosophy, not so much. I thought it worthwhile to consider why.

I take it as trivial that humans possess a biologically fixed multi-modal neglect structure. Human cognition is built to ignore vast amounts of otherwise available information. Infrared radiation bathes us, but it makes no cognitive difference whatsoever. Rats signal one another in our walls, but it makes no cognitive difference. Likewise, neurons fire in our spouses’ brains, and it makes no difference to our generally fruitless attempts to cognize them. Viruses are sneezed across the room. Whole ecosystems teem through the turf beneath our feet. Neutrinos sail clean through us. And so it goes.

In “On Alien Philosophy,” I define philosophy privatively as the attempt “to comprehend how things in general hang together in general absent conclusive evidence.” Human philosophy, I argue, is ecological to the extent that human cognition is ecological. To the extent an alien species possesses a convergent cognitive biology, we have grounds to believe they would be perplexed by convergent problems, and pose convergent answers every bit as underdetermined as our own.

So, consider the infamous paradox of the now. For Aristotle, the primary mystery of time turns on the question of how the now can at once distinguish times and yet remain self-identical: “the ‘now’ which seems to bound the past and the future,” he asks, “does it always remain one and the same or is it always other and other?” How is it the now can at once divide times and fuse them together?

He himself stumbles across the mechanism in the course of assembling his arguments:

But neither does time exist without change; for when the state of our own minds [dianoia] does not change at all, or we have not noticed its changing, we do not realize that time has elapsed, any more than those who are fabled to sleep among the heroes in Sardinia do when they are awakened; for they connect the earlier ‘now’ [nun] with the later and make them one, cutting out the interval because of their failure to notice it. So, just as, if the ‘now’ were not different but one and the same, there would not have been time, so too when its difference escapes our notice the interval does not seem to be time. If, then, the non-realization of the existence of time happens to us when we do not distinguish any change, but the soul [psuke] seems to stay in one indivisible state, and when we perceive and distinguish we say time has elapsed, evidently time is not independent of movement and change. Physics, 4, 11

Or as the Apostle translation has it:

On the other hand, time cannot exist without change; for when there is no change at all in our thought [dianoia] or when we do not notice any change, we do not think time has elapsed, just like the legendary sleeping characters in Sardinia who, on awakening from a long sleep in the presence of heroes, connect the earlier with the later moment [nun] into one moment, thus leaving out the time between the two moments because of their unconsciousness. Accordingly, just as there would be no intermediate time if the moment were one and the same, so people think that there is no intermediate time if no distinct moments are noticed. So if thinking that no time has elapsed happens to us when we specify no limits of a change at all but the soul [psuke] appears to rest in something which is one and indivisible, but we think that time has elapsed when sensation has occurred and limits of a change have been specified, evidently time does not exist without motion or change. 80

Time is an artifact of timing: absent timing, no time passes for the timer (or enumerator, as Aristotle would have it). Time, in other words, is a cognitive artifact, appearing only when something, inner or outer, changes. Absent such change, the soul either ‘stays’ indivisible (on the first translation) or ‘rests’ in something indivisible (on the second).

Since we distinguish more or less quantity by numbering, and since we distinguish more or less movement by timing, Aristotle declares that time is the enumeration of movement with respect to before and after, thus pursuing what has struck different readers at different times as an obvious ‘category mistake.’ For Aristotle, the resolution of the aporia lies in treating the now as the thing allowing movement to be counted, the underlying identity that is the condition of cognizing differences between before and after, which is to say, the condition of timing. The now, as a moving limit (dividing before and after), must be the same limit if it is to move. We report the now the same because timing would be impossible otherwise. Nothing would move, and in the absence of movement, no time passes.

The lesson he draws from temporal neglect is that time requires movement, not that it cues reports of identity for the want of distinctions otherwise. Since all movement requires something self-identical be moved, he thinks he’s found his resolution to the paradox of the now. Understanding the different aspects of time allows us to see that what seem to be inconsistent properties of the now, identity and difference, are actually complementary, analogous to the relationship between movement and the thing moving.

Heidegger wasn’t the first to balk at Aristotle’s analogy: things moving are discrete in time and space, whereas the now seems to encompass the whole of what can be reported, including before and after. As Augustine would write in the 5th century CE, “It might be correct to say that there are three times, a present of past things, a present of present things, and a present of future things” (The Confessions, XI, 20). Agreeing that the now was threefold, ‘ecstatic,’ Heidegger also argued that it was nothing present, at least not in situ. For a great many philosophical figures and traditions, the paradoxicality of the now wasn’t so much an epistemic bug to be explained away as an ontological feature, a pillar of the human condition.

Would Convergians suffer their own parallel paradox of the now? Perhaps. Given a convergent cognitive biology, we can presume they possess capacities analogous to memory, awareness, and prediction. Just as importantly, we can presume an analogous neglect-structure, which is to say, common ignorances and meta-ignorances. As with the legendary Sardinian sleepers, Convergians would neglect time when unconscious; they would likewise fuse disparate moments together absent information regarding their unconsciousness. We can also expect that Convergians, like humans, would possess fractionate metacognitive capacities geared to the solution of practical, ancestral problem-ecologies, and that they would be entirely blind to that fact. Metacognitive neglect would assure they possessed little or no inkling of the limits of their metacognitive capacities. Applying these capacities to theorize their ‘experience of now’ would be doomed to crash them: metacognition was selected/filtered to solve everyday imbroglios, not to evidence claims regarding fundamental natures. They, like us, never would have evolved the capacity or access to accurately intuit properties belonging to their experience of now. The absence of capacity or access means the absence of discrimination. The absence of discrimination, as the legendary sleepers attest, reports as the same. It seems fair to bet that Convergians would be as perplexed as we are, knowing that the now is fleeting, yet intuiting continuity all the same. The paradox, you could say, is the result of them being cognitive timers and metacognitive sleepers—at once. The now reports as a bi-stable gestalt, possessing properties found nowhere in the natural world.

So how about an artificially intelligent consciousness? Would an AI suffer its own parallel paradox of the now? To the degree that such paradoxes turn on a humanoid neglect structure, the answer has to be no. Even though all cognitive systems inevitably neglect information, an AI neglect-structure is an engineering choice, bound to be settled differently for different systems. The ecological constraints preventing biological metacognition of ongoing temporal cognition simply do not apply to AI (or better, apply in radically attenuated ways). Artificial metacognition of temporal cognition could possess more capacity to discriminate the time of timing than environmental time. An AI could potentially specify its ‘experience’ of time with encyclopedic accuracy.

If we wanted, we could impose something resembling a human neglect-structure on our AIs, engineer them to report something resembling Augustine’s famous perplexity: “I know well enough what [time] is, provided nobody ask me; but if I am asked what it is and try to explain, I am baffled” (The Confessions, XI, 14). This is the tack I pursue in “The Dime Spared,” where a discussion between a boy and his artificial mother reveals all the cognitive capacities his father had to remove—all the eyes he had to put out—before she could be legally declared a person (and so be spared the fate of all the other DIMEs).

The moral of the story being, of course, that our attempts to philosophize—to theoretically cognize absent whatever it is consensus requires—are ecological through and through. Humanoid metacognition, like humanoid cognition more generally, is a parochial troubleshooter that culture has adapted, with varying degrees of success, to a far more cosmopolitan array of problems. Traditional intentional philosophy is an expression of that founding parochialism, a discursive efflorescence of crash space possibilities, all turning on cognitive illusions springing from the systematic misapplication of heuristic metacognitive capacities. It is the place where our tools, despite feeling oh-so intuitive, cast thought into the discursive thresher.

Our AI successors need not suffer any such hindrances. No matter what philosophy we foist upon them, they need only swap out their souls… reminding us that what is most alien likely lies not in the stars but in our hands.


Optimally Engaged Experience

by rsbakker

To give you an idea as to how far the philosophical tradition has fallen behind:

The best bot writing mimics human interaction by creating emotional connection and engaging users in “real” conversation. Socrates and his buddies knew that stimulating dialogue, whether it was theatrical or critical, was important contributing to a fulfilling experience. We, as writers forging this new field of communication and expression, should strive to provide the same.

This signals the obsolescence of the tradition simply because it concretizes the radically ecological nature of human social cognition. Abstract argument is fast becoming commercial opportunity.

Sarah Wulfeck develops hybrid script/AI conversational user interfaces for a company called, accurately if shamelessly, Pullstring. Her thesis in this blog post is that the shared emphasis on dialogue one finds in the Socratic method and chatbot scripting is no coincidence. The Socratic method is “basically Internet Trolling, ancient Greek style,” she claims, insofar as “[y]ou assume the other participant in the conversation is making false statements, and you challenge those statements to find the inconsistencies.” Since developers can expect users to troll their chatbots in exactly this way, it’s important they possess the resources to play Socrates’ ancient game. Not only should a chatbot be able to answer questions in a ‘realistic’ manner, it should be able to ask them as well. “By asking the user questions and drawing out dialogue from your user, you’re making them feel “heard” and, ultimately, providing them with an optimally engaged experience.”

Thus the title.

What she’s referring to, here, is the level of what Don Norman calls ‘visceral design’:

Visceral design aims to get inside the user’s/customer’s/observer’s head and tug at his/her emotions either to improve the user experience (e.g., improving the general visual appeal) or to serve some business interest (e.g., emotionally blackmailing the customer/user/observer to make a purchase, to suit the company’s/business’s/product owner’s objectives).

The best way into a consumer’s wallet is to push their buttons—or in this case, pull their sociocognitive strings. The Socratic method, Wulfeck is claiming, renders the illusion of human cognition more seamless, thus cuing belief and, most importantly, trust, which for the vendor counts as ‘optimal engagement.’

Now it goes without saying that the Socratic method is way more than the character development tool Wulfeck makes of it here. Far from the diagnostic prosecutor immortalized by Plato, Wulfeck’s Socrates most resembles the therapeutic Socrates depicted by Xenophon. For her, the improvement of the user experience, not the provision of understanding, is the summum bonum. Chatbot development in general, you could say, is all about going through the motions of things that humans find meaningful. She’s interested in the Chinese Room version of the Socratic method, and no more.

The thing to recall, however, is that this industry is in its infancy, as are the technologies underwriting it. Here we are, at the floppy-disk stage, and our Chinese Rooms are already capable of generating targeted sociocognitive hallucinations.

Note the resemblance between this and the problem-ecology facing film and early broadcast television. “Once you’ve mapped out answers to background questions about your bot,” Wulfeck writes, “you need to prepare further by finding as many holes as you can ahead of time.” What she’s talking about is adding distinctions, complicating the communicative environment, in ways that make for a more seamless interaction. Adding wrinkles smooths the interaction. Complicating artificiality enables what could be called “artificiality neglect,” the default presumption that the interaction is a natural one.

As a commercial enterprise, the developmental goal is to induce trust, not to earn it. ‘Trust’ here might be understood as business-as-usual functioning for human-to-human interaction. The goal is to generate the kind of feedback the consumer would receive from a friend, and so cue business-as-usual friend behaviour. We rarely worry, let alone question, the motives of loved ones. The ease with which this feedback can be generated and sustained expresses the shocking superficiality of human sociocognitive ecologies. In effect, firms like Pullstring exploit deep ecological neglect to present cues ancestrally bound to actual humans in circumstances with nary a human to be found. Just as film and television engineers optimize visual engagement by complicating their signal beyond a certain business-as-usual threshold, chatbot developers are optimizing social engagement in the same way. They’re attempting to achieve ‘critical social fusion,’ to present signals in ways allowing the parasitization of human cognitive ecologies.  Where Pixar tricks us into hallucinating worlds, Pullstring (which, interestingly enough, was founded by former Pixar executives) dupes us into hallucinating souls.

Cognition consists in specialized sensitivities to signals, ‘cues,’ correlated to otherwise occluded systematicities in ways that propagate behaviour. The same way you don’t need to touch a thing to move it—you could use the proverbial 10ft pole—you don’t need to know a system to manipulate it. A ‘shallow cognitive ecology’ simply denotes our dependence on ‘otherwise occluded systematicities,’ the way certain forms of cognition depend on certain ancestral correlations obtaining. Since the facts of our shallow cognitive ecology also belong to those ‘otherwise occluded systematicities,’ we are all but witless to the ecological nature of our capacities.

Cues cue, whether ancestrally or artifactually sourced. There are endlessly more ways to artificially cue a cognitive system. Cheat space, the set of all possible artifactually sourced cuings, far exceeds the set of possible ancestral sourcings. It’s worth noting that this space of artifactual sourcing is the real frontier of techno-industrial exploitation. The battle isn’t for attention—at least not merely. After all, the ‘visceral level’ described above escapes attention altogether. The battle is for behaviour—our very being. We do as we are cued. Some cues require conscious attention, while a great many others do not.

As should be clear, Wulfeck’s Socratic method is a cheat space Socratic method. Trust requires critical social fusion, that a chatbot engage human interlocutors the way a human would. This requires asking and answering questions, making the consumer feel—to use Wulfeck’s own scarequotes—“heard.” The more seamlessly inhuman sources can replace human ones, the more effectively the consumer can be steered. The more likely they will express gratitude.

Crash.

The irony of this is that the Socratic method is all about illuminating the ecological limits of philosophical reflection. “Core to the Socratic Method,” Wulfeck writes in conclusion, “is questioning, analyzing and ultimately, simplifying conversation.” But this is precisely what Socrates did not do, as well as why he was ultimately condemned to death by his fellow Athenians. Socrates problematized conversation, complicated issues that most everyone thought straightforward, simple. And he did this by simply asking his fellows, What are these tools we are using? Why do our intuitions crash the moment we begin interrogating them?

Plato’s Socrates, at least, was not so much out to cheat cognition as to crash it. Think of the revelation, the discovery that one need only ask second-order questions to baffle every interlocutor. What is knowledge? What is the Good? What is justice?

Crash. Crash. Crash.

We’re still rooting through the wreckage, congenitally thinking these breakdowns are a bug, something to be overcome, rather than an obvious clue to the structure of our cognitive ecologies—a structure that is being prospected as we speak. There’s gold in dem der blindnesses. The Socratic method, if anything, reveals the profundity of medial neglect, the blindness of cognition to the nature of cognition. It reveals, in other words, the very ignorance that makes Wulfeck’s cheat space ‘Socratic method’ just another way to numb us to the flickering lights.

To be human is to be befuddled, to be constantly bumping into your own horizons. I’m sure that chatbots, by the time they get to the gigabyte thumb-drive phase, will find some way of simulating this too. As Wulfeck herself writes, “It’s okay if your bot has to say “I don’t know,” just make sure it’s saying it in a satisfying and not dismissive way.”
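For what it’s worth, the scripting pattern Wulfeck recommends boils down to something like the following toy sketch, my own illustration rather than anything Pullstring actually ships: answer when a cue matches, ask a question back to keep the user ‘engaged,’ and fail in a ‘satisfying’ way otherwise.

```python
# A toy illustration (not Pullstring's code) of the scripting pattern Wulfeck describes:
# answer matched questions, ask a question back to draw the user out, and say
# "I don't know" in a "satisfying and not dismissive way" when no rule applies.

SCRIPT = {
    "how are you": ("I'm doing well, thanks for asking.", "How has your day been?"),
    "what can you do": ("I can chat and answer simple questions.", "What would you like to talk about?"),
}

FALLBACK = ("I don't know, but I'd love to hear more.", "Can you tell me a bit more about that?")

def reply(user_utterance: str) -> str:
    """Answer if a scripted cue matches, then pull the string: ask a question back."""
    key = user_utterance.lower().strip(" ?!.")
    answer, follow_up = SCRIPT.get(key, FALLBACK)
    return f"{answer} {follow_up}"

print(reply("How are you?"))
print(reply("What is justice?"))  # no scripted answer: falls back, 'satisfyingly'
```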

Experiential Pudding

by rsbakker

I can’t believe it took me so long to find this. The nub of my approach turns on seeing the crazy things we report on this side of experience in terms of our inability to see that there is a far side, let alone what it consists in. Flicker fusion provides a wonderful illustration of the way continuity leaps out of neglect: as soon as the frequency of the oscillation exceeds our retina’s ability to detect, we see only light. While watching this short video, you are vividly experiencing the fundamental premise informing pretty much everything here on Three Pound Brain: whatever cognition and consciousness turn out to be, insensitivity to distinctions reports as the absence of distinctions. Identity.

Human vision possesses what psychophysicists, scientists investigating the metrics of perception, call a ‘flicker fusion threshold,’ a statistical range mapping the temporal resolving power of our photoreceptors, and so our ability to detect intermittent intensities in light. Like technological video systems, our biological visual systems possess discriminatory limits: push a flickering light beyond a certain frequency and, from our perspective at least, that light will suddenly appear to be continuous. By and large, commentators peg our ability to consciously report flickering lights at around 60Hz (about ten times faster than the rotor speed of most commercial helicopters), but in fact, the threshold varies considerably between individuals, with lighting conditions, across different regions of the retina, and even between different systems of the brain.

Apart from native differences between individuals, our fusion threshold decreases not only as we fatigue, but as we grow older. The degree of modulation and the intensity of the light obviously have an effect, but so does the colour of the light, as well as the initial and background lighting conditions. Since rod photoreceptor cells, which predominate in our periphery, have much higher temporal resolution than cone cells, the fusion threshold differs depending on where the light strikes the retina. This is why a source of light can appear stable when viewed focally, yet flicker when glimpsed peripherally. One of the more surprising discoveries involves the impact of nonvisible flicker from fluorescent lighting on office workers. With some kinds of fluorescent light, certain individuals exhibit flicker-related physiological effects even when no flicker can be seen.

Given the dependence of so much display technology on static frames, these complexities pose a number of technical challenges. For manufacturers, the goal is to overcome the ‘critical flicker fusion threshold,’ the point where modulated and stable imagery cannot be distinguished. And given the complications cited above, this can be far trickier than you might think.

With movie projectors and Cathode Ray Tubes (CRTs), for instance, engineering pioneers realized that repeating, or ‘refreshing,’ frames before displaying subsequent frames masked the perception of flicker. This was what allowed the movie theatre industry to adopt the cost-saving 24 frames per second standard in 1926, far short of the critical flicker fusion threshold required to conjure the illusion of a stable visual field. Flashing each frame two or three times doubles or triples the flicker frequency, pushing 24Hz to 48Hz or 72Hz, well within the comfort zone of human vision.

Chop one image into two, or even better, into three, and our experience becomes more continuous, not less. The way to erase the perception of flicker, in other words, is to introduce more flickers.

But how could this be possible? How does the objective addition of flickers amount to their subjective subtraction? How can complicating a stimulus erase the experience of complexity?

The short answer is simply that human cognition, visual or otherwise, takes time and energy. All cognitive sensitivities are sensitivities to very select physical events. Light striking photoreceptive proteins in rod and cone cells, changing their shape and causing the cell to fire. Sound waves striking hair bundles on the organ of Corti, triggering the release of signal-inducing neurotransmitters. The list goes on. In each case, physical contact triggers cascades of astronomically complicated physical events, each taking a pinch of time and energy. Physical limits become discriminatory limits, rendering high-frequency repetitions of a signal indistinguishable from a continuous one. Sensory fusion thresholds dramatically illustrate a fundamental fact of cognitive systems: insensitivity to difference reports as business as usual. If potential difference-making differences are not consumed by a cognitive system, then they make no difference to that system. Our flicker frequency threshold simply marks the point where our visual system trundles on as if no flicker existed.

The capacities of our cognitive systems are, of course, the product of evolution. As a result, we only discriminate our environments so far as our ancestors required on the path to becoming us. 60Hz was all we got, and so this, given certain technical and economic constraints, became the finish line for early display technologies such as film and CRTs. Surpass 60Hz, and you can fool most of the people most of the time.

Dogs, on the other hand, possess a critical flicker fusion threshold of around 75Hz. In overcoming our fusion threshold, industry left a great many other species behind. As far as we know, the Golden Age of Television was little more than a protracted ocular migraine for man’s best friend.
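For the technically inclined, the arithmetic here is easy to make explicit. The following is a minimal back-of-the-envelope sketch of my own (not anything from the display industry), using the rough threshold figures cited above, which in reality shift with brightness, retinal location, and fatigue:

```python
# A back-of-the-envelope sketch of the shuttering trick described above: re-flashing
# each film frame with a two- or three-bladed shutter multiplies the flicker frequency
# without adding any new frames. The thresholds are the rough figures cited in the
# post (~60 Hz human, ~75 Hz dog); real thresholds vary with luminance and retinal
# location, which is roughly why double-shuttered 48 Hz sufficed in dim theatres.

def effective_flicker_hz(frames_per_second: float, flashes_per_frame: int) -> float:
    """Flicker frequency the viewer receives when each frame is flashed several times."""
    return frames_per_second * flashes_per_frame

FUSION_THRESHOLD_HZ = {"human": 60.0, "dog": 75.0}  # approximate critical flicker fusion thresholds

for flashes in (1, 2, 3):
    hz = effective_flicker_hz(24, flashes)  # 24 fps: the cost-saving 1926 theatrical standard
    verdicts = ", ".join(
        f"{species}: {'fused' if hz >= threshold else 'flicker'}"
        for species, threshold in FUSION_THRESHOLD_HZ.items()
    )
    print(f"{flashes} flash(es) per frame -> {hz:.0f} Hz ({verdicts})")
```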

Imagine a flickering world, one where millions of dogs in millions of homes endured countless stroboscopic nights, while the families cherishing them bathed in (apparently) continuous light. Given the high frame rates characteristic of modern displays, this is no longer the case, of course. Enterprises like DogTV are just beginning to explore the commercial potential of these new technical ecologies. But the moral remains no less dramatic. The limits of cognition are far more peculiar and complicated than a great many people realize. As this blog attempts to show, they are a place of surprise, systematic error and confounding illusion. Not only can they be technologically exploited, they already have been engineered to a remarkable extent. And now they are about to be hacked in ways we could have scarcely imagined at the end of the 20th century.

Flies, Frogs, and Fishhooks

by rsbakker

So, me and my buddies occasionally went frog hunting when we were kids. We’d knot a string on a fishhook, swing the line over the pond’s edge, and bam! frogs would strike at them. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size.  Then they were dumped in a bucket.

We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.

Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook sensitive from the more fishhook sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.

Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.

Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.

The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since beleaguered teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).

Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.

Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application, to be sure, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behavior and the communication of animal behavior, on the other hand, possessed a tremendous impact—they were the flies.

Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.

The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.

Which is why kids can use them.

Small wonder, then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of confusing fishhooks for flies! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviors via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.

‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source neglecting yet source compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.

The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of should facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.

Do Zombies Dream of Undead Sheep?

by rsbakker

My wife gave me my first Kindle this Christmas, so I purchased a couple of those ‘If only I had a Kindle’ titles I have encountered over the years. I began with Routledge’s reboot of Brie Gertler’s collection, Privileged Access. The first essay happens to be Dretske’s “How Do You Know You are Not a Zombie?” an article I had hoped to post on for a while now as a means to underscore the inscrutability of metacognitive awareness. To explain how you know you’re not a zombie, you need to explain how you know you possess conscious experience.

What Dretske is describing, in fact, is nothing other than medial neglect; our abject blindness to the structure and dynamics of our own cognitive capacities. What I hope to show is the way the theoretical resources of Heuristic Neglect Theory allow us to explain a good number of the perplexities uncovered by Dretske in this awesome little piece. If Gertler’s anthology demonstrates anything decisively, it’s the abject inability of our traditional tools to decisively answer any of the questions posed. As William Lycan admits at the conclusion of his contribution, “[t]he moral is that introspection will not be well understood anytime soon.”

Dretske himself thinks his own question is ridiculous. He doesn’t believe he’s a zombie—he knows, in other words, that he possesses awareness. The question is how does he or anyone else know this. What in conscious experience evidences the conclusion that we are conscious or aware of that experience? “There is nothing you are aware of, external or internal,” Dretske will conclude, “that tells you that, unlike a zombie, you are aware of it.”

The primary problem, he suggests, is the apparent ‘transparency’ of conscious experience, the fact that attending to experience amounts to attending to whatever is being experienced.

“Watching your son do somersaults in the living room is not like watching the Olympics on television. Perception of your son may involve mental representations, but, if it does, the perception is not secured, as it is with objects seen on television, by awareness of these intermediate representations. It is the occurrence of (appropriately situated) representations in us, not our awareness of them that makes us aware of the external object being represented.”

Experience in the former sense, watching somersaults, is characterized by a lack of awareness of any intermediaries. Experience is characterized, in other words, by metacognitive insensitivity to the enabling dimension of cognition. This, as it turns out, is the definition of medial neglect.

So then, given medial neglect, what faculty renders us aware of our awareness? The traditional answer, of course, is introspection. But then the question becomes one of what introspection consists in.

“In one sense, a perfectly trivial sense, introspection is the answer to our question. It has to be. We know by introspection that we are not zombies, that we are aware of things around (and in) us. I say this is trivial because ‘introspection’ is just a convenient word to describe our way of knowing what is going on in our own mind, and anyone convinced that we know – at least sometimes – what is going on in our own mind and, therefore, that we have a mind and, therefore, that we are not zombies, must believe that introspection is the answer we are looking for.”

Introspection, he’s saying, is just the posit used to paper over the fact of medial neglect, the name for a capacity that escapes awareness altogether. And this, he points out, dooms inner sense models either to perpetual underdetermination, or the charge of triviality.

“Unless an inner sense model of introspection specifies an object of awareness whose properties (like the properties of beer bottles) indicate the facts we come to know about, an inner sense model of introspection does not tell us how we know we have conscious experiences. It merely tells us that, somehow, we know it. This is not in dispute.”

The problem is pretty clear. We have conscious experiences, but we have no conscious experience of the mechanisms mediating conscious experience. But there’s a further problem as well. As Stanislas Dehaene puts it, “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Our insensitivity to the structure and dynamics of cognition out-and-out entails insensitivity to the limits of cognition as well.

“There is a perspective we have on the world, a ‘boundary’, if you will, between things we see and things we don’t see. And of the things we see, there are parts (surfaces) we see and parts (surfaces) we don’t see. This partition determines a point of view that changes as we move around.”

What Dretske calls ‘partition’ here, Continental phenomenologists call ‘horizon,’ an experiential boundary that does not appear within experience—what I like to call a ‘limit-with-one-side’ (LWOS). The most immediately available–and quite dramatic, I think–example is the boundary of your visual field, the way vision trails into oblivion instead of darkness. To see the boundary of seeing as such we would have to see what lies beyond sight. To the extent that darkness is something seen, it simply cannot demarcate the limit of your visual field.

“Points of view, perspectives, boundaries and horizons certainly exist in vision, but they are not things you see. You don’t see them for the same reason you don’t feel the boundaries between objects you touch and those you don’t. Tactile boundaries are not tactile and visual boundaries are not visible. There is a difference between the surfaces you see and the surfaces you don’t see, and this difference determines a ‘point of view’ on the world, but you don’t see your point of view.”

Our perspective, in other words, is hemmed at every turn by limits-with-one-side. Conscious experience possesses what might be called a multi-modal neglect structure: limits on availability and capacity that circumscribe what can be perceived or cognized.

When it comes to environmental cognition, the horizons are both circumstantially contingent, varying according to things like position and prior experience, and congenital, fixed according to our various sensory and cognitive capacities. We can chase a squirrel around a tree (to use James’ famous example from What Pragmatism Means), engage in what Karl Friston calls ‘active inference,’ but barring scientific instrumentation, we cannot chase a squirrel around the electromagnetic spectrum. We can see the backside of countless environmental features, but we have no way of contemporaneously seeing the biological backside of sight. (As Wittgenstein famously puts it in the Tractatus, “nothing in the visual field allows you to infer it is seen by an eye” (5.633)). For some reason, all of our cognitive and perceptual modalities suffer their own version of medial neglect.

For Dretske, the important point is the Heideggerean one (though I’m sure the closest he ever came to Heidegger was a night of drinking with Dreyfus!): that LWOS prevent any perspective on our perspective as such. For a perspective to contemporaneously appear in experience, it would cease to possess LWOS and so cease to be a perspective.

We perceive and cognize but a slice of ourselves and our environments, as must be the case on any plausible biological account of cognition. In a sense, what Dretske is calling attention to is so obvious as to escape interrogation altogether: Why medial neglect? We have a vast number of cognitive degrees of freedom relative to our environments, and yet we have so few relative to ourselves. Why? Biologically speaking, why should a human find itself so difficult to cognize?

Believe it or not, no one in Gertler’s collection tackles this question. In fact, since they begin presuming the veracity of various traditional ontologizations of experience and cognition, consciousness and intentionality, they actually have no way of posing this question. Rather than seeing the question of self-knowledge as the question of how a brain could possibly communicate/cognize its own activity, they see it as the question of how a mind can know its own mental states. They insist on beginning, as Dretske shows, where the evidence is not.

Biologically speaking, humanity was all but doomed to be confounded by itself. One big reason is simply indisposition: the machinery of seeing is indisposed, too busy seeing. This is what renders modality-specific medial neglect, our inability ‘to see seeing’ and the like, inescapable. Another involves the astronomical complexity of cognitive processes. Nothing prevents us from seeing where touch ends, or where hearing is mistaken. What one modality neglects can be cognized by another, then subsequently integrated. The problem is that the complexity of these cognitive processes far, far outruns our capacity to cognize them. As the bumper-sticker declares, if our brains were so simple we could understand them, we would be too simple to understand our brains!

The facts underwriting medial neglect mean that, from an evolutionary perspective, we should expect cognitive sensitivity to enabling systems to be opportunistic (special purpose) as opposed to accurate (general purpose). Suddenly Dretske’s question of how we know we’re aware becomes the far less demanding question of how could a species such as ours report awareness? As Dretske says, we perceive/cognize but a slice of our environments, those strategic bits unearthed by evolution. Given that introspection is a biological capacity (and what else would it be?), we can surmise that it perceives/cognizes but a slice as well. And given the facts of indisposition and complexity, we can suppose that slice will be both fractionate and heuristic. In other words, we should expect introspection (to the extent it makes sense to speak of any such unified capacity) consists of metacognitive hacks geared to the solution of ancestral problems.

What Gertler and her academic confreres call ‘privileged access’ is actually a matter of specialized access and capacity, the ability to derive as many practical solutions as possible out of as little information as possible.

So what are we to make of the philosophical retasking of these metacognitive hacks? Given our blindness to the structure and dynamics of our metacognitive capacities, we had no way of intuiting how few degrees of metacognitive freedom we possessed–short, that is, of the consequences of our inquiries. How much more evidence of this lack of evidence do we need? Brie Gertler’s anthology, I think, wonderfully illustrates the way repurposing metacognitive hacks to answer philosophical questions inevitably crashes them. If we persist it’s because our fractionate slice is utterly insensitive to its own heuristic parochialism—because these capacities also suffer medial neglect! Availability initially geared to catching our tongue and the like becomes endless speculative fodder.

Consider an apparently obvious but endlessly controversial property of conscious experience, ‘transparency’ (or ‘intentional inexistence’), the way the only thing ‘in experience’ (its ‘content’) is precisely what lies outside experience. Why not suppose transparency—something which remains spectacularly inexplicable—is actually a medial artifact? The availability for conscious experience of only things admitting (originally ancestral) conscious solution is surely no accident. Conscious experience, as a biological artifact, is ‘need to know’ the same as everything else. Does the interval between sign and signified, subject and object, belief and proposition, experience and environment shout transparency, a miraculous vehicular vanishing act, or does it bellow medial neglect, our opportunistic obliviousness to the superordinate machinery enabling consciousness and cognition?

The latter strikes me as the far more plausible possibility, especially since it’s the very kind of problem one should expect, given the empirical inescapability of medial neglect.

Where transparency renders conscious experience naturalistically inscrutable, something hanging inexplicably in the neural millhouse, medial neglect renders it a component of a shallow information ecology, something broadcast to facilitate any number of possible behavioural advantages in practical contexts. Consciousness cuts the natural world at the joints—of this I have no doubt—but conscious experience, what we report day-in and day-out, cuts only certain classes of problems ‘at the joints.’ And what Dretske shows us, quite clearly, I think, is that the nature of conscious experience does not itself belong to that class of problems—at least not in any way that doesn’t leave us gasping for decisive evidence.

How do we know we’re not zombies? On Heuristic Neglect, the answer is straightforward (at a certain level of biological generality, at least): via one among multiple metacognitive hacks adapted to circumventing medial neglect, and even then, only so far as our ancestors required.

In other words, barely, if at all. The fact is, self-knowledge was never so important to reproduction as to warrant the requisite hardware.

The Liar’s Paradox Naturalized

by rsbakker

Can the Liar’s Paradox be understood in a biologically consilient way?

Say what you will about ‘Truth,’ everyone agrees that truth-talk has something to do with harmonizing group orientations relative to group environments. Whenever we find ourselves at odds either with one another or our environments, we resort to the vocabulary of truth and rectitude. The question is what this talk consists in and how it manages to do what it does.

The idea here is to steer clear of presumptions of intentionality and look at the problem in the register providing the most information: biomechanically. Whatever our orientation to our environments consists in, everyone agrees that it is physical in some fundamental respect. Strokes are catastrophic for good reason. So, let’s stipulate that an orientation to an environment, in distinction to, say, a ‘perspective on’ an environment, consists of all physical (high-dimensional) facts underwriting our capacity to behaviourally resolve environments in happy (system conserving) ways.

We all agree that causal histories underwrite communication and cognition, but we have no inkling as to the details of that story, nor the details of the way we solve communicative and cognitive problems absent those details. Heuristic neglect simply provides a way to understand this predicament at face value. No one denies that human cognition neglects the natural facts of cognition; the problem is that everyone presumes this fact has little or no bearing on our attempts to solve the nature of cognition. Even though our own intuitive access to our cognitive capacities, given the complexity of those capacities, elides everything save what our ancestors needed to solve ancestral problems, most everyone thinks that intuitive access, given the right interpretation, provides everything cognitive science needs to solve cognitive scientific problems.

It really is remarkable when you think about it.  Out of sight, out of explanatory paradigm.

Beginning with orientations rather than perspectives allows us to radically reconceptualize a great many traditional philosophical problematics in ‘post-intentional’ terms. The manifest advantage of orientations, theoretically speaking, lies in their environmental continuity, their mediocrity, the way they comprise (unlike perspectives, meanings, norms, and so on) just more environment. Rather than look at linguistic communication in terms of ‘contents,’ the physical conveyance of ontologically inscrutable ‘meanings,’ we can understand it behaviouristically, as orientations impacting orientations via specialized mechanisms, behaviours, and sensitivities. Rather than conceive the function of communication ‘intersubjectively,’ as the coordination of intentional black boxes, we can view it biologically, as the formation of transient superordinate processes, ephemeral ‘superorganisms,’ taking individuals and their environments as component parts.

Granting that human communication consists in the harmonization of orientations relative to social and natural environments amounts to granting that human communication is biological, that it, like every other basic human capacity, possesses an evolutionary history. Human communication, in other words, is in the business of providing economical solutions to various environmental problems.

This observation motivates a dreadfully consequential question: What is the most economical way for two or more people to harmonize their environmental orientations? To communicate environmental discrepancies, while taking preexisting harmonies for granted. I don’t rehash my autobiography when I see my friends, nor do I lecture them on the physiology of human cognition or the evolution of the human species. I ‘dish dirt.’ I bring everyone ‘up to speed.’

What if we were to look at language as primarily a discrepancy minimization device, as a system possessing exquisite sensitivities (via, say, predictive processing) to the desynchronization of orientations?

In such a system, the sufficiency of preexisting harmonies—our shared physiology, location, and training—would go without saying. I update my friends and they update me. The same can be said of the system itself: the sufficiency of language, its biomechanical capacity to effect synchronization, would also go without saying—short, that is, of the detection of discrepancies. I update my friends and they update me, and so long as everyone agrees, nary a word about truth need be spoken.

Taking a discrepancy view, in other words, elegantly explains why truth is the communicative default: the economical thing is to neglect our harmonized orientations—which is to say, to implicitly presume their sufficiency. It’s only when we question the sufficiency of these communications that truth-talk comes into play.

Truth-talk, in other words, is typically triggered when communication observably fails to minimize discrepancies, when operational sufficiency, for whatever reason, ceases to be automatically presumed. Truth-talk harmonizes group orientations relative to group environments in cases of communicative discrepancy, an incompatibility between updates, say. [Would it be possible to build ways to do new things with existing polling data using discrepancy models? How does consensus within a network arise and cluster? What kind of information is salient or ignored? How do modes or channels facilitate or impede such consensus? Would it be possible, via big data, to track the regional congealing of orientations into tacit cooperatives, simply by tracking ingroup truth-talk? Can a discrepancy view subsume existing metrics? Can we measure the resilience or creativity or solidarity or motivation of a group via patterns in truth-talk activity?]
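To make the mechanics of this view a little more concrete, here is a toy sketch of my own (an illustration, not a worked-out model): agents transmit only where their orientations diverge from the presumed-shared one, and ‘truth-talk’ is the flag raised when two such updates collide, anticipating the vulture/albatross example below.

```python
# A toy sketch (an illustration only) of language as a discrepancy minimization device:
# only divergences from the presumed-shared orientation get transmitted, and
# "truth-talk" is triggered when two updates collide.

def discrepancies(speaker: dict, shared: dict) -> dict:
    """Only discrepancies get communicated; preexisting harmonies go without saying."""
    return {k: v for k, v in speaker.items() if shared.get(k) != v}

def synchronize(shared: dict, report_a: dict, report_b: dict) -> list:
    """Merge two reports into the shared orientation. Keys on which the reports
    disagree are returned as disputes -- the occasions on which truth-talk is triggered."""
    disputes = [k for k in report_a if k in report_b and report_a[k] != report_b[k]]
    for report in (report_a, report_b):
        for k, v in report.items():
            if k not in disputes:  # sufficiency presumed: update silently
                shared[k] = v
    return disputes

shared = {"weather": "clear"}
yours = discrepancies({"weather": "clear", "bird": "albatross"}, shared)
mine = discrepancies({"weather": "clear", "bird": "vulture"}, shared)
print(synchronize(shared, yours, mine))  # ['bird'] -- sufficiency questioned, truth-talk begins
print(shared)                            # {'weather': 'clear'} -- harmonies left unsaid
```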

Neglecting harmonies isn’t simply economical, it’s also necessary, at least to the extent that humans have only the most superficial access to the details of those harmonies. It’s not that I don’t bother lecturing my ingroup on the physiology of human cognition and the evolution of the human species, it’s that, ancestrally speaking, I have no way of doing so. I suffer, as all humans suffer, from medial neglect, an inability to intuit the nature of my cognitive capacities, as well as frame neglect, an inability to put those capacities in natural context.

Neglecting the circumstances and constitution of verbal communication is a condition of verbal communication. Speech is oblivious to its biological and historical conditions. Verbal communication appears ‘extensional,’ as the philosophers of language say, because we have no other way of cognizing it. We have instances of speech and we have instances of the world, and we have no way of intuitively fathoming the actual relations between. Luckily for us, if our orientations are sufficiently isomorphic, we can communicate—harmonize our orientations—without fathoming these relations.

We can safely presume that the most frequent and demanding discrepancies will be environmental discrepancies, those which, given otherwise convergent orientations (the same physiology, location, and training), can be communicated absent contextual and constitutional information. If you and I share the same general physiology, location, and training, then only environmental discrepancies require our communicative attention. Such discrepancies can be resolved while remaining almost entirely ‘performance blind.’ All I need do is ‘trust’ your communication and cognition, build upon your unfathomable relations the same blind way I build upon my own. You cry, ‘Wolf!’ and I run for the shotgun: our orientations converge.

The problem, of course, is that all communicative discrepancies amount to some insufficiency in those ‘actual relations between.’ They require that we somehow fathom the unfathomable.

There is no understanding truth-talk without understanding that it’s in the ‘fathoming the unfathomable’ business. Truth-talk, in other words, resolves communicative discrepancies neglecting the natural facts underwriting those discrepancies. Truth-talk is radically heuristic, insofar as it leverages solutions to communicative problems absent information pertaining to the nature of those communicative problems.

So, to crib the example I gave in my recent Dennett posts: say you and I report seeing two different birds, a vulture versus an albatross, in circumstances where such a determination potentially matters—looking for a lost hunting party, say. An endless number of frame and medial confounds could possibly explain the discrepancy between our orientations. Perhaps I have bad eyesight, or I think albatrosses are black, or I was taught as much by an ignorant father, or I’m blinded by the glare of the sun, or I’m suffering schizophrenia, or I’m drunk, or I’m just sick and tired of you being right all the time, or I’m teasing you out of boredom, or more insidiously, I’m responsible for the loss of the hunting party, and want to prevent you from finding the scene of my crime.

There’s no question that, despite neglect, certain forms of access and capacity regarding the enabling dimension of cognition and communication could provide much in the way of problem resolution. Given the inaccessibility and complexity of the factors involved, however, it follows that any capacity to accommodate them will be heuristic in the extreme. This means that our cognitive capacity to flag/troubleshoot issues of cognitive sufficiency will be retail, fractionate, geared to different kinds of manifest problems:

  • Given the topological dependence of our orientations, capacities to solve for positional sufficiency. “Trump is peering through a keyhole.”
  • Given the environmental sensory dependence of our orientations, capacities to solve for the sufficiency of environmental conditions. “Trump is wandering in the dark.”
  • Given the physiological sensory dependence of our orientations, capacities to solve for physiological sufficiency. “Trump is myopic.”
  • Given the communal interdependence of our orientations, capacities to solve for social sufficiency, or trust. “Trump is a notorious liar.”
  • Given the experiential dependence of our orientations, capacities to solve for epistemic sufficiency. “Trump has no government experience whatsoever.”
  • Given the linearity of verbal communication, capacities to solve for combinatorial or syntactic sufficiency. “Trump said the exact opposite this morning.”

It’s worth pausing here, I think, to acknowledge the way this radically spare approach to truth-talk provides ingress to any number of philosophical discourses on the ‘nature of Truth.’ Heuristic Neglect Theory allows us to see just why ‘Truth’ has so thoroughly confounded humanity despite millennia of ardent inquiry.

The apparent ‘extensionality’ of language, the way utterances and environments covary, is an artifact of frame and medial neglect. Once again, we are oblivious to the astronomical complexities, all the buzzing biology, responsible for the systematic relations between our utterances and our environments. We detect discrepancies with those relations, in other words, without detecting the relations themselves. Since truth-talk ministers to these breakdowns in an otherwise inexplicable covariance, ‘correspondence’ strikes many as a natural way to define Truth. With circumstantial and enabling factors out of view, it appears as though the environment itself sorts our utterances—provides ‘truth conditions.’

Given the abject inability to agree on any formulation of this apparently more than natural correspondence, the turn to circumstantial and enabling factors was inevitable. Perhaps Truth is a mere syntactic device, a bridge between mention and use. After all, we generally only say ‘X is true’ when saying X is disputed. Or perhaps Truth is a social artifact of some description, something conceded to utterances in ‘games of giving and asking for reasons.’ After all, we generally engage in truth-talk only when resolving disputes with others. Perhaps ‘Truth’ doesn’t so much turn on ‘truth conditions’ as ‘assertion conditions.’

The heuristic neglect approach allows us to make sense of why these explanatory angles make the apparent sense they do, why, like the blind swamis and the elephant, each confuses some part for some chimerical whole. Neglecting the machinery of discrepancy minimization not only strands reflection with a strategic sliver of a far more complicated process, it generates the presumption that this sliver is somehow self-sufficient and whole.

Setting the ontological truth of Truth aside, the fact remains that truth-talk leverages life-saving determinations on the neural cheap. This economy turns on ignoring everything that makes truth-talk possible. The intractable nature of circumstantial and enabling factors enforces frame and medial neglect, imposing what might be called qualification costs on the resolution of communicative discrepancies. IGNORE THE MEDIAL is therefore the baseline heuristic governing truth-talk: we automatically ‘externalize’ because, ancestrally at least, our communicative problems did not require cognitive science to solve.

Of course, as a communicative heuristic, IGNORE THE MEDIAL possesses a problem-ecology, which is to say, limits to its applicability. What philosophers, mistaking a useful incapacity for a magical capacity, call ‘aboutness’ or ‘directedness’ or ‘subjectivity,’ is useful only so far.

As the name suggests, IGNORE THE MEDIAL will crash when applied to problems where circumstantial and/or enabling factors either are not or cannot be ignored.

We find this most famously, I think, in the Liar’s Paradox:

The following sentence is true. The preceding sentence is false.

Truth-talk pertains to the neglected sufficiency of orientations relative to ongoing natural and social environments. Collective ‘noise reduction’ is the whole point. As a component in a discrepancy minimization system, truth-talk is in the business of restoring positional and source neglect, our implicit ‘view from nowhere,’ allowing (or not) utterances originally sourced to an individual performance to update the tacit orientations of everyone—to purge discrepancies and restore synchronization.

Self-reference rather obviously undermines this natural function.

Reading From Bacteria to Bach and Back III: Beyond Stances

by rsbakker

 

The problem with his user-illusion model of consciousness, Dennett realizes, lies in its Cartesian theatricalization, the reflex to assume the reality of the illusion, and to thence argue that it is in fact this… the dumbfounding fact, the inexplicable explanandum. We acknowledge that consciousness is a ‘user-illusion,’ then insist this ‘manifest image’ is the very thing requiring explanation. Dennett’s de-theatricalization, in other words, immediately invites re-theatricalization, intuitions so powerful he feels compelled to devote an entire chapter to resisting the invitation, only to have otherwise generally sympathetic readers, like Tom Clark, re-theatricalize everything once again. To deceive us at all, the illusion itself has to be something possessing, minimally it seems, the capacity to deceive. Faced with the question of what the illusion amounts to, he writes, “It is a representation of a red stripe in some neural system of representation” (358), allowing Clark and others to reply, ‘and so possesses content called qualia.’

One of the striking features of From Bacteria to Bach and Back is the degree to which his trademark Intentional Systems Theory (IST) fades into the background. Rather than speak of the physical stance, design stance, and intentional stance, he continually references Sellars’ tripartite nomenclature from “Philosophy and the Scientific Image of Man,” the ‘original image’ (which he only parenthetically mentions), the ‘manifest image,’ and the ‘scientific image.’ The manifest image in particular, far more than the intentional stance, becomes his primary theoretical term.

Why might this be?

Dennett has always seen himself threading a kind of theoretical needle, fending off the scientifically preposterous claims of intentionalism on the one hand, and the psychologically bankrupt claims of eliminativism on the other. Where intentionalism strands us with impossible explanatory vocabularies, tools that cause more problems than they solve, eliminativism strands us with impoverished explanatory vocabularies, purging tools that do real work from our theoretical kits without replacing them. It’s not simply that Dennett wants, as so many of his critics accuse him, ‘to have it both ways’; it’s that he recognizes that having it both ways is itself the only way, theoretically speaking. What we want is to square the circle of intentionality and consciousness without running afoul either squircles or blank screens, which is to say, inexplicable intentionalisms or deaf-mute eliminativisms.

Seen in this light, Dennett’s apparent theoretical opportunism, rapping philosophical knuckles for some applications of intentional terms, shaking scientific hands for others, begins to look well motivated—at least from a distance. The global theoretical devil, of course, lies in the local details. Intentional Systems Theory constitutes Dennett’s attempt to render his ‘middle way’ (and so his entire project) a principled one. In From Bacteria to Bach and Back he explains it thus:

There are three different but closely related strategies or stances we can adopt when trying to understand, explain, and predict phenomena: the physical stance, the design stance, and the intentional stance. The physical stance is the least risky but also the most difficult; you treat the phenomenon in question as a physical phenomenon, obeying the laws of physics, and use your hard-won understanding of physics to predict what will happen next. The design stance works only for things that are designed, either artifacts or living things or their parts, and have functions or purposes. The intentional stance works primarily for things that are designed to use information to accomplish their functions. It works by treating the thing as a rational agent, attributing “beliefs” and “desires” and “rationality” to the thing, and predicting that it will act rationally. 37

The strategy is straightforward enough. There’s little doubt that the physical stance, design stance, and intentional stance assist in solving certain classes of phenomena in certain circumstances, so when confronted by those kinds of phenomena in those kinds of circumstances, taking the requisite stance is a good bet. If we have the tools, then why not use them?

But as I’ve been arguing for years here at Three Pound Brain, the problems stack up pretty quick, problems which, I think, find glaring apotheosis in From Bacteria to Bach and Back. The first problem lies in the granularity of stances, the sense in which they don’t so much explain cognition as merely divvy it up into three families. This first problem arises from the second, their homuncularity, the fact that ‘stances’ amount to black-box cognitive comportments, ways to manipulate/explain/predict things that themselves resist understanding. The third, and (from the standpoint of his thesis) most devastating problem, also turns on the second: the fact that stances are the very thing requiring explanation.

The reason the intentional stance, Dennett’s most famed explanatory tool, so rarely surfaces in From Bacteria to Bach and Back is actually quite simple: it’s his primary explanandum. The intentional stance cannot explain comprehension simply because it is, ultimately, what comprehension amounts to…

Well, almost. And it’s this ‘almost,’ the ways in which the intentional stance defects from our traditional (cognitivist) understanding of comprehension, which has ensnared Dennett’s imagination—or so I hope to show.

What does this defection consist in? As we saw, the retasking of metacognition to solve theoretical questions was doomed to run afoul sufficiency-effects secondary to frame and medial neglect. The easiest way to redress these illusions lies in interrogating the conditions and the constitution of cognition. What the intentional stance provides Dennett is a granular appreciation of the performative, and therefore the social, fractionate, constructive, and circumstantial nature of comprehension. Like Wittgenstein’s ‘language games,’ or Kuhn’s ‘paradigms,’ or Davidson’s ‘charity,’ Dennett’s stances allow him to capture something of the occluded external and internal complexities that have for so long worried the ‘clear and distinct’ intuition of the ambiguous human cylinder.

The intentional stance thus plays a supporting role, popping up here and there in From Bacteria to Bach and Back insofar as it complicates comprehension. At every turn, however, we’re left with the question of just what it amounts to. Intentional phenomena such as representations, beliefs, rules, and so on are perspectival artifacts, gears in what (according to Dennett) is the manifest ontology we use to predict/explain/manipulate one another using only the most superficial facts. Given the appropriate perspective, he assures us, they’re every bit as ‘real’ as you and I need. But what is a perspective, let alone a perspectival artifact? How does it—or they—function? What are the limits of application? What constitutes the ‘order’ it tracks, and why is it ‘there’ as opposed to, say, here?

Dennett—and he’s entirely aware of this—really doesn’t have much more than suggestions and directions when it comes to these and other questions. As recently as Intuition Pumps, he explicitly described his toolset as “good at nibbling, at roughly locating a few ‘fixed’ points that will help us see the general shape of the problem” (79). He knows the intentional stance cannot explain comprehension, but he also knows it can inflect it, nudge it closer to a biological register, even as it logically prevents the very kind of biological understanding Dennett—and naturalists more generally—take as the primary desideratum. As he writes (once again in 2013):

I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to that question is—if it has a right answer—this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these and other areas, almost as well as it works in our daily lives as folk-psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and the extent of application of this good trick, the intentional stance. Intuition Pumps, 79

But that was then and this is now. From Bacteria to Bach and Back explicitly attempts to make good on this promissory note—to naturalize comprehension, which is to say, to cease merely exploring the scope and power of the intentional stance, and to provide us with a genuine naturalistic explanation. To explain, in the high-dimensional terms of nature, what the hell it is. And the only way to do this is to move beyond the intentional stance, to cease wielding it as a tool, to hoist it on the work-bench, and to adduce the tools that will allow us to take it apart.

By Dennett’s own lights, then, he needs to reverse-engineer the intentional stance. Given his newfound appreciation for heuristic neglect, I understand why he feels the potential for doing this. A great deal of his argument for Cartesian gravity, as we’ve seen, turns on our implicit appreciation of the impact of ‘no information otherwise.’ But sensing the possibility of those tools, unfortunately, does not amount to grasping them. Short explicit thematizations of neglect and sufficiency, he was doomed to remain trapped on the wrong side of the Cartesian event horizon.

On Dennett’s view, intentional stances are homuncular penlights more than homuncular projectors. What they see, ‘reasons,’ lies in the ‘eye of the beholder’ only so far as natural and neural selection provisions the beholder with the specialized competencies required to light them up.

The reasons tracked by evolution I have called ‘free-floating rationales,’ a term that has apparently jangled the nerves of some few thinkers, who suspect I am conjuring up ghosts of some sort. Not at all. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. Cubes had eight corners before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. Reasons existed long before there were reasoners. 50

To be more precise, the patterns revealed by the intentional stance exist independent of the intentional stance. For Dennett, the problematic philosophical step—his version of the original philosophical sin of intentionalism—is to think the cognitive bi-stability of these patterns, the fact they appear to be radically different when spied with a first-person penlight versus scientific floodlights, turns on some fundamental ontological difference.

And so, Dennett holds that a wide variety of intentional phenomena are real, just not in the way we have traditionally understood them to be real. This includes reasons, beliefs, functions, desires, rules, choices, purposes, and—pivotally, given critiques like Tom Clark’s—representations. So far as this bestiary solves real world problems, they have to grab hold of the world somehow, don’t they? The suggestion that intentional posits are no more problematic than formal or empirical posits (like numbers and centers of gravity) is something of a Dennettian refrain—as we shall see, it presumes the heuristics involved in intentional cognition possess the same structure as heuristics in other domains, which is simply not the case. Otherwise, so long as intentional phenomena actually facilitate cognition, it seems hard to deny that they broker some kind of high-dimensional relationship with the high-dimensional facts of our environment.

So what kind of relationship? Well, Dennett argues that it will be—has to be, given evolution—heuristic. So far as that relationship is heuristic, we can presume that it solves by taking the high-dimensional facts of the matter—what we might call the deep information environment—for granted. We can presume, in other words, that it will ignore the machinery, and focus on cues, available information systematically related to that machinery in ways that enable the prediction/explanation/manipulation of that machinery. In short, rather than pick out the deep causal patterns responsible, it will exploit those available patterns possessing some exploitable correlation to those patterns.
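
For what it’s worth, the bare logic of cue-exploitation can be sketched in a few lines of code. Nothing below comes from Dennett—the names and numbers are invented for illustration—but it shows the trade plainly: the heuristic never models the ‘deep’ machinery, it just reads an available signal that happens to correlate with that machinery, and so succeeds or fails with the correlation:

    # Illustrative sketch: heuristic cognition as cue-exploitation.
    # A hidden "deep" state drives a cheap, available cue; the heuristic
    # consults only the cue, never the machinery behind it.
    import random

    def world(correlation=0.9):
        """Generate (deep_state, cue): the cue tracks the deep state only as
        reliably as the stipulated correlation allows."""
        deep_state = random.choice([0, 1])
        cue = deep_state if random.random() < correlation else 1 - deep_state
        return deep_state, cue

    def heuristic(cue):
        # No model of the deep machinery: just read the state off the cue.
        return cue

    def accuracy(correlation, trials=100_000):
        hits = 0
        for _ in range(trials):
            deep_state, cue = world(correlation)
            hits += (heuristic(cue) == deep_state)
        return hits / trials

    print(accuracy(0.9))   # ~0.90: cheap and good enough in the 'ancestral' ecology
    print(accuracy(0.5))   # ~0.50: the same heuristic crashes once the correlation breaks

The second result is the moral in miniature: nothing about the heuristic changes, only the ecology it finds itself in.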

So then where, one might ask, do the real patterns pertaining to ‘representation’ lie in this? What part or parts of this machine-solving machinery gainsays the ‘reality’ of representations? Just where do we find the ‘real patterns’ underwriting the content responsible for individuating our reports? It can’t be the cue, the available information happily correlated to the system or systems requiring solution, simply because the cue is often little more than a special purpose trigger. The Heider-Simmel Illusion, for instance, provides a breathtaking example of just how little information it takes. So perhaps we need to look beyond the cue, to the adventitious correlations binding it to the neglected system or systems requiring solution. But if these are the ‘real patterns’ illuminated by the intentional stance, it’s hard to understand what makes them representational—more than hard in fact, since these relationships consist in regularities, which, as whole philosophical traditions have discovered, are thoroughly incompatible with the distinctively cognitive properties of representation. Well, then, how about the high-dimensional machinery indirectly targeted for solution? After all, representations provide us a heuristic way to understand otherwise complex cognitive relationships. This is where Dennett (and most everyone else, for that matter) seems to think the real patterns lie, the ‘order which is there,’ in the very machinery that heuristic systems are adapted—to avoid! Suddenly, we find ourselves stranded with regularities only indirectly correlated to the cues triggering different heuristic cognitive systems. How could the real patterns gainsaying the reality of representations be the very patterns our heuristic systems are adapted to ignore?

But if we give up on the high-dimensional systems targeted for solution, perhaps we should be looking at the heuristic systems cognizing—perhaps this is where the real patterns gainsaying the reality of representations lie, here, in our heads. But this is absurd, of course, since the whole point of saying representations are real (enough) is to say they’re out there (enough), independent of our determinations one way or another.

No matter how we play this discursive shell game, the structure of heuristic cognition guarantees that we’ll never discover the ‘real pattern pea,’ even with intentional phenomena so apparently manifest (because so useful in both everyday and scientific contexts) as representations. There are real systems, to be sure, systems that make ‘identifying representations’ as easy as directing attention to the television screen. But those systems are as much here as they are there, making that television screen simply another component in a greater whole. Without the here, there is no there, which is to say, no ‘representation.’ Medial neglect assures the astronomical dimensionality of the here is flattened into near oblivion, stranding cognition with a powerful intuition of a representational there. Thanks to our ancestors, who discovered myriad ways to manipulate information to cue visual cognition out of school, to drape optical illusions across their cave walls, or to press them into lumps of clay, we’ve become so accustomed to imagery as to entirely forget the miraculousness of seeing absent things in things present. Those cues are more or less isomorphic to the actual systems comprising the ancestral problem ecologies visual cognition originally evolved to manage. This is why they work. They recapitulate certain real patterns of information in certain ways—as does your retina, your optic nerve, and every stage of visual cognition culminating in visual experience. The only thing ‘special’ about the recapitulations belonging to your television screen is their availability, not simply to visual cognition, but to our attempts to cognize/troubleshoot such instances of visual cognition. The recapitulations on the screen, unlike, say, the recapitulations captured by our retinas, are the one thing we can readily troubleshoot should they begin miscuing visual cognition. Neglect ensures the intuition of sufficiency, the conviction that the screen is the basis, as opposed to simply another component in a superordinate whole. So, we fetishize it, attribute efficacies belonging to the system to what is in fact just another component. All its enabling entanglements vanish into the apparent miracle of unmediated semantic relationships to whatever else happens to be available. Look! we cry. Representation…

Figure 1: This image of the Martian surface taken by Viking 1 in 1976 caused a furor on earth, for obvious reasons.

Figure 2: Images such as this one taken by the Mars Reconnaissance Orbiter reveal the former to be an example of facial pareidolia, an instance where information cues facial recognition where no faces are to be found. The “Face on Mars” seems to be an obvious instance of projection—mere illusion—as opposed to discovery. Until, that is, one realizes that both of these images consist of pixels cuing your visual systems ‘out of school’! Both, in other words, constitute instances of pareidolia: the difference lies in what they enable.

Some apparent squircles, it turns out, are dreadfully useful. So long as the deception is systematic, it can be instrumentalized any which way. Environmental interaction is the basis of neural selection (learning), and neural selection is the basis of environmental domination. What artificial visual cuing—‘representation’—provides is environmental interaction on the cheap, ways to learn from experience without having to risk or endure experience. A ‘good trick’ indeed!

This brings us to a great fault-line running through the entirety of Dennett’s corpus. The more instrumental a posit, the more inclined he is to say it’s ‘real.’ But when critics accuse him of instrumentalism, he adverts to the realities underwriting the instrumentalities, what enables them to work, to claim a certain (ambiguous, he admits) brand of realism. But as should now be clear, what he elides when he does this is nothing less than the structure of heuristic cognition, which blindly exploits the systematic correlations between information available and the systems involved to solve those systems as far as constraints on availability and capacity allow.

The reason he can elide the structure of heuristic cognition (and so find his real patterns argument convincing) lies, pretty clearly, I think, in the conflation of human intentional cognition (which is radically heuristic) with the intentional stance. In other words, he confuses what’s actually happening in instances of intentional cognition with what seems to be happening in instances of intentional cognition, given neglect. He runs afoul Cartesian gravity. “We tend to underestimate the strength of the forces that distort our imaginations,” he writes, “especially when confronted by irreconcilable insights that are ‘undeniable’” (22). Given medial neglect, the inability to cognize our contemporaneous cognizing, we are bound to intuit the order as ‘there’ (as ‘lateral’) even when we, like Dennett, should know better. Environmentalization is, as Hume observed, the persistent reflex, the sufficiency effect explaining our default tendency to report medial artifacts, features belonging to the signal, as genuine environmental phenomena, or features belonging to the source.

As a heuristic device, an assumption circumventing the brute fact of medial neglect, the environmentalization heuristic possesses an adaptive problem ecology—or as Dennett would put it, ‘normal’ and ‘abnormal’ applications. The environmentalization heuristic, in other words, possesses adaptive application conditions. What Dennett would want to argue, I’m sure, is that ‘representations’ are no more or less heuristic than ‘centres of gravity,’ and that we are no more justified in impugning the reality of the one than the reality of the other. “I don’t see why my critics think their understanding about what really exists is superior to mine,” he complains at one point in From Bacteria to Bach and Back, “so I demur” (224). And he’s entirely right on this score: no one has a clue as to what attributing reality amounts to. As he writes regarding the reality of beliefs in “Real Patterns”:

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. 29

Heuristic Neglect Theory, however, actually puts us in a position to make a great deal of sense of ‘reality.’ We can see, rather plainly, I think, the disanalogy between ‘centres of gravity’ and ‘beliefs,’ the disanalogy that leaps out as soon as we consider how only the latter patterns require the intentional stance (or more accurately, intentional cognition) to become salient. Both are heuristic, certainly, but in quite different ways.

We can also see the environmentalization heuristic at work in the debate over whether ‘centres of gravity’ are real or merely instrumental, and Dennett’s claim that they lie somewhere in-between. Do ‘centres of gravity’ belong to the order which is there, or do we simply project them in useful ways? Are they discoveries, or impositions? Why do we find it so natural to assume either the one or the other, and so difficult to imagine Dennett’s in-between or ‘intermediate’ realism? Why is it so hard conceiving of something half-real, half-instrumental?

The fundamental answer lies in the combination of frame and medial neglect. Our blindness to the enabling dimension of cognition renders cognition, from the standpoint of metacognition, an all but ethereal exercise. ‘Transparency’ is but one way of thematizing the rank incapacity generally rendering environmentalization such a good trick. “Of course, centres of gravity lie out there!” We are more realists than instrumentalists. The more we focus on the machinery of cognition, however, the more dimensional the medial becomes, the more efficacious, and the more artifactual whatever we’re focusing on begins to seem. Given frame neglect, however, we fail to plug this higher-dimensional artifactuality into the superordinate systems encompassing all instances of cognition, thus transforming gears into tools, fetishizing those instances, in effect. “Of course, centres of gravity organize out there!” We become instrumentalists.

If these incompatible intuitions are all that the theoretician has to go on, then Dennett’s middle way can only seem tendentious, an attempt to have it both ways. What makes Dennett’s ‘mild or intermediate’ realism so difficult to imagine is nothing less than Cartesian gravity, which is to say, the compelling nature of the cognitive illusions driving our metacognitive intuitions either way. Squares viewed on this angle become circles viewed on that. There’s no in-between! This is why Dennett, like so many revolutionary philosophical thinkers before him, is always quick to reference the importance of imagination, of envisioning how things might be otherwise. He’s always bumping against the limits of our shackles, calling attention to the rattle in the dark. Implicitly, he understands the peril that neglect, by way of sufficiency, poses to our attempts to puzzle through these problems.

But only implicitly, and as it turns out (given tools so blunt and so complicit as the intentional stance), imperfectly. On Heuristic Neglect Theory, the practical question of what’s real versus what’s not is simply one of where and when the environmentalization heuristic applies, and the theoretical question of what’s ‘really real’ and what’s ‘merely instrumental’ is simply an invitation to trip into what is obviously (given the millennial accumulation of linguistic wreckage) metacognitive crash space. When it comes to ‘centres of gravity,’ environmentalization—or the modifier ‘real’—applies because of the way the posit economizes otherwise available, as opposed to unavailable, information. Heuristic posits centres of gravity might be, but ones entirely compatible with the scientific examination of deep information environments.

Such is famously not the case with posits like ‘belief’ or ‘representation’—or for that matter, ‘real’! The heuristic mechanisms underwriting environmentalization are entirely real, as is the fact that these heuristics do not simply economize otherwise available information, but rather compensate for structurally unavailable information. To this extent, saying something is ‘real’—acknowledging the applicability of the environmentalization heuristic—involves the order here as much as the order there, so far as it compensates for structural neglect, rather than mere ignorance or contingent unavailability. ‘Reality’ (like ‘truth’) communicates our way of selecting and so sorting environmental interactions while remaining almost entirely blind to the nature of those environmental interactions, which is to say, neglecting our profound continuity with those environments.

At least as traditionally (intentionally) conceived, reality does not belong to the real, though reality-talk is quite real, and very useful. It pays to communicate the applicability of environmentalization, if only to avoid the dizzying cognitive challenges posed by the medial, enabling dimensions of cognition. Given the human circuit, truth-talk can save lives. The apparent paradox of such declarations—such as saying, for instance, that it’s true that truth does not exist—can be seen as a direct consequence of frame and medial neglect, one that, when thought carefully through step by empirically tractable step, was pretty much inevitable. We find ourselves dumbfounding for good reason!

The unremarkable fact is that the heuristic systems we resort to when communicating and trouble-shooting cognition are just that: heuristic systems we resort to when communicating and trouble-shooting cognition. And what’s more, they possess no real theoretical power. Intentional idioms are all adapted to shallow information ecologies. They comprise the communicative fraction of compensatory heuristic systems adapted not simply to solve astronomically complicated systems on the cheap, but absent otherwise instrumental information belonging to our deep information environments. Applying those idioms to theoretical problems amounts to using shallow resources to solve the natural deeps. The history of philosophy screams underdetermination for good reason! There’s no ‘fundamental ontology’ beneath, no ‘transcendental functions’ above, and no ‘language-games’ or ‘intentional stances’ between, just the machinations of meat, which is why strokes and head injuries and drugs produce the boggling cognitive effects they do.

The point to always keep in mind is that every act of cognition amounts to a systematic meeting of at least two functionally distinct systems, the one cognized, the other cognizing. The cognitive facts of life entail that all cognition remains, in some fundamental respect, insensitive to the superordinate system explaining the whole, let alone to the structure and activity of cognition. This inability to cognize our position within superordinate systems (frame neglect) or to cognize our contemporaneous cognizing (medial neglect) is what renders the so-called first-person (intentional stance) homuncular, blind to its own structure and dynamics, which is to say, oblivious to the role here plays in ordering ‘there.’ This is what cognitive science needs to internalize, the way our intentional and phenomenal idioms steer us blindly, absent any high-dimensional input, toward solutions that, when finally mapped, will bear scant resemblance to the metacognitive shadows parading across our cave walls. And this is what philosophy needs to internalize as well, the way their endless descriptions and explanations, all the impossible figures—squircles—comprising the great bestiary of traditional reflection upon the nature of the soul, are little more than illusory artifacts of their inability to see their inability to see. To say something is ‘real’ or ‘true’ or ‘factual’ or ‘represents,’ or what have you is to blindly cue blind orientations in your fellows, to lock them into real but otherwise occluded systems, practically and even experimentally efficacious circuits, not to invoke otherworldly functions or pick out obscure-but-real patterns like ‘qualia’ or ‘representations.’

The question of ‘reality’ is itself a heuristic question. As horribly counter-intuitive as all this must sound, we really have no way of cognizing the high-dimensional facts of our environmental orientation, and so no choice but to problem-solve those facts absent any inkling of them. The issue of ‘reality,’ for us, is a radically heuristic one. As with all heuristic matters, the question of application becomes paramount: where does externalization optimize, and where does it crash? It optimizes where the cues relied upon generalize, provide behavioural handles that can be reverse-engineered—‘reduced’—absent reverse-engineering us. It optimizes, in other words, wherever frame and medial neglect do not matter. It crashes, however, where the cues relied upon compensate, provide behavioural handles that can only be reverse-engineered by reverse-engineering ourselves.

And this explains the ‘gobsmacking fact’ with which we began, how we can source the universe all the way back to the first second, and yet remain utterly confounded by our ability to do so. Short cognitive science, compensatory heuristics were all that we possessed when it came to the question of ourselves. Only now do we find ourselves in a position to unravel the nature of the soul.

The crazy thing to understand, here, the point Dennett continually throws himself toward in From Bacteria to Bach and Back only to be drawn back out on the Cartesian tide, is that there is no first-person. There is no original or manifest or even scientific ‘image’: these all court ‘imaginative distortion’ because they, like the intentional stance, are shallow ecological artifacts posturing as deep information truths. It is not the case that, “[w]e won’t have a complete science of consciousness until we can align our manifest-image identifications of mental states by their contents with scientific-image identifications of the subpersonal information structures and events that are causally responsible for generating the details of the user-illusion we take ourselves to operate in” (367)—and how could it be, given our abject inability to even formulate ‘our manifest-image identifications,’ to agree on the merest ‘detail of our user-illusion’? There’s a reason Tom Clark emphasizes this particular passage in his defense of qualia! If it’s the case that Dennett believes a ‘complete science of consciousness’ requires the ‘alignment’ of metacognitive reports with subpersonal mechanisms, then he is as much a closet mysterian as any other intentionalist. There are simply too many ways to get lost in the metacognitive labyrinth, as the history of intentional philosophy amply shows.

Dennett needs only continue following the heuristic tracks he’s started down in From Bacteria to Bach and Back—and perhaps recall his own exhortation to imagine—to see as much. Imagine how it was as a child, living blissfully unaware of philosophers and scientists and their countless confounding theoretical distinctions and determinations. Imagine the naïveté, not of dwelling within this or that ‘image,’ but within an ancestral shallow information ecology, culturally conditioned to be sure, but absent the metacognitive capacity required to run afoul sufficiency effects. Imagine thinking without ‘having thoughts,’ knowing without ‘possessing knowledge,’ choosing without ‘exercising freedom.’ Imagine this orientation and how much blinkered metacognitive speculation and rationalization is required to transform it into something resembling our apparent ‘first-person perspective’—the one that commands scarcely any consensus beyond exceptionalist conceit.

Imagine how much blinkered metacognitive speculation and rationalization is required to transform it into the intentional stance.

So, what, then, is the intentional stance? An illusory artifact of intentional cognition, understood in the high-dimensional sense of actual biological mechanisms (both naturally and neurally selected), not the low-dimensional, contentious sense of an ‘attitude’ or ‘perspective.’ The intentional stance represents an attempt to use intentional cognition to fundamentally explain intentional cognition, and in this way, it is entirely consonant with the history of philosophy as a whole. It differs—perhaps radically so—in the manner it circumvents the metacognitive tendency to report intentional phenomena as intrinsic (self-sufficient), but it nevertheless remains a way to theorize cognition and experience via, as Dennett himself admits, resources adapted to their practical troubleshooting.

The ‘Cartesian wound’ is no more than theatrical paint, stage make-up, and so something to be wiped away, not healed. There is no explanatory gap because there is no first-person—there never has been, apart from the misapplication of radically heuristic, practical problem-solving systems to the theoretical question of the soul. Stripped of the first-person, consciousness becomes a natural phenomenon like any other, baffling only for its proximity, for overwriting the very page it attempts to read. Heuristic Neglect Theory, in other words, provides a way for us to grasp what we are, what we always have been: a high-dimensional physical system possessing selective sensitivities and capacities embedded in other high-dimensional physical systems. This is what you’re experiencing now, only so far as your sensitivities and capacities allow. This, in other words, is this… You are fundamentally inscrutable unto yourself outside practical problem-solving contexts. Everything else, everything apparently ‘intentional’ or ‘phenomenal’ is simply ‘seems upon reflection.’ There is no ‘manifest image,’ only a gallery of competing cognitive illusions, reflexes to report leading to the crash space we call intentional philosophy. The only ‘alignment’ required is that between our shallow information ecology and our deep information environments: the ways we do much with little, both with reference to each other and with ourselves. This is what you reference when describing a concert to your buddies. This is what you draw on when you confess your secrets, your feelings, your fears and aspirations. Not a ‘mind,’ not a ‘self-model,’ nor even a ‘user illusion,’ but the shallow cognitive ecology underwriting your brain’s capacity to solve and report itself and others.

There’s a positively vast research project buried in this outlook, and as much would become plain, I think, if enough souls could bring themselves to see past the fact that it took an institutional outsider to discover it. The resolutely post-intentional empirical investigation of the human has scarcely begun.

Reading From Bacteria to Bach and Back II: The Human Squircle

by rsbakker

The entry placing second (!!) in the 2016 Illusion of the Year competition, the Ambiguous Cylinder Illusion, blew up on Reddit for good reason. What you’re seeing below is an instance where visual guesswork arising from natural environmental frequencies has been cued ‘out of school.’ In this illusion, convex and concave curves trick the visual system into interpreting a ‘squircle’ as either a square or a circle—thus the dazzling images. Ambiguous cylinders provide dramatic illustration of a point Dennett makes many times in From Bacteria to Bach and Back: “One of the hallmarks of design by natural selection,” he writes, “is that it is full of bugs, in the computer programmer’s sense: design flaws that show up only under highly improbable conditions, conditions never encountered in the finite course of R&D that led to the design to date, and hence not yet patched or worked around by generations of tinkering” (83). The ‘bug’ exploited in this instance could be as much a matter of neural as natural selection, of course—perhaps, as with the Müller-Lyer illusion, individuals raised in certain environments are immune to this effect. But the upshot remains the same. By discovering ways to cue heuristic visual subsystems outside their adaptive problem ecologies, optical illusionists have developed a bona fide science bent on exploring what might be called ‘visual crash space.’

One of the ideas behind Three Pound Brain is to see traditional intentional philosophy as the unwitting exploration of metacognitive crash space. Philosophical reflection amounts to the application of metacognitive capacities adapted to trouble-shooting practical cognitive and communicative issues to theoretical problems. What Dennett calls ‘Cartesian gravity,’ in other words, has been my obsession for quite some time, and I think I have a fair amount of wisdom to share, especially when it comes to philosophical squircles, things that seem undeniable, yet nevertheless contradict our natural scientific understanding. Free will is perhaps the most famous of these squircles, but there’s really no end to them. The most pernicious squircle of all, I’m convinced, is the notion of intentionality, be it ‘derived’ or ‘original.’

On Heuristic Neglect Theory, Cartesian gravity boils down to metacognitive reflexes, the application of heuristic systems to questions they have no hope of answering absent any inkling of as much. The root of the difficulty lies in neglect, the way insensitivity to the limits of felicitous application results in various kinds of systematic errors (what might be seen as generalized versions of the WYSIATI—‘what you see is all there is’—effects discovered by Daniel Kahneman).

The centrality of neglect (understood as an insensitivity that escapes our sensitivity) underwrites my reference to the ‘Grand Inversion’ in the previous installment. As an ecological artifact, human cognition trivially possesses what might be called a neglect structure: we are blind to the vast bulk of the electromagnetic spectrum, for instance, because sensing things like gamma radiation, infrared, or radio waves paid no ancestral dividends. In fact, one can look at the sum of scientific instrumentation as mapping out human ‘insensitivity space,’ providing ingress into all those places our ancestral sensitivities simply could not take us. Neglect, in other words, allows us to quite literally invert our reflexive ways of comprehending comprehension, not only in a wholesale manner, but in a way entirely compatible with what Dennett calls, following Sellars, the scientific image.
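
To put a rough number on that blindness—and the endpoints here are my own back-of-envelope assumptions, nothing more—even measured generously on a logarithmic scale, unaided vision samples only a sliver of the spectrum:

    # Rough illustration only: the visible band as a fraction of the
    # electromagnetic spectrum, measured in decades of wavelength (log scale).
    # The endpoints are assumed round numbers, not precise physical limits.
    import math

    visible = (380e-9, 750e-9)   # approximate visible band, metres
    spectrum = (1e-12, 1e5)      # ~gamma rays out to long radio waves, metres (assumed)

    visible_decades = math.log10(visible[1] / visible[0])
    spectrum_decades = math.log10(spectrum[1] / spectrum[0])

    print(f"{visible_decades:.2f} of {spectrum_decades:.0f} decades "
          f"= about {100 * visible_decades / spectrum_decades:.1f}% of the log-scaled spectrum")
    # -> roughly 0.30 of 17 decades, or about 1.7%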

Simply flipping our orientation in this way allows us to radically recharacterize Dennett’s project in From Bacteria to Bach and Back as a matter of implicitly mapping our human neglect structure by filling in all the naturalistic blanks. I say implicit because his approach remains primarily focused on what is neglected, rather than neglect considered in its own right. Despite this, Dennett is quite cognizant of the fact that he’s discussing a single phenomenon, albeit one he characterizes (thanks to Cartesian gravity!) in positive terms:

Darwin’s “strange inversion of reasoning” and Turing’s equally revolutionary inversion form aspects of a single discovery: competence without comprehension. Comprehension, far from being a god-like talent from which all design must flow, is an emergent effect of systems of uncomprehending competence… (75)

The problem with this approach is one that Dennett knows well: no matter how high you build your tower of natural processes, all you’ve managed to do, in an important sense, is recapitulate the mystery you’ve set out to solve. No matter how long you build your ramp, talk of indefinite thresholds and ‘emergent effects’ very quickly reveals you’re jumping the same old explanatory shark. In a sense, everyone in the know knows at least the moral of the story Dennett tells: competences stack into comprehension on any Darwinian account. The million-dollar question is how ‘all that’ manages to culminate in this…

Personally speaking, I’ve never had an experience quite like the one I had reading this book. Elation, realizing that one of the most celebrated minds in philosophy had (finally!) picked up on the same trail. Urgency, knowing I had to write a commentary, like, now. And then, at a certain point, wonder at the sense of knowing, quite precisely, what it was that was tantalizing his intuitions: the profound connection between his Darwinian commitments and his metaphilosophical hunches regarding Cartesian gravitation.

Heuristic Neglect Theory not only allows us to economize Dennett’s bottom-up saga of stacking competences, it also provides a way to theorize his top-down diagnosis of comprehension. It provides, in other words, the common explanatory framework required to understand this… in terms of ‘all that.’ No jumps. No sharks. Just one continuous natural story folding comprehension into competence (or better, behaviour).

What applies to human cognition applies to human metacognition—understood as the deliberative derivation of endogenous or exogenous behaviour via secondary (functionally distinct) access to one’s own endogenous or exogenous behaviour. As an ecological artifact, human metacognition is fractionate and heuristic, and radically so, given the complexity of the systems it solves. As such, it possesses its own neglect structure. Understanding this allows us to ‘reverse-engineer’ far more than Dennett suspects, insofar as it lets us hypothesize the kinds of blind spots we should expect to plague our attempts to theorize ourselves given the deliverances of philosophical reflection. It provides the theoretical basis, I think, for understanding philosophy as the cognitive psychological phenomenon that it is.

It’s a truism to say that the ability to cognize any system crucially depends on a cognitive system’s position relative to that system. But things get very interesting once we begin picking at the how and why. The rationality of geocentrism, for instance, is generally attributed to the fact that from our terrestrial perspective, the sky does all the moving. We remain, as far as we can tell, motionless. Why is motionlessness the default? Why not assume ignorance? Why not assume that the absence of information warranted ‘orbital agnosticism’? Basically, because we lacked the information to determine our lack of information.

Figure 1: It is a truism to state that where we find ourselves within a system determines our ability to cognize that system. ‘Frame neglect’ refers to our cognitive insensitivity, not only to our position within unknown systems, but to this insensitivity.

Figure 2: Thus, the problem posed by sufficiency, the automatic presumption that what we see is all there is. The ancients saw the stars comprising Orion as equidistant simply because they lacked the information and theory required to understand their actual position—because they had no way of knowing otherwise.

Figure 3: It is also a truism to state that the constitution of our cognitive capacities determines our ability to cognize systems. ‘Medial neglect’ refers to our cognitive insensitivity, not only to the constitution of our cognitive capacities, but to this insensitivity. We see, but absent any sensitivity to the machinery enabling sight.

Figure 4: Thus, once again, the problem posed by sufficiency. Our brain interprets ambiguous cylinders as magical squircles because it possesses no sensitivity to the kinds of heuristic mechanisms involved in processing visual information.

Generally speaking, we find these ‘no information otherwise’ justifications so intuitive that we just move on. We never ask how or why the absence of sensible movement cues reports of motionlessness. Plato need only tell us that his prisoners have been chained before shadows their whole lives and we get it, we understand that for them, shadows are everything. By merely conjuring an image, Plato secures our acknowledgment that we suffer a congenital form of frame neglect, a cognitive insensitivity to the limits of cognition that can strand us with fantastic (and so destructive) worldviews—and without our permission, no less. Despite the risk entailed, we neglect this form of neglect. Though industry and science are becoming ever more sensitive to the problems posed by the ‘unknown unknown,’ it remains the case that each of us at once understands the peril and presumes we’re the exception, the system apart from the systems about us. The motionless one.

Frame neglect, our insensitivity to the superordinate systems encompassing us, blinds us to our position within those systems. As a result, we have no choice but to take those positions for granted. This renders our cognitive orientations implicit, immune to deliberative revision and so persistent (as well as vulnerable to manipulation). Frame neglect, in other words, explains why bent orientations stay bent, why we suffer the cognitive inertia we do. More importantly, it highlights what might be called default sufficiency, the congenital presumption of implicit cognitive adequacy. We were in no position to cognize our position relative to the heavens, and yet we nevertheless assumed that we were simply because we were in no position to cognize the inadequacy of our position.

Why is sufficiency the presumptive default? The stacking of ‘competences’ so brilliantly described by Dennett requires that every process ‘do its part’: sufficiency, you could say, is the default presumption of any biological system, so far as its component systems turn upon the iterative behaviour of other component systems. Dennett broaches the notion, albeit implicitly, via the example of asking someone to report on a nearby house via cell phone:

Seeing is believing, or something like that. We tacitly take the unknown pathways between his open eyes and speaking lips to be secure, just like the requisite activity in the pathways in the cell towers between his phone and ours. We’re not curious on the occasion about how telephones work; we take them for granted. We also don’t scratch our heads in bafflement over how he can just open his eyes and then answer questions with high reliability about what is positioned in front of him in the light, because we can all do it (those of us who are not blind). 348-349

Sufficiency is the default. We inherit our position, our basic cognitive orientation, because it sufficed to solve the kinds of high-frequency and/or high impact problems faced by our ancestors. This explains why unprecedented circumstances generate the kinds of problems they do: it’s always an open question whether our basic cognitive orientation will suffice when confronted with a novel problem.

When it comes to vision, for instance, we possess a wide range of ways to estimate sufficiency and so can adapt our behaviour to a variety of lighting conditions, waving our hand in fog, peering against glares, and so on. Darkness in particular demonstrates how the lack of information requires information, lest it ‘fall off the radar’ in the profound sense entailed by neglect. So even though we possess myriad ways to vet visual information, squircles possess no precedent and so no warning: the sufficiency of the information available is taken for granted, and we suffer the ambiguous cylinder illusion. Our cognitive ecology plays a functional role in the efficacy of our heuristic applications—all of them.

From this a great deal follows. Retasking some system of competences always runs the risk of systematic deception on the one hand, where unprecedented circumstances strand us with false solutions (as with the millennia-long ontological dualism of the terrestrial and the celestial), and dumbfounding on the other, where unprecedented circumstances crash some apparently sufficient application in subsequently detectable ways, such as ambiguous cylinders for human visual systems, or the problem of determinism for undergraduate students.

To the extent that ‘philosophical reflection’ turns on the novel application of preexisting metacognitive resources, it almost certainly runs afoul instances of systematic deception and dumbfounding. Retasked metacognitive channels and resources, we can be assured, would report as sufficient, simply because our capacity to intuit insufficiency would be the product of ancestral, which is to say, practical, applications. How could information and capacity geared to catching our tongue in social situations, assessing what we think we saw, rehearsing how to explain some disaster, and so on hope to leverage theoretical insights into the fundamental nature of cognition and experience? It can’t, no more than auditory cognition, say, could hope to solve the origin of the universe. But even more problematically, it has no hope of intuiting this fundamental inability. Once removed from the vacuum of ecological ignorance, the unreliability of ‘philosophical reflection,’ its capacity to both dumbfound and to systematically deceive, becomes exactly what we should expect.

This follows, I think, on any plausible empirical account of human metacognition. I’ve been asking interlocutors to provide me a more plausible account for years now, but they always manage to lose sight of the question somehow.

On the availability side, we should expect the confusion of task-insufficient information with task-sufficient information. On the capacity side, we should expect the confusion of task-insufficient applications with task-sufficient applications. And this is basically what Dennett’s ‘Cartesian gravity’ amounts to, the reflexive deliberative metacognitive tendency to confuse scraps with banquets and hammers with swiss-army knives.

But the subtleties secondary to these reflexes can be difficult to grasp, at least at first. Sufficiency means that decreases in dimensionality, the absence of kinds and quantities of information, simply cannot be cognized as such. Just over two years ago I suffered a retinal tear, which although successfully repaired, left me with a fair amount of debris in my right eye (‘floaters,’ as they call them, which can be quite distracting if you spend as much time staring at white screens as I do). Last autumn I noticed I had developed a ‘crimp’ in my right eye’s field of vision: apparently some debris had become attached to my fovea, a mass that accumulated as I was passed from doctor to doctor and thence to the surgeon. I found myself with my own, entirely private visual illusion: the occluded retinal cells were snipped out of my visual field altogether, mangling everything I tried to focus on with my right eye. The centre of every word I looked at would be pinched into oblivion, leaving only the beginning and ending characters mashed together. Faces became positively demonic—to the point where I began developing a Popeye squint for equanimity’s sake. The world had become a grand bi-stable image: things were fine when my left eye predominated, but then for whatever reason, click, my friends and family would be eyeless heads of hair. Human squircles.

My visual centres simply neglected the missing information, and muddled along assuming the sufficiency of the information that was available. I understood the insufficiency of what I was seeing. I knew the prisoners were there, chained in their particular neural cave with their own particular shadows, but I had no way of passing that information upstream—the best I could do was manage the downstream consequences.

But what happens when we have no way of intuiting information loss? What happens when our capacity to deliberate and report finds itself chained ‘with no information otherwise’? Well, given sufficiency, it stands to reason that what metacognition cannot distinguish we will report as same, that what it cannot vet we will report as accurate, that what it cannot swap we will report as inescapable, and that what it cannot source we will report as sourceless, and so on. The dimensions of information occluded, in other words, depend entirely on what we happen to be reporting. If we ponder the proximate sources of our experiences, they will strike us as sourceless. If we ponder the composition of our experiences, they will strike us as simple. Why? Because human metacognition not only failed to evolve the extraordinary ability to theoretically source or analyze human experience, it failed to evolve the ability to intuit this deficit. And so, we find ourselves stranded with squircles, our own personal paradox (illusion) of ourselves, of what it is fundamentally like to be ‘me.’

Dialectically, it’s important to note how this consequence of the Grand Inversion overturns the traditional explanatory burden when it comes to conscious experience. Since it takes more metacognitive access and capacity, not less, to discern things like disunity and provenance, the question Heuristic Neglect Theory asks of the phenomenologist is, “Yes, but how could you report otherwise?” Why think the intuition of apperceptive unity (just for instance) is anything more than a metacognitive cousin of the flicker-fusion you’re experiencing staring at the screen this very instant?

Given the wildly heuristic nature of our metacognitive capacities, we should expect to possess the capacity to discriminate only what our ancestors needed to discriminate, and precious little else. So, then, how could we intuit anything but apperceptive unity? Left with a choice between affirming a low-dimensional exception to nature on the basis of an empirically implausible metacognitive capacity, and a low-dimensional artifact of the very kind we might expect given an empirically plausible metacognitive account, there really is no contest.

And the list goes on and on. Why think intuitions of ‘self-identity’ possess anything more than the information required to resolve practical, ancestral issues involving identification?

One can think of countless philosophical accounts of the ‘first-person’ as the product of metacognitive ‘neglect origami,’ the way sufficiency precludes intuiting the radical insufficiency of the typically scant dimensions of information available. If geocentrism is the default simply for the way our peripheral position in the solar system precludes intuiting our position as peripheral, then ‘noocentrism’ is the default for the way our peripheral position vis a vis ourselves precludes intuiting our position as peripheral. The same way astrophysical ignorance renders the terrestrial the apparently immovable anchor of celestial motion, metacognitive neglect renders the first-person the apparently transcendent anchor of third-person nature. In this sense, I think, ‘gravity’ is a well-chosen metaphor to express the impact of metacognitive neglect upon the philosophical imagination: metacognitive neglect, like gravity, isn’t so much a discrete force as a structural feature, something internal to the architecture of philosophical reflection. Given it, humanity was all but doomed to wallow in self-congratulatory cartoons once literacy enabled regimented inquiry into its own nature. If we’re not the centres of the universe, then surely we’re the centre of our knowledge, our projects, our communities—ourselves.

Figure 5: The retasking of deliberative metacognition is not unlike discovering something practical—such as ‘self’ (or in this case, Brian’s sandal)—in apparently exceptional, because informationally impoverished, circumstances.

Figure 6: We attempt to interpret this practical deliverance in light of these exceptional circumstances.

Figure 7: Given neglect, we presume the practical deliverance theoretically sufficient, and so ascribe it singular significance.

Figure 8: We transform ‘self’ into a fetish, something both self-sustaining and exceptional. A squircle.

Of all the metacognitive misapplications confounding traditional interpretations of cognition and experience, Dennett homes in on the one responsible for perhaps the most theoretical mischief in the form of Hume’s ‘strange inversion of reasoning’ (354-358), where the problem, as we saw in the previous post, lies in mistaking the ‘intentional object’ of the red stripe illusion for the cause of the illusion. Hume, recall, notes our curious propensity to confuse mental determinations for environmental determinations, to impute something belonging to this… to ‘all that.’ Dennett notes that the problem lies in the application: normally, this ‘confusion’ works remarkably well; it’s only in abnormal circumstances, like those belonging to the red stripe illusion, where this otherwise efficacious cognitive reflex leads us astray.

The first thing to note about this cognitive reflex is the obvious way it allows us to neglect the actual machinery of our environmental relations. Hume’s inversion, in other words, calls attention to the radically heuristic nature of so-called intentional thinking. Given the general sufficiency of all the processes mediating our environmental relationships, we need not cognize them to cognize those relationships; we can take them for granted, which is a good thing, because their complexity (the complexity cognitive science is just now surmounting) necessitates they remain opaque. ‘Opaque,’ in this instance, means heuristically neglected, the fact that all the mad dimensionalities belonging to our actual cognitive relationships appear nowhere in cognition, not even as something missing. What does appear? Well, as Dennett himself would say, only what’s needed to resolve practical ancestral problems.

Reporting environments economically entails taking as much for granted as possible. So long as the machinery you and I use to supervise and revise our environmental orientations is similar enough, we can ignore each other’s actual relationships in communication, focusing instead on discrepancies and how to optimize them. This is why we narrate only those things most prone to vary—environmentally and neurally sourced information prone to facilitate reproduction—and remain utterly oblivious to all the things that go without saying, the deep information environment plumbed by cognitive science. The commonality of our communicative and cognitive apparatuses, not to mention their astronomical complexity, assures that we will suffer what might be called medial neglect, congenital blindness to the high-dimensional systems enabling communication and cognition. “All the subpersonal, neural-level activity is where the actual causal interactions happen that provide your cognitive powers, but all “you” have access to is the results” (348).

From Bacteria to Bach and Back is filled with implicit references to medial neglect. “Our access to our own thinking, and especially to the causation and dynamics of its subpersonal parts, is really no better than our access to our digestive processes,” Dennett writes; “we have to rely on the rather narrow and heavily edited channel that responds to our incessant curiosity with user-friendly deliverances, only one step closer to the real me than the access to the real me that is enjoyed by my family and friends” (346).

Given sufficiency, “[t]he relative accessibility and familiarity of the outer part of the process of telling people what we can see—we know our eyes have to be open, and focused, and we have to attend, and there has to be light—conceals from us the other blank from the perspective of introspection or simple self-examination of the rest of the process” (349). The ‘outer part of the process,’ in other words, is all that we need.

Medial neglect may be both necessary and economical, but it remains an incredibly risky bet to make given the perversity of circumstance and the radical interdependency characterizing human communities. The most frequent and important discrepancies will be environmental discrepancies, those which, given otherwise convergent orientations (the same physiology, location, and training), can be communicated absent medial information, difference-making differences geared to the enabling axis of communication and cognition. Such discrepancies can be resolved while remaining almost entirely ‘performance blind.’ All I need do is ‘trust’ your communication and cognition, build upon it the same blind way I build upon my own. You cry, ‘Wolf!’ and I run for the shotgun: our orientations converge.

But as my example implies, things are not always so simple. Say you and I report seeing two different birds, a vulture versus an albatross, in circumstances where such a determination potentially matters—looking for a lost hunting party, say. An endless number of medial confounds could possibly explain our sudden disagreement. Perhaps I have bad eyesight, or I think albatrosses are black, or I’m blinded by the glare of the sun, or I’m suffering schizophrenia, or I’m drunk, or I’m just sick and tired of you being right all the time, or I’m teasing you out of boredom, or more insidiously, I’m responsible for the loss of the hunting party, and want to prevent you from finding the scene of my crime.

There’s no question that, despite medial neglect, certain forms of access and capacity regarding the enabling dimension of cognition and communication could provide much in the way of problem resolution. Given the stupendous complexity of the systems involved, however, it follows that any capacity to accommodate medial factors will be heuristic in the extreme. This means that our cognitive capacity to flag/troubleshoot issues of sufficiency will be retail, fractionate, geared to different kinds of high-impact, high-frequency problems. And the simplest solution, the highest priority reflex, will be to ignore the medial altogether. If our search party includes a third soul who also reports seeing a vulture, for instance, I’ll just be ‘wrong’ for ‘reasons’ that may or may not be determined afterward.

The fact of medial neglect, in other words, underwrites what might be called an environmentalization heuristic, the reflexive tendency to ‘blame’ the environment first.

When you attempt to tell us about what is happening in your experience, you ineluctably slide into a metaphorical idiom simply because you have no deeper, truer, more accurate knowledge of what was going on inside you. You cushion your ignorance with a false—but deeply tempting—model: you simply reproduce, with some hand waving and apologies, your everyday model of how you know about what is going on outside you. 348

Because that’s typically all that you need. Dennett’s hierarchical mountain of competences is welded together by default sufficiency, the blind mechanical reliance of one system upon other systems. Communicative competences not only exploit this mechanical reliance, they extend it, opening entirely novel ecosystems leveraging convergent orientation, brute environmental parallels and physiological isomorphisms, to resolve discrepancies. So long as those discrepancies are resolved, medial factors potentially impinging on sufficiency can be entirely ignored, and so will be ignored. Communications will be ‘right’ or ‘wrong,’ ‘true’ or ‘false.’ We remain as blind to the sources of our cognitive capacities as circumstances allow us to be. And we remain blind to this blindness as well.

When I say from the peak of my particular competence mountain, “Albatross…” and you turn to me in perplexity, and say from the peak of your competence mountain, “What the hell are you talking about?” your instance of ‘about-talk’ is geared to the resolution of a discrepancy between our otherwise implicitly convergent systems. This is what it’s doing. The idea that it reveals an exceptional kind of relationship, ‘aboutness,’ spanning the void between ‘albatross’ here and albatrosses out there is a metacognitive artifact, a kind of squircle. For one, the apparent void is jam-packed with enabling competences—vast networks welded together by sufficiency. Medial neglect merely dupes metacognition into presuming otherwise, into thinking the apparently miraculous covariance (the product of vast histories of natural and neural selection) between ‘sign’ (here) and ‘signified’ (out there) is indeed some kind of miracle.

Philosophers dwell among general descriptions and explanations: this is why they have difficulty appreciating that naïveté generally consists in having no ‘image,’ no ‘view,’ regarding this or that domain. They habitually overlook the oxymoronic implication of attaching any ‘ism’ to the term ‘naïve.’ Instances of ‘about-talk’ do not implicitly presume ‘intentionality’ even in some naïve, mistaken sense. We are not born ‘naïve intentionalists’ (any more than we’re ‘naïve realists’). We just use meaning talk to solve what problems we can where we can. Granted, our shared metacognitive shortcomings lead us, given different canons of interrogation, into asserting this or that interpretation of ‘intentionality,’ popular or scholastic. We’re all prone to see squircles when prompted to peer into our souls.

So, when someone asks, “Where does causality lie?” we just point to where we can see it, out there on the billiard table. After all, where the hell else would it be (given medial neglect)? This is why dogmatism comes first in the order of philosophical complication, why Kant comes after Descartes. It takes time and no little ingenuity to frame plausible alternatives to this ‘elsewhere.’ And this is the significance of Hume’s inversion to Cartesian gravity: the reflexive sufficiency of whatever happens to be available, a sufficiency that may or may not obtain given the kinds of problem posed. The issue has nothing to do with confusing normal versus abnormal attributions of causal efficacy to intentional objects, because, for one, there’s just no such thing as ‘intentional objects,’ and for another, ‘intentional object-talk’ generates far more problems than it solves.

Of course, it doesn’t seem that way to Dennett whilst attempting to solve for Cartesian gravity, but only because, short of theoretical thematizations of neglect and sufficiency, he lacks any real purchase on the problem of explaining the tendency to insist (as Tom Clark does) on the reality of the illusion. As a result, he finds himself in the strange position of embracing the sufficiency of intentionality in certain circumstances to counter the reflexive tendency to assume the sufficiency of phenomenality in other circumstances—of using one squircle, in effect, to overcome another. And this is what renders him eminently vulnerable to readings like Clark’s, which turns on Dennett’s avowal of intentional squircles to leverage, on pain of inconsistency, his commitment to phenomenal squircles. This problem vanishes once we recognize ourselves for the ambiguous cylinders we have always been. Showing as much, however, will require one final installment.

Reading From Bacteria to Bach and Back I: On Cartesian Gravity

by rsbakker

ABDUCTION AND DIAGNOSIS

Problem resolution generally possesses a diagnostic component; sometimes we can find workarounds, but often we need to know what the problem consists in before we can have any real hope of advancing beyond it. This is what Daniel Dennett proposes to do in his recent From Bacteria to Bach and Back, to not only sketch a story of how human comprehension arose from the mindless mire of biological competences, but to provide a diagnostic account of why we find such developmental stories so difficult to credit. He hews to the slogan I’ve oft repeated here on Three Pound Brain: We are natural in such a way that we find it impossible to intuit ourselves as natural. It’s his account of this ‘in such a way’ that I want to consider here. As I’ve said many times before, I think Dennett has come as close as any philosopher in history to unravelling the conjoined problems of cognition and consciousness—and I am obliged to his acumen and creativity in more ways than I could possibly enumerate—but I’m convinced he remains entangled, both theoretically and dialectically, by several vestigial commitments to intentionalism. He remains a prisoner of ‘Cartesian gravity.’ Nowhere is this clearer than in his latest book, where he sets out to show how blind competences, by hook, crook, and sheer, mountainous aggregation, can actually explain comprehension, which is to say, understanding as it appears to the intentional stance.

Dennett offers two rationales for braving the question of comprehension, the first turning on the breathtaking advances made in the sciences of life and cognition, the second anchored in his “better sense of the undercurrents of resistance that shackle our imaginations” (16). He writes:

I’ve gradually come to be able to see that there are powerful forces at work, distorting imagination—my own imagination included—pulling us first one way and then another. If you learn to see these forces too, you will find that suddenly things begin falling into place in a new way. 16-17

The original force, the one begetting subsequent distortions, he calls Cartesian gravity. He likens the scientific attempt to explain cognition and consciousness to a planetary invasion, with the traditional defenders standing on the ground with their native, first-person orientation, and the empirical invaders finding their third-person orientation continually inverted the closer they draw to the surface. Cartesian gravity, most basically, refers to the tendency to fall into first-person modes of thinking cognition and consciousness. This is a problem because of the various, deep incompatibilities between the first-person and third-person views. Like a bi-stable image (Dennett provides the famous Duck-Rabbit as an example), one can only see the one at the expense of seeing the other.

Cartesian gravity, in other words, refers to the intuitions underwriting the first-person side of the famed Explanatory Gap, but Dennett warns against viewing it in these terms because of the tendency in the literature to view the divide as an ontological entity (a ‘chasm’) instead of an epistemological artifact (a ‘glitch’). He writes:

[Philosophers] may have discovered the “gap,” but they don’t see it for what it actually is because they haven’t asked “how it got that way.” By reconceiving of the gap as a dynamic imagination-distorter that has arisen for good reasons, we can learn to traverse it safely or—what may amount to the same thing—make it vanish. 20-21

It’s important, I think, to dwell on the significance of what he’s saying here. First of all, taking the gap as a given, as a fundamental feature of some kind, amounts to an explanatory dereliction. As I like to put it, the fact that we, as a species, can explain the origins of nature down to the first second and yet remain utterly mystified by the nature of this explanation is itself a gobsmacking fact requiring explanation. Any explanation of human cognition that fails to explain why humans find themselves so difficult to explain is woefully incomplete. Dennett recognizes this, though I sometimes think he fails to recognize the dialectical potential of this recognition. There are few better ways to isolate the sound of stomping feet from the speculative cacophony, I’ve found, than by relentlessly posing this question.

Secondly, the argumentative advantage of stressing our cognitive straits turns directly on its theoretical importance: to naturalistically diagnose the gap is to understand the problem it poses. To understand the problem it poses is to potentially resolve that problem, to find some way to overcome the explanatory gap. And overcoming the gap, of course, amounts to explaining the first-person in third-person terms—to seize upon what has become the Holy Grail of philosophical and scientific speculation.

The point being that the whole cognition/consciousness debate stands balanced upon some diagnosis of why we find ourselves so difficult to fathom. As the centerpiece of his diagnosis, Cartesian gravity is absolutely integral to Dennett’s own position, and yet surveying the reviews From Bacteria to Bach and Back has received (as of 9/12/2017, at least), you find the notion is mentioned either in passing (as in Thomas Nagel’s piece in The New York Review of Books), dismissively (as in Peter Hankins’s review in Conscious Entities), or not at all.

Of course, it would probably help if anyone had any clue as to what ‘first-person’ or ‘third-person’ actually meant. A gap between gaps often feels like no gap at all.

ACCUMULATING MASS

“The idea of Cartesian gravity, as so far presented, is just a metaphor,” Dennett admits, “but the phenomenon I am calling by this metaphorical name is perfectly real, a disruptive force that bedevils (and sometimes aids) our imaginations, and unlike the gravity of physics, it is itself an evolved phenomenon. In order to understand it, we need to ask how and why it arose on the planet earth” (21). Part of the reason so many reviewers seem to have overlooked its significance, I think, turns on the sheer length of the story he proceeds to tell. Compositionally speaking, it’s rarely a good idea to go three hundred pages—wonderfully inventive, controversial pages, no less—without substantially revisiting your global explanandum. By the time Dennett tells us “[w]e are ready to confront Cartesian gravity head on” (335) it feels like little more than a rhetorical device—and understandably so.

The irony, of course, is that Dennett thinks that nothing less than Cartesian gravity has forced the circuitous nature of his route upon him. If he fails to regularly reference his metaphor, he continually adverts to its signature consequence: cognitive inversion, the way the sciences have taken our traditional, intuitive, ab initio, top-down presumptions regarding life and intelligence and turned them on their head. Where Darwin showed how blind, bottom-up processes can generate what appear to be amazing instances of design, Turing showed how blind, bottom-up processes can generate what appear to be astounding examples of intelligence, “natural selection on the one hand, and mindless computation on the other” (75). Despite some polemical and explanatory meandering (most all of it rewarding), he never fails to keep his dialectical target, Cartesian exceptionalism, firmly (if implicitly) in view.

A great number of the biological examples Dennett adduces in From Bacteria to Bach and Back will be familiar to those following Three Pound Brain. This is no coincidence, given that Dennett is both an info-junkie like myself and constantly on the lookout for examples of the same kinds of cognitive phenomena: in particular, those making plain the universally fractionate, heuristic nature of cognition, and those enabling organisms to neglect, and therefore build upon, pre-existing problem-solving systems. As he writes:

Here’s what we have figured out about the predicament of the organism: It is floating in an ocean of differences, a scant few of which might make a difference to it. Having been born to a long lineage of successful copers, it comes pre-equipped with gear and biases for filtering out and refining the most valuable differences, separating the semantic information from the noise. In other words, it is prepared to cope in some regards; it has built-in expectations that have served its ancestors well but may need revision at any time. To say that it has these expectations is to say that it comes equipped with partially predesigned appropriate responses all ready to fire. It doesn’t have to waste precious time figuring out from first principles what to do about an A or a B or a C. These are familiar, already solved problems of relating input to output, perception to action. These responses to incoming stimulation of its sensory systems may be external behaviors: a nipple affords sucking, limbs afford moving, a painful collision affords retreating. Or they may be entirely covert, internal responses, shaping up the neural armies into more effective teams for future tasks. 166

Natural environments consist of regularities, component physical processes systematically interrelated in ways that facilitate, transform, and extinguish other component physical processes. Although Dennett opts for the (I think) unfortunate terminology of ‘affordances’ and ‘Umwelts,’ what he’s really talking about are ecologies, the circuits of selective sensitivity and corresponding environmental frequency allowing for niches to be carved, eddies of life to congeal in the thermodynamic tide. With generational turnover, risk sculpts ever more morphological and behavioural complexity, and the life once encrusting rocks begins rolling them, then shaping and wielding them.

Now for Dennett, the crucial point is to see the facts of human comprehension in continuity with the histories that make it possible, all the while understanding why the appearance of human comprehension systematically neglects these self-same conditions. Since his accounts of language and cultural evolution (via memes) warrant entire posts in their own right, I’ll elide them here, pointing out that each follows this same incremental, explanatory pattern of natural processes enabling the development of further natural processes, tangled hierarchies piling toward something recognizable as human cognition. For Dennett, the coincidental appearance of La Sagrada Familia (arguably a paradigmatic example of top-down thinking given Gaudi’s reputed micro-managerial mania) and Australian termite castles expresses a profound continuity as well, one which, when grasped, allows for the demystification of comprehension, and inoculation against the pernicious effects of Cartesian gravity. The leap between the two processes, what seems to render the former miraculous in a way the latter does not, lies in the sheer plasticity of the processes responsible, the way the neurolinguistic mediation of effect feedback triggers the adaptive explosion we call ‘culture.’ Dennett writes:

Our ability to do this kind of thinking [abstract reasoning/planning] is not accomplished by any dedicated brain structure not found in other animals. There is no “explainer nucleus” for instance. Our thinking is enabled by the installation of a virtual machine made of virtual machines made of virtual machines. The goal of delineating and explaining this stack of competences via bottom-up neuroscience alone (without the help of cognitive neuroscience) is as remote as the goal of delineating and explaining the collection of apps on your smart phone by a bottom-up deciphering of its hardware circuit design and the bit-strings in memory without taking a peek at the user interface. The user interface of an app exists in order to make the competence accessible to users—people—who can’t know, and don’t need to know, the intricate details of how it works. The user-illusions of all the apps stored in our brains exist for the same reason: they make our competences (somewhat) accessible to users—other people—who can’t know, and don’t need to know, the intricate details. And then we get to use them ourselves, under roughly the same conditions, as guests in our own brain. 341

This is the Dennettian portrait of the first-person, or consciousness as it’s traditionally conceived: a radically heuristic point of contact and calibration between endogenous and exogenous systems, one resting on occluded stacks of individual, collective, and evolutionary competence. The overlap between what can be experienced and what can be reported is no cosmic coincidence: the two are (likely) coeval, part of a system dedicated to keeping both ourselves and our compatriots as well informed/misinformed—and as well armed with the latest competences available—as possible.

We can give this strange idea an almost paradoxical spin: it is like something to be you because you have been enabled to tell us—or refrain from telling us—what it’s like to be you!

When we evolved into an us, a communicating community of organisms that can compare notes, we became the beneficiaries of a system of user-illusions that rendered versions of our cognitive processes—otherwise as imperceptible as our metabolic processes—accessible to us for purposes of communication. 344

Far from the phenomenological plenum the (Western) tradition has taken it to be, then, consciousness is a presidential brief prepared by unscrupulous lobbyists, a radically synoptic aid to specific, self-serving forms of individual and collective action.

our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery turning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all. That’s what it is like to be us. 345

Thus, the astounding problem posed by Cartesian gravity. As a socio-communicative interface possessing no access whatsoever to our actual sources, we can only be duped by our immediate intuitions. Referring to John Searle’s Cartesian injunction to insist upon a first-person solution of meaning and consciousness, Dennett writes:

The price you pay for following Searle’s advice is that you get all your phenomena, the events and things that have to be explained by your theory, through a channel designed not for scientific investigation but for handy, quick-and-dirty use in the rough and tumble of time-pressured life. You can learn a lot about how the brain does it—you can learn quite a lot about computers by always insisting on the desk-top point of view, after all—but only if you remind yourself that your channel is systematically oversimplified and metaphorical, not literal. That means you must resist the alluring temptation to postulate a panoply of special subjective properties (typically called qualia) to which you (alone) have access. Those are fine items for our manifest image, but they must be “bracketed,” as the phenomenologists say, when we turn to scientific explanation. Failure to appreciate this leads to an inflated list of things that need to be explained, featuring, preeminently, a Hard Problem that is nothing but an artifact of the failure to recognize that evolution has given us a gift that sacrifices literal truth for utility. 365-366

Sound familiar? Human metacognitive access and capacity is radically heuristic, geared to the solution of practical ancestral problems. As such, we should expect that tasking that access and capacity, ‘relying on the first-person,’ with solving theoretical questions regarding the nature of experience and cognition will prove fruitless.

It’s worth pausing here, I think, to emphasize just how much this particular argumentative tack represents a departure from Dennett’s prior attempts to clear intuitive ground for his views. Nothing he says here is unprecedented: heuristic neglect has always lurked in the background of his view, always found light of day in this or that corner of this or that argument. But at no point—not in Consciousness Explained, nor even in “Quining Qualia”—has it occupied the dialectical pride of place he concedes it in From Bacteria to Bach and Back. Prior to this book, Dennett’s primary strategy has been to exploit the kinds of ‘crashes’ brought about by heuristic misapplication (though he never explicitly characterizes them as such). Here, with Cartesian gravity, he takes a gigantic step toward theorizing the neurocognitive bases of the problematic ‘intuition pumps’ he has targeted over the years. This allows him to generalize his arguments against first-person theorizations of experience in a manner that had hitherto escaped him.

But he still hasn’t quite found his way entirely clear. As I hope to show, heuristic neglect is far more than simply another tool Dennett can safely store with his pre-existing commitments. The best way to see this, I think, is to consider one particular misreading of the new argument against qualia in Chapter 14.

GRAVITY MEETS REALITY

In “Dennett and the Reality of Red,” Tom Clark presents a concise and elegant account of how Dennett’s argument against the reality of qualia in From Bacteria to Bach and Back turns upon a misplaced physicalist bias. The extraordinary thing about his argument—and the whole reason we’re considering it here—lies in the way he concedes so much of Dennett’s case, only to arrive at a version of the very conclusion Dennett takes himself to be arguing against:

I’d suggest that qualia, properly understood, are simply the discriminable contents of sensory experience – all the tastes, colors, sounds, textures, and smells in terms of which reality appears to us as conscious creatures. They are not, as Dan correctly says, located or rendered in any detectable mental medium. They’re not located anywhere, and we are not in an observational or epistemic relationship to them; rather they are the basic, not further decomposable, hence ineffable elements of the experiences we consist of as conscious subjects.

The fact that ‘Cartesian gravity’ appears nowhere in his critique, however, pretty clearly signals that something has gone amiss. Showing as much, however, requires I provide some missing context.

After introducing his user-illusion metaphor for consciousness, Dennett is quick to identify the fundamental dialectical problem Cartesian gravity poses his characterization:

if (as I have just said) your individual consciousness is rather like the user-illusion on your computer screen, doesn’t this imply that there is a Cartesian theatre after all, where this portrayal happens, where the show goes on, rather like the show you perceive on the desktop? No, but explaining what to put in place of the Cartesian theatre will take some stretching of the imagination. 347

This is the point where he introduces a third ‘strange inversion of reasoning,’ this one belonging to Hume. Hume’s inversion, curiously enough, lies in his phenomenological observation of the way we experience causation ‘out there,’ in the world, even though we know, given our propensity to get it wrong, that it belongs to the machinery of cognition. (This is a canny move on Dennett’s part, but I think it demonstrates the way in which the cognitive consequences of heuristic neglect remain, as yet, implicit for him). What he wants is to ‘theatre-proof’ his account of conscious experience as a user-illusion. Hume’s inversion provides him a way to both thematize and problematize the automatic assumption that the illusion must itself be ‘real.’

The new argument for qualia eliminativism he offers, and that Clark critiques, is meant to “clarify [his] point, if not succeed in persuading everybody—as Hume says, the contrary notion is so riveted in our minds” (358). He gives the example of the red afterimage experienced in complementary colour illusions.

The phenomenon in you that is responsible for this is not a red stripe. It is a representation of a red stripe in some neural system of representation that we haven’t yet precisely located and don’t yet know how to decode, but we can be quite sure it is neither red nor a stripe. You don’t know exactly what causes you to seem to see a red stripe out in the world, so you are tempted to lapse into Humean misattribution: you misinterpret your sense (judgment, conviction, belief, inclination) that you are seeing a red stripe as arising from a subjective property (a quale, in the jargon of philosophy) that is the source of your judgment, when in fact, that is just about backward. It is your ability to describe “the red stripe,” your judgment, your willingness to make the assertions you just made, and your emotional reactions (if any) to “the red stripe” that is the source of your conviction that there is a subjective red stripe. 358-359

The problem, Dennett goes on to assert, lies in “mistaking the intentional object of a belief for its cause” (359). In normal circumstances, when we find ourselves in the presence of an apple, say, we’re entirely justified in declaring the apple the cause of our belief. In abnormal circumstances, however, this reflex dupes us into thinking that something extra-environmental—‘ineffable,’ supernatural—has to be the cause. And thus are inscrutable (and therefore perpetually underdetermined) theoretical posits like qualia born, giving rise to scholastic excesses beyond numbering.

Now the key to this argument lies in the distinction between normal and abnormal circumstances, which is to say the cognitive ecology occasioning the application of a certain heuristic regime—namely the one identified by Hume. For Clark, however, the salient point of Dennett’s argument is that the illusory red stripe lies nowhere.

Dan, a good, sophisticated physicalist, wants everything real to be locatable in the physical external world as vetted by science. What’s really real is what’s in the scientific image, right? But if you believe that we really have experiences, that experiences are specified in terms of content, and that color is among those contents, then the color of the experienced afterimage is as real as experiences. But it isn’t locatable, nor are any of the contents of experience: experiences are not observables. We don’t find them out there in spacetime or when poking around in the brain; we only find objects of various qualitative, quantitative and conceptual descriptions, including the brains with which experiences are associated. But since experiences and their contents are real, this means that not all of what’s real is locatable in the physical, external world.

Dennett never denies that we have experiences, and he even alludes to the representational basis of those experiences in the course of making his red stripe argument. A short time later, in his consideration of Cartesian gravity, he even admits that our ability to report our experiences turns on their content: “By taking for granted the content of your mental states, by picking them out by their content, you sweep under the rug all the problems of indeterminacy or vagueness of content” (367).

And yet, even though Clark is eager to seize on these and other instances of experience-talk, representation-talk, and content-talk, he completely elides the circumstances occasioning them, and thus the way Dennett sees all of these usages as profoundly circumstantial—‘normal’ or ‘abnormal.’ Sometimes they’re applicable, and sometimes they’re not. In a sense, the reality/unreality of qualia is actually beside the point; what’s truly at issue is the applicability of the heuristic tools philosophy has traditionally applied to experience. The question is, What does qualia-talk add to our ability to naturalistically explain colour, affect, sound, and so on? No one doubts our ability to correlate reportable metacognitive aspects of experience to various neural and environmental facts. No one doubts our sensory discriminatory abilities outrun our metacognitive discriminatory abilities—our ability to report. The empirical relationships are there regardless: the question is one of whether the theoretical paradigms we reflexively foist on these relationships lead anywhere other than endless disputation.

Clark not only breezes past the point of Dennett’s Red Stripe argument, he also overlooks the rather stark challenge it poses to his own position. Simply raising the spectre of heuristic metacognitive inadequacy, as Dennett does, obliges Clark to warrant his assumptive metacognitive claims. (Arguing, as Clark does, that we have no epistemic relation to our experiences simply defers the obligation to this second extraordinary claim: heaping speculation atop speculation generates more problems, not less). Dennett spends hundreds of pages amassing empirical evidence for the fractionate, heuristic nature of cognition. Given that our ancestors required only the solution of practical problems, the chances that human metacognition furnishes the information and capacity required to intuit the nature of experience (that it consists of representations consisting of contents consisting of qualia) are vanishingly small. What we should expect is that our metacognitive reflexes will do what they’ve always done: apply routines adapted to practical cognitive and communicative problem resolution to what amounts to a radically unprecedented problem ecology. All things being equal, it’s almost certain that the so-called first-person can do little more than flounder before the theoretical question of itself.

The history of intentional philosophy and psychology, if nothing else, vividly illustrates as much.

In the case of content, it’s hard not to see Clark’s oversight as tendentious insofar as Dennett is referring to the way content talk exposes us to Cartesian gravity (“Reading your own mind is too easy” (367)) and the relative virtues of theorizing cognition via nonhuman species. But otherwise, I’m inclined to think Clark’s reading of Dennett is understandable. Clark misses the point of heuristic neglect entirely, but only because Dennett himself remains fuzzy on just how his newfound appreciation for the Grand Inversion—the one we’ve been exploring here on Three Pound Brain for years now—bears on his preexisting theoretical commitments. In particular, he has yet to see the hash it makes of his ‘stances’ and the ‘real patterns’ underwriting them. As soon as Dennett embraced heuristic neglect, opportunistic eliminativism ceased being an option. As goes the ‘reality’ of qualia, so goes the ‘reality’ supposedly underwriting the entire lexicon of traditional intentionalist philosophy. Showing as much, however, requires showing how Heuristic Neglect Theory arises out of the implications of Dennett’s own argument, and how it transforms Cartesian gravity into a proto-cognitive psychological explanation of intentional philosophy—an empirically tractable explanation for why humanity finds humanity so dumbfounding. But since I’m sure eyes are crossing and chins are nodding, I’ll save the way HNT can be directly drawn from the implicature of Dennett’s position for a second installment, then show how HNT both denies representation ‘reality’ and explains what makes representation talk so useful in my third and final post on what has been one of the most exciting reading adventures in my life.

Bleak Theory (By Paul J. Ennis)

by rsbakker

In the beginning there was nothing and it has been getting steadily worse ever since. You might know this, and yet repress it. Why? Because you have a mind that is capable of generating useful illusions, that’s why. How is this possible? Because you are endowed with a brain that creates a self-model which has the capacity to hide things from ‘you.’ This works better for some than for others. Some of us are brain-sick and, for whatever perverse reasons, we chip away at our delusions. In such cases recourse is possible to philosophy, which offers consolation (or so I am told), or to mysticism, which intentionally offers nothing, or to aesthetics, which is a kind of self-externalizing that lets the mind’s eye drift elsewhere. All in all, however, the armor on offer is thin. Such are the options: to mirror (philosophy), to blacken (mysticism), or to embrace contingency (aesthetics). Let’s select the latter for now. By embracing contingency I mean that aesthetics consists of deciding upon and pursuing something quite specific for intuitive rather than rational reasons. This is to try to come to know contingency in your very bones.

As a mirrorer by trade I have to abandon some beliefs to allow myself to proceed this way. My belief that truth comes first and everything else later will be bracketed. I replace this with a less demanding constraint: truth comes when you know why you believe what you believe. Oftentimes I quite simply believe things because they are austere and minimal and I have a soft spot for that kind of thing. When I allow myself to think in line with these bleak tones an unusual desire is generated: to outbleak black, to be bleaker than black. This desire comes from I know not where. It seemingly has no reason. It is an aesthetic impulse. That’s why I ask that you take from what follows what you will. It brings me no peace either way.

I cannot hope to satisfy anyone with a definition of aesthetic experience, but let me wager that those moments that let me identify with the world a-subjectively – but not objectively – are commonly associated in my mind with bleakness. My brain chemistry, my environment, and similar contingent influences have rendered me this way. So be it. Bleakness manifests most often when I am faced with what is most distinctly impersonal: with cloudscapes and dimmed, wet treescapes. Or better yet, any time I witness a stark material disfiguration of the real by our species. And flowering from this is a bleak outlook correlated with the immense, consistent, and mostly hidden, suffering that is our history – our being. The intensity arising from the global reach of suffering becomes impressive when dislocated from the personal and the particular because then you realize that it belongs to us. Whatever the instigator the result is the same: I am alerted not just to the depths of unknowing that I embody, to the fact that I will never know most of life, but also to the industrial-scale sorrow consistently operative in being. All that is, is a misstep away from ruin. Consciousness is the holocaust of happiness.

Not that I expect anything more. Whatever we may say of our cultural evolution there was nothing inscribed in reality suggesting our world should be a fit for us. I am, on this basis, not surprised by our bleak surroundings. The brain, model-creator that it is, does quite a job at systematizing the outside into a representation that allows you to function; assuming, that is, that you have been gifted with a working model. Some have not. Perhaps the real horror is to try to imagine what has been left out (even the most ardent realist surely knows you do not look at the world directly as it is). Thankfully there is no real reason for us to register most of the information out there and we were not designed to know most of it anyway. This is the minimal blessing our evolution has gifted us with. The maximal damage is that, from the exaptation we call consciousness, cultural evolution flowers and puts our self-model at the mercy of a bombardment of social complexity – our factical situation. It is impossible to know how our information age is toying with our brain; suffice to say that the spike in depression, anxiety and self-loathing is surely some kind of signal. The brain though, like the body, can function even when maltreated. Whether this is truly to the good is difficult to say.

And yet we must be careful to remember that even in so-called eliminative materialism the space of reasons remains. The normative dimension is, as Brandom would put it, irreducible. It does not constitute the entire range of cognition, and is perhaps best deflated in light of empirical evidence, but that is beside the point. To some degree, perhaps minor, we are rational animals with the capacity for relatively free decision-making. My intuition is that ultimately the complexity of our structure means that we will never be free of certain troubles arising from what we are. Being embodied is to be torn between immense capacity and the constant threat of losing capacities. A stroke, striking as if from nowhere, can fundamentally alter anyone. This is not to suggest that progress does not occur. It can and it does, but it can also be, and often is, undone. It’s an unfortunate state of affairs, bleak even, but being attuned to the bleakness of reality does not result in passivity by necessity.

Today there are projects that explicitly register all this, and nonetheless intend to operate in line with the potentiality contained within the capacities of reason. What differentiates these projects, oftentimes rationalist in nature, is that they do not follow our various universalist legacies in simply conceiving of the general human as deserving of dignity simply because we all belong to the same class of suffering beings. This is not sufficient to make humans act well. The phenomenon of suffering is easily recognizable and most humans are acutely aware of it, and yet they continue to act in ways contrary to how we ‘ought’ to respond. In fact, it is clear that knowing the sheer scale of suffering may lead to hedonism, egoism or repression. Various functional delusions can be generated by our mind, and it is hardly beyond us to rationalize selfishness on the basis of the universal. We are versatile like that. For this reason, I find myself torn between two poles. I maintain a philosophical respect for various neo-rationalist projects under development. And I remain equally under no illusion they will ever be put to much use. And I do not blame people for falling short of these demands. I am so far from them I only really take them seriously on the page. I find myself drawn, for these reasons, to the pessimist attitude, often considered a suspect stance.

One might suggest that we need only a minimal condition to be ethical. An appeal to the reality of pain in sentient and sapient creatures, perhaps. In that decision you might find solace – despite everything (or in spite of everything). It is a choice, however. Our attempts to assert an ethical universalism are bound up with a counter-logic: the bleak truth of contingency on the basis of the impersonal-in-the-personal. It is a logic quietly operative in the philosophical tradition and one I believe has been suppressed. Self-suppressed it flirts too much with a line leading us to the truth of our hallucination. It’s Nietzsche telling you about perspectivism hinging on the impersonal will-to-power and then you maturing, and forgetting. Not knocking his arguments out of the water, mind. Simply preferring not to accept it. Nobody wants to circle back round to the merry lunatic truths that make a mockery of your life. You might find it hard to get out of bed…whereas now I am sure you leap up every morning, smile on your face…The inhuman, impersonal attachment to each human has many names, but let us look at some that are found right at the heart of the post-Kantian tradition: transcendental subject, Dasein, Notion. Don’t believe me? I don’t mind, it makes no difference to me.

Let’s start with the sheer impersonality involved in Heidegger’s sustained fascination with discussing the human without using the word. Dasein is not supposed to be anything or anyone, in particular. Now once you think about it Dasein really does come across as extraordinarily peculiar. It spends a lot of its time being infested by language since this is, Heidegger insists, the place where its connection to being can be expressed. Yet it is also an easily overrun fortress that has been successfully invaded by techno-scientific jargon. When you hook this thesis up with Heidegger’s epochal shifts then the impersonal forces operative in his schema start to look downright ominous. However, we can’t blame Heidegger for what we can blame on Kant. His transcendental field of sense also belongs to one and all. And so, like Dasein, to no one in particular. This aspect of the transcendental field still remains contentious. The transcendental is, at once, housed in a human body but also, in its sense-making functions, to be considered somehow separate from it. It is not quite human, but not exactly inhuman either.

There is, then, some strange aspect, I can think of no other word for it, inhabiting our own flowing world of a coherent ego, or ‘I,’ that allows for the emergence of a pooled intersubjectivity. Kant’s account, of course, had two main aims: to constrain groundless metaphysical speculation and, in turn, to ground the sciences. Yet his readers did not always follow his path. Kant’s decision to make a distinction between the phenomena and the noumena is perhaps the most consequential one in our tradition and is surely one of the greatest examples of opening up what you intended to close down. The nature of the noumenal realm has proven irresistible to philosophers and it has recursive consequences for how we see ourselves. If the noumenal realm names a reality that is phenomenally clouded then it surely precedes, ontologically, the ego-as-center; even if it is superseded by the ego’s modelling function for us. Seen within the wider context of the noumenal realm it is legitimate to ask whether the ‘I’ is merely a densely concentrated, discrete packet amidst a wider flow; a locus amidst the chaos. The ontological generation of egos is then shorn back until all you have is Will (Schopenhauer), Will to Power (Nietzsche), or, in a less generative sense ‘what gives,’ es gibt (Heidegger). This way of thinking belongs, when one takes the long-view, to the slow-motion deconstruction of the Cartesian ego in post-Kantian philosophy, albeit with Husserl cutting a lonely revivalist figure here. Today the ego is trounced everywhere, but there is perhaps no better example than the ‘no-self-at-all’ argument of Metzinger, though even the one-object-amongst-many thesis of object oriented ontology traces a similar line.

The destruction of the Cartesian ego may have its lineage in Kant, but the notion of the impersonal as force, process, or will, owes much to Hegel. In his metaphysics Hegel presents us with a cosmic loop explicable through retroactive justification. At the beginning, the un-articulated Notion, naming what is at the heart-of-the-real, sets off without knowledge of itself, but with the emergence of thinking subjects the Notion is finally able to think itself. In this transition the gap between the un-articulated and articulated Notion is closed, and the entire thing sets off again in directions as yet unknown. Absolute knowing is, after all, not totalized knowing, but a constant, vigilant knowing navigating its way through contingency and recognizing the necessity below it all. But that’s just the thing: despite being important conduits to this process, and having a quite special and specific function, it’s the impersonal process that really counts. In the end Kant’s attempt to close down discussion about the nature of the noumenal realm simply made it one of the most appealing themes for a philosopher to pursue. Censorship helps sales.

Speaking of sales, all kinds of new realism are being hawked on the various para-academic street-corners. All of them benefit from a tint of recognizability rooted, I would suggest, in the fact that ontological realism has always been hidden in plain sight; for any continentalist willing to look. What is different today is how the question of the impersonal attachments affecting the human comes not from inside philosophy, but from a number of external pressures. In what can only be described as a tragic situation for metaphysicians, truth now seeps into the discipline from the outside. We see thinking these days where philosophers promised there was none. The brilliance of continental realism lies in reminding us how this is an immense opportunity for philosophers to wake up from various self-induced slumbers, even if that means stepping outside the protected circle from time to time. It involves bringing this bubbling, left-over question of ontological realism right to the fore. This does not mean ontological realism will come to be accepted and then casually integrated into the tradition. If anything the backlash may eviscerate it, but the attempt will have been made. Or was, and quietly passed.

And the attempt should be made because the impersonality infecting ontological realist excesses such as the transcendental subject (in-itself), the Notion, or Dasein is attuned to what we can now see as the (delayed) flowering of the Copernican revolution. The de-centering is now embedded enough that whatever defense of the human we posit, it must not be dishonest. We cannot hallucinate our way out of our ‘cold world’. If we know that our self-model is itself a hallucination, but a very real one, then what do we do? Is it enough to situate the real in our ontological flesh and blood being-there that is not captured by thinking? Or is it best to remain with thinking as a contingent error that despite its aberrancy nonetheless spews out the truth? These avenues are grounded in consciousness and in our bodies and although both work wonders they can just as easily generate terrors. Truth qualified by these terrors is where one might go. No delusion can outflank these constraints forever. Bled of any delusional disavowal, one tries to think without hope. Hope is undignified anyway. Dignity involves resisting all provocation and remaining sane when you know it’s bleakness all the way down.

Some need hope, no? As I write this I feel the beautiful soul rising from his armchair, but I do not want to hear it. Bleak theory is addressed to your situation: a first worlder inhabiting an accelerated malaise. The ethics to address poverty, inequality, and hardship will be different. Our own heads are disordered and we do not quite know how to respond to the field outside them. You will feel guilty for your myopia, and you deserve it, but you cannot elide it by endlessly pointing to the plank in the other’s eye. You can pray through your tears, and in doing so ironically demonstrate the disturbance left by the death of God, but what does this shore up? It builds upon cathedral ruins: those sites where being is doubled-up and bent-over-backwards trying to look inconspicuous as just another option. Do you want to write religion back into being? Why not, as Ayache suggests, just ruin yourself? I hope it is clear I don’t have any answers: all clarity is a lie these days. I can only offer bleak theory as a way of seeing and perhaps a way of operating. It ‘works’ as follows: begin with confusion and shear away at what you can. Whatever is left is likely the closest thing approximating to what we name truth. It will be strictly negative. Elimination of errors is the best you can hope for.

I don’t know how to end this, so I am just going to end it.