Three Pound Brain

No bells, just whistling in the dark…

Tag: neuroscience

After Yesterday: Review and Commentary of Catherine Malabou’s Before Tomorrow: Epigenesis and Rationality

by rsbakker

Experiments like the Wason Selection Task dramatically demonstrate the fractionate, heuristically specialized nature of human cognition. Subjects routinely flounder when the task is framed as an abstract conditional; dress the same logical problem in social garb and it suddenly becomes effortless. We are legion, both with reference to our environments and to ourselves. The great bulk of human cognition neglects the general nature of things, targeting cues instead: information correlated with subsequent events. We metacognize none of this.
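The asymmetry is easy to sketch in code. The following toy is entirely my own illustration (the card sets, predicates, and function names are invented for the example): both framings of the task share one logical structure, since only P-cards and not-Q-cards can falsify "if P then Q," yet people find the social framing far easier.

```python
def cards_to_check(cards, shows_p, shows_not_q):
    """Return the cards that can falsify the rule 'if P then Q'.

    Only a card showing P (hidden side might be not-Q) or a card showing
    not-Q (hidden side might be P) can falsify the conditional.
    """
    return [c for c in cards if shows_p(c) or shows_not_q(c)]

# Abstract framing: "if a card shows a vowel, its other side shows an even number."
abstract = cards_to_check(
    ["E", "K", "4", "7"],
    shows_p=lambda c: c.lower() in "aeiou",                  # shows a vowel
    shows_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,   # shows an odd number
)

# Social framing: "if someone is drinking beer, they must be over 18."
social = cards_to_check(
    ["beer", "coke", "25", "16"],
    shows_p=lambda c: c == "beer",                           # drinking alcohol
    shows_not_q=lambda c: c.isdigit() and int(c) < 18,       # underage drinker
)

print(abstract)  # ['E', '7']
print(social)    # ['beer', '16']
```

The answers are formally identical, which is exactly why the divergence in human performance suggests specialized social (cheater-detection) machinery rather than general-purpose logic.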

Insofar as Catherine Malabou concedes the facts of neurobiology she concedes these facts.

In Before Tomorrow: Epigenesis and Rationality, she attempts to rescue the transcendental via a conception of ‘transcendental epigenesis.’ The book orbits about section 27 (pp. 173-175 in my beaten Kemp-Smith translation) of the Transcendental Deduction in the second edition of The Critique of Pure Reason, where Kant considers the vexed question of the source of the agreement of the transcendental and the empirical, conceptuality and experience. Kant considers three possibilities: the agreement is empirically sourced, transcendentally sourced, or fundamentally (divinely) given. Since the first and the third contradict the necessity of the transcendental, he opts for the second, which he cryptically describes as “the epigenesis of pure reason” (174), a phrase which has perplexed Kant scholars ever since.

She examines a cluster of different theories on Kant’s meaning, each pressing Kant toward either empirical or theological contingency, and thus the very contradiction he attempts to avoid with his invocation of ‘epigenesis.’ Malabou undertakes a defense of Kantian transcendental epigenesis in the context of contemporary neurobiology, transforming Kant’s dilemma into a diagnosis of the dilemma she sees confronting Continental philosophy as a whole.

Via Foucault, she argues the historicity of the transcendental, with epigenesis understood as the invention of meaning (which she thinks is irreducible). “[N]o biologist,” she writes, “examines the relation between genetics and epigenetics in terms of meaning.” Via Heidegger (“who is no doubt the deepest of all of Kant’s readers”) she argues that the ecstatic temporality of transcendence reveals the derivative nature of empirical and theological appropriations, which both cover over primordial time (time before time). She ultimately parts with Heidegger on the issue of primordiality, but she takes away the phenomenological interpolation of past, present, and future, building toward the argument that epigenesis is never simply archaeological, but aimed as well—teleological.

Meillassoux seems to overthrow the primordial via reference to the ancestral, the time before the time before time, but he ultimately fails to deliver on the project of contingency. For all the initial praise Malabou expresses for his project, he ultimately provides her with a critical foil, an example of how not to reach beyond the Kantian tradition. (I especially enjoyed her Heideggerean critique of his time before the time before time as being, quite obviously I think, the time after the time before time.)

She ultimately alights on the Critique of Judgment, with a particular emphasis on section 81, which contains another notorious reference to epigenesis. The problem, once again, was that reading ‘the epigenesis of pure reason’ empirically—neurobiologically—obliterates the transcendental. Reading it formally, on the other hand, renders it static and inexplicable. What Malabou requires is some way of squaring the transcendental with the cognitive scientific revolution, lest Continental philosophy dwindle into a museum relic. She uses the mingling of causal and teleological efficacy Kant describes in the Third Critique as her ‘contact point’ between the transcendental and the empirical, since it is in the purposiveness of life that contingency and necessity are brought together.

Combining this with ecstatic temporality on the one hand and neurobiological life on the other reveals an epigenesis that bridges the divide between life and thought in the course of explaining the adaptivity of reason without short-circuiting transcendence: “insofar as its movement is also the movement of the reason that thinks it, insofar as there is no rationality without epigenesis, without self-adjustment, without the modification of the old by the new, the natural and objective time of epigenesis may also be considered to be the subjective and pure time of the formation of horizon by and for thought.”

And so is the place of cognitive science made clear: “what neurobiology makes possible today through its increasingly refined description of brain mechanisms and its use of increasingly effective imaging techniques is the actual taking into account, by thought, of its own life.” The epigenetic ratchet now includes the cognitive sciences; philosophical meaning can now be generated on the basis of the biology of life. “What the neurobiological perspective lacks fundamentally,” she writes, “is the theoretical accounting for the new type of reflexivity that it enables and in which all of its philosophical interest lies.” Transcendental epigenesis, Malabou thinks, allows neurobiologically informed philosophy, one attuned to the “adventure of subjectivity,” to inform neurobiology.

She concludes, interestingly, with a defense of her analogical methodology, something I’ve criticized her for previously (and actually asked her about at a public lecture she gave in 2015). I agree that we’re all compelled to resort to cartoons when discussing these matters, but the problem is that, short their abductive power, their ability to render domains scrutable, we have no way of arbitrating whether our analogies render some dynamic tractable or merely express some coincidental formal homology. It is the power of a metaphor to clarify more than it merely matches that is the yardstick of theoretical analogical adequacy.

In some ways, I genuinely loved this book, especially for the way it reads like a metaphysical whodunnit, constantly tying varied interpretations to the same source material, continually interrogating different suspects, dismissing them with a handful of crucial clues in hand. This is the kind of book I once adored: an extended meditation on a decisive philosophical issue anchored by close readings of genuinely perplexing texts.

Unfortunately, I’m pretty sure Malabou’s approach completely misconstrues the nature of the problem the cognitive sciences pose to Continental philosophy. As a result, I fear she obscures the disaster about to befall, not simply her tradition, but arguably the whole of humanity.

When viewed from a merely neurobiological perspective, cognitive systems and environments form cognitive ecologies—their ‘epigenetic’ interdependence comes baked in. Insofar as Malabou agrees with this, she agrees that the real question has nothing to do with ‘correlation,’ the intentional agreement of concept and object, but rather with the question of how experience and cognition as they appear to philosophical reflection can be reconciled with the facts of our cognitive ecologies as scientifically reported. The problem, in other words, is the biology of metacognition. To put it into Kantian terms, the cognitive sciences amount to a metacritique of reason, a multibillion dollar colonization of Kant’s traditional domain. Like so much life, metacognition turns out to be a fractionate, radically heuristic affair, ancestrally geared to practical problem-solving. Not only does this imperil Kant’s account of cognition, it signals the disenchantment of the human soul. The fate of the transcendental is a secondary concern at best, one that illustrates rather than isolates the problem. The sciences have overthrown the traditional discourses of every single domain they have colonized. The burning question is why should the Continental philosophical discourse on the human soul prove an exception?

The only ‘argument’ that Malabou makes in this regard, the claim upon which all of her arguments hang, also comes from Kant:

“In the Critique of Pure Reason, when discussing the schema of the triangle, Kant asserts that there are realities that “can never exist anywhere except in thought.” If we share this view, as I do, then the validity of the transcendental is upheld. Yes, there are realities that exist nowhere but in thought.”

So long as we believe in ‘realities of thought,’ Continental philosophy is assured its domain. But are these ‘realities’ what they seem? Remember Hume: “It is remarkable concerning the operations of the mind that, though most intimately present to us, yet, whenever they become the object of reflection, they seem involved in obscurity; nor can the eye readily find those lines and boundaries, which discriminate and distinguish them” (Enquiry Concerning Human Understanding, 7). The information available to traditional speculative reflection is less than ideal. Given this evidential insecurity, how will the tradition cope with the increasing amounts of cognitive scientific information flooding society?

The problem, in other words, is both epistemic and social. Epistemically, the reality of thought need not satisfy our traditional conceptions, which suggests, all things being equal, that it will very likely contradict them. And socially, no matter how one sets about ontologically out-fundamentalizing the sciences, the fact remains that ‘ontologically out-fundamentalizing’ is the very discursive game that is being marginalized—disenchanted.

Regarding the epistemic problem. For all the attention Malabou pays to section 81 of the Third Critique, she overlooks the way Kant begins by remarking on the limits of cognition. The fact is, he’s dumbfounded: “It is beyond our reason’s grasp how this reconciliation of two wholly different kinds of causality is possible: the causality of nature in its universal lawfulness, with [the causality of] an idea that confines nature to a particular form for which nature itself contains no basis whatsoever.” Our cognition of efficacy is divided between what can be sourced in nature and what cannot be sourced, between causes and purposes, and somehow, someway, they conspire to render living systems intelligible. The evidence of this basic fractionation lies plain in experience, but the nature of its origin and activity remain occluded: it belongs to “the being in itself of which we know merely the appearance.”

In one swoop, Kant metacognizes the complexity of cognition (two wholly different forms), the limits of metacognizing that complexity (inscrutable to reflection), and the efficacy of that complexity (enabling cognition of animate things). Thanks to the expansion of the cognitive scientific domain, all three of these insights now possess empirical analogues. As far as complexity is concerned, we know that humans possess a myriad of specialized cognitive systems. Kant’s ‘two kinds of causality’ correlates with two families of cognitive systems observed in infants, the one geared to the inanimate world, mechanical troubleshooting, the other to the animate world, biological troubleshooting. The cognitive pathologies belonging to Williams Syndrome and Autism Spectrum Disorder demonstrate profound cleavages between physical and psychological cognition. The existence of metacognitive limits is also a matter of established empirical fact, operative in any number of phenomena explored by the ecological rationality and cognitive heuristics and biases research programs. In fact, the mere existence of cognitive science, which is invested in discovering those aspects of experience and cognition we are utterly insensitive to, demonstrates the profundity of human medial neglect, our utter blindness to the enabling machinery of cognition as such.

And recent research is also revealing the degree to which humans are hardwired to posit opportunistic efficacies. Given the enormity and complexity of endogenous and exogenous environments, organisms have no hope of sourcing the information constituting their cognitive ecologies. No surprise, then, that neural networks (like the machine learning systems they inspired) are exquisitely adapted to the isolation of systematic correlations—patterns. Neglecting the nature of the systems involved, they focus on correlations between availabilities, isolating those observable precursors allowing the prediction of subsequent, reproductively significant observables such as behaviour. Confusing correlation with causation may be the bane of scientists, but for the rest of us, the reliance on ‘proxies’ often pays real cognitive dividends.
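The point can be sketched with a toy model (my own illustration; the ecology, variable names, and probabilities are invented for the example): a learner that only ever sees a proxy cue predicts outcomes well so long as the proxy stays correlated with the hidden cause, and collapses toward chance the moment the ecology shifts.

```python
import random

random.seed(0)

def make_ecology(proxy_tracks_cause, n=1000):
    """Generate (proxy, outcome) pairs; the hidden cause is never exposed."""
    data = []
    for _ in range(n):
        cause = random.random() < 0.5          # hidden source, unobservable
        # The proxy tracks the cause only with the given reliability.
        proxy = cause if random.random() < proxy_tracks_cause else not cause
        outcome = cause                        # the outcome is driven by the cause
        data.append((proxy, outcome))
    return data

def accuracy(data):
    # The learner's whole 'theory': predict the outcome straight from the proxy.
    return sum(proxy == outcome for proxy, outcome in data) / len(data)

print(accuracy(make_ecology(0.95)))  # cue tracks the cause: prediction succeeds
print(accuracy(make_ecology(0.50)))  # ecology shifts: the cue decouples, near chance
```

Nothing in the learner distinguishes correlation from causation; its success is entirely hostage to the stability of the surrounding ecology, which is the sense in which proxy-reliance pays dividends only so long as ancestral correlations hold.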

Humans are hardwired both to neglect their own cognitive complexity and to fetishize their environments, to impute efficacies serving local, practical cognitive determinations. Stranded in the most complicated system ever encountered, human metacognition cannot but comprise a congeries of source-insensitive systems geared to the adventitious solution of practical problems—like holding one’s tongue, or having second thoughts, or dwelling on the past, and so on. In everyday contexts, it never occurs to us to question the sources of these activities. Given neglect of the actual sources, we intuit spontaneity whenever we retask our metacognitive motley with reporting the source of these or any other cognitive activities.

We have very good empirical reasons to believe that the above is true. So, what do we do with transcendental speculation a la Kant? Do we ignore what cognitive science has learned about the fractionation, limits, and default propensities of human metacognition? Do we assume he was onto something distinct, a second, physically inexplicable order enabling cognition of the empirical in addition to the physically explicable (because empirical) order that we know (thanks to strokes, etc.) enables cognition of the empirical? Or do we assume that Kant was dimly onto something which, given his ignorance of cognitive science, he construed dogmatically as distinct? Do we recognize the a priori as a fetishization of medial neglect, as a way to make sense of the fractionate, heuristic nature of cognition absent any knowledge of that nature?

The problem with defending the first, transcendental thesis is that the evidence supporting the second empirical hypothesis will simply continue to accumulate. This is where the social problem rears its head, why the kind of domain overlap demonstrated above almost certainly signals the doom of Malabou’s discursive tradition. Continental philosophers need to understand how disenchantment works, how the mere juxtaposition of traditional and scientific claims socially delegitimizes the former. The more cognitive science learns about experience and cognition, the less relevant and less credible traditional philosophical discourses on the nature of experience and cognition will become.

The cognitive scientific metacritique of reason, you could say, reveals the transcendental as an artifact of our immaturity, of an age when we hearkened to the a priori as our speculative authority. Malabou not only believes in this speculative authority, she believes that science itself must answer to it. Rather than understanding the discursive tools of science epigenetically, refined and organized via scientific practice, she understands them presuppositionally, as beholden to this or that (perpetually underdetermined) traditional philosophical interpretation of conditions, hidden implicatures that must be unpacked to assure cognitive legitimacy—implicatures that clearly seem to stand outside ecology, thus requiring more philosophical interpretation to provide cognitive legitimacy. The great irony, of course, is that scientists eschew her brand of presuppositional ‘legitimacy’ to conserve their own legitimacy. Stomping around in semantic puddles is generally a counterproductive way to achieve operational clarity—a priori exercises in conceptual definition are notoriously futile. Science turns on finding answerable questions in questions answered. If gerrymandering definitions geared to local experimental contexts does the trick, then so be it. The philosophical groping and fumbling involved is valuable only so far as it serves this end. Is this problematic? Certainly. Is this a problem speculative ontological interpretation can solve? Not at all.

Something new is needed. Something radical, not in the sense of discursive novelty, but in a way that existentially threatens the tradition—and offends accordingly.

I agree entirely when Malabou writes:

“Clearly, it is of the utmost necessity today to rethink relations between the biological and the transcendental, even if it is to the detriment of the latter. But who’s doing so? And why do continental philosophers reject the neurobiological approach to the problem from the outset?”

This was the revelation I had in 1999, attempting to reconcile fundamental ontology and neuroscience for the final chapter of my dissertation. I felt the selfsame exhaustion, the nagging sense that it was all just a venal game, a discursive ingroup ruse. I turned my back on philosophy, began writing fiction, not realizing I was far from alone in my defection. When I returned, ‘correlation’ had replaced ‘presence’ as the new ‘ontologically problematic presupposition.’ At long last, I thought, Continental philosophy had recognized that intentionality—meaning—was the problem. But rather than turn to cognitive science to “search for the origin of thinking outside of consciousness and will,” the Speculative Realists I encountered (with the exception of thinkers like David Roden) embraced traditional vocabularies. Their break with traditional Kantian philosophy, I realized, did not amount to a break with traditional intentional philosophy. Far from calling attention to the problem, ‘correlation’ merely focused intellectual animus toward an effigy, an institutional emblem, stranding the 21st century Speculative Realists in the very interpretative mire they used to impugn 20th century Continental philosophy. Correlation was a hopeful, but ultimately misleading diagnosis. The problem isn’t that cognitive systems and environments are interdependent, the problem is that this interdependence is conceived intentionally. Think about it. Why do we find the intentional interdependence of cognition and experience so vexing when the ecological interdependence of cognitive systems and environments is simply given in biology? What is it about intentionality?

Be it dogmatically or critically conceived, what we call ‘intentionality’ is a metacognitive artifact of the way source-insensitive modes of cognition, like intentional cognition, systematically defer the question of sources. A transcendental source is a sourceless source—an ‘originary repetition’ admitting an epigenetic gloss—because intentional cognition, whether applied to thought or the world, is source-insensitive cognition. To apply intentional cognition to the question of the nature of intentional cognition, as the tradition does compulsively, is to trip into metacognitive crash space, a point where intuitions, like those Malabou so elegantly tracks in Before Tomorrow, can only confound the question they purport to solve.

Derrida understood, at least as far as his (or perhaps any) intentional vocabulary could take him. He understood that cognition as cognized is a ‘cut-out,’ an amnesiac intermediary, appearing sourceless, fully present, something outside ecology, and as such doomed to be overthrown by ecology. He, more so than Kant, hesitates upon the metacognitive limit, full well understanding the futility of transgressing it. But since he presumed the default application of intentional cognition to the problem of cognition necessary, he presumed the inevitability of tripping into crash space as well, believing that reflection could not but transgress its limits and succumb to the metaphysics of presence. Thus his ‘quasi-transcendentals,’ his own sideways concession to the Kantian quagmire. And thus deconstruction, the crashing of super-ecological claims by adducing what must be neglected—ecology—to maintain the illusion of presence.

And so, you could say the most surprising absence in Malabou’s text is her teacher, who whispers merely from various turns in her discourse.

“No one,” she writes, “has yet thought to ask what continental philosophy might become after this ‘break.’” Not true. I’ve spent years now prospecting the desert of the real, the post-intentional landscape that, if I’m right, humanity is doomed to wander into and evaporate. I too was a Derridean once, so I know a path exists between her understanding and mine. I urge her to set aside the institutional defense mechanisms as I once did: charges of scientism or performative contradiction simply beg the question against the worst-case scenario. I invite her to come see what philosophy and the future look like after the death of transcendence, if only to understand the monstrosity of her discursive other. I challenge her to think post-human thoughts—to understand cognition materially, rather than what traditional authority has made of it. I implore her to see how the combination of science and capital is driving our native cognitive ecologies to extinction on an exponential curve.

And I encourage everyone to ask why, when it comes to the topic of meaning, we insist on believing in happy endings? We evolved to neglect our fundamental ecological nature, to strategically hallucinate spontaneities to better ignore the astronomical complexities beneath. Subreption has always been our mandatory baseline. As the cognitive ecologies underwriting those subreptive functions undergo ever more profound transformations, the more dysfunctional our ancestral baseline will become. With the dawning of AI and enhancement, the abstract problem of meaning has become a civilizational crisis.

Best we prepare for the worst and leave what was human to hope.


Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generates exaggerated estimations of threat (via the availability heuristic, Pinker suggests (42)).

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then adduces examples of ‘clear thinking,’ offering reconstructions of ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree that moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal that causes them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains illustrate the problems secondary to deep environmental information better than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentences?

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect and the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The early promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked aversion, a world where praise and condemnation alike had to brave criticism, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates can, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298), but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure one another as much as they need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices at all? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic toolbox. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Zahavi, Dennett, and the End of Being*

by rsbakker

 

We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.

–Husserl, The Phenomenology of Internal Time-Consciousness

So recall this list, marvel at how it continues to grow, and remember, the catalogue is just getting started. The real tsunami of information is rumbling in just off the horizon. And lest you think your training or education renders you exempt, pause and consider the latest in Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want to put you in a skeptical frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.

Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.

So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apple-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.

As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism, and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago. To redress this equivocation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:

To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.

By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.

What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:

Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.

The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.

Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of this paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians). Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: therefore any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?

Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first-person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is, phenomenology suffers the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.

The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up to just another hothouse excess of higher education? Because you stack your guesswork up on the basis of your guesswork in a way you’re guessing is right?

Seriously?

As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Arguments draw much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced by questions that were far too easy to ask—and far easier to avoid than answer.

So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.

Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. A priori arguments for phenomenological aprioricity simply have no traction unless you already buy into some heavily theorized account of the a priori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from those phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?

One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the more spicy the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?

The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.

Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.

In contrast to this general Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of conflating phenomenology and introspectionism because of a faulty understanding of the phenomenological attitude. As a result he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.

Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’

Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue, no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.

To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy-in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.

The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring over the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?

This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.

The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.

But this is simply not true.

So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?

In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate them sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like this:

1) There is more than meets the phenomenological eye (dark phenomenology).

2) This ‘more’ is constitutive of what meets the phenomenological eye.

3) This ‘more’ is ontic.

4) Therefore the deliverances of the phenomenological eye cannot be ontological.

The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!

My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into now-present ‘reverberations,’ as Husserl might call them, fading on the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology as to just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning–everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit, they belong to the phenomena themselves no less; they simply belong implicitly. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.

Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence,’ how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or, worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”

As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is its purview. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.

Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”

Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.

The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.

The question now becomes one of whether the phenomenological concept of horizons can actually allay this worry. The answer here has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Or put differently: if it were the case that neglect were true, that Zahavi’s phenomenology, like Dennett’s heterophenomenology, only makes sense given a certain kind of neglect, then we should expect ‘horizons’ to continue playing a conceptually constitutive role—to contribute to phenomenology the way it always has.

Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.

So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect the way he accuses Dennett’s heterophenomenology of running afoul of theoretically pernicious neglect.

Fair is fair.

The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic inkling.

But if the phenomenological attitude can be neurobiologically taken, it can also be given back, and here’s the thing: it can be given back in attenuated forms, tweaked in innumerable different ways, fuzzier here, more precise there, truncated, snipped, or twisted.

This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.

So, bearing all this in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.

On the Global Neuronal Workspace Theory, consciousness can be seen as a serial, broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; this competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.
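The selection-and-broadcast bottleneck can be caricatured in a few lines of Python. This is strictly a toy illustration of the idea, not Dehaene’s actual model; the processor names, activation values, and ignition threshold are all invented for the example.

```python
def workspace_step(processors, ignition_threshold=0.7):
    """One cycle of a toy 'global workspace'.

    Many parallel processors compete; at most one crosses the
    ignition threshold and is broadcast, i.e. made available for
    verbal report. Everything else is processed but lost to
    report -- the 'bottleneck' on metacognitive access.
    """
    # Each nonconscious processor offers (content, activation).
    winner = max(processors, key=lambda p: p[1])
    if winner[1] >= ignition_threshold:
        return winner[0]   # ignition: content becomes reportable
    return None            # no ignition: nothing reportable

processors = [("face detected", 0.9),
              ("ambient hum", 0.4),
              ("proprioceptive drift", 0.2)]

# Only the winning content reaches report; the two losing
# processes run all the same, invisibly to introspection.
print(workspace_step(processors))  # → "face detected"
```

The point of the toy is simply that whatever sits downstream of `workspace_step` (deliberative, ‘System 2’ metacognition included) sees only the return value, never the competition that produced it.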

Now the question becomes one of whether that information is all the information the phenomenologist, given his or her years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.

Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)

The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.

On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:

What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)

We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.

On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.

For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.

For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears, appearing as an ‘object without edges,’ if it appeared as anything coherent at all.

We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—it explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.

As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:

This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.

Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.

My point here is to simply provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.

This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse exclusively evidence the reality of their transcendental posits. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.

All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.

When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the apories and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.

Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”

Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.

There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.

 

*[Originally posted 2014/10/22]

Intentional Philosophy as the Neuroscientific Explananda Problem

by rsbakker

The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity, various heuristic hacks. We lack not only the metacognitive bandwidth, but the metacognitive access required to formulate the explananda of neuroscientific investigation.

A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We’ve always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing and so comprised the sum of all there was to the ‘experience of experience’ at any given stage of our evolution. Our ancestors had to be both implicitly obvious and explicitly impenetrable to themselves past various degrees of questioning.

We’re just the next step.

What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.

Here’s what I think is a productive way to interpret this conundrum.

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

Crazy but true.

What neuroscience is after, of course, is a natural understanding of understanding, to peer into the black box. They want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus-commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.

People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.

The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.

The thing that can purge philosophy is the thing that can tell you what it is.

Phrenomenology: Zahavi, Dennett and the End of Being

by rsbakker

We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.

–Husserl, The Phenomenology of Internal Time-Consciousness

So recall this list, marvel at how it continues to grow, and remember, the catalogue is just getting started. The real tsunami of information is rumbling on the near horizon. And lest you think your training or education renders you exempt, pause and consider the latest in Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want you in that frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.

Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.

So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apples-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.

As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago now. To redress this equivocation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:

To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.

By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.

What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:

Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.

The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.

Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of this paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians). Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: therefore any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?

Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first-person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is that phenomenology suffers the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.

The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up to just another hothouse excess of higher education? Because you stack your guesswork up on the basis of your guesswork in a way you’re guessing is right?

Seriously?

As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Arguments draw much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced by questions that were far too easy to ask—and far easier to avoid than answer.

So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.

Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. A priori arguments for phenomenological aprioricity simply have no traction unless you already buy into some heavily theorized account of the a priori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?

One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the spicier the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?

The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.

Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.

In contrast to this general, Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of equivocating phenomenology and introspectionism because of a faulty understanding of the phenomenological attitude. As a result, he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.

Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’

Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.

To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy-in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.

The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring over the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?

This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.

The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.

But this is simply not true.

So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?

In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like: 1) There is more than meets the phenomenological eye (dark phenomenology). 2) This ‘more’ is constitutive of what meets the phenomenological eye. 3) This ‘more’ is ontic. 4) Therefore the deliverances of the phenomenological eye cannot be ontological. The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!

My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into the now-present ‘reverberations,’ as Husserl might call them, and fades on the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology as to just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning: everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit, they belong to the phenomena themselves no less, only implicitly. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.

Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence,’ how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”

As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is its purview. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.

Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”

Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.

The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.

The question now becomes one of whether the phenomenological concept of horizons can actually allay this worry. The answer here has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Or put differently: if neglect were in fact the case, if Zahavi’s phenomenology, like Dennett’s heterophenomenology, only made sense given a certain kind of neglect, then we should still expect ‘horizons’ to continue playing a conceptually constitutive role, to contribute to phenomenology the way they always have.

Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.

So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect the way he accuses Dennett’s heterophenomenology of running afoul of theoretically pernicious neglect.

Fair is fair.

The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic inkling.

But if the phenomenological attitude can be neurobiologically taken, it can also be given back, and, here’s the thing, given back in attenuated forms, tweaked in innumerable ways: fuzzier here, more precise there, truncated, snipped, or twisted.

This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.

So bearing this all in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.

On the Global Neuronal Workspace Theory, consciousness can be seen as a serial, broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; this competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.

Now the question becomes one of whether that information is all the information the phenomenologist, given his or her years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.

Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)

The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.

On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:

What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)

We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.

On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.

For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.

For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears, amounting to an ‘object without edges,’ if anything coherent at all.

We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—Blind Brain Theory explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.

As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:

This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.

Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.

My point here is simply to provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.

This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse evidence the reality of their transcendental posits exclusively. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.

All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.

When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the aporias and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.

Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”

Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.

There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.

The Philosopher, the Drunk, and the Lamppost

by rsbakker

A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417

As well as the degree to which we should accept the deliverances of philosophical reflection.

Philosophical reflection is a cultural achievement, an exaptation of pre-existing biocognitive capacities. It is entirely possible that such an exaptation suffers any number of cognitive short-circuits, and this could very well explain why philosophy suffers the perennial problems it does.

In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.

The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we are essentially can be found nowhere else because they exhaust everything that is within that order. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.

Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within its traditional domains, we should expect it to be progressively delegitimized the way all prescientific discourses have been delegitimized.

To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.

As Keith Frankish and Jonathan Evans write:

The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning.  “The Duality of Mind: An Historical Perspective,” In Two Minds: Dual Processes and Beyond, 25

We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has just hit the streets of capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ but not for very long.

The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).

As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.

Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!

In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.

The fact is the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of) somehow, so whatever functions they posit, say, ones realized in the scorekeeping attitudes of communities, they have to track brain function somehow. This responsibility to cognitive scientific finding regarding their object is matched by a responsibility to cognitive scientific finding regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.

Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by other swarms of other nonconscious processes. As Dehaene writes:

The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105

Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (See Tim Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.

One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period, NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]
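For readers who think better in mechanism than in metaphor, the hybrid serial-parallel picture with its all-or-nothing bottleneck can be caricatured in a few lines of code. This is my own toy sketch, not Dehaene’s actual model; the processor names and scoring rules are invented for illustration:

```python
# Toy caricature of a global-workspace cycle (illustrative only):
# many processors run in parallel, each bids on the stimulus, and only
# the single strongest interpretation is selected and 'broadcast'.
# Every losing bid is simply lost to subsequent report.

def workspace_cycle(processors, stimulus):
    # Parallel stage: every nonconscious processor bids on the input.
    bids = [(proc(stimulus), name) for name, proc in processors.items()]
    # Serial stage: all-or-nothing selection at the cortical bottleneck.
    strength, winner = max(bids)
    return winner, strength  # only this result is globally available

# Hypothetical processors with made-up scoring rules.
processors = {
    "edge-detector": lambda s: 0.8 if "|" in s else 0.0,
    "word-matcher":  lambda s: 1.0 if s.isalpha() else 0.1,
    "face-detector": lambda s: 0.9 if ":)" in s else 0.0,
}

winner, _ = workspace_cycle(processors, "hello")
print(winner)  # word-matcher: its interpretation alone wins the broadcast
```

The point of the caricature is the return statement: whatever the losing processors computed never reaches report, which is exactly the situation the reflecting philosopher is in with respect to their own cognition.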

This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires the philosopher use the very apparatus they’re attempting to solve. In a sense they’re not only wagering that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw, one that appears as wide as the sky for neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.

But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:

A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC.  Italics added, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417

As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [See Badre, et al. “Frontal cortex and the discovery of abstract action rules.” Neuron (2010) 66:315–326.] though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication to other brain regions varies as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011): 1–12.]

Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but hopefully the philosophical moral of the research should be clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.

The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour). There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.

Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part), the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability to not simply use a cognitive tool, but to be able to reliably determine what that cognitive tool is.

Even granting the intentionalist their spooky functional order, are we to suppose, given everything considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) versus the philosophy of mathematics (exaptation gone wrong). The oh so familiar criticisms of philosophy, that it is impractical, disconnected from reality, incapable of arbitrating its controversies—in short, that it does not decisively solve—are precisely the kinds of problems we might expect, were philosophical reflection an artifact of an exaptation gone wrong.

On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be even more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?

The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’

It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?

As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount than an individual suffering for half that time or less. Such is not the case. As it turns out duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
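The peak-end arithmetic Kahneman describes is simple enough to sketch. The pain series below are invented for illustration (they are not data from the study), but they show the core finding: a longer procedure with strictly more total suffering can nonetheless be remembered as less painful, so long as it tapers off gently at the end.

```python
# Illustrative sketch of Kahneman's peak-end rule (toy numbers, not
# study data): retrospective ratings track the average of the peak
# moment and the final moment; total duration is neglected.

def remembered_pain(samples):
    # What the 'remembering self' reports: mean of peak and end.
    return (max(samples) + samples[-1]) / 2

def experienced_pain(samples):
    # What the 'experiencing self' actually accumulated.
    return sum(samples)

short = [2, 4, 8, 7]              # short procedure, ends at high pain
long_ = [2, 4, 8, 7, 5, 3, 1]     # same peak, but tapers off gently

print(experienced_pain(short), experienced_pain(long_))  # 21 30
print(remembered_pain(short), remembered_pain(long_))    # 7.5 4.5
```

The longer series accumulates more actual suffering (30 versus 21) yet is remembered as roughly half as painful (4.5 versus 7.5), because the duration information is simply nowhere to be found in the retrospective report: remove the band-aid slowly.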

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381

Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? So in the case of duration neglect, information regarding the duration of suffering makes no difference in the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85

Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.

Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader in Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:

You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201

You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side-to-side simply do not exist for the captives.

As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88

Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200) simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’

Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.

Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front Hum Neurosci. 2012; 6: 88. doi: 10.3389/fnhum.2012.00088, for a GNW-related review.] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?

We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.

Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.

Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.

All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As of yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!), but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it provides a very parsimonious means of dispatching a great number of ancient and new conundrums…

At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.

And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation, you find a drastic plunge in the dimensionality, and a corresponding spike in the specialization, of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.

This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. So consider:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.

And compare to:

First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

Neuroscience as Socio-Cognitive Pollution

by rsbakker

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate with students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming-incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour, period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—they literally require our blindness to certain kinds of facts to reliably function. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Muller-Lyer Illusion stumps the eye: It provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect based account suggests the following interpretation of what’s happening: As medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation in the capacity of moral cognition to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscal conservative reengineering of the market continues to erode their bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

Interstellar Dualists and X-phi Alien Superfreaks

by rsbakker

I came up with this little alien thought experiment to illustrate a cornerstone of the Blind Brain Theory: the way systems can mistake information deficits for positive ontological properties, using a species I call the Walleyes (pronounced ‘Wally’s’):

Walleyes possess two very different visual systems, the one high dimensional, adapted to tracking motion and resolving innumerable details, the other myopic in the extreme, adapted to resolving blurry gestalts at best, blobs of shape and colour. Both are exquisitely adapted to solve their respective problem-ecologies, however; those ecologies just happen to be radically divergent. The Walleyes, it turns out, inhabit the twilight line of a world that forever keeps one face turned to its sun. They grow in a linear row that tracks the same longitude around the entire planet, at least wherever there’s land. The high capacity eye is the eye possessing dayvision, adapted to take down mobile predators using poisonous darts. The low capacity eye is the eye possessing nightvision, adapted to send tendrils out to feed on organic debris. The Walleyes, in fact, have nearly a 360 degree view of their environment: only the margin of each defeats them.

The problem, however, is that Walleyes, like anemones, are a kind of animal that is rooted in place. Save for the odd storm, which blows the ‘head’ about from time to time, there is very little overlap in their respective visual fields, even though each engages (two very different halves of) the same environment. What’s more, the nightvision eye, despite its manifest myopia, continually signals that it possesses a greater degree of fidelity than the first.

Now imagine an advanced alien species introduces a virus that rewires Walleyes for discursive, conscious experience. Since their low-dimensional nightvision system insists (by default) that it sees everything there is to be seen, and their high-dimensional system, always suspicious of camouflaged predators, regularly signals estimates of reliability, the Walleyes have no reason to think heuristic neglect is a problem. Nothing signals the possibility that the problem might be perspectival (related to issues of information access and problem-solving capacity), so the metacognitive default of the Walleyes is to construe themselves as special beings that dwell on the interstice of two very different worlds. They become natural dualists…

The same way we seem to be.

Perhaps some X-phi super-aliens are snickering as they read this!

The Missing Half of the Global Neuronal Workspace: A Commentary on Stanislas Dehaene’s Consciousness and the Brain

by rsbakker

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts

.

Introduction

Stanislas Dehaene, to my mind at least, is the premier consciousness researcher on the planet, one of those rare scientists who seem equally at home in the theoretical aether (like we are here) and in the laboratory (where he is there). His latest book, Consciousness and the Brain, provides an excellent, and at times brilliant, overview of the state of contemporary consciousness research. Consciousness has come a long way in the past two decades, and Dehaene deserves credit for much of the yardage gained.

I’ve been anticipating Consciousness and the Brain for quite some time, especially since I bumped across “The Eternal Silence of the Neuronal Spaces,” Dehaene’s review of Christof Koch’s Consciousness: Confessions of a Romantic Reductionist, where he concludes with a confession of his own: “Can neuroscience be reconciled with living a happy, meaningful, moral, and yet nondelusional life? I will confess that this question also occasionally keeps me lying awake at night.” Since the implications of the neuroscientific revolution, the prospects of having a technically actionable blueprint of the human soul, often keep my mind churning into the wee hours, I was hoping that I might see a more measured, less sanguine Dehaene in this book, one less inclined to soft-sell the troubling implications of neuroscientific research.

And in that one regard, I was disappointed. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts is written for a broad audience, so in a certain sense one can understand the authorial instinct to make things easy for the reader, but rendering a subject matter more amenable to lay understanding is quite a different thing than rendering it more amenable to lay sensibilities. Dehaene, I think, caters far too much to the very preconceptions his science is in the process of dismantling. As a result, the book, for all its organizational finesse, all its elegant formulations, and economical summaries of various angles of research, finds itself haunted by a jagged shadow, the intimation that things simply are not as they seem. A contradiction—of expressive modes if not factual claims.

Perhaps the most stark example of this contradiction comes at the very conclusion of the book, where Dehaene finally turns to consider some of the philosophical problems raised by his project. Adopting a quasi-Dennettian argument (from Freedom Evolves) that the only ‘free will’ that matters is the free will we actually happen to have (namely, one compatible with physics and biology), he writes:

“Our belief in free will expresses the idea that, under the right circumstances, we have the ability to guide our decisions by our higher-level thoughts, beliefs, values, and past experiences, and to exert control over our undesired lower-level impulses. Whenever we make an autonomous decision, we exercise our free will by considering all the available options, pondering them, and choosing the one that we favor. Some degree of chance may enter in a voluntary choice, but this is not an essential feature. Most of the time our willful acts are anything but random: they consist in a careful review of our options, followed by the deliberate selection of the one we favor.” 264

And yet for his penultimate, concluding line no less, he writes, “[a]s you close this book to ponder your own existence, ignited assemblies of neurons literally make up your mind” (266). At this point, the perceptive reader might be forgiven for asking, ‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge. If ignited assemblies of neurons are somehow what he is, then he has no inkling what he is—or what it is he is supposedly doing.

As we shall see, this pattern of alternating expressive modes, swapping between the personal and the impersonal registers to describe various brain activities, occurs throughout Consciousness and the Brain. As I mentioned above, I’m sure this has much to do with Dehaene’s resolution to write a reader friendly book, and so to market the Global Neuronal Workspace Theory (GNWT) to the broader public. I’ve read enough of Dehaene’s articles to recognize the nondescript, clinical tone that animates the impersonally expressed passages, and so to see those passages expressed in more personal idioms as self-conscious attempts on his part to make the material more accessible. But as the free will quote above makes plain, there’s a sense in which Dehaene, despite his odd sleepless night, remains committed to the fundamental compatibility of the personal and the impersonal idioms. He thinks neuroscience can be reconciled with a meaningful and nondelusional life. In what follows I intend to show why, on the basis of his own theory, he’s mistaken. He’s mistaken because, when all is said and done, Dehaene possesses only half of what could count as a complete theory of consciousness—the most important half to be sure, but half all the same. Despite all the detailed explanations of consciousness he gives in the book, he actually has no account whatsoever of what we seem to take consciousness to be–namely, ourselves.

For that account, Stanislas Dehaene needs to look closely at the implicature of his Global Neuronal Workspace Theory—its long theoretical shadow, if you will—because there, I think, he will find my own Blind Brain Theory (BBT), and with it the theoretical resources to show how the consciousness revealed in his laboratory can be reconciled with the consciousness revealed in us. This, then, will be my primary contention: that Dehaene’s Global Neuronal Workspace Theory directly implies the Blind Brain Theory, and that the two theories, taken together, offer a truly comprehensive account of consciousness…

The one that keeps me lying awake at night.

.

Function Dysfunction

Let’s look at a second example. After drawing up an inventory of various, often intuition-defying, unconscious feats, Dehaene cautions the reader against drawing too pessimistic a conclusion regarding consciousness—what he calls the ‘zombie theory’ of consciousness. If unconscious processes, he asks, can plan, attend, sum, mean, read, recognize, value and so on, just what is consciousness good for? The threat of these findings, as he sees it, is that they seem to suggest that consciousness is merely epiphenomenal, a kind of kaleidoscopic side-effect to the more important, unconscious business of calculating brute possibilities. As he writes:

“The popular Danish science writer Tor Norretranders coined the term ‘user illusion’ to refer to our feeling of being in control, which may well be fallacious; every one of our decisions, he believes, stems from unconscious sources. Many other psychologists agree: consciousness is the proverbial backseat driver, a useless observer of actions that lie forever beyond its control.” 91

Dehaene disagrees, claiming that his account belongs to “what philosophers call the ‘functionalist’ view of consciousness” (91). He uses this passing criticism as a segue for his subsequent, fascinating account of the numerous functions discharged by consciousness—what makes consciousness a key evolutionary adaptation. The problem with this criticism is that it simply does not apply. Norretranders, for instance, nowhere espouses epiphenomenalism—at least not in The User Illusion. The same might be said of Daniel Wegner, one of the ‘many psychologists’ Dehaene references in the accompanying footnote. Far from endorsing epiphenomenalism (the argument that consciousness has no function whatsoever, as, say, Susan Pockett (2004) has argued), both of these authors contend that it’s ‘our feeling of being in control’ that is illusory. So in The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.

Dehaene is simply in error here. He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy. He has simply run afoul of the distinction between consciousness as it is and consciousness as it appears to us—the distinction between consciousness as impersonally and personally construed. Note the way he actually slips between idioms in the passage quoted above, at first referencing ‘our feeling of being in control’ and then referencing ‘its control.’ Now one might think this distinction between these two very different perspectives on consciousness would be easy to police, but such is not the case (see Bennett and Hacker, 2003). Unfortunately, Dehaene is far from alone when it comes to running afoul of this dichotomy.

For some time now, I’ve been arguing for what I’ve been calling a Dual Theory approach to the problem of consciousness. On the one hand, we need a theoretical apparatus that will allow us to discover what consciousness is as another natural phenomenon in the natural world. On the other hand, we need a theoretical apparatus that will allow us to explain (in a manner that makes empirically testable predictions) why consciousness appears the way that it does, namely, as something that simply cannot be another natural phenomenon in the natural world. Dehaene is in the business of providing the first kind of theory: a theory of what consciousness actually is. I’ve made a hobby of providing the second kind of theory: a theory of why consciousness appears to possess the baffling form that it does.

Few terms in the conceptual lexicon are quite so overdetermined as ‘consciousness.’ This is precisely what makes Dehaene’s operationalization of ‘conscious access’ invaluable. But salient among those traditional overdeterminations is the peculiarly tenacious assumption that consciousness ‘just is’ what it appears to be. Since what it appears to be is drastically at odds with anything else in the natural world, this assumption sets the explanatory bar rather high indeed. You could say consciousness needs a Dual Theory approach for the same reason that Dualism constitutes an intuitive default (Emmons 2014). Our dualistic intuitions arguably determine the structure of the entire debate. Either consciousness really is some wild, metaphysical exception to the natural order, or consciousness represents some novel, emergent twist that has hitherto eluded science, or something about our metacognitive access to consciousness simply makes it seem that way. Since the first leg of this trilemma belongs to theology, all the interesting action has fallen into orbit around the latter two options. The reason we need an ‘Appearance Theory’ when it comes to consciousness, as opposed to other natural phenomena, has to do with our inability to pin down the explananda of consciousness, an inability that almost certainly turns on the idiosyncrasy of our access to the phenomena of consciousness compared to the phenomena of the natural world more generally. This, for instance, is the moral of Michael Graziano’s (otherwise flawed) Consciousness and the Social Brain: that the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.

The Blind Brain Theory is just such an Appearance Theory: it provides a systematic explanation of the kinds of cognitive confounds and access bottlenecks that make consciousness appear to be ‘supra-natural.’ It holds, with Dehaene, that consciousness is functional through and through, just not in any way we can readily intuit outside empirical work like Dehaene’s. As such, it takes findings such as Wegner’s, where the function we presume on the basis of intuition (free willing) is belied by some counter-to-intuition function (behaviour ownership), as paradigmatic. Far from epiphenomenalism, BBT constitutes a kind of ‘ulterior functionalism’: it acknowledges that consciousness discharges a myriad of functions, but it denies that metacognition is in any position to cognize those functions (see “THE Something about Mary“) short of sustained empirical investigation.

Dehaene is certainly sensitive to the general outline of this problem: he devotes an entire chapter (“Consciousness Enters the Lab”) to discussing the ways he and others have overcome the notorious difficulties involved in experimentally ‘pinning consciousness down.’ And the masking and attention paradigms he has helped develop have done much to transform consciousness research into a legitimate field of scientific inquiry. He even provides a splendid account of just how deep unconscious processing reaches into what we intuitively assume are wholly conscious exercises—an account that thoroughly identifies him as a fellow ulterior functionalist. He actually agrees with me and Norretranders and Wegner—he just doesn’t realize it quite yet.

.

The Global Neuronal Workspace

As I said, Dehaene is primarily interested in theorizing consciousness apart from how it appears. In order to show how the Blind Brain Theory actually follows from his findings, we need to consider both these findings and the theoretical apparatus that Dehaene and his colleagues use to make sense of them. We need to consider his Global Neuronal Workspace Theory of consciousness.

According to GNWT, the primary function of consciousness is to select, stabilize, solve, and broadcast information throughout the brain. As Dehaene writes:

“According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because the brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of a conscious state.” 165

A signature virtue of Consciousness and the Brain lies in Dehaene’s ability to blend complexity and nuance with expressive economy. But again one needs to be wary of his tendency to resort to the personal idiom, as he does in this passage, where the functional versatility provided by consciousness is explicitly conflated with agency, the freedom to dispose of information ‘in whatever way we please.’ Elsewhere he writes:

“The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” 105

Here we find him making essentially the same claims in less anthropomorphic or ‘reader-friendly’ terms. Despite the folksy allure of the ‘workspace’ metaphor, this image of the brain as a ‘hybrid serial-parallel machine’ is what lies at the root of GNWT. For years now, Dehaene and others have been using masking and attention experiments in concert with fMRI, EEG, and MEG to track the comparative neural history of conscious and unconscious stimuli through the brain. This has allowed them to isolate what Dehaene calls the ‘signatures of consciousness,’ the events that distinguish percepts that cross the conscious threshold from percepts that do not. A theme that Dehaene repeatedly evokes is the informationally asymmetric nature of conscious versus unconscious processing. Since conscious access is the only access we possess to our brain’s operations, we tend to run afoul of a version of what Daniel Kahneman (2012) calls WYSIATI, or the ‘what-you-see-is-all-there-is’ effect. Dehaene even goes so far as to state this peculiar tendency as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79). The fact is that the nonconscious brain performs the vast, vast majority of the brain’s calculations.

The reason for this has to do with the Inverse Problem, the challenge of inferring the mechanics of some distal system, a predator or a flood, say, from the mechanics of some proximal system such as ambient light or sound. The crux of the problem lies in the ambiguity inherent to the proximal mechanism: a wild variety of distal events could explain any given retinal stimulus, for instance, and yet somehow we reliably perceive predators or floods or what have you. Dehaene writes:

“We never see the world as our retina sees it. In fact, it would be a pretty horrible sight: a highly distorted set of light and dark pixels, blown up toward the center of the retina, masked by blood vessels, with a massive hole at the location of the ‘blind spot’ where cables leave for the brain; the image would constantly blur and change as our gaze moved around. What we see, instead, is a three-dimensional scene, corrected for retinal defects, mended at the blind spot, and massively reinterpreted based on our previous experience of similar visual scenes.” 60

The brain can do this because it acts as a massively parallel Bayesian inference engine, analytically breaking down various elements of our retinal images, feeding them to specialized heuristic circuits, and cobbling together hypothesis after hypothesis.

“Below the conscious stage, myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit the slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world.” 92
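The ‘nearly optimal statistician’ picture in the passage above can be made concrete with a toy Bayesian update over competing distal interpretations of a single proximal cue. Every hypothesis name, prior, and likelihood below is my own invention, purely for illustration; the brain’s actual inference is massively parallel and far richer than one update.

```python
# Toy Bayesian competition among distal hypotheses given one proximal cue
# ('a faint movement'). All numbers are invented for illustration.

# Prior plausibility of each distal interpretation.
priors = {"predator": 0.1, "shadow": 0.6, "windblown branch": 0.3}

# P(cue = 'faint movement' | hypothesis): how well each hypothesis
# explains the proximal stimulus.
likelihoods = {"predator": 0.8, "shadow": 0.3, "windblown branch": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood,
# then normalized so the probabilities sum to one.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: unnormalized[h] / total for h in unnormalized}

# The interpretation that would win this round of the competition.
best = max(posterior, key=posterior.get)
```

Note what the numbers show: the faint movement raises ‘predator’ from a prior of 0.10 to a posterior of roughly 0.28, but ‘shadow’ still dominates, which is why a single hint rarely suffices and the unconscious processors keep exploiting every further cue.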

But hypotheses are not enough. All this machinery belongs to what is called the ‘sensorimotor loop.’ The whole evolutionary point of all this processing is to produce ‘actionable intelligence,’ which is to say, to help generate and drive effective behaviour. In many cases, when the bottom-up interpretations match the top-down expectations and behaviour is routine, say, such selection need not result in consciousness of the stimuli at issue. In other cases, however, the interpretations are relayed to the nonconscious attentional systems of the brain where they are ranked according to their relevance to ongoing behaviour and selected accordingly for conscious processing. Dehaene summarizes what happens next:

“Conscious perception results from a wave of neuronal activity that tips the cortex over its ignition threshold. A conscious stimulus triggers a self-amplifying avalanche of neural activity that ultimately ignites many regions into a tangled state. During that conscious state, which starts approximately 300 milliseconds after stimulus onset, the frontal regions of the brain are being informed of sensory inputs in a bottom-up manner, but these regions also send massive projections in the converse direction, top-down, and to many distributed areas. The end result is a brain web of synchronized areas whose various facets provide us with many signatures of consciousness: distributed activation, particularly in the frontal and parietal lobes, a P3 wave, gamma-band amplification, and massive long-distance synchrony.” 140

As Dehaene is at pains to point out, the machinery of consciousness is simply too extensive to not be functional somehow. The neurophysiological differences observed between the multiple interpretations that hover in nonconscious attention and the interpretation that tips the ‘ignition threshold’ of consciousness are nothing if not dramatic. Information that was localized suddenly becomes globally accessible. Information that was transitory suddenly becomes stable. Information that was hypothetical suddenly becomes canonical. Information that was dedicated suddenly becomes fungible. Consciousness makes information spatially, temporally, and structurally available. And this, as Dehaene rightly argues, makes all the difference in the world, including the fact that “[t]he global availability of information is precisely what we subjectively experience as a conscious state” (168).

.

A Mile Wide and an Inch Thin

Consciousness is the Medieval Latin of neural processing. It makes information structurally available, both across time and across the brain. As Dehaene writes, “The capacity to synthesize information over time, space, and modalities of knowledge, and to rethink it at any time in the future, is a fundamental component of the conscious mind, one that seems likely to have been positively selected for during evolution” (101). But this evolutionary advantage comes with a number of crucial caveats, qualifications that, as we shall see, make some kind of Dual Theory approach unavoidable.

Once an interpretation commands the global workspace, it becomes available for processing via the nonconscious input of a number of different processors. Thus the metaphor of the workspace. The information can be ‘worked over,’ mined for novel opportunities, refined into something more useful, but only, as Dehaene points out numerous times, synoptically and sequentially.

Consciousness is synoptic insofar as it samples mere fractions of the information available: “An unconscious army of neurons evaluates all the possibilities,” Dehaene writes, “but consciousness receives only a stripped down report” (96). By selecting, in other words, the workspace is at once neglecting, not only all the alternate interpretations, but all the neural machinations responsible: “Paradoxically, the sampling that goes on in our conscious vision makes us forever blind to its inner complexity” (98).

And consciousness is sequential in that it can only sample one fraction at a time: “our conscious brain cannot experience two ignitions at once and lets us perceive only a single conscious ‘chunk’ at a given time,” he explains. “Whenever the prefrontal and parietal lobes are jointly engaged in processing a first stimulus, they cannot simultaneously reengage toward a second one” (125).

All this is to say that consciousness pertains to the serial portion of the ‘hybrid serial-parallel machine’ that is the human brain. Dehaene even goes so far as to analogize consciousness to a “biological Turing machine” (106), a kind of production system possessing the “capacity to implement any effective procedure” (105). He writes:

“A production system comprises a database, also called ‘working memory,’ and a vast array of if-then production rules… At each step, the system examines whether a rule matches the current state of its working memory. If multiple rules match, then they compete under the aegis of a stochastic prioritizing system. Finally, the winning rule ‘ignites’ and is allowed to change the contents of working memory before the entire process resumes. Thus this sequence of steps amounts to serial cycles of unconscious competition, conscious ignition, and broadcasting.” 105

The point of this analogy, Dehaene is quick to point out, isn’t to “revive the cliché of the brain as a classical computer” (106) so much as it is to understand the relationship between the conscious and nonconscious brain. Indeed, in subsequent experiments, Dehaene and his colleagues discovered that the nonconscious, for all its computational power, is generally incapable of making sequential inferences: “The mighty unconscious generates sophisticated hunches, but only a conscious mind can follow a rational strategy, step after step” (109). It seems something of a platitude to claim that rational deliberation requires consciousness, but to be able to provide an experimentally tested neurobiological account of why this is so is nothing short of astounding. Make no mistake: these are the kinds of answers philosophy, rooting through the mire of intuition, has sought for millennia.
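The production-system architecture Dehaene describes (a working-memory database, competing if-then rules, a stochastic prioritizing system, serial ignition) can be sketched in a few lines. The rules, memory contents, and function names below are my own toy invention, not anything from the book.

```python
import random

# Minimal sketch of the production-system cycle Dehaene describes:
# unconscious parallel matching, stochastic competition, then serial
# 'ignition' of a single winning rule that rewrites working memory.

def run_production_system(working_memory, rules, max_cycles=10, seed=0):
    rng = random.Random(seed)
    for _ in range(max_cycles):
        # Parallel stage: every rule checks working memory at once.
        matching = [r for r in rules if r["condition"](working_memory)]
        if not matching:
            break  # nothing left to ignite
        # Stochastic prioritizing system: one winner among the matches.
        winner = rng.choice(matching)
        # Ignition and broadcast: only the winner updates working memory.
        working_memory = winner["action"](working_memory)
    return working_memory

# Two toy rules: one turns a stimulus into a percept, one turns a
# percept into a report (each can fire at most once).
rules = [
    {"condition": lambda wm: "stimulus" in wm and "percept" not in wm,
     "action": lambda wm: {**wm, "percept": wm["stimulus"]}},
    {"condition": lambda wm: "percept" in wm and "report" not in wm,
     "action": lambda wm: {**wm, "report": "saw " + wm["percept"]}},
]

final = run_production_system({"stimulus": "red circle"}, rules)
# final == {'stimulus': 'red circle', 'percept': 'red circle',
#           'report': 'saw red circle'}
```

The serial bottleneck is built into the loop: however many rules match in parallel, exactly one ignites per cycle, a caricature of the ‘hybrid serial-parallel machine.’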

Dehaene, as I mentioned, is primarily interested in providing a positive account of what consciousness is apart from what we take it to be. “Putting together all the evidence inescapably leads us to a reductionist conclusion,” Dehaene writes. “All our conscious experiences, from the sound of an orchestra to the smell of burnt toast, result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures” (158). Though he does consider several philosophical implications of his ‘reductionist conclusions,’ he does so only in passing. He by no means dwells on them.

Given that consciousness research is a science attempting to bootstrap its way out of the miasma of philosophical speculation regarding the human soul, this reluctance is quite understandable—perhaps even laudable. The problem, however, is that philosophy and science both traffic in theory, general claims about basic things. As a result, the boundaries are constitutively muddled, typically to the detriment of the science, but sometimes to its advantage. A reluctance to speculate may keep the scientist safe, but to the extent that ‘data without theory is blind,’ it may also mean missed opportunities.

So consider Dehaene’s misplaced charge of epiphenomenalism, the way he seemed to be confusing the denial of our intuitions of conscious efficacy with the denial of conscious efficacy. The former, which I called ‘ulterior functionalism,’ entirely agrees that consciousness possesses functions; it denies only that we have reliable metacognitive access to those functions. Our only recourse, the ulterior functionalist holds, is to engage in empirical investigation. And this, I suggested, is clearly Dehaene’s own position. Consider:

“The discovery that a word or a digit can travel throughout the brain, bias our decisions, and affect our language networks, all the while remaining unseen, was an eye-opener for many cognitive scientists. We had underestimated the power of the unconscious. Our intuitions, it turned out, could not be trusted: we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter…” 74

This could serve as a mission statement for ulterior functionalism. We cannot, as a matter of fact, trust any of our prescientific intuitions regarding what we are, no more than we could trust our prescientific intuitions regarding the natural world. This much seems conclusive. Then why does Dehaene find the kinds of claims advanced by Norretranders and Wegner problematic? What I want to say is that Dehaene, despite the occasional sleepless night, still believes that the account of consciousness as it is will somehow redeem the most essential aspects of consciousness as it appears, that something like a program of ‘Dennettian redefinition’ will be enough. Thus the attitude he takes toward free will. But then I encounter passages like this:

“Yet we never truly know ourselves. We remain largely ignorant of the actual unconscious determinants of our behaviour, and therefore cannot accurately predict what our behaviour will be in circumstances beyond the safety zone of our past experiences. The Greek motto ‘Know thyself,’ when applied to the minute details of our behaviour, remains an inaccessible ideal. Our ‘self’ is just a database that gets filled in through our social experiences, in the same format with which we attempt to understand other minds, and therefore it is just as likely to include glaring gaps, misunderstandings, and delusions.” 113

Claims like this, which radically contravene our intuitive, prescientific understanding of self, suggest that Dehaene simply does not know where he stands, that he alternately believes and does not believe that his work can be reconciled with our traditional understanding of the ‘meaningful life.’ Perhaps this explains the pendulum swing between the personal and the impersonal idiom that characterizes this book—down to the final line, no less!

Even though this is an eminently honest frame of mind to take to this subject matter, I personally think his research cuts against even this conflicted optimism. Not surprisingly, the Global Neuronal Workspace Theory of Consciousness casts an almost preposterously long theoretical shadow; it possesses an implicature that reaches to the furthest corners of the great human endeavour to understand itself. As I hope to show, the Blind Brain Theory of the Appearance of Consciousness provides a parsimonious and powerful way to make this downstream implicature explicit.

.

From Geocentrism to ‘Noocentrism’

“Most mental operations,” Dehaene writes, “are opaque to the mind’s eye; we have no insight into the operations that allow us to recognize a face, plan a step, add two digits, or name a word” (104-5). If one pauses to consider the hundreds of experiments that he directly references, not to mention the thousands of others that indirectly inform his work, this goes without saying. We require a science of consciousness simply because we have no other way of knowing what consciousness is. The science of consciousness is literally predicated on the fact of our metacognitive incapacity (See “The Introspective Peepshow“).

Demanding that science provide a positive explanation of consciousness as we intuit it is no different than demanding that science provide a positive explanation of geocentrism—which is to say, the celestial mechanics of the earth as we once intuited it. Any fool knows that the ground does not move. If anything, the fixity of the ground is what allows us to judge movement. Certainly the possibility that the earth moved was an ancient posit, but lacking evidence to the contrary, it could be little more than philosophical fancy. Only the slow accumulation of information allowed us to reconceive the ‘motionless earth’ as an artifact of ignorance, as something that only the absence of information could render obvious. Geocentrism is the product of a perspectival illusion, plain and simple, the fact that we literally stood too close to the earth to comprehend what the earth in fact was.

We stand even closer to consciousness—so close as to be coextensive! Nonetheless, a good number of very intelligent people insist on taking (some version of) consciousness as we intuit it to be the primary explanandum of consciousness research. Given his ‘law’ (“We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (79)), Dehaene is duly skeptical. He is a scientific reductionist, after all. So with reference to David Chalmers’ ‘hard problem’ of consciousness, we find him writing:

“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate.” 262

Referencing the way modern molecular biology has overthrown vitalism, he continues:

“Likewise, the science of consciousness will keep eating away at the hard problem until it vanishes. For instance, current models of visual perception already explain not only why the human brain suffers from a variety of visual illusions but also why such illusions would appear in any rational machine confronted with the same computational problem. The science of consciousness already explains significant chunks of our subjective experience, and I see no obvious limits to this approach.” 262

I agree entirely. The intuitions underwriting the so-called ‘hard problem’ are perspectival artifacts. As in the case of geocentrism, our cognitive systems stand entirely too close to consciousness to not run afoul of a number of profound illusions. And I think Dehaene, not unlike Galileo, is using the ‘Dutch Spyglass’ afforded by masking and attention paradigms to accumulate the information required to overcome those illusions. I just think he remains, despite his intellectual scruples, a residual hostage of the selfsame intuitions he is bent on helping us overcome.

Dehaene only needs to think through the consequences of GNWT as it stands. So when he continues to discuss other ‘hail Mary’ attempts (those of Eccles and Penrose) to find some positive account of consciousness as it appears, writing that “the intuition that our mind chooses its actions ‘at will’ begs for an explanation” (263), I’m inclined to think he already possesses the resources to advance such an explanation. He just needs to look at his own findings in a different way.

Consider the synoptic and sequential nature of what Dehaene calls ‘ignition,’ the becoming conscious of some nonconscious interpretation. The synoptic nature of ignition, the fact that consciousness merely samples interpretations, means that consciousness is radically privative, that every instance of selection involves massive neglect. The sequential nature of ignition, on the other hand, the fact that the becoming conscious of any interpretation precludes the becoming conscious of another interpretation, means that each moment of consciousness is an all or nothing affair. As I hope to show, these two characteristics possess profound implications when applied to the question of human metacognitive capacity—which is to say, our capacity to intuit our own makeup.

Dehaene actually has very little to say regarding self-consciousness and metacognition in Consciousness and the Brain, aside from speculating on the enabling role played by language. Where other mammalian species clearly seem to possess metacognitive capacity, it seems restricted to the second-order estimation of the reliability of their first-order estimations. They lack “the potential infinity of concepts that a recursive language affords” (252). He provides an inventory of the anatomical differences between primates and other mammals, such as specialized ‘broadcast neurons,’ and between humans and their closest primate kin, such as the size of the dendritic trees possessed by human prefrontal neurons. As he writes:

“All these adaptations point to the same evolutionary trend. During hominization, the networks of our prefrontal cortex grew denser and denser, to a larger extent than would be predicted by brain size alone. Our workspace circuits expanded way beyond proportion, but this increase is probably just the tip of the iceberg. We are more than just primates with larger brains. I would not be surprised if, in the coming years, cognitive neuroscientists find that the human brain possesses unique microcircuits that give it access to a new level of recursive, language-like operations.” 253

Presuming the remainder of the ‘iceberg’ does not overthrow Dehaene’s workspace paradigm, however, it seems safe to assume that our metacognitive machinery feeds from the same informational trough, that it is simply one among the many consumers of the information broadcast in conscious ignition. The ‘information horizon’ of the Workspace, in other words, is the information horizon of conscious metacognition. This would be why our capacity to report seems to be coextensive with our capacity to consciously metacognize: the information we can report constitutes the sum of information available for reflective problem-solving.

So consider the problem of a human brain attempting to consciously cognize the origins of its own activity—for the purposes of reporting to other brains, say. The first thing to note is that the actual, neurobiological origins of that activity are entirely unavailable. Since only information that ignites is broadcast, only information that ignites is available. The synoptic nature of the information ignited renders the astronomical complexities of ignition inaccessible to conscious access. Even more profoundly, the serial nature of ignition suggests that consciousness, in a strange sense, is always too late. Information pertaining to ignition can never be processed for ignition. This is why so much careful experimentation is required, why our intuitions are ‘ill-defined,’ why ‘most mental operations are opaque.’ The neurofunctional context of the workspace is something that lies outside the capacity of the workspace to access.

This explains the out-and-out inevitability of what I called ‘ulterior functionalism’ above: the information ignited constitutes the sum of the information available for conscious metacognition. Whenever we interrogate the origins of our conscious episodes, reflection only has our working memory of prior conscious episodes to go on. This suggests something as obvious as it is counterintuitive: that conscious metacognition should suffer a profound form of source blindness. Whenever conscious metacognition searches for the origins of its own activity, it finds only itself.

Free will, in other words, is a metacognitive illusion arising out of the structure of the global neuronal workspace, one that, while perhaps not appearing “in any rational machine confronted with the same computational problem” (262), would appear in any conscious system possessing the same structural features as the global neuronal workspace. The situation is almost directly analogous to the situation faced by our ancestors before Galileo. Absent any information regarding the actual celestial mechanics of the earth, the default assumption is that the earth has no such mechanics. Likewise, absent any information regarding the actual neural mechanics of consciousness, the default assumption is that consciousness also has no such mechanics.

But free will is simply one of many problems pertaining to our metacognitive intuitions. According to the Blind Brain Theory of the Appearance of Consciousness, a great number of the ancient and modern perplexities can be likewise explained in terms of metacognitive neglect, attributed to the fact that the structure and dynamics of the workspace render the workspace effectively blind to its own structure and dynamics. Taken together with Dehaene’s Global Neuronal Workspace Theory of Consciousness, BBT can explain away the ‘ill-defined intuitions’ that underwrite attributions of some extraordinary irreducibility to conscious phenomena.

On BBT, the myriad structural peculiarities that theologians and philosophers have historically attributed to the first person are perspectival illusions, artifacts of neglect—things that seem obvious only so long as we remain ignorant of the actual mechanics involved (See, “Cognition Obscura“). Our prescientific conception of ourselves is radically delusional, and the kind of counterintuitive findings Dehaene uses to patiently develop and explain GNWT are simply what we should expect. Noocentrism is as doomed as was geocentrism. Our prescientific image of ourselves is as blinkered as our prescientific image of the world, a possibility which should, perhaps, come as no surprise. We are simply another pocket of the natural world, after all.

But the overthrow of noocentrism is bound to generate even more controversy than the overthrow of geocentrism or biocentrism, given that so much of our self and social understanding relies upon this prescientific image. Perhaps we should all lay awake at night, pondering our pondering…

Leaving It Implicit

by rsbakker

Since the aim of philosophy is not “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” with as little information as possible, I thought it worthwhile to take another run at the instinct to raise firewalls about certain discourses, to somehow immunize them from the plague of scientific information to come. I urge anyone disagreeing to sound off, to explain to me how it’s possible to assert the irrelevance of any empirical discovery in advance, because I am duly mystified. On the one hand, we have these controversial sketches regarding the nature of meaning and normativity, and on the other we have the most complicated mechanism known, the human brain. And learning the latter isn’t going to revolutionize the former?

Of course it is. We are legion, a myriad of subpersonal heuristic systems that we cannot intuit as such. We have no inkling of when we swap between heuristics and so labour under the illusion of cognitive continuity. We have no inkling as to the specific problem-ecologies our heuristics are adapted to and so labour under the illusion of cognitive universality. We are, quite literally, blind to the astronomical complexity of what we are and what we do. I’ve spent these past 18 months on TPB brain-storming novel ways to conceptualize this blindness, and how we might see the controversies and conundrums of traditional philosophy as its expression.

Say that consciousness accompanies/facilitates/enables a disposition to ‘juggle’ cognitive resources, to creatively misapply heuristics in the discovery of exaptive problem ecologies. Traditional philosophy, you might say, represents the institutionalization of this creative misapplication, the ritualized ‘making problematic’ ourselves and our environments. As an exercise in serial misapplication, one must assume (as indeed every individual philosophy does) that the vast bulk of philosophy solves nothing whatsoever. But if one thinks, as I do, that philosophy was a necessary condition of science and democracy, then the obvious, local futility of the philosophical enterprise would seem to be globally redeemed. Thinkers are tinkers, and philosophy is a grand workshop: while the vast majority of the gadgets produced will be relegated to the dustbin, those few that go retail can have dramatic repercussions.

Of course, the hubris is there staring each and every one of us in the face, though its universality renders it almost invisible. To the extent that we agree with ourselves, we all assume we’ve won the Magical Belief Lottery—the conviction, modest or grand, that this gadget here will be the one that reprograms the future.

I’m going to call my collection of contending gadgets, ‘progressive naturalism,’ or more simply, pronaturalism. It is progressive insofar as it attempts to continue the project of disenchantment, to continue the trend of replacing traditional intentional understanding with mechanical understanding. It is naturalistic insofar as it pilfers as much information and as many of its gadgets from natural science as it can.

So from a mechanical problem-solving perspective, words are spoken and actions… simply ensue. Given the systematicity of the ensuing actions, the fact that one can reliably predict the actions that typically follow certain utterances, it seems clear that some kind of constraint is required. Given the utter inaccessibility of the actual biomechanics involved, those constraints need to be conceived in different terms. Since the beginning of philosophy, normativity has been the time-honoured alternative. Rather than positing causes, we attribute reasons to explain the behaviour of others. Say you shout “Duck!” to your golf partner. If he fails to duck and turns to you quizzically instead, you would be inclined to think him incompetent, to say something like, “When I say ‘Duck!’ I mean ‘Duck!’”

From a mechanical perspective, in other words, normativity is our way of getting around the inaccessibility of what is actually going on. Normativity names a family of heuristic tools, gadgets that solve problems absent biomechanical information. Normative cognition, in other words, is a biomechanical way of getting around the absence of biomechanical information.

What else would it be?

From a normative perspective, however, the biomechanical does not seem to exist, at least at the level of expression. This is no coincidence, given that normative heuristics systematically neglect otherwise relevant biomechanical information. Nor is the manifest incompatibility between the normative and biomechanical perspectives any coincidence: as a way to solve problems absent mechanical information, normative cognition will only reliably function in those problem ecologies lacking that information. Information formatted for mechanical cognition simply ‘does not compute.’

From a normative perspective, in other words, the ‘normative’ is bound to seem both ontologically distinct and functionally independent vis-à-vis the mechanical. And indeed, once one begins taking a census of the normative terms used in biomechanical explanations, it begins to seem clear that normativity is not only distinct and independent, but that it comes first, that it is, to adopt the occult term normalized by the tradition, ‘a priori.’

From the mechanical perspective, these are natural mistakes to make given that mechanical information systematically eludes theoretical metacognition as well. As I said, we are blind to the astronomical complexities of what we are and what we do. Whenever a normative philosopher attempts to ‘make explicit’ our implicit sayings and doings they are banking on the information and cognitive resources they happen to have available. They have no inkling that they’re relying on any heuristics at all, let alone a variety of them, let alone any clear sense of the narrow problem-ecologies they are adapted to solve. They are at best groping their way to a possible solution in the absence of any information pertaining to what they are actually doing.

From the mechanical perspective, in other words, the normative philosopher has only the murkiest idea of what’s going on. They theorize ‘takings as’ and ‘rules’ and ‘commitments’ and ‘entitlements’ and ‘uses’—they develop their theoretical vocabulary—absent any mechanical information, which is to say, absent the information underwriting the most reliable form of theoretical cognition humanity has ever achieved.

The normative philosopher is now in a bind. Given that the development of their theoretical vocabulary turns on the absence of mechanical information, they have no way of asserting that what they are ‘making explicit’ is not actually mechanical. If the normativity of the normative is not given, then the normative philosopher simply cannot assume normative closure, that the use of normative terms—such as ‘use’—implicitly commits any user to any kind of theoretical normative realism, let alone this or that one. This is the article of faith I encounter most regularly in my debates with normative types: that I have to be buying into their picture somehow, somewhere. My first-order use of ‘use’ no more commits me to any second-order interpretation of the ‘meaning of use’ as something essentially normative than uttering the Lord’s name in vain commits me to Christianity. The normative philosopher’s inability to imagine how it could be otherwise certainly commits me to nothing. Evolution has given me all these great, normative gadgets—I would be an idiot not to use them! But please, if you want to convince me that these gadgets aren’t gadgets at all, that they are something radically different from anything in nature, then you’re going to have to tell me how and why.

It’s just foot-stomping otherwise.

And this is where I think the bind becomes a garrotte, because the question becomes one of just how the normative philosopher could press their case. If they say their theoretical vocabulary is merely ‘functional,’ a way to describe actual functions at a ‘certain level’ you simply have to ask them to evidence this supposed ‘actuality.’ How can you be sure that your ‘functions’ aren’t, as Craver and Piccinini would argue, ‘mechanism sketches,’ ways to rough out what is actually going on absent the information required to know what’s actually going on? It is a fact that we are blind to the astronomical complexity of what we are and what we do: How do you know if the rope you keep talking about isn’t actually an elephant’s tail?

The normative philosopher simply cannot presume the sufficiency of the information at their disposal. On the one hand, the first-order efficacy of the target vocabulary in no way attests to the accuracy of their second-order regimentations: our ‘mindreading’ heuristics were selected precisely because they were efficacious. The same can be said of logic or any other apparently ‘irreducibly normative’ family of formal problem-solving procedures. Given the relative ease with which these procedures can be mechanically implemented in a simple register system, it’s hard to understand how the normative philosopher can insist they are obviously ‘intrinsically normative.’ Is it simply a coincidence that our brains are also mechanical? Perhaps it is simply our metacognitive myopia, our (obvious) inability to intuit the mechanical complexity of the brain buzzing behind our eyeballs, that leads us to characterize them as such. This would explain the utter lack of second-order, theoretical consensus regarding the nature of these apparently ‘formal’ problem solving systems. Regardless, the efficacy of normative terms in everyday contexts no more substantiates any philosophical account of normativity than the efficacy of mathematics substantiates any given philosophy of mathematics.
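The claim that formal inference procedures are easily implemented mechanically can be made concrete with a toy sketch (mine, not the post’s; the names and the encoding are illustrative assumptions). Nothing in this procedure ‘grasps’ a rule or ‘undertakes’ a commitment; it is blind symbol manipulation, yet it reliably yields valid inferences:

```python
# A toy illustration (not from the post): modus ponens as pure, 'mindless'
# symbol manipulation. Facts are strings; conditionals are ('if', p, q) tuples.

def forward_chain(premises):
    """Repeatedly apply modus ponens until no new facts are derivable."""
    facts = {p for p in premises if isinstance(p, str)}
    rules = [p for p in premises if isinstance(p, tuple)]
    changed = True
    while changed:
        changed = False
        for _, antecedent, consequent in rules:
            # If we 'have' p and 'have' (if p then q), mechanically add q.
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

derived = forward_chain({"rain", ("if", "rain", "wet"), ("if", "wet", "slippery")})
print(sorted(derived))  # ['rain', 'slippery', 'wet']
```

The point is only that the procedure’s reliability requires no ‘intrinsically normative’ ingredient at the level of implementation, whatever story one ultimately tells about it at the second order.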

Normative intuitions, on the other hand, are equally useless. If ‘feeling right’ had anything but a treacherous relationship with ‘being right,’ we wouldn’t be having this conversation. Not only are we blind to the astronomical complexities of what we are and what we do, we’re blind to this blindness as well! Like Plato’s prisoners, normative philosophers could be shackled to a play of shadows, convinced they see everything they need to see simply for want of information otherwise.

But aside from intuition (or whatever it is that disposes us to affirm certain ‘inferences’ more than others), just what does inform normative theoretical vocabularies?

Good question!

From the mechanical perspective, normative cognition involves the application of specialized heuristics in specialized problem-ecologies—ways we’ve evolved (and learned) to muddle through our own mad complexities. When I utter ‘use’ I’m deploying something mechanical, a gadget that allows me to breeze past the fact of my mechanical blindness and to nevertheless ‘cognize’ given that the gadget and the problem ecologies are properly matched. Moreover, since I understand that ‘use,’ like ‘meaning,’ is a gadget, I know better than to hope that second-order applications of this and other related gadgets to philosophical problem-ecologies will solve much of anything—that is, unless your problem happens to be filling lecture time!

So when Brandom writes, for instance, “What we could call semantic pragmatism is the view that the only explanation there could be for how a given meaning gets associated with a vocabulary is to be found in the use of that vocabulary…” (Extending the Project of Analysis, 11), I hear the claim that the heuristic misapplications characteristic of traditional semantic philosophy can only be resolved via the heuristic misapplications characteristic of traditional pragmatic philosophy. We know that normative cognition is profoundly heuristic. We know that heuristics possess problem ecologies, that they are only effective in parochial contexts. Given this, the burning question for any project like Brandom’s has to be whether the heuristics he deploys are even remotely capable of solving the problems he tackles.

One would think this is a pretty straightforward question deserving a straightforward answer—and yet, whenever I raise it, it’s either passed over in silence or I’m told that it doesn’t apply, that it runs roughshod over some kind of magically impermeable divide. Most recently I was told that my account refuses to recognize that we have ‘perfectly good descriptions’ of things like mathematical proof procedures, which, since they can be instantiated in a variety of mechanisms, must be considered independently of mechanism.

Do we have perfectly good descriptions of mathematical proof procedures? This is news to me! Every time I dip my toe in the philosophy of mathematics I’m amazed by the florid diversity of incompatible theoretical interpretations. In fact, it seems pretty clear that we have no consensus-compelling idea of what mathematics is.

Does the fact that various functions can be realized in a variety of different mechanisms mean that those functions must be considered independently of mechanism altogether? Again, this is news to me. As convenient as it is to pluck apparently identical functions from a multiplicity of different mechanisms in certain problem contexts, it simply does not follow that one must do the same for all problem contexts. For one, how do we know we’ve got those functions right? Perhaps the granularity of the information available occludes a myriad of functional differences. Consider money: despite being a prototypical ‘virtual machine’ (as Dennett calls it in his latest book), there can be little doubt that the mechanistic details of its instantiation have a drastic impact on its function. The kinds of computerized nanosecond transactions now beginning to dominate financial markets could make us pine for good old ‘paper changing hands’ days soon enough. Or consider normativity: perhaps our blindness to the heuristic specificity of normative cognition has led us to theoretically misconstrue its function altogether. There’s gotta be some reason why no one seems to agree. Perhaps mathematics baffles us simply because we cannot intuit how it is instantiated in the human machine! We like to think, for instance, that the atemporal systematicity of mathematics is what makes it so effective—but how do we know this isn’t just another ‘noocentric’ conceit? After all, we have no way of knowing what function our conscious awareness of mathematical cognition plays in mathematical cognition more generally. All that seems certain is that it is not the whole story. Perhaps our apparently all-important ‘abstractions’ are better conceived as low-dimensional shadows of what is actually going on.
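The worry that functional identity can occlude mechanistic differences can be illustrated with a toy sketch (mine, not the post’s; the function names are illustrative assumptions). Two ‘realizations’ of the same abstract function—summing a list of values—agree at coarse grain, yet the details of instantiation deliver different verdicts once the granularity of the question changes:

```python
# A toy illustration (not from the post): multiple realizability with a catch.
# Both functions 'realize' summation, but the mechanistic substrate
# (floating point vs exact rational arithmetic) leaks into the result.

from fractions import Fraction

def sum_float(values):
    # Realization 1: IEEE-754 floating point—fast, but rounding errors accumulate.
    total = 0.0
    for v in values:
        total += v
    return total

def sum_exact(values):
    # Realization 2: exact rational arithmetic—slower, but lossless.
    total = Fraction(0)
    for v in values:
        total += Fraction(v).limit_denominator(10**6)
    return float(total)

values = [0.1] * 10
print(sum_float(values) == 1.0)  # False: accumulated rounding error
print(sum_exact(values) == 1.0)  # True: the 'same' function, different verdict
```

At the grain where we pluck ‘the sum’ from both mechanisms they are one function; at a finer grain they are not—which is the money example in miniature.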

And all this is just to say that normativity, even in its most imposing, formal guises, isn’t something magical. It is an evolved capacity to solve specific problems given limited resources. It is natural—not normative. As a natural feature of human cognition, it is simply another object of ongoing scientific inquiry. As another object of ongoing scientific inquiry, we should expect our traditional understanding to be revolutionized, that positions such as ‘inferentialism’ will come to sound every bit as prescientific as they in fact are. To crib a conceit of Feynman’s: the more we learn, the more the neural stage seems too big for the normative philosopher’s drama.