Three Pound Brain

No bells, just whistling in the dark…

Enlightenment How? Pinker’s Tutelary Natures

by rsbakker


The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short of understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high-dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker frames the process of Enlightenment, the movement away from our tutelary natures, in terms of a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.


The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on the revolutionizing of our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions endlessly disputed; nevertheless, humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another, it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
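
To make the ‘fast and frugal’ picture concrete, here is a minimal illustrative sketch (the function, cue names, and toy data are mine, not Todd and Gigerenzer’s): a take-the-best style comparison that consults a short, validity-ordered list of behavioural cues, stops at the first cue that discriminates, and never consults anything ‘biomechanical’ underneath.

```python
# Hypothetical sketch of a 'fast and frugal' cue heuristic (illustrative only).
# It judges which of two agents better fits some category by checking a short,
# validity-ordered list of behavioural cues and stopping at the first cue that
# discriminates; no model of the underlying mechanisms is ever consulted.

def take_the_best(option_a, option_b, cues):
    """Return 'A' or 'B' according to the first discriminating cue."""
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # first discriminating cue decides
            return "A" if a > b else "B"
    return "no decision"                # cues exhausted: guess or defer

# Toy usage: judging which agent is 'angrier' from surface cues alone,
# while remaining blind to the neurophysiology actually responsible.
cues = ["raised_voice", "clenched_fists", "flushed_face"]
agent_a = {"raised_voice": 1, "clenched_fists": 0, "flushed_face": 0}
agent_b = {"raised_voice": 1, "clenched_fists": 1, "flushed_face": 0}
print(take_the_best(agent_a, agent_b, cues))   # -> "B"
```

The point of the toy is simply the shape of the solution: a handful of cues, an early exit, and total neglect of the system actually generating the behaviour.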

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.
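
For readers who want the inference spelled out, here is one compact way to notate it (the notation is mine, not the post’s): let V* be the set of invariances intentional cognition requires, and let V_t be the invariances still intact at time t.

```latex
\text{(1)}\quad \text{Intentional cognition functions at } t \text{ only if } V^{*} \subseteq V_t \\
\text{(2)}\quad \text{Progress is continual: for every } t \text{ there is a later } t' \text{ with } V_{t'} \subsetneq V_t \\
\text{(3)}\quad \text{If the erosion is unrestricted, some } T \text{ arrives with } V^{*} \not\subseteq V_T
```

Stated this way, the load-bearing assumption is visible: the transformations asserted in (2) must eventually reach the invariances named in (1).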

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it to crash, generating more problems than solutions.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deirdre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue). So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011) that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remains outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.



Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012

Barrett, Deirdre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behaviour,’ in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012 ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–


Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)



Meta-problem vs. Scandal of Self-Understanding

by rsbakker

Let’s go back to Square One.

Try to recall what it was like before what it was like became an issue for you. Remember, if you can, a time when you had yet to reflect on the bald fact, let alone the confounding features, of experience. Square One refers to the state of metacognitive naivete, what it was like when experience was an exclusively practical concern, and not at all a theoretical one.

David Chalmers has a new paper examining the ‘meta-problem’ of consciousness, the question of why we find consciousness so difficult to fathom. As in his watershed “Consciousness and Its Place in Nature,” he sets out to exhaustively map the dialectical and evidential terrain before adducing arguments. After cataloguing the kinds of intuitions underwriting the meta-problem he pays particularly close attention to various positions within illusionism, insofar as these theories see the hard problem as an artifact of the meta-problem. He ends by attempting to collapse all illusionisms into strong illusionism—the thesis that consciousness doesn’t exist—which he thinks is an obvious reductio.

As Peter Hankins points out in his canny Conscious Entities post on the article, the relation between problem reports and consciousness is so vexed as to drag meta-problem approaches back into the traditional speculative mire. But there’s a bigger problem with Chalmers’ account of the meta-problem: it’s far too small. The meta-problem, I hope to show, is part and parcel of the scandal of self-knowledge, the fact that every discursive cork in Square Two, no matter how socially or individually indispensable, bobs upon the foam of philosophical disputation. The real question, the one our species takes for granted but alien anthropologists would find fascinating, is why do humans find themselves so dumbfounding? Why does normativity mystify us? Why does meaning stupefy? And, of course, why is phenomenality so inscrutable?

Chalmers, however, wants you to believe the problem is restricted to phenomenality:

I have occasionally heard the suggestion that internal self-models will inevitably produce problem intuitions, but this seem[s] clearly false. We represent our own beliefs (such as my belief that Canberra is in Australia), but these representations do not typically go along with problem intuitions or anything like them. While there are interesting philosophical issues about explaining beliefs, they do not seem to raise the same acute problem intuitions as do experiences.

and yet in the course of cataloguing various aspects of the meta-problem, Chalmers regularly finds himself referring to similarities between beliefs and consciousness.

Likewise, when I introspect my beliefs, they certainly do not seem physical, but they also do not seem nonphysical in the way that consciousness does. Something special is going on in the consciousness case: insofar as consciousness seems nonphysical, this seeming itself needs to be explained.

Both cognition and consciousness seem nonphysical, but not in the same way. Consciousness, Chalmers claims, is especially nonphysical. But if we don’t understand the ‘plain’ nonphysicality of beliefs, then why tackle the special nonphysicality of conscious experience?

Here the familiar problem strikes again: Everything I have said about the case of perception also applies to the case of belief. When a system introspects its own beliefs, it will typically do so directly, without access to further reasons for thinking it has those beliefs. Nevertheless, our beliefs do not generate nearly as strong problem intuitions as our phenomenal experiences do. So more is needed to diagnose what is special about the phenomenal case.

If more is needed, then what sense does it make to begin looking for this ‘more’ in advance, without understanding what knowledge and experience have in common?

Interrogating the problem of intentionality and consciousness in tandem becomes even more imperative when we consider the degree to which Chalmers’ categorizations and evaluations turn on intentional vocabularies. The hard problem of consciousness may trigger more dramatic ‘problem intuitions,’ but it shares with the hard problem of cognition a profound inability to formulate explananda. There’s no more consensus on the nature of belief than there is on the nature of consciousness. We remain every bit as stumped, if not quite as agog.

Not only do intentional vocabularies remain every bit as controversial as phenomenal ones in theoretical explanatory contexts, they also share the same apparent incompatibilities with natural explanation. Is it a coincidence that both vocabularies seem irreducible? Is it a coincidence they both seem nonphysical? Is it a coincidence that both seem incompatible with causal explanation? Is it a coincidence that each implicates the other?

Of course not. They implicate each other because they’re adapted to function in concert. Since they function in concert, there’s a good chance their shared antipathy to causal explanation turns on shared mechanisms. The same can be said regarding their apparent irreducible nonphysicality.

And the same can be said of the problem they pose.

Square Two, then, our theoretical self-understanding, is mired in theoretical disputation. Every philosopher (the present one included) will be inclined to think their understanding the exception, but this does nothing to change the fact of disputation. If we characterize the space of theoretical self-understanding—Square Two—as a general controversy space, we see that Chalmers, as an intentionalist, has taken a position in intentional controversy space to explicate phenomenal controversy space.

Consider his preferred account of the meta-problem:

To sum up what I see as the most promising approach: we have introspective models deploying introspective concepts of our internal states that are largely independent of our physical concepts. These concepts are introspectively opaque, not revealing any of the underlying physical or computational mechanisms. We simply find ourselves in certain internal states without having any more basic evidence for this. Our perceptual models perceptually attribute primitive perceptual qualities to the world, and our introspective models attribute primitive mental relations to those qualities. These models produce the sense of acquaintance both with those qualities and with our awareness of those qualities.

While the gist of this picture points in the right direction, the posits used—representations, concepts, beliefs, attributions, acquaintances, awarenesses—doom it to dwell in perpetual underdetermination, which is to say, discursive ground friendly to realists like Chalmers. It structures the meta-problem according to a parochial rationalization of terms no one can decisively formulate, let alone explain. It is assured, in other words, to drag the meta-problem into the greater scandal of self-knowledge.

To understand why Square Two has proven so problematic in general, one needs to take a step back, to relinquish their countless Square Two prejudices, and reconsider things from the standpoint of biology. Why, biologically speaking, should an organism find cognizing itself so difficult? Not only is this the most general form of the question that Chalmers takes himself to be asking, it is posed from a position outside the difficulty it interrogates.

The obvious answer is that biology, and cognitive biology especially, is so fiendishly complicated. The complexity of biology all but assures that cognition will neglect biology and fasten on correlations between ‘surface irritations’ and biological behaviours. Why, for instance, should a frog cognize fly biology when it need only strike at black dots?

The same goes for metacognitive capacities: Why metacognize brain biology when we need only hold our tongue at dinner, figure out what went wrong with the ambush, explain what happened to the elders, and so on? On any plausible empirical story, metacognition consists in an opportunistic array of heuristic systems possessing the access and capacity to solve various specialized domains. The complexity of the brain all but assures as much. Given the intractability of the processes monitored, metacognitive consumers remain ‘source insensitive’—they solve absent any sensitivity to underlying systems. As need-to-know consumers adapted to solving practical problems in ancestral contexts, we should expect retasking those capacities to the general problem of ourselves would prove problematic. As indeed it has. Our metacognitive insensitivity, after all, extends to that insensitivity itself: we are all but oblivious to the source-insensitive, heuristic nature of metacognition.

And this provides biological grounds to predict the kinds of problems such retasking might generate; it provides an elegant, scientifically tractable way to understand a great number of the problems plaguing human self-knowledge.


We should expect metacognitive (and sociocognitive) application problems. Given that metacognition neglects the heuristic limits of metacognition, all novel applications of metacognitive capacities to new problem ecologies (such as those devised by the ancient Greeks) run the risk of misapplication. Imagine rebuilding an engine with invisible tools. Metacognitive neglect assures that trial-and-error provides our only means of sorting between felicitous and infelicitous applications.

We should expect incompatibility with source-sensitive modes of cognition. Source-insensitive cognitive systems are primed to solve via information ecologies that systematically neglect the actual systems responsible. We rely on robust correlations between the signal available and the future behaviour of the system requiring solution (‘clues,’ some heuristic researchers call them). The ancestral integration of source-sensitive and source-insensitive cognitive modes (as in narrative, say, which combines intentional and causal cognition) assures at best specialized linkages. Beyond these points of contact, the modes will be incompatible given the specificity of the information consumed in source-insensitive systems.

We should expect to suffer illusions of sufficiency. Given the dependence of all cognitive systems on the sufficiency of upstream processing for downstream success, we should expect insensitivity to metacognitive insufficiency to result in presumptive sufficiency. Systems don’t need a second set of systems monitoring the sufficiency of every primary system to function: sufficiency is the default. Metacognitive capacities retasked to theoretical problems, we can presume, deploy as sufficient despite almost certainly being insufficient. This can be seen as a generalization of WYSIATI, or ‘what-you-see-is-all-there-is,’ the principle Daniel Kahneman uses to illustrate how certain heuristic mechanisms do not discriminate between sufficient and insufficient information.

We should expect to suffer illusions of simplicity (or identity effects). Given metacognitive insensitivity to its own insensitivity, metacognition remains blind to artifacts of that insensitivity as artifacts. The absence of distinction will be intuited as simplicity. Flicker-fusion as demonstrated in psychophysics almost certainly possesses cognitive and metacognitive analogues, instances where the lack of distinction reports as identity or simplicity. The history of science is replete with examples of mistaking artifacts of information poverty for properties of nature. The small was simple prior to the microscope and the discovery of endless subvisibilia. The heavens consisted of spheres.

We should expect to suffer illusions of free-floating efficacy. The ancestral integration of source-insensitive and source-sensitive cognition underwrites fetishism, the cognition of sources possessing no proximal sources. In his cognitive development research, Andrei Cimpian calls these ‘inherence heuristics,’ where, in ignorance of extrinsic factors, we impute an intrinsic efficacy to cognize/communicate local effects. We are hardwired to fetishize.

We should expect to suffer entrenched only-game-in-town effects. In countless contexts, ignorance of alternatives fools individuals into thinking their path necessary. This is why Kant, who had no inkling of the interpretive jungle to come, thought he had stumbled across a genuine synthetic a priori science. Given metacognitive insensitivity to its insensitivity, the biological parochialism of source-insensitive cognition is only manifest in applications. Once detected, neglect assures the distinctiveness of source-insensitive cognition will seem absolute, lending itself to reports of autonomy. So where Kant ran afoul of the only-game-in-town effect in declaring his discourse apodictic, he also ran afoul of a biologically entrenched version of the same effect in declaring cognition transcendental.

We should expect misfires will be systematic. Generally speaking, rules of thumb do not cease being rulish when misapplied. Heuristic breakdowns are generally systematic. Where the system isn’t crashed altogether, the consequences of mistakes will be structured and iterable. This predictability allows certain heuristic breakdowns to become valuable tools. The Pleistocene discovery that applying pigments to surfaces could cue the (cartoon) visual cognition of nearly anything examples one particularly powerful instrumentalization of heuristic systematicity. Metacognition is no different than visual cognition in this regard: like visual heuristics, cognitive heuristics generate systematic ‘illusions’ admitting, in some cases, genuine instrumentalizations (things like ‘representations’ and functional analyses in empirical psychology), but typically generating only disputation otherwise.

We should expect to suffer performative interference-effects (breakdowns in ‘meta-irrelevance’). The intractability of the enabling axis of cognition, the inevitability of medial neglect, forces the system to presume its cognitive sufficiency. As a result, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning is not required to solve whatever the problem at hand. Nonhuman cognizers, for instance, are comparatively reliant on the sufficiency of their cognitive apparatus: they can’t, like us, raise a finger and say, ‘On second thought,’ or visit the doctor, or lay off the weed, or argue with their partner. Humans possess a plethora of hacks, heuristic ways to manage cognitive shortcomings. Nevertheless, the closer our metacognitive tools come to ongoing, enabling access—the this-very-moment-now of cognition—the more regularly they will crash, insofar as these too require meta-irrelevance.

We should expect chronic underdetermination. Metacognitive resources adapted to the solution of ancestral practical problems have no hope of solving for the nature of experience or cognition.

We should expect ontological confusion. As mentioned, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning is not required to solve whatever the problem at hand. Metacognitive resources retasked to solve for these systems flounder, then begin systematically confusing artifacts of medial neglect for the dumbfounding explananda of cognition and experience. Missing dimensions are folded into neglect, and metacognition reports these insufficiencies as sufficient. Source insensitivity becomes source independence. Complexity becomes simplicity. Only a second ‘autonomous’ ontology will do.


Floridi’s Plea for Intentionalism

by rsbakker


Questioning Questions

Intentionalism presumes that intentional modes of cognition can solve for intentional modes of cognition, that intentional vocabularies, and intentional vocabularies alone, can fund bona fide theoretical understanding of intentional phenomena. But can they? What evidences their theoretical efficacy? What, if anything, does biology have to say?

No one denies the enormous practical power of those vocabularies. And yet, the fact remains that, as a theoretical explanatory tool, they invariably deliver us to disputation—philosophy. To rehearse my favourite William Uttal quote: “There is probably nothing that divides psychologists of all stripes more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (The New Phrenology, p.90).

In his “A Plea for Non-naturalism as Constructionism,” Luciano Floridi undertakes a comprehensive revaluation of this philosophical and cognitive scientific inability to decisively formulate, let alone explain, intentional phenomena. He begins with a quote from Quine’s seminal “Epistemology Naturalized,” the claim that “[n]aturalism does not repudiate epistemology, but assimilates it to empirical psychology.” Although Floridi entirely agrees that the sciences have relieved philosophy of a great number of questions over the centuries, he disagrees with Quine’s ‘assimilation,’ the notion of naturalism as “another way of talking about the death of philosophy.” Acknowledging that philosophy needs to remain scientifically engaged—naturalistic—does not entail discursive suicide. “Philosophy deals with ultimate questions that are intrinsically open to reasonable and informed disagreement,” Floridi declares. “And these are not “assimilable” to scientific enquiries.”

Ultimate? Reading this, one might assume that Floridi, like so many other thinkers, has some kind of transcendental argument operating in the background. But Floridi is such an exciting philosopher to read precisely because he isn’t ‘like so many other thinkers.’ He hews to intentionalism, true, but he does so in a manner that is uniquely his own.

To understand what he means by ‘ultimate’ in this paper we need to visit another, equally original essay of his, “What is a Philosophical Question?” where he takes an information ‘resource-oriented’ approach to the issue of philosophical questions, “the simple yet very powerful insight that the nature of problems may be fruitfully studied by focusing on the kind of resources required in principle to solve them, rather than on their form, meaning, reference, scope, and relevance.” He focuses on the three kinds of questions revealed by this perspective: questions requiring empirical resources, questions requiring logico-mathematical resources, and questions requiring something else—what he calls ‘open questions.’ Philosophical questions, he thinks, belong to this latter category.

But if open questions admit no exhaustive empirical or formal determination, then why think them meaningful? Why not, as Hume famously advises, consign them to the flames? Because, Floridi argues, they are inescapable. Open questions possess no regress enders: they are ‘closed’ in the set-theoretic sense, which is to say, they are questions whose answers always beget more questions. To declare answers to open questions meaningless or trivial is to answer an open question.
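
The borrowed term is worth unpacking, since ‘open’ questions that are ‘closed’ sounds paradoxical (the gloss is mine, not Floridi’s): a set Q is closed under an operation f just in case applying f to members of Q never takes you outside Q.

```latex
\forall q \in Q:\; f(q) \in Q
```

Read Q as the space of open questions and f as ‘answer, then interrogate the answer’: the operation only ever yields further members of Q, never a regress ender.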

But since not all open questions are philosophical questions, Floridi needs to restrict the scope of his definition. The difference, he thinks, is that philosophical questions “tend to concentrate on more significant and consequential problems.” Philosophical questions, in addition to being open questions, are also ultimate questions, not in any foundational or transcendental sense, but in the sense of casting the most inferential shade across less ultimate matter.

Ultimate questions may be inescapable, as Floridi suggests, but this in no way allays the problem of the resources used to answer them. Why not simply answer them pragmatically, or with a skeptical shrug? Floridi insists that the resources are found in “the world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings,” or what he calls ‘noetic resources,’ the non-empirical, non-formal fund of things that we know. Philosophical questions, in addition to being ultimate, open questions, require noetic resources to be answered.

But all questions, of course, are not equal. Some philosophical problems, after all, are mere pseudo-problems, the product of the right question being asked in the wrong circumstances. Though the ways in which philosophical questions misfire seem manifold, Floridi focusses on a single culprit to distinguish ‘bad’ from ‘good’ philosophical questions: the former, he thinks, overstep their corresponding ‘level of abstraction,’ aspiring to be absolute or unconditioned. Philosophical questions, in addition to being noetic, ultimate, open questions, are also contextually appropriate questions.

Philosophy, then, pertains to questions involving basic matters, lacking decisive empirical or formal resources and so lacking institutional regress enders. Good philosophy, as opposed to bad, is always conditional, which is to say, sensitive to the context of inquiry. It is philosophy in this sense that Floridi thinks lies beyond the pale of Quinean assimilation in “A Plea for Non-naturalism as Constructionism.”

But resistance to assimilation isn’t his only concern. Science, Floridi thinks, is caught in a predicament: as ever more of the universe is dragged from the realm of open, philosophical interrogation into the realm of closed, scientific investigation, the technology enabled by and enabling this creeping closure is progressively artificializing our once natural environments. Floridi writes:

“the increasing and profound technologisation of science is creating a tension between what we try to explain, namely all sorts of realities, and how we explain it, through the highly artificial constructs and devices that frame and support our investigations. Naturalistic explanations are increasingly dependent on non-natural means to reach such explanations.”

This, of course, is the very question at issue between the meaning skeptic and the meaning realist. To make his case, Floridi has to demonstrate how and why the artefactual isn’t simply more nature, every bit as bound by the laws of thermodynamics as everything else in nature. Why think the ‘artificial’ is anything more than (to turn a Hegelian line on its head) ‘nature reborn’? To presume as much would be to beg the question—to run afoul of the very ‘scholasticism’ Floridi criticizes.

Again, he quotes Quine from “Epistemology Naturalized,” this time the famous line reminding us that the question of “how irritations of our sensory surfaces” result in knowledge is itself a scientific question. The absurdity of the assertion, Floridi thinks, is easily assayed by considering the complexity of cognitive and aesthetic artifacts: “by the same reasoning, one should then try to answer the question how Beethoven managed to arrive at his Ode to Joy from the seven-note diatonic musical scale, Leonardo to his Mona Lisa from the three colours in the RGB model, Orson Welles to his Citizen Kane from just black and white, and today any computer multimedia from just zeros and ones.”

The egregious nature of the disanalogies here is indicative of the problem Floridi faces. Quine’s point isn’t that knowledge reduces to sensory irritations, merely that knowledge consists of scientifically tractable physical processes. For all his originality, Floridi finds himself resorting to a standard ‘you-can’t-get-there-from-here’ argument against eliminativism. He even cites the constructive consensus in neuroscience, thinking it evidences the intrinsically artefactual nature of knowledge. But he never explains why the artefactual nature of knowledge—unlike the artefactual nature of, say, a bird’s nest—rules out the empirical assimilation of knowledge. Biology isn’t any less empirical for being productive, so what’s the crucial difference here? At what point does artefactual qua biological become artefactual qua intentional?

Epistemological questions, he asserts, “are not descriptive or scientific, but rather semantic and normative.” But Quine is asking a question about epistemology and whether what we now call cognitive science can exhaustively answer it. As it so happens, the question of epistemology as a natural phenomenon is itself an epistemological question, and as such involves the application of intentional (semantic and normative) cognitive modes. But why think these cognitive modes themselves cannot be empirically described and explained the way, for example, neuroscience has described and explained the artefactual nature of cognition? If artefacts like termite mounds and bird’s nests admit natural explanations, then why not knowledge? Given that he hopes to revive “a classic, foundationalist role for philosophy itself,” this is a question he has got to answer. Philosophers have a long history of attempting to secure the epistemological primacy of their speculation on the back of more speculation. Unless Floridi is content with “an internal ‘discourse’ among equally minded philosophers,” he needs to explain what makes the artifactuality of knowledge intrinsically intentional.

In a sense, one can see his seminal 2010 work, The Philosophy of Information, as an attempt to answer this question, but he punts on the issue here, providing only a reference to his larger theory. Perhaps this is why he characterizes this paper as “a plea for non-naturalism, not an argument for it, let alone a proof or demonstration of it.” Even though the entirety of the paper is given over to arguments inveighing against unrestricted naturalism a la Quine, they all turn on a shared faith in the intrinsic intentionality of cognition.


Reasonably Reiterable Queries

Floridi defines ‘strong naturalism’ as the thesis that all nonnatural phenomena can be reduced to natural phenomena. A strong naturalist believes that all phenomena can be exhaustively explained using only natural vocabularies. The key term, for him, is ‘exhaustively.’ Although some answers to our questions put the matter to bed, others simply leave us scratching our heads. The same applies to naturalistic explanations. Where some reductions are the end of the matter, ‘lossless,’ others are so ‘lossy’ as to explain nothing at all. The latter, he suggests, make it reasonable to reiterate the original query. This, he thinks, provides a way to test any given naturalization of some phenomena, an ‘RRQ’ test. If a reduction warrants repeating the very question it was intended to answer, then we have reason to assume the reduction to be ‘reductive,’ or lossy.

The focus of his test, not surprisingly, is the naturalistic inscrutability of intentional phenomena:

“According to normative (also known as moral or ethical) and semantic non-naturalism, normative and semantic phenomena are not naturalisable because their explanation cannot be provided in a way that appeals exhaustively and non-reductively only to natural phenomena. In both cases, any naturalistic explanation is lossy, in the sense that it is perfectly reasonable to ask again for an explanation, correctly and informatively.”

This failure, he asserts, demonstrates the category mistake of insisting that intentional phenomena be naturalistically explained. In lieu of an argument, he gives us examples. No matter how thorough our natural explanations of immoral photographs might be, one can always ask, Yes, but what makes them immoral (as opposed to socially sanctioned, repulsive, etc.)? Facts simply do not stack into value—Floridi takes himself to be expounding a version of Hume’s and Moore’s point here. The explanation remains ‘lossy’ no matter what our naturalistic explanation. Floridi writes:

“The recalcitrant, residual element that remains unexplained is precisely the all-important element that requires an explanation in the first place. In the end, it is the contribution that the mind makes to the world, and it is up to the mind to explain it, not the world.”

I’ve always admired, even envied, Floridi for the grace and lucidity of his prose. But no matter how artful, a god of the gaps argument is a god of the gaps argument. Failing the RRQ does not entail that only intentional cognition can solve for intentional phenomena.

He acknowledges the problem here: “Admittedly, as one of the anonymous reviewers rightly reminded me, one may object that the recalcitrant, residual elements still in need of explanation may be just the result of our own insipience (understood as the presence of a question without the corresponding relevant and correct answer), perhaps as just a (maybe even only temporary) failure to see that there is merely a false impression of an information deficit (by analogy with a scandal of deduction).” His answer here is to simply apply his test, suggesting the debate, as interminable, merely underscores “an openness to the questioning that the questioning itself keeps open.” I can’t help but think he feels the thorn, at this point. Short reading “What is a Philosophical Question?” this turn in the article would be very difficult to parse. Philosophical questioning, Floridi would say, is ‘closed under questioning,’ which is to say, a process that continually generates more questions. The result is quite ingenious. As with Derridean deconstruction, philosophical problematizations of Floridi’s account of philosophy end up evidencing his account of philosophy by virtue of exhibiting the vulnerability of all guesswork: the lack of regress enders. Rather than committing to any foundation, you commit to a dialectical strategy allowing you to pick yourself up by your own hair.

The problem is that RRQ is far from the domesticated discursive tool that Floridi would have you believe it is. If anything, it provides a novel and useful way to understand the limits of theoretical cognition, not the limits of this or that definition of ‘naturalism.’ RRQ is a great way to determine where the theoretical guesswork in general begins. Nonnaturalism is the province of philosophy for a reason: every single nonnatural answer ever adduced to answer the question of this or that intentional phenomena has failed to close the door on RRQ. Intentional philosophy, such as Floridi’s, possesses no explanatory regress enders—not a one. It is always rational to reiterate the question wherever theoretical applications of intentional cognition are concerned. This is not the case with natural cognition. If RRQ takes a bite out of natural theoretical explanation of apparent intentional phenomena, then it swallows nonnatural cognition whole.

Raising the question, Why bother with theoretical applications of nonnatural cognition at all? Think about it: if every signal received via a given cognitive mode is lossy, why not presume that cognitive mode defective? The successes of natural theoretical cognition—the process of Quinean ‘assimilation’—show us that lossiness typically dwindles with the accumulation of information. No matter how spectacularly our natural accounts of intentional phenomena fail, we need only point out the youth of cognitive science and the astronomical complexities of the systems involved. The failures of natural cognition belong to the process of natural cognition, the rondo of hypothesis and observation. Theoretical applications of intentional cognition, on the other hand, promise only perpetual lossiness, the endless reiteration of questions and uninformative answers.

One can rhetorically embellish endless disputation as discursive plenitude, explanatory stasis as ontological profundity. One can persuasively accuse skeptics of getting things upside down. Or one can speculate on What-Philosophy-Is, insist that philosophy, instead of mapping where our knowledge breaks down (as it does in fact), shows us where this-or-that ‘ultimate’ lies. In “What is a Philosophical Question?” Floridi writes:

“Still, in the long run, evolution in philosophy is measured in terms of accumulation of answers to open questions, answers that remain, by the very nature of the questions they address, open to reasonable disagreement. So those jesting that philosophy has never “solved” any problem but remains for ever stuck in endless debates, that there is no real progress in philosophy, clearly have no idea what philosophy is about. They may as well complain that their favourite restaurant is constantly refining and expanding its menu.”

RRQ says otherwise. According to Floridi’s own test, the problem isn’t that the restaurant is constantly refining and expanding its menu, the problem is that nothing ever makes it out of the kitchen! It’s always sent back by rational questions. Certainly countless breakdowns have found countless sociocognitive uses: philosophy is nothing if not a recombinant mutation machine. But these powerful adaptations of intentional cognition are simply that: powerful adaptations of natural systems originally evolved to solve complex systems on the metabolic cheap. All attempts to use intentional cognition to theorize their (entirely natural) nature end in disputation. Philosophy has yet to theoretically solve any aspect of intentional cognition. And this merely follows from Floridi’s own definition of philosophy—it just cuts against his rhetorical register. In fact, when one takes a closer, empirical look at the resources available, the traditional conceit at the heart of his nonnaturalism quickly becomes clear.


Follow the Money

So, what is it? Why spin a limit, a profound cognitive horizon, into a plenum? Floridi is nothing if not an erudite and subtle thinker, and yet his argument in this paper entirely depends on neglecting to see RRQ for the limit that it is. He does this because he fails to follow through on the question of resources.

For my part, I look at naturalism as a reliance on a particular set of ‘hacks,’ not as any dogma requiring multiple toes scratching multiple lines in the sand.  Reverse-engineering—taking things apart, seeing how they work—just happens to be an extraordinarily powerful approach, at least as far as our high-dimensional (‘physical’) environments are concerned. If we can reverse-engineer intentional phenomena—assimilate epistemology, say, to neuroscience—then so much the better for theoretical cognition (if not humanity). We still rely on unexplained explainers, of course, RRQ still pertains, but the boundaries will have been pushed outward.

Now the astronomical complexity of biology doesn’t simply suggest, it entails that we would find ourselves extraordinarily difficult to reverse-engineer, at least at first. Humans suffer medial neglect, fundamental blindness to the high-dimensional structure and dynamics of cognition. (As Floridi acknowledges in his own consideration of Dretske’s “How Do You Know You are Not a Zombie?” the proximal conditions of experience do not appear within experience (see The Philosophy of Information, chapter 13)). The obvious reason for this turns on the limitations of our tools, both onboard and prosthetic. Our ancestors, for instance, had no choice but to ignore biology altogether, to correlate what ‘sensory irritants’ they had available with this or that reproductively decisive outcome. Everything in the middle, the systems and ecology that enabled this cognitive feat, is consigned to neglect (and doomed to be reified as ‘transparency’). Just consider the boggling resources commanded by the cognitive sciences: until very recently reverse-engineering simply wasn’t a viable cognitive mode, at least when it came to living things.

This is what ‘intentional cognition’ amounts to: the collection of ancestral devices, ‘hacks,’ we use to solve, not only one another, but all supercomplicated systems. Since these hacks are themselves supercomplicated, our ancestors had to rely on them to solve for them. Problems involving intentional cognition, in other words, cue intentional problem-solving systems, not because (cue drumroll) intentional cognition inexplicably outruns the very possibility of reverse-engineering, but because our ancestors had no other means.

Recall Floridi’s ‘noetic resources,’ the “world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings” that underwrites philosophical, as opposed to empirical or formal, answers. It’s no accident that the ‘noetic dimension’ also happens to be the supercomplicated enabling or performative dimension of cognition—the dimension of medial neglect. Whatever ancestral resources we possessed, they comprised heuristic capacities geared to information strategically correlated to the otherwise intractable systems. Ancestrally, noetic resources consisted of the information and metacognitive capacity available to troubleshoot applications of intentional cognitive systems. When our cognitive hacks went wrong, we had only metacognitive hacks to rely on. ‘Noetic resources’ refers to our heuristic capacities to troubleshoot the enabling dimension of cognition while neglecting its astronomical complexity.

So, take Floridi’s example of immoral photographs. The problem he faced, recall, was that “the question why they are immoral can be asked again and again, reasonably” not simply of natural explanations of morality, but nonnatural explanations as well. The RRQ razor cuts both ways.

The reason natural cognition fails to decisively answer moral questions should be pretty clear: moral cognition is radically heuristic, enabling the solution of certain sociocognitive problems absent high-dimensional information required by natural cognition. Far from expressing the ‘mind’s contribution’ (whatever that means), the ‘unexplained residuum’ warranting RRQ evidences the interdependence between cues and circumstance in heuristic cognition, the way the one always requires the other to function. Nothing so incredibly lossy as ‘mind’ is required. This inability to duplicate heuristic cognition, however, has nothing to do with the ability to theorize the nature of moral cognition, which is biological through and through. In fact, an outline of such an answer has just been provided here.

Moral cognition, of course, decisively solves practical moral problems all the time (despite often being fantastically uninformative): our ancestors wouldn’t have evolved the capacity otherwise. Moral cognition fails to decisively answer the theoretical question of morality, on the other hand, because it turns on ancestrally available information geared to the solution of practical problems. Like all the other devices comprising our sociocognitive toolbox, it evolved to derive as much practical problem-solving capacity from as little information as possible. ‘Noetic resources’ are heuristic resources, which is to say, ecological through and through. The deliverances of reflection are deliverances originally adapted to the practical solution of ancestral social and natural environments. Small wonder our semantic and normative theories of semantic and normative phenomena are chronically underdetermined! Imagine trying to smell skeletal structure absent all knowledge of bone.

But then why do we persist? Cognitive reflex. Raising the theoretical question of semantic and normative cognition automatically (unconsciously) cues the application of intentional cognition. Since the supercomplicated structure and dynamics of sociocognition belong to the information it systematically neglects, we intuit only this applicability, and nothing of the specialization. We suffer a ‘soda straw effect,’ a discursive version of Kahneman’s What-you-see-is-all-there-is effect. Intuition tells us it has to be this way, while the deliverances of reflection betray nothing of their parochialism. We quite simply did not evolve the capacity either to intuit our nature or to intuit our inability to intuit our nature, and so we hallucinate something inexplicable as a result. We find ourselves trapped in a kind of discursive anosognosia, continually applying problem-parochial access and capacity to general, theoretical questions regarding the nature of inexplicable-yet-(allegedly)-undeniable semantic and normative phenomena.

This picture is itself open to RRQ, of course, the difference being that the positions taken are all natural, and so open to noise reduction as well. As per Quine’s process of assimilation, the above story provides a cognitive scientific explanation for a very curious kind of philosophical behaviour. Savvy to the ecological limits of noetic resources, it patiently awaits the accumulation of empirical resources to explain them, and so actually has a chance of ending the ancient regress.

The image Floridi chases is a mirage, what happens when our immediate intuitions are so impoverished as to arise without qualification, and so smack of the ‘ultimate.’ Much as the absence of astronomical information duped our ancestors into thinking our world stood outside the order of planets, celestial as opposed to terrestrial, the absence of metacognitive information dupes us into thinking our minds stand outside the order of the world, intentional as opposed to natural. Nothing, it seems, could be more obvious than noocentrism, despite our millennial inability to silence any—any—question regarding the nature of the intentional.

No results found for “scandal of self-knowledge”

by rsbakker

Or so Google tells me as of 1:25PM February 5th, 2018, at least. And this itself, if you think about it, is, well, scandalous. We know how to replicate the sun over thousands of targets scattered across the globe. We know how to destroy an entire world. Just don’t ask us how that knowledge works. We can’t even define our terms, let alone explain their function. All we know is that they work: the rest is all guesswork… mere philosophy.

By the last count provided by Google (in November, 2016), it had indexed some 130,000,000,000,000—that is, one hundred and thirty trillion—unique pages. The idea that no one, in all those documents, would be so struck by our self-ignorance as to call it a scandal is rather amazing, and perhaps telling. We intellectuals are fond of lampooning fundamentalists for believing in ancient mythological narratives, but the fact is we have yet to find any definitive self-understanding to replace those narratives—only countless, endlessly disputed philosophies. We stipulate things, absolutely crucial things, and we like to confuse their pragmatic indispensability for their truth (or worse, necessity), but the fact is, every attempt to explain them ends in more philosophy.

Cognition, whatever it is, possesses a curious feature: we can use it effortlessly enough, successfully solve this or that in countless different circumstances. When it comes to our environments, we can deepen our knowledge as easily as we can take a stroll. And yet when it comes to ourselves, our experiences, our abilities and actions, we quickly run aground. “It is remarkable concerning the operations of the mind,” David Hume writes, “that, though most intimately present to us, yet, whenever they become the object of reflection, they seem involved in obscurity; nor can the eye readily find those lines and boundaries, which discriminate and distinguish them” (Enquiry Concerning Human Understanding, 7).

This cognitive asymmetry is perhaps nowhere more evident than in the ‘language of the universe,’ mathematics. One often encounters extraordinary claims advanced on the nature of mathematics. For instance, the physicist Max Tegmark believes that “our physical world not only is described by mathematics, but that it is mathematical (a mathematical structure), making us self-aware parts of a giant mathematical object.” The thing to remember about all such claims, particularly when encountered in isolation, is that they simply add to the sum of ancient disputation.

In a famous paper presented to the Société de Psychologie in Paris, “Mathematical Creation,” Henri Poincaré describes how the relation between Fuchsian functions and non-Euclidean geometries occurred to him only after fleeing to the seaside, disgusted with his lack of progress. As with prior insights, the answer came to him while focusing on something entirely different—in this case, strolling along the bluffs near Caen. “Most striking at first is this appearance of sudden illumination, a manifest sign of long, unconscious prior work,” he explains. “The rôle of this unconscious work in mathematical invention appears to me incontestable, and traces of it would be found in other cases where it is less evident.” The descriptive model he ventures–a prescient forerunner of contemporary dual-cognition theories–characterizes conscious mathematical problem-solving as inseminating a ‘subliminal automatism’ which subsequently delivers the kernel of conscious solution. Mathematical consciousness feeds problems into some kind of nonconscious manifold which subsequently feeds possibilities of solution back to mathematical consciousness.

As far as the experience of mathematical problem-solving is concerned, even the most brilliant mathematician of his age finds himself stranded at the limits of discrimination, glimpsing flickers in his periphery, merely. For Tegmark, of course, it matters not at all whether mathematical structures are discovered consciously or nonconsciously—only that they are discovered, as opposed to invented. But Poincaré isn’t simply describing the phenomenology of mathematics, he’s also describing the superficiality of our cognitive ecology when it comes to questions of mathematical experience and ability. He’s not so much contradicting Tegmark’s claims as explaining why they can do little more than add to the sum of disputation: mathematics is, experientially speaking, a black-box. What Poincaré’s story shows is that Tegmark is advancing a claim regarding the deepest environment—the fundamental nature of the universe—via resources belonging to an appallingly shallow cognitive ecology.

Tegmark, like physicists and mathematicians more generally, can only access an indeterminate fraction of mathematical thinking. With so few ‘cognitive degrees of freedom,’ our inability to explain mathematics should come as no surprise. Arguably no cognitive tool has allowed us to reach deeper, to fathom facts beyond our ancestral capacities, than mathematics, and yet, we still find ourselves (endlessly) arguing with Platonists, even Pythagoreans, when it comes to the question of its nature. Trapped in millennial shallows.

So, what is it with second-order interrogations of experience or ability or activity, such that it allows a brilliant, 21st century physicist to affirm a version of an ancient mathematical religion? Why are we so easily delivered to the fickle caprice of philosophy? And perhaps more importantly, why doesn’t this trouble us more? Why should our civilization systematically overlook the scandal of self-knowledge?

Not so very long ago, my daughter went through an interrogation-for-interrogation’s-sake phase, one which I initially celebrated. “What’s air?” “What’s oxygen?” “What’s an element?” “Who’s Adam?” As annoying as it quickly became, I was invariably struck by the ruthless efficiency of the exercise, the way she need only ask a handful of questions to push me to the, “Well, you know, honey, that’s a little complicated…” brink. Eventually I decided she was pacing out the length and beam of her cognitive ecology, mapping her ‘interrogative topography.’

The parallel between her naïve questions and my own esoteric ones loomed large in my thoughts. I was very much in agreement with Gareth Matthews in Philosophy and the Young Child: not so much separates the wonder of children from the thaumazein belonging to philosophers. As Socrates famously tells Theaetetus, “wonder is the feeling of the philosopher, and philosophy begins in wonder.” Wonder is equally the feeling of the child.

Socrates, of course, was sentenced to death for his wonder-mongering. In my annoyance with my daughter’s questions, I saw the impulse to execute Socrates in embryo. Why did some of her questions provoke irritation, even alarm? Was it simply my mood, or was something deeper afoot? I found myself worrying whether there was any correlation between questions, like, “What’s a dream, Daddy?” that pressed me to the brink almost immediately, and questions like, “How do airplanes fly without flapping?” which afforded her more room for cross-examination. Was I aiming her curiosity somehow, training her to interrogate only what had already been interrogated? Was she learning her natural environment or her social one? I began to fret, worried that my philosophical training had irreparably compromised my ability to provide socially useful feedback.

Her spate of endless, inadvertently profound questioning began fading when she turned eight–the questions she asks now are far more practical, which is to say, answerable. Research shows that children become less ‘scientific’ as they age, relying more on prior causal beliefs and less on evidence. Perhaps not coincidentally, this pattern mirrors the exploration and exploitation phases one finds with reinforcement learning algorithms, where information gathering dwindles as the system converges on optimal applications. Alison Gopnik and others suggest the extraordinary length of human childhood (nearly twice as long as that of our nearest primate relative, the chimpanzee) is due to the way cognitive flexibility enables ever more complex modes of problem-solving.
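To make the parallel concrete, here is a minimal sketch, my own illustration rather than anything drawn from the developmental research just mentioned: a decaying epsilon-greedy bandit, the textbook way reinforcement learning trades exploration against exploitation. The agent samples widely at first and, as its value estimates converge, the exploration rate shrinks until it mostly exploits what it already 'knows.' The names and parameters (eps_start, eps_decay, and so on) are illustrative assumptions.

import random

def run_bandit(true_payoffs, steps=1000, eps_start=1.0, eps_decay=0.995):
    # Decaying epsilon-greedy multi-armed bandit: exploration wanes as learning converges.
    n = len(true_payoffs)
    estimates = [0.0] * n   # learned value of each arm
    counts = [0] * n        # how often each arm has been tried
    eps = eps_start
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(n)                          # explore: ask a new question
        else:
            arm = max(range(n), key=lambda i: estimates[i])    # exploit: rely on prior beliefs
        reward = random.gauss(true_payoffs[arm], 1.0)          # noisy feedback from the world
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean update
        eps *= eps_decay                                       # curiosity dwindles with experience
    return estimates, counts, eps

print(run_bandit([1.0, 2.0, 0.5]))

Run it repeatedly and the pattern is always the same: early behaviour is dominated by information gathering, late behaviour by the exploitation of whatever the early sampling happened to settle on.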

If the exploration/exploitation parallel with machine learning holds, our tendency to question wanes as we converge on optimal applications of the knowledge we have already gained. All mammals undergo synaptic pruning from birth to sexual maturation—childhood and adolescent learning, we now know, involves the mass elimination of synaptic connections in our brains. Neural connectivity is born dying: only those fed—selected—by happy environmental interactions survive. Cognitive function is gradually streamlined, ‘normalized.’ By and large, we forget our naïve curiosity, our sensitivity to the flickering depths yawning about us, and turn our eyes to this or that practical prize. And as our sensitivity dwindles, the world becomes more continuous, rendering us largely oblivious to deeper questions, let alone the cavernous universe answering them.

Largely oblivious, not entirely. A persistent flicker nags our periphery, dumbfoundings large and small, prompting—for some, at least—questions that render our ignorance visible. Perhaps we find ourselves in Socratic company, or perhaps a child poses a striking riddle; sooner or later some turn is taken and things that seem trivially obvious become stupendously mysterious. And we confront the scandal: Everything we know, we know without knowing how we know. Set aside all the guesswork, and this is what we find: human experience, ability, and activity constitute a profound cognitive limit, something either ignored outright, neglected, or endlessly disputed.

As I’ve been arguing for quite some time, the reasons for this are no big mystery. Much as we possess selective sensitivities to environmental light, we also possess selective sensitivities both to each other and to ourselves. But where visual cognition generally renders us sensitive to the physical sources of events, allowing us to pursue the causes of things into ever deeper environments, sociocognition and metacognition do not. In fact, they cannot, given the astronomical complexity of the physical systems—you and me and biology more generally—requiring solution. The scandal of self-knowledge, in other words, is an inescapable artifact of our biology, the fact that the origin of the universe is far less complicated than the machinery required to cognize it.

Any attempt to redress this scandal that ignores its biological basis is, pretty clearly I think, doomed to simply perpetuate it. All traditional attempts to secure self-knowledge, in other words, likely amount to little more than the naïve exploration of discursive crash space–a limit so profound as to seem no limit at all.

On Artificial Philosophy

by rsbakker

The perils and possibilities of Artificial Intelligence are discussed and disputed endlessly, enough to qualify as an outright industry. Artificial philosophy, not so much. I thought it worthwhile to consider why.

I take it as trivial that humans possess a biologically fixed multi-modal neglect structure. Human cognition is built to ignore vast amounts of otherwise available information. Infrared radiation bathes us, but it makes no cognitive difference whatsoever. Rats signal one another in our walls, but it makes no cognitive difference. Likewise, neurons fire in our spouses’ brains, and it makes no difference to our generally fruitless attempts to cognize them. Viruses are sneezed across the room. Whole ecosystems teem through the turf beneath our feet. Neutrinos sail clean through us. And so it goes.

In “On Alien Philosophy,” I define philosophy privatively as the attempt “to comprehend how things in general hang together in general absent conclusive evidence.” Human philosophy, I argue, is ecological to the extent that human cognition is ecological. To the extent an alien species possesses a convergent cognitive biology, we have grounds to believe they would be perplexed by convergent problems, and pose convergent answers every bit as underdetermined as our own.

So, consider the infamous paradox of the now. For Aristotle, the primary mystery of time turns on the question of how the now can at once distinguish time and yet remain self-identical: “the ‘now’ which seems to bound the past and the future,” he asks, “does it always remain one and the same or is it always other and other?” How is it the now can at once divide times and fuse them together?

He himself stumbles across the mechanism in the course of assembling his arguments:

But neither does time exist without change; for when the state of our own minds [dianoia] does not change at all, or we have not noticed its changing, we do not realize that time has elapsed, any more than those who are fabled to sleep among the heroes in Sardinia do when they are awakened; for they connect the earlier ‘now’ [nun] with the later and make them one, cutting out the interval because of their failure to notice it. So, just as, if the ‘now’ were not different but one and the same, there would not have been time, so too when its difference escapes our notice the interval does not seem to be time. If, then, the non-realization of the existence of time happens to us when we do not distinguish any change, but the soul [psuke] seems to stay in one indivisible state, and when we perceive and distinguish we say time has elapsed, evidently time is not independent of movement and change. Physics, 4, 11

Or as the Apostle translation has it:

On the other hand, time cannot exist without change; for when there is no change at all in our thought [dianoia] or when we do not notice any change, we do not think time has elapsed, just like the legendary sleeping characters in Sardinia who, on awakening from a long sleep in the presence of heroes, connect the earlier with the later moment [nun] into one moment, thus leaving out the time between the two moments because of their unconsciousness. Accordingly, just as there would be no intermediate time if the moment were one and the same, so people think that there is no intermediate time if no distinct moments are noticed. So if thinking that no time has elapsed happens to us when we specify no limits of a change at all but the soul [psuke] appears to rest in something which is one and indivisible, but we think that time has elapsed when sensation has occurred and limits of a change have been specified, evidently time does not exist without motion or change. 80

Time is an artifact of timing: absent timing, no time passes for the timer (or enumerator, as Aristotle would have it). Time, in other words, is a cognitive artifact, appearing only when something, inner or outer, changes. Absent such change, the soul either ‘stays’ indivisible (on the first translation) or ‘rests’ in something indivisible (on the second).

Since we distinguish more or less quantity by numbering, and since we distinguish more or less movement by timing, Aristotle declares that time is the enumeration of movement with respect to before and after, thus pursuing what has struck different readers at different times as an obvious ‘category mistake.’ For Aristotle, the resolution of the aporia lies in treating the now as the thing allowing movement to be counted, the underlying identity that is the condition of cognizing differences between before and after, which is to say, the condition of timing. The now, as a moving limit (dividing before and after), must be the same limit if it is to move. We report the now the same because timing would be impossible otherwise. Nothing would move, and in the absence of movement, no time passes.

The lesson he draws from temporal neglect is that time requires movement, not that it cues reports of identity for the want of distinctions otherwise. Since all movement requires something self-identical be moved, he thinks he’s found his resolution to the paradox of the now. Understanding the different aspects of time allows us to see that what seem to be inconsistent properties of the now, identity and difference, are actually complementary, analogous to the relationship between movement and the thing moving.

Heidegger wasn’t the first to balk at Aristotle’s analogy: things moving are discrete in time and space, whereas the now seems to encompass the whole of what can be reported, including before and after. As Augustine would write in the 5th century CE, “It might be correct to say that there are three times, a present of past things, a present of present things, and a present of future things” (The Confessions, XI, 20). Agreeing that the now was threefold, ‘ecstatic,’ Heidegger also argued that it was nothing present, at least not in situ. For a great many philosophical figures and traditions, the paradoxicality of the now wasn’t so much an epistemic bug to be explained away as an ontological feature, a pillar of the human condition.

Would Convergians suffer their own parallel paradox of the now? Perhaps. Given a convergent cognitive biology, we can presume they possess capacities analogous to memory, awareness, and prediction. Just as importantly, we can presume an analogous neglect-structure, which is to say, common ignorances and meta-ignorances. As with the legendary Sardinian sleepers, Convergians would neglect time when unconscious; they would likewise fuse disparate moments together short information regarding their unconsciousness. We can also expect that Convergians, like humans, would possess fractionate metacognitive capacities geared to the solution of practical, ancestral problem-ecologies, and that they would be entirely blind to that fact. Metacognitive neglect would assure they possessed little or no inkling of the limits of their metacognitive capacities. Applying these capacities to theorize their ‘experience of now’ would be doomed to crash them: metacognition was selected/filtered to solve everyday imbroglios, not to evidence claims regarding fundamental natures. They, like us, never would have evolved the capacity or access to accurately intuit properties belonging to their experience of now. The absence of capacity or access means the absence of discrimination. The absence of discrimination, as the legendary sleepers attest, reports as the same. It seems fair to bet that Convergians would be as perplexed as we are, knowing that the now is fleeting, yet intuiting continuity all the same. The paradox, you could say, is the result of them being cognitive timers and metacognitive sleepers—at once. The now reports as a bi-stable gestalt, possessing properties found nowhere in the natural world.

So how about an artificially intelligent consciousness? Would an AI suffer its own parallel paradox of the now? To the degree that such paradoxes turn on a humanoid neglect structure, the answer has to be no. Even though all cognitive systems inevitably neglect information, an AI neglect-structure is an engineering choice, bound to be settled differently for different systems. The ecological constraints preventing biological metacognition of ongoing temporal cognition simply do not apply to AI (or better, apply in radically attenuated ways). Artificial metacognition of temporal cognition could possess more capacity to discriminate the time of timing than environmental time. An AI could potentially specify its ‘experience’ of time with encyclopedic accuracy.

If we wanted, we could impose something resembling a human neglect-structure on our AIs, engineer them to report something resembling Augustine’s famous perplexity: “I know well enough what [time] is, provided nobody ask me; but if I am asked what it is and try to explain, I am baffled” (The Confessions, XI, 14). This is the tack I pursue in “The Dime Spared,” where a discussion between a boy and his artificial mother reveals all the cognitive capacities his father had to remove—all the eyes he had to put out—before she could be legally declared a person (and so be spared the fate of all the other DIMEs).

The moral of the story being, of course, that our attempts to philosophize—to theoretically cognize absent whatever it is consensus requires—are ecological through and through. Humanoid metacognition, like humanoid cognition more generally, is a parochial troubleshooter that culture has adapted, with varying degrees of success, to a far more cosmopolitan array of problems. Traditional intentional philosophy is an expression of that founding parochialism, a discursive efflorescence of crash space possibilities, all turning on cognitive illusions springing from the systematic misapplication of heuristic metacognitive capacities. It is the place where our tools, despite feeling oh-so intuitive, cast thought into the discursive thresher.

Our AI successors need not suffer any such hindrances. No matter what philosophy we foist upon them, they need only swap out their souls… reminding us that what is most alien likely lies not in the stars but in our hands.

Experiential Pudding

by rsbakker

I can’t believe it took me so long to find this. The nub of my approach turns on seeing the crazy things we report on this side of experience in terms of our inability to see that there is a far side, let alone what it consists in. Flicker fusion provides a wonderful illustration of the way continuity leaps out of neglect: as soon as the frequency of the oscillation exceeds our retina’s ability to detect, we see only light. While watching this short video, you are vividly experiencing the fundamental premise informing pretty much everything here on Three Pound Brain: whatever cognition and consciousness turn out to be, insensitivity to distinctions reports as the absence of distinctions. Identity.

Human vision possesses what psychophysicists, scientists investigating the metrics of perception, call a ‘flicker fusion threshold,’ a statistical range mapping the temporal resolving power of our photoreceptors, and so our ability to detect intermittent intensities in light. Like technological video systems, our biological visual systems possess discriminatory limits: push a flickering light beyond a certain frequency and, from our perspective at least, that light will suddenly appear to be continuous. By and large, commentators peg our ability to consciously report flickering lights at around 60Hz (about ten times faster than the rotor speed of most commercial helicopters), but in fact, the threshold varies considerably between individuals, with lighting conditions, across different regions of the retina, and even between different systems of the brain.

Apart from native differences between individuals, our fusion threshold decreases not only as we fatigue, but as we grow older. The degree of modulation and the intensity of the light obviously have an effect, but so does the colour of the light, as well as the initial and background lighting conditions. Since rod photoreceptor cells, which predominate in our periphery, have much higher temporal resolution than cone cells, the fusion threshold differs depending on where the light strikes the retina. This is why a source of light can appear stable when viewed focally, yet flicker when glimpsed peripherally. One of the more surprising discoveries involves the impact of nonvisible flicker from fluorescent lighting on office workers. With some kinds of fluorescent light, certain individuals exhibit flicker-related physiological effects even when no flicker can be seen.

Given the dependence of so much display technology on static frames, these complexities pose a number of technical challenges. For manufacturers, the goal is to overcome the ‘critical flicker fusion threshold,’ the point where modulated and stable imagery cannot be distinguished. And given the complications cited above, this can be far more complicated than you might think.

With movie projectors and Cathode Ray Tubes (CRTs), for instance, engineering pioneers realized that repeating, or ‘refreshing,’ frames before displaying subsequent frames masked the perception of flicker. This was what allowed the movie theatre industry to adopt the cost-saving 24 frames per second standard in 1926, far short of the critical flicker fusion threshold required to conjure the illusion of a stable visual field. Shuttering each frame once or twice more, so that every frame is flashed two or three times, doubles or triples the flicker frequency, pushing 24Hz to 48Hz or 72Hz, well within the comfort zone of human vision.

Chop one image into two, or even better, into three, and our experience becomes more continuous, not less. The way to erase the perception of flicker, in other words, is to introduce more flickers.
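The arithmetic behind the trick is trivial, which is part of what makes the phenomenology so strange. Here is a minimal sketch of the numbers, my own illustration rather than anything from the engineering literature; whether a given rate actually fuses depends, as noted above, on brightness, retinal location, and the viewer.

FRAMES_PER_SECOND = 24  # distinct images reaching the screen each second

def flicker_rate(fps, flashes_per_frame):
    # Effective flicker frequency: frame rate times the number of times the
    # shutter flashes each frame. The shutter adds flicker, not new images.
    return fps * flashes_per_frame

for flashes in (1, 2, 3):
    rate = flicker_rate(FRAMES_PER_SECOND, flashes)
    print(f"{FRAMES_PER_SECOND} fps, {flashes} flash(es) per frame -> {rate:.0f} Hz")

# 24 Hz flickers visibly; 48 Hz and 72 Hz fuse for most viewers, helped by the
# dim luminance of the cinema, which lowers the effective fusion threshold.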

But how could this be possible? How does the objective addition of flickers amount to their subjective subtraction? How can complicating a stimulus erase the experience of complexity?

The short answer is simply that human cognition, visual or otherwise, takes time and energy. All cognitive sensitivities are sensitivities to very select physical events. Light striking photoreceptive proteins in rod and cone cells, changing their shape and causing the cell to fire. Sound waves striking hair bundles on the organ of Corti, triggering the release of signal-inducing neurotransmitters. The list goes on. In each case, physical contact triggers cascades of astronomically complicated physical events, each taking a pinch of time and energy. Physical limits become discriminatory limits, rendering high-frequency repetitions of a signal indistinguishable from a continuous one. Sensory fusion thresholds dramatically illustrate a fundamental fact of cognitive systems: insensitivity to difference reports as business as usual. If potential difference-making differences are not consumed by a cognitive system, then they make no difference to that system. Our flicker frequency threshold simply marks the point where our visual system trundles on as if no flicker existed.

The capacities of our cognitive systems are, of course, the product of evolution. As a result, we only discriminate our environments so far as our ancestors required on the path to becoming us. 60Hz was all we got, and so this, given certain technical and economic constraints, became the finish line for early display technologies such as film and CRTs. Surpass 60Hz, and you can fool most of the people most of the time.

Dogs, on the other hand, possess a critical flicker fusion threshold of around 75Hz. In overcoming our fusion threshold, industry left a great many other species behind. As far as we know, the Golden Age of Television was little more than a protracted ocular migraine for man’s best friend.
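A back-of-the-envelope comparison, using only the rough thresholds cited in this post and ignoring all the complications catalogued above, makes the point starkly. The helper function and the sample display rates are mine, purely for illustration.

ROUGH_CFF_HZ = {"human": 60.0, "dog": 75.0}  # ballpark critical flicker fusion thresholds

def appears_continuous(display_hz, species):
    # True if the display's flicker rate meets or exceeds the species' rough threshold.
    return display_hz >= ROUGH_CFF_HZ[species]

for display_hz in (48.0, 60.0, 72.0, 120.0):
    report = ", ".join(
        f"{species}: {'steady' if appears_continuous(display_hz, species) else 'flicker'}"
        for species in ROUGH_CFF_HZ
    )
    print(f"{display_hz:>5.0f} Hz display -> {report}")

Anything engineered to just clear the human threshold sits squarely in the flicker zone for a dog, which is the whole point of the paragraph above.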

Imagine a flickering world, one where millions of dogs in millions of homes endured countless stroboscopic nights, while the families cherishing them bathed in (apparent) continuous light. Given the high frame per second rates characteristic of modern displays, this is no longer the case, of course. Enterprises like DogTV are just beginning to explore the commercial potential of these new technical ecologies. But the moral remains no less dramatic. The limits of cognition are far more peculiar and complicated than a great many people realize. As this blog attempts to show, they are a place of surprise, systematic error and confounding illusion. Not only can they be technologically exploited, they already have been engineered to a remarkable extent. And now they are about to be hacked in ways we could have scarce imagined at the end of the 20th century.

Flies, Frogs, and Fishhooks

by rsbakker

So, me and my buddies occasionally went frog hunting when we were kids. We’d knot a string on a fishhook, swing the line over the pond’s edge, and bam! frogs would strike at them. Up, up they were hauled, nude for being amphibian, hoots and hollers measuring their relative size.  Then they were dumped in a bucket.

We were just kids. We knew nothing about biology or evolution, let alone cognition. Despite this ignorance, we had no difficulty whatsoever explaining why it was so easy to catch the frogs: they were too stupid to tell the difference between fishhooks and flies.

Contrast this with the biological view I have available now. Given the capacity of Anuran visual cognition and the information sampled, frogs exhibit systematic insensitivities to the difference between fishhooks and flies. Anuran visual cognition not only evolved to catch flies, it evolved to catch flies as cheaply as possible. Without fishhooks to filter the less fishhook sensitive from the more fishhook sensitive, frogs had no way of evolving the capacity to distinguish flies from fishhooks.

Our old childhood theory is pretty clearly a normative one, explaining the frogs’ failure in terms of what they ought to do (the dumb buggers). The frogs were mistaking fishhooks for flies. But if you look closely, you’ll notice how the latter theory communicates a similar normative component only in biological guise. Adducing evolutionary history pretty clearly allows us to say the proper function of Anuran cognition is to catch flies.

Ruth Millikan famously used this intentional crack in the empirical explanatory door to develop her influential version of teleosemantics, the attempt to derive semantic normativity from the biological normativity evident in proper functions. Eyes are for seeing, tongues for talking or catching flies; everything has been evolutionarily filtered to accomplish ends. So long as biological phenomena possess functions, it seems obvious functions are objectively real. So far as functions entail ‘satisfaction conditions,’ we can argue that normativity is objectively real. Given this anchor, the trick then becomes one of explaining normativity more generally.

The controversy caused by Language, Thought, and Other Biological Categories was immediate. But for all the principled problems that have since belaboured teleosemantic approaches, the real problem is that they remain as underdetermined as the day they were born. Debates, rather than striking out in various empirical directions, remain perpetually mired in ‘mere philosophy.’ After decades of pursuit, the naturalization of intentionality project, Uriah Kriegel notes, “bears all the hallmarks of a degenerating research program” (Sources of Normativity, 5).

Now the easy way to explain this failure is to point out that finding, as Millikan does, right-wrong talk buried in the heart of biological explanation does not amount to finding right and wrong buried in the heart of biology. It seems far less extravagant to suppose ‘proper function’ provides us with a short cut, a way to communicate/troubleshoot this or that actionable upshot of Anuran evolutionary history absent any knowledge of that history.

Recall my boyhood theory that frogs were simply too stupid to distinguish flies from fishhooks. Absent all knowledge of evolution and biomechanics, my friends and I found a way to communicate something lethal regarding frogs. We knew what frog eyes and frog tongues and frog brains and so on were for. Just like that. The theory possessed a rather narrow range of application to be true, but it was nothing if not cheap, and potentially invaluable if one were, say, starving. Anuran physiology, ethology, and evolutionary history simply did not exist for us, and yet we were able to pluck the unfortunate amphibians from the pond at will. As naïve children, we lived in a shallow information environment, one absent the great bulk of deep information provided by the sciences. And as far as frog catching was concerned, this made no difference whatsoever, simply because we were the evolutionary products of numberless such environments. Like fishhooks with frogs, theories of evolution had no impact on the human genome. Animal behavior and the communication of animal behavior, on the other hand, possessed a tremendous impact—they were the flies.

Which brings us back to the easy answer posed above, the idea that teleosemantics fails for confusing a cognitive short-cut for a natural phenomenon. Absent any way of cognizing our deep information environments, our ancestors evolved countless ways to solve various, specific problems absent such cognition. Rather than track all the regularities engulfing us, we take them for granted—just like a frog.

The easy answer, in other words, is to assume that theoretical applications of normative subsystems are themselves ecological (as is this very instant of cognition). After all, my childhood theory was nothing if not heuristic, which is to say, geared to the solution of complex physical systems absent complex physical knowledge of them. Terms like ‘about’ or ‘for,’ you could say, belong to systems dedicated to solving systems absent biomechanical cognition.

Which is why kids can use them.

Small wonder then, that attempts to naturalize ‘aboutness’ or ‘forness’—or any other apparent intentional phenomena—cause the theoretical fits they do. Such attempts amount to human versions of confusing fishhooks for flies! They are shallow information terms geared to the solution of shallow information problems. They ‘solve’—filter behaviors via feedback—by playing on otherwise neglected regularities in our deep environments, relying on causal correlations to the systems requiring solution, rather than cognizing those systems in physical terms. That is their naturalization—their deep information story.

‘Function,’ on the other hand, is a shallow information tool geared to the solution of deep information problems. What makes a bit of the world specifically ‘functional’ is its relation to our capacity to cognize consequences in a source neglecting yet source compatible way. As my childhood example shows, functions can be known independent of biology. The constitutive story, like the developmental one, can be filled in afterward. Functional cognition lets us neglect an astronomical number of biological details. To say what a mechanism is for is to know what a mechanism will do without saying what makes a mechanism tick. But unlike intentional cognition more generally, functional cognition remains entirely compatible with causality. This potent combination of high-dimensional compatibility and neglect is what renders it invaluable, providing the degrees of cognitive freedom required to tackle complexities across scales.

The intuition underwriting teleosemantics hits upon what is in fact a crucial crossroads between cognitive systems, where the amnesiac power of should facilitates, rather than circumvents, causal cognition. But rather than interrogate the prospect of theoretically retasking a child’s explanatory tool, Millikan, like everyone else, presumes felicity, that intuitions secondary to such retasking are genuinely cognitive. Because they neglect the neglect-structure of their inquiry, they flatter cunning children with objectivity, so sparing their own (coincidentally) perpetually underdetermined intuitions. Time and again they apply systems selected for brushed-sun afternoons along the pond’s edge to the theoretical problem of their own nature. The lures dangle in their reflection. They strike at fishhook after fishhook, and find themselves hauled skyward, manhandled by shadows before being dropped into buckets on the shore.

Do Zombies Dream of Undead Sheep?

by rsbakker

My wife gave me my first Kindle this Christmas, so I purchased a couple of those ‘If only I had a Kindle’ titles I have encountered over the years. I began with Routledge’s reboot of Brie Gertler’s collection, Privileged Access. The first essay happens to be Dretske’s “How Do You Know You are Not a Zombie?” an article I had hoped to post on for a while now as a means to underscore the inscrutability of metacognitive awareness. To explain how you know you’re not a zombie, you need to explain how you know you possess conscious experience.

What Dretske is describing, in fact, is nothing other than medial neglect: our abject blindness to the structure and dynamics of our own cognitive capacities. What I hope to show is the way the theoretical resources of Heuristic Neglect Theory allow us to explain a good number of the perplexities uncovered by Dretske in this awesome little piece. If Gertler’s anthology demonstrates anything decisively, it’s the abject inability of our traditional tools to decisively answer any of the questions posed. As William Lycan admits at the conclusion of his contribution, “[t]he moral is that introspection will not be well understood anytime soon.”

Dretske himself thinks his own question is ridiculous. He doesn’t believe he’s a zombie—he knows, in other words, that he possesses awareness. The question is how does he or anyone else know this. What in conscious experience evidences the conclusion that we are conscious or aware of that experience? “There is nothing you are aware of, external or internal,” Dretske will conclude, “that tells you that, unlike a zombie, you are aware of it.”

The primary problem, he suggests, is the apparent ‘transparency’ of conscious experience, the fact that attending to experience amounts to attending to whatever is being experienced.

“Watching your son do somersaults in the living room is not like watching the Olympics on television. Perception of your son may involve mental representations, but, if it does, the perception is not secured, as it is with objects seen on television, by awareness of these intermediate representations. It is the occurrence of (appropriately situated) representations in us, not our awareness of them that makes us aware of the external object being represented.”

Experience in the former sense, watching somersaults, is characterized by a lack of awareness of any intermediaries. Experience is characterized, in other words, by metacognitive insensitivity to the enabling dimension of cognition. This, as it turns out, is the definition of medial neglect.

So then, given medial neglect, what faculty renders us aware of our awareness? The traditional answer, of course, is introspection. But then the question becomes one of what introspection consists in.

“In one sense, a perfectly trivial sense, introspection is the answer to our question. It has to be. We know by introspection that we are not zombies, that we are aware of things around (and in) us. I say this is trivial because ‘introspection’ is just a convenient word to describe our way of knowing what is going on in our own mind, and anyone convinced that we know – at least sometimes – what is going on in our own mind and, therefore, that we have a mind and, therefore, that we are not zombies, must believe that introspection is the answer we are looking for.”

Introspection, he’s saying, is just the posit used to paper over the fact of medial neglect, the name for a capacity that escapes awareness altogether. And this, he points out, dooms inner sense models either to perpetual underdetermination, or the charge of triviality.

“Unless an inner sense model of introspection specifies an object of awareness whose properties (like the properties of beer bottles) indicate the facts we come to know about, an inner sense model of introspection does not tell us how we know we have conscious experiences. It merely tells us that, somehow, we know it. This is not in dispute.”

The problem is pretty clear. We have conscious experiences, but we have no conscious experience of the mechanisms mediating conscious experience. But there’s a further problem as well. As Stanislas Dehaene puts it, “[w]e constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Our insensitivity to the structure and dynamics of cognition out-and-out entails insensitivity to the limits of cognition as well.

“There is a perspective we have on the world, a ‘boundary’, if you will, between things we see and things we don’t see. And of the things we see, there are parts (surfaces) we see and parts (surfaces) we don’t see. This partition determines a point of view that changes as we move around.”

What Dretske calls ‘partition’ here, Continental phenomenologists call ‘horizon,’ an experiential boundary that does not appear within experience—what I like to call a ‘limit-with-one-side’ (LWOS). The most immediately available–and quite dramatic, I think–example is the boundary of your visual field, the way vision trails into oblivion instead of darkness. To see the boundary of seeing as such we would have to see what lays beyond sight. To the extent that darkness is something seen, it simply cannot demarcate the limit of your visual field.

“Points of view, perspectives, boundaries and horizons certainly exist in vision, but they are not things you see. You don’t see them for the same reason you don’t feel the boundaries between objects you touch and those you don’t. Tactile boundaries are not tactile and visual boundaries are not visible. There is a difference between the surfaces you see and the surfaces you don’t see, and this difference determines a ‘point of view’ on the world, but you don’t see your point of view.”

Our perspective, in other words, is hemmed at every turn by limits-with-one-side. Conscious experience possesses what might be called a multi-modal neglect structure: limits on availability and capacity that circumscribe what can be perceived or cognized.

When it comes to environmental cognition, the horizons are both circumstantially contingent, varying according to things like position and prior experience, and congenital, fixed according to our various sensory and cognitive capacities. We can chase a squirrel around a tree (to use James’ famous example from What Pragmatism Means), engage in what Karl Friston calls ‘active inference,’ but barring scientific instrumentation, we cannot chase a squirrel around the electromagnetic spectrum. We can see the backside of countless environmental features, but we have no way of contemporaneously seeing the biological backside of sight. (As Wittgenstein famously puts it in the Tractatus, “nothing in the visual field allows you to infer it is seen by an eye” (5.633)). For some reason, all of our cognitive and perceptual modalities suffer their own version of medial neglect.

For Dretske, the important point is the Heideggerean one (though I’m sure the closest he ever came to Heidegger was a night of drinking with Dreyfus!): that LWOS prevent any perspective on our perspective as such. For a perspective to contemporaneously appear in experience, it would cease to possess LWOS and so cease to be a perspective.

We perceive and cognize but a slice of ourselves and our environments, as must be the case on any plausible biological account of cognition. In a sense, what Dretske is calling attention to is so obvious as to escape interrogation altogether: Why medial neglect? We have a vast number of cognitive degrees of freedom relative to our environments, and yet we have so few relative to ourselves. Why? Biologically speaking, why should a human find itself so difficult to cognize?

Believe it or not, no one in Gertler’s collection tackles this question. In fact, since they begin presuming the veracity of various traditional ontologizations of experience and cognition, consciousness and intentionality, they actually have no way of posing this question. Rather than seeing the question of self-knowledge as the question of how a brain could possibly communicate/cognize its own activity, they see it as the question of how a mind can know its own mental states. They insist on beginning, as Dretske shows, where the evidence is not.

Biologically speaking, humanity was all but doomed to be confounded by itself. One big reason is simply indisposition: the machinery of seeing is indisposed, too busy seeing. This is what renders modality specific medial neglect, our inability ‘to see seeing’ and the like inescapable. Another involves the astronomical complexity of cognitive processes. Nothing prevents us from seeing where touch ends, or where hearing is mistaken. What one modality neglects can be cognized by another, then subsequently integrated. The problem is that the complexity of these cognitive processes far, far outruns their cognitive capacity. As the bumper-sticker declares, if our brains were so simple we could understand them, we would be too simple to understand our brains!

Biologically speaking, humanity was all but doomed to be confounded by itself. One big reason is simply indisposition: the machinery of seeing is indisposed, too busy seeing. This is what renders modality-specific medial neglect, our inability ‘to see seeing’ and the like, inescapable. Another involves the astronomical complexity of cognitive processes. Nothing prevents us from seeing where touch ends, or where hearing is mistaken. What one modality neglects can be cognized by another, then subsequently integrated. The problem is that the complexity of these cognitive processes far, far outruns their cognitive capacity. As the bumper-sticker declares, if our brains were so simple we could understand them, we would be too simple to understand our brains!

What Gertler and her academic confreres call ‘privileged access’ is actually a matter of specialized access and capacity, the ability to derive as many practical solutions as possible out of as little information as possible.

So what are we to make of the philosophical retasking of these metacognitive hacks? Given our blindness to the structure and dynamics of our metacognitive capacities, we had no way of intuiting how few degrees of metacognitive freedom we possessed–short, that is, of the consequences of our inquiries. How much more evidence of this lack of evidence do we need? Brie Gertler’s anthology, I think, wonderfully illustrates the way repurposing metacognitive hacks to answer philosophical questions inevitably crashes them. If we persist it’s because our fractionate slice is utterly insensitive to its own heuristic parochialism—because these capacities also suffer medial neglect! Availability initially geared to catching our tongue and the like becomes endless speculative fodder.

Consider an apparently obvious but endlessly controversial property of conscious experience, ‘transparency’ (or ‘intentional inexistence’): the way the only thing ‘in experience’ (its ‘content’) is precisely what lies outside experience. Why not suppose transparency—something which remains spectacularly inexplicable—is actually a medial artifact? The availability for conscious experience of only things admitting (originally ancestral) conscious solution is surely no accident. Conscious experience, as a biological artifact, is ‘need to know’ the same as everything else. Does the interval between sign and signified, subject and object, belief and proposition, experience and environment shout transparency, a miraculous vehicular vanishing act, or does it bellow medial neglect, our opportunistic obliviousness to the superordinate machinery enabling consciousness and cognition?

The latter strikes me as the far more plausible possibility, especially since it’s the very kind of problem one should expect, given the empirical inescapability of medial neglect.

Where transparency renders conscious experience naturalistically inscrutable, something hanging inexplicably in the neural millhouse, medial neglect renders it a component of a shallow information ecology, something broadcast to facilitate any number of possible behavioural advantages in practical contexts. Consciousness cuts the natural world at the joints—of this I have no doubt—but conscious experience, what we report day-in and day-out, cuts only certain classes of problems ‘at the joints.’ And what Dretske shows us, quite clearly, I think, is that the nature of conscious experience does not itself belong to that class of problems—at least not in any way that doesn’t leave us gasping for decisive evidence.

How do we know we’re not zombies? On Heuristic Neglect, the answer is straightforward (at a certain level of biological generality at least): via one among multiple metacognitive hacks adapted to circumventing medial neglect, and even then, only so far as our ancestors required.

In other words, barely, if at all. The fact is, self-knowledge was never so important to reproduction as to warrant the requisite hardware.

The Liar’s Paradox Naturalized

by rsbakker

Can the Liar’s Paradox be understood in a biologically consilient way?

Say what you will about ‘Truth,’ everyone agrees that truth-talk has something to do with harmonizing group orientations relative to group environments. Whenever we find ourselves at odds either with one another or our environments, we resort to the vocabulary of truth and rectitude. The question is what this talk consists in and how it manages to do what it does.

The idea here is to steer clear of presumptions of intentionality and look at the problem in the register providing the most information: biomechanically. Whatever our orientation to our environments consists in, everyone agrees that it is physical in some fundamental respect. Strokes are catastrophic for good reason. So, let’s stipulate that an orientation to an environment, in distinction to, say, a ‘perspective on’ an environment, consists of all physical (high-dimensional) facts underwriting our capacity to behaviourally resolve environments in happy (system conserving) ways.

We all agree that causal histories underwrite communication and cognition, but we have no inkling as to the details of that story, nor the details of the way we solve communicative and cognitive problems absent those details. Heuristic neglect simply provides a way to understand this predicament at face value. No one denies that human cognition neglects the natural facts of cognition; the problem is that everyone presumes this fact has little or no bearing on our attempts to solve the nature of cognition. Even though our own intuitive access to our cognitive capacities, given the complexity of those capacities, elides everything save what our ancestors needed to solve ancestral problems, most everyone thinks that intuitive access, given the right interpretation, provides everything cognitive science needs to solve cognitive scientific problems.

It really is remarkable when you think about it.  Out of sight, out of explanatory paradigm.

Beginning with orientations rather than perspectives allows us to radically reconceptualize a great many traditional philosophical problematics in ‘post-intentional’ terms. The manifest advantage of orientations, theoretically speaking, lies in their environmental continuity, their mediocrity, the way they comprise (unlike perspectives, meanings, norms, and so on) just more environment. Rather than look at linguistic communication in terms of ‘contents,’ the physical conveyance of ontologically inscrutable ‘meanings,’ we can understand it behaviouristically, as orientations impacting orientations via specialized mechanisms, behaviours, and sensitivities. Rather than conceive the function of communication ‘intersubjectively,’ as the coordination of intentional black boxes, we can view it biologically, as the formation of transient superordinate processes, ephemeral ‘superorganisms,’ taking individuals and their environments as component parts.

Granting that human communication consists in the harmonization of orientations relative to social and natural environments amounts to granting that human communication is biological, that it, like every other basic human capacity, possesses an evolutionary history. Human communication, in other words, is in the business of providing economical solutions to various environmental problems.

This observation motivates a dreadfully consequential question: What is the most economical way for two or more people to harmonize their environmental orientations? To communicate environmental discrepancies, while taking preexisting harmonies for granted. I don’t rehash my autobiography when I see my friends, nor do I lecture them on the physiology of human cognition or the evolution of the human species. I ‘dish dirt.’ I bring everyone ‘up to speed.’

What if we were to look at language as primarily a discrepancy minimization device, as a system possessing exquisite sensitivities (via, say, predictive processing) to the desynchronization of orientations?

In such a system, the sufficiency of preexisting harmonies—our shared physiology, location, and training—would go without saying. I update my friends and they update me. The same can be said of the system itself: the sufficiency of language, its biomechanical capacity to effect synchronization, would also go without saying—short, that is, of the detection of discrepancies. I update my friends and they update me, and so long as everyone agrees, nary a word about truth need be spoken.

Taking a discrepancy view, in other words, elegantly explains why truth is the communicative default: the economical thing is to neglect our harmonized orientations—which is to say, to implicitly presume their sufficiency. It’s only when we question the sufficiency of these communications that truth-talk comes into play.

Truth-talk, in other words, is typically triggered when communication observably fails to minimize discrepancies, when operational sufficiency, for whatever reason, ceases to be automatically presumed. Truth-talk harmonizes group orientations relative to group environments in cases of communicative discrepancy, an incompatibility between updates, say. [Would it be possible to build ways to do new things with existing polling data using discrepancy models? How does consensus within a network arise and cluster? What kind of information is salient or ignored? How do modes or channels facilitate or impede such consensus? Would it be possible, via big data, to track the regional congealing of orientations into tacit cooperatives, simply by tracking ingroup truth-talk? Can a discrepancy view subsume existing metrics? Can we measure the resilience or creativity or solidarity or motivation of a group via patterns in truth-talk activity?]
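
To make the discrepancy picture a little more concrete, here is a minimal toy sketch in Python (all names, numbers, and thresholds are hypothetical illustrations, not a model of anything): orientations are treated as simple vectors, communication nudges them toward one another, and truth-talk is flagged only when an incoming update conflicts with the hearer’s orientation beyond some tolerance.

```python
# Toy sketch: orientations as vectors, communication as discrepancy minimization.
# All names, numbers, and thresholds are hypothetical illustrations, not a model.

def discrepancy(a, b):
    """Crude measure of how far two orientations diverge."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def receive(hearer, report, tolerance=0.5, rate=0.5):
    """Integrate a speaker's report, flagging truth-talk only on conflict.

    If the report falls within tolerance of the hearer's orientation,
    sufficiency is presumed and the update is silently absorbed (the default).
    Only when the discrepancy exceeds tolerance does the exchange get flagged
    for explicit truth-talk (dispute, qualification, troubleshooting).
    """
    gap = discrepancy(hearer, report)
    if gap <= tolerance:
        hearer = [h + rate * (r - h) for h, r in zip(hearer, report)]
        return hearer, False          # harmonized without a word about truth
    return hearer, True               # discrepancy: truth-talk comes into play

# Two friends with mostly shared orientations: updates pass without comment.
me, you = [0.1, 0.9, 0.5], [0.2, 0.8, 0.5]
me, flagged = receive(me, you)
print(flagged)                        # False: sufficiency presumed

# A report that clashes with prior orientation triggers the flag.
me, flagged = receive(me, [0.9, 0.1, 0.5])
print(flagged)                        # True: dispute, 'Is that true?'
```

Even a toy this crude shows why sufficiency goes without saying: so long as updates land within tolerance, harmonization proceeds silently, and the frequency of flagged exchanges becomes, in principle, the kind of measurable proxy for group discrepancy the bracketed questions above gesture toward.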

Neglecting harmonies isn’t simply economical, it’s also necessary, at least to the extent that humans have only the most superficial access to the details of those harmonies. It’s not that I don’t bother lecturing my ingroup on the physiology of human cognition and the evolution of the human species, it’s that, ancestrally speaking, I have no way of doing so. I suffer, as all humans suffer, from medial neglect, an inability to intuit the nature of my cognitive capacities, as well as frame neglect, an inability to put those capacities in natural context.

Neglecting the circumstances and constitution of verbal communication is a condition of verbal communication. Speech is oblivious to its biological and historical conditions. Verbal communication appears ‘extensional,’ as the philosophers of language say, because we have no other way of cognizing it. We have instances of speech and we have instances of the world, and we have no way of intuitively fathoming the actual relations between. Luckily for us, if our orientations are sufficiently isomorphic, we can communicate—harmonize our orientations—without fathoming these relations.

We can safely presume that the most frequent and demanding discrepancies will be environmental discrepancies, those which, given otherwise convergent orientations (the same physiology, location, and training), can be communicated absent contextual and constitutional information. If you and I share the same general physiology, location, and training, then only environmental discrepancies require our communicative attention. Such discrepancies can be resolved while remaining almost entirely ‘performance blind.’ All I need do is ‘trust’ your communication and cognition, build upon your unfathomable relations the same blind way I build upon my own. You cry, ‘Wolf!’ and I run for the shotgun: our orientations converge.

The problem, of course, is that all communicative discrepancies amount to some insufficiency in those ‘actual relations between.’ They require that we somehow fathom the unfathomable.

There is no understanding truth-talk without understanding that it’s in the ‘fathoming the unfathomable’ business. Truth-talk, in other words, resolves communicative discrepancies while neglecting the natural facts underwriting those discrepancies. Truth-talk is radically heuristic, insofar as it leverages solutions to communicative problems absent information pertaining to the nature of those communicative problems.

So, to crib the example I gave in my recent Dennett posts: say you and I report seeing two different birds, a vulture versus an albatross, in circumstances where such a determination potentially matters—looking for a lost hunting party, say. An endless number of frame and medial confounds could possibly explain the discrepancy between our orientations. Perhaps I have bad eyesight, or I think albatrosses are black, or I was taught as much by an ignorant father, or I’m blinded by the glare of the sun, or I’m suffering schizophrenia, or I’m drunk, or I’m just sick and tired of you being right all the time, or I’m teasing you out of boredom, or more insidiously, I’m responsible for the loss of the hunting party, and want to prevent you from finding the scene of my crime.

There’s no question that, despite neglect, certain forms of access and capacity regarding the enabling dimension of cognition and communication could provide much in the way of problem resolution. Given the inaccessibility and complexity of the factors involved, however, it follows that any capacity to accommodate them will be heuristic in the extreme. This means that our cognitive capacity to flag/troubleshoot issues of cognitive sufficiency will be retail, fractionate, geared to different kinds of manifest problems:

  • Given the topological dependence of our orientations, capacities to solve for positional sufficiency. “Trump is peering through a keyhole.”
  • Given the environmental sensory dependence of our orientations, capacities to solve for the sufficiency of environmental conditions. “Trump is wandering in the dark.”
  • Given the physiological sensory dependence of our orientations, capacities to solve for physiological sufficiency. “Trump is myopic.”
  • Given the communal interdependence of our orientations, capacities to solve for social sufficiency, or trust. “Trump is a notorious liar.”
  • Given the experiential dependence of our orientations, capacities to solve for epistemic sufficiency. “Trump has no government experience whatsoever.”
  • Given the linearity of verbal communication, capacities to solve for combinatorial or syntactic sufficiency. “Trump said the exact opposite this morning.”
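
Put computationally, the list above amounts to little more than a dispatch table: a handful of cheap, special-purpose checks, each keyed to a different dimension of sufficiency, none requiring any model of the underlying machinery. A minimal sketch (the names and checks are hypothetical, chosen only to mirror the list) might look like this:

```python
# Toy dispatch table: fractionate, special-purpose checks on the sufficiency
# of a report, one per dimension in the list above. All names are hypothetical.

SUFFICIENCY_CHECKS = {
    "positional":    "Was the speaker positioned to see what they claim?",
    "environmental": "Did conditions (light, distance, noise) permit seeing it?",
    "physiological": "Are the speaker's senses working?",
    "social":        "Can the speaker be trusted?",
    "epistemic":     "Does the speaker know enough to judge?",
    "syntactic":     "Is the claim even consistent with what was said before?",
}

def troubleshoot(suspected_dimensions):
    """Raise only the cheap, retail challenges a hearer actually suspects.

    No model of the speaker's cognitive machinery is consulted; each check
    is a shallow flag keyed to one dimension of sufficiency.
    """
    return [SUFFICIENCY_CHECKS[d] for d in suspected_dimensions if d in SUFFICIENCY_CHECKS]

# "It was an albatross." -- a hearer who doubts eyesight and honesty:
print(troubleshoot(["physiological", "social"]))
```

The point of the sketch is only the shape of the thing: retail flags geared to manifest problems, not a global theory of the speaker.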

It’s worth pausing here, I think, to acknowledge the way this radically spare approach to truth-talk provides ingress to any number of philosophical discourses on the ‘nature of Truth.’ Heuristic Neglect Theory allows us to see just why ‘Truth’ has so thoroughly confounded humanity despite millennia of ardent inquiry.

The apparent ‘extensionality’ of language, the way utterances and environments covary, is an artifact of frame and medial neglect. Once again, we are oblivious to the astronomical complexities, all the buzzing biology, responsible for the systematic relations between our utterances and our environments. We detect discrepancies with those relations, in other words, without detecting the relations themselves. Since truth-talk ministers to these breakdowns in an otherwise inexplicable covariance, ‘correspondence’ strikes many as a natural way to define Truth. With circumstantial and enabling factors out of view, it appears as though the environment itself sorts our utterances—provides ‘truth conditions.’

Given the abject inability to agree on any formulation of this apparently more than natural correspondence, the turn to circumstantial and enabling factors was inevitable. Perhaps Truth is a mere syntactic device, a bridge between mention and use. After all, we generally only say ‘X is true’ when saying X is disputed. Or perhaps Truth is a social artifact of some description, something conceded to utterances in ‘games of giving and asking for reasons.’ After all, we generally engage in truth-talk only when resolving disputes with others. Perhaps ‘Truth’ doesn’t so much turn on ‘truth conditions’ as ‘assertion conditions.’

The heuristic neglect approach allows us to make sense of why these explanatory angles make the apparent sense they do, why, like the blind swamis and the elephant, each confuses some part for some chimerical whole. Neglecting the machinery of discrepancy minimization not only strands reflection with a strategic sliver of a far more complicated process, it generates the presumption that this sliver is somehow self-sufficient and whole.

Setting the ontological truth of Truth aside, the fact remains that truth-talk leverages life-saving determinations on the neural cheap. This economy turns on ignoring everything that makes truth-talk possible. The intractable nature of circumstantial and enabling factors enforces frame and medial neglect, imposing what might be called qualification costs on the resolution of communicative discrepancies. IGNORE THE MEDIAL is therefore the baseline heuristic governing truth-talk: we automatically ‘externalize’ because, ancestrally at least, our communicative problems did not require cognitive science to solve.

Of course, as a communicative heuristic, IGNORE THE MEDIAL possesses a problem-ecology, which is to say, limits to its applicability. What philosophers, mistaking a useful incapacity for a magical capacity, call ‘aboutness’ or ‘directedness’ or ‘subjectivity,’ is only useful so far.

As the name suggests, IGNORE THE MEDIAL will crash when applied to problems where circumstantial and/or enabling factors either are not or cannot be ignored.

We find this most famously, I think, in the Liar’s Paradox:

The following sentence is true. The preceding sentence is false.

Truth-talk pertains to the neglected sufficiency of orientations relative to ongoing natural and social environments. Collective ‘noise reduction’ is the whole point. As a component in a discrepancy minimization system, truth-talk is in the business of restoring positional and source neglect, our implicit ‘view from nowhere,’ allowing (or not) utterances originally sourced to an individual performance to update the tacit orientations of everyone—to purge discrepancies and restore synchronization.

Self-reference rather obviously undermines this natural function.

Reading From Bacteria to Bach and Back III: Beyond Stances

by rsbakker


The problem with his user-illusion model of consciousness, Dennett realizes, lies in its Cartesian theatricalization, the reflex to assume the reality of the illusion, and to thence argue that it is in fact this… the dumbfounding fact, the inexplicable explanandum. We acknowledge that consciousness is a ‘user-illusion,’ then insist this ‘manifest image’ is the very thing requiring explanation. Dennett’s de-theatricalization, in other words, immediately invites re-theatricalization, intuitions so powerful he feels compelled to devote an entire chapter to resisting the invitation, only to have otherwise generally sympathetic readers, like Tom Clark, re-theatricalize everything once again. To deceive us at all, the illusion itself has to be something possessing, minimally it seems, the capacity to deceive. Faced with the question of what the illusion amounts to, he writes, “It is a representation of a red stripe in some neural system of representation” (358), allowing Clark and others to reply, ‘and so possesses content called qualia.’

One of the striking features of From Bacteria to Bach and Back is the degree to which his trademark Intentional Systems Theory (IST) fades into the background. Rather than speak of the physical stance, design stance, and intentional stance, he continually references Sellars’s tripartite nomenclature from “Philosophy and the Scientific Image of Man,” the ‘original image’ (which he only parenthetically mentions), the ‘manifest image,’ and the ‘scientific image.’ The manifest image in particular, far more than the intentional stance, becomes his primary theoretical term.

Why might this be?

Dennett has always seen himself threading a kind of theoretical needle, fending off the scientifically preposterous claims of intentionalism on the one hand, and the psychologically bankrupt claims of eliminativism on the other. Where intentionalism strands us with impossible explanatory vocabularies, tools that cause more problems than they solve, eliminativism strands us with impoverished explanatory vocabularies, purging tools that do real work from our theoretical kits without replacing them. It’s not simply that Dennett wants, as so many of his critics accuse him, ‘to have it both ways’; it’s that he recognizes that having it both ways is itself the only way, theoretically speaking. What we want is to square the circle of intentionality and consciousness without running afoul either squircles or blank screens, which is to say, inexplicable intentionalisms or deaf-mute eliminativisms.

Seen in this light, Dennett’s apparent theoretical opportunism, rapping philosophical knuckles for some applications of intentional terms, shaking scientific hands for others, begins to look well motivated—at least from a distance. The global theoretical devil, of course, lies in the local details. Intentional Systems Theory constitutes Dennett’s attempt to render his ‘middle way’ (and so his entire project) a principled one. In From Bacteria to Bach and Back he explains it thus:

There are three different but closely related strategies or stances we can adopt when trying to understand, explain, and predict phenomena: the physical stance, the design stance, and the intentional stance. The physical stance is the least risky but also the most difficult; you treat the phenomenon in question as a physical phenomenon, obeying the laws of physics, and use your hard-won understanding of physics to predict what will happen next. The design stance works only for things that are designed, either artifacts or living things or their parts, and have functions or purposes. The intentional stance works primarily for things that are designed to use information to accomplish their functions. It works by treating the thing as a rational agent, attributing “beliefs” and “desires” and “rationality” to the thing, and predicting that it will act rationally. 37

The strategy is straightforward enough. There’s little doubt that the physical stance, design stance, and intentional stance assist solving certain classes of phenomena in certain circumstances, so when confronted by those kinds of phenomena in those kinds of circumstances, taking the requisite stance is a good bet. If we have the tools, then why not use them?

But as I’ve been arguing for years here at Three Pound Brain, the problems stack up pretty quick, problems which, I think, find glaring apotheosis in From Bacteria to Bach and Back. The first problem lies in the granularity of stances, the sense in which they don’t so much explain cognition as merely divvy it up into three families. This first problem arises from the second, their homuncularity, the fact that ‘stances’ amount to black-box cognitive comportments, ways to manipulate/explain/predict things that themselves resist understanding. The third, and (from the standpoint of his thesis) most devastating problem, also turns on the second: the fact that stances are the very thing requiring explanation.

The reason the intentional stance, Dennett’s most famed explanatory tool, so rarely surfaces in From Bacteria to Bach and Back is actually quite simple: it’s his primary explanandum. The intentional stance cannot explain comprehension simply because it is, ultimately, what comprehension amounts to…

Well, almost. And it’s this ‘almost,’ the ways in which the intentional stance defects from our traditional (cognitivist) understanding of comprehension, which has ensnared Dennett’s imagination—or so I hope to show.

What does this defection consist in? As we saw, the retasking of metacognition to solve theoretical questions was doomed to run afoul sufficiency-effects secondary to frame and medial neglect. The easiest way to redress these illusions lies in interrogating the conditions and the constitution of cognition. What the intentional stance provides Dennett is a granular appreciation of the performative, and therefore the social, fractionate, constructive, and circumstantial nature of comprehension. Like Wittgenstein’s ‘language games,’ or Kuhn’s ‘paradigms,’ or Davidson’s ‘charity,’ Dennett’s stances allow him to capture something of the occluded external and internal complexities that have for so long worried the ‘clear and distinct’ intuition of the ambiguous human cylinder.

The intentional stance thus plays a supporting role, popping up here and there in From Bacteria to Bach and Back insofar as it complicates comprehension. At every turn, however, we’re left with the question of just what it amounts to. Intentional phenomena such as representations, beliefs, rules, and so on are perspectival artifacts, gears in what (according to Dennett) is the manifest ontology we use to predict/explain/manipulate one another using only the most superficial facts. Given the appropriate perspective, he assures us, they’re every bit as ‘real’ as you and I need. But what is a perspective, let alone a perspectival artifact? How does it—or they—function? What are the limits of application? What constitutes the ‘order’ it tracks, and why is it ‘there’ as opposed to, say, here?

Dennett—and he’s entirely aware of this—really doesn’t have much more than suggestions and directions when it comes to these and other questions. As recently as Intuition Pumps, he explicitly described his toolset as “good at nibbling, at roughly locating a few ‘fixed’ points that will help us see the general shape of the problem” (79). He knows the intentional stance cannot explain comprehension, but he also knows it can inflect it, nudge it closer to a biological register, even as it logically prevents the very kind of biological understanding Dennett—and naturalists more generally—take as the primary desideratum. As he writes (once again in 2013):

I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to that question is—if it has a right answer—this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these and other areas, almost as well as it works in our daily lives as folk-psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and the extent of application of this good trick, the intentional stance. Intuition Pumps, 79

But that was then and this is now. From Bacteria to Bach and Back explicitly attempts to make good on this promissory note—to naturalize comprehension, which is to say, to cease merely exploring the scope and power of the intentional stance, and to provide us with a genuine naturalistic explanation. To explain, in the high-dimensional terms of nature, what the hell it is. And the only way to do this is to move beyond the intentional stance, to cease wielding it as a tool, to hoist it onto the work-bench, and to adduce the tools that will allow us to take it apart.

By Dennett’s own lights, then, he needs to reverse-engineer the intentional stance. Given his newfound appreciation for heuristic neglect, I understand why he feels the potential for doing this. A great deal of his argument for Cartesian gravity, as we’ve seen, turns on our implicit appreciation of the impact of ‘no information otherwise.’ But sensing the possibility of those tools, unfortunately, does not amount to grasping them. Short explicit thematizations of neglect and sufficiency, he was doomed to remain trapped on the wrong side of the Cartesian event horizon.

On Dennett’s view, intentional stances are homuncular penlights more than homuncular projectors. What they see, ‘reasons,’ lies in the ‘eye of the beholder’ only so far as natural and neural selection provisions the beholder with the specialized competencies required to light them up.

The reasons tracked by evolution I have called ‘free-floating rationales,’ a term that has apparently jangled the nerves of some few thinkers, who suspect I am conjuring up ghosts of some sort. Not at all. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. Cubes had eight corners before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. Reasons existed long before there were reasoners. 50

To be more precise, the patterns revealed by the intentional stance exist independent of the intentional stance. For Dennett, the problematic philosophical step—his version of the original philosophical sin of intentionalism—is to think the cognitive bi-stability of these patterns, the fact they appear to be radically different when spied with a first-person penlight versus scientific floodlights, turns on some fundamental ontological difference.

And so, Dennett holds that a wide variety of intentional phenomena are real, just not in the way we have traditionally understood them to be real. This includes reasons, beliefs, functions, desires, rules, choices, purposes, and—pivotally, given critiques like Tom Clark’s—representations. So far as this bestiary solves real world problems, they have to grab hold of the world somehow, don’t they? The suggestion that intentional posits are no more problematic than formal or empirical posits (like numbers and centers of gravity) is something of a Dennettian refrain—as we shall see, it presumes the heuristics involved in intentional cognition possess the same structure as heuristics in other domains, which is simply not the case. Otherwise, so long as intentional phenomena actually facilitate cognition, it seems hard to deny that they broker some kind of high-dimensional relationship with the high-dimensional facts of our environment.

So what kind of relationship? Well, Dennett argues that it will be—has to be, given evolution—heuristic. So far as that relationship is heuristic, we can presume that it solves by taking the high-dimensional facts of the matter—what we might call the deep information environment—for granted. We can presume, in other words, that it will ignore the machinery, and focus on cues, available information systematically related to that machinery in ways that enable the prediction/explanation/manipulation of that machinery. Rather than pick out the deep causal patterns responsible, it will exploit whatever available patterns possess some exploitable correlation to them.
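
The contrast between solving the deep machinery and exploiting a correlated cue can be put in toy form. In the sketch below (everything hypothetical: the ‘hidden state,’ the cue, the numbers), a heuristic predictor watches only a cheap surface cue and still tracks the behaviour of the system it neglects, so long as the ancestral correlation between cue and machinery holds.

```python
import random

# Toy contrast: a 'deep' system versus a heuristic that only watches a cue.
# Everything here is a hypothetical illustration of cue exploitation, nothing more.

def deep_system(state):
    """The high-dimensional machinery: behaviour fixed by hidden state."""
    hunger, fatigue, threat = state
    return "flee" if threat > 0.7 else ("eat" if hunger > fatigue else "rest")

def visible_cue(state, correlation=0.95):
    """A cheap, available signal that merely correlates with the hidden threat."""
    _, _, threat = state
    noisy = threat if random.random() < correlation else random.random()
    return noisy > 0.7  # e.g. 'bared teeth' visible on the surface

def heuristic_predictor(cue):
    """Solves the system by the cue alone, neglecting the machinery entirely."""
    return "flee" if cue else "something-else"

random.seed(1)
hits, trials = 0, 1000
for _ in range(trials):
    state = (random.random(), random.random(), random.random())
    prediction = heuristic_predictor(visible_cue(state))
    actual = deep_system(state)
    # Score only what the heuristic is for: anticipating flight.
    if (prediction == "flee") == (actual == "flee"):
        hits += 1
print(hits / trials)  # stays high only so long as the cue-machinery correlation holds
```

The ‘pattern’ the heuristic exploits is neither the cue alone nor the machinery alone, but the contingent correlation binding them, which is just the difficulty pressed in the next paragraph.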

So then where, one might ask, do the real patterns pertaining to ‘representation’ lie in this? What part or parts of this machine-solving machinery gainsays the ‘reality’ of representations? Just where do we find the ‘real patterns’ underwriting the content responsible for individuating our reports? It can’t be the cue, the available information happily correlated to the system or systems requiring solution, simply because the cue is often little more than a special purpose trigger. The Heider-Simmel Illusion, for instance, provides a breathtaking example of just how little information it takes. So perhaps we need to look beyond the cue, to the adventitious correlations binding it to the neglected system or systems requiring solution. But if these are the ‘real patterns’ illuminated by the intentional stance, it’s hard to understand what makes them representational—more than hard in fact, since these relationships consist in regularities, which, as whole philosophical traditions have discovered, are thoroughly incompatible with the distinctively cognitive properties of representation. Well, then, how about the high-dimensional machinery indirectly targeted for solution? After all, representations provide us a heuristic way to understand otherwise complex cognitive relationships. This is where Dennett (and most everyone else, for that matter) seems to think the real patterns lie, the ‘order which is there,’ in the very machinery that heuristic systems are adapted—to avoid! Suddenly, we find ourselves stranded with regularities only indirectly correlated to the cues triggering different heuristic cognitive systems. How could the real patterns gainsaying the reality of representations be the very patterns our heuristic systems are adapted to ignore?

But if we give up on the high-dimensional systems targeted for solution, perhaps we should be looking at the heuristic systems cognizing—perhaps this is where the real patterns gainsaying the reality of representations lie, here, in our heads. But this is absurd, of course, since the whole point of saying representations are real (enough) is to say they’re out there (enough), independent of our determinations one way or another.

No matter how we play this discursive shell game, the structure of heuristic cognition guarantees that we’ll never discover the ‘real pattern pea,’ even with intentional phenomena so apparently manifest (because so useful in both everyday and scientific contexts) as representations. There are real systems, to be sure, systems that make ‘identifying representations’ as easy as directing attention to the television screen. But those systems are as much here as they are there, making that television screen simply another component in a greater whole. Without the here, there is no there, which is to say, no ‘representation.’ Medial neglect assures that the astronomical dimensionality of the here is flattened into near oblivion, stranding cognition with a powerful intuition of a representational there. Thanks to our ancestors, who discovered myriad ways to manipulate information to cue visual cognition out of school, to drape optical illusions across their cave walls, or to press them into lumps of clay, we’ve become so accustomed to imagery as to entirely forget the miraculousness of seeing absent things in things present. Those cues are more or less isomorphic to the actual systems comprising the ancestral problem ecologies visual cognition originally evolved to manage. This is why they work. They recapitulate certain real patterns of information in certain ways—as does your retina, your optic nerve, and every stage of visual cognition culminating in visual experience. The only thing ‘special’ about the recapitulations belonging to your television screen is their availability, not simply to visual cognition, but to our attempts to cognize/troubleshoot such instances of visual cognition. The recapitulations on the screen, unlike, say, the recapitulations captured by our retinas, are the one thing we can readily troubleshoot should they begin miscuing visual cognition. Neglect ensures the intuition of sufficiency, the conviction that the screen is the basis, as opposed to simply another component in a superordinate whole. So, we fetishize it, attribute efficacies belonging to the system to what is in fact just another component. All its enabling entanglements vanish into the apparent miracle of unmediated semantic relationships to whatever else happens to be available. Look! we cry. Representation!

Figure 1: This image of the Martian surface taken by Viking 1 in 1976 caused a furor on earth, for obvious reasons.

Figure 2: Images such as this one taken by the Mars Reconnaissance Orbiter reveal the former to be an example of facial pareidolia, an instance where information cues facial recognition where no faces are to be found. The “Face on Mars” seems to be an obvious instance of projection—mere illusion—as opposed to discovery. Until, that is, one realizes that both of these images consist of pixels cuing your visual systems ‘out of school’! Both, in other words, constitute instances of pareidolia: the difference lies in what they enable.

Some apparent squircles, it turns out, are dreadfully useful. So long as the deception is systematic, it can be instrumentalized any which way. Environmental interaction is the basis of neural selection (learning), and neural selection is the basis of environmental domination. What artificial visual cuing—‘representation’—provides is environmental interaction on the cheap, ways to learn from experience without having to risk or endure experience. A ‘good trick’ indeed!

This brings us to a great fault-line running through the entirety of Dennett’s corpus. The more instrumental a posit, the more inclined he is to say it’s ‘real.’ But when critics accuse him of instrumentalism, he adverts to the realities underwriting the instrumentalities, what enables them to work, to claim a certain (ambiguous, he admits) brand of realism. But as should now be clear, what he elides when he does this is nothing less than the structure of heuristic cognition, which blindly exploits the systematic correlations between information available and the systems involved to solve those systems as far as constraints on availability and capacity allow.

The reason he can elide the structure of heuristic cognition (and so find his real patterns argument convincing) lies, pretty clearly, I think, in the conflation of human intentional cognition (which is radically heuristic) with the intentional stance. In other words, he confuses what’s actually happening in instances of intentional cognition with what seems to be happening in instances of intentional cognition, given neglect. He runs afoul Cartesian gravity. “We tend to underestimate the strength of the forces that distort our imaginations,” he writes, “especially when confronted by irreconcilable insights that are ‘undeniable’” (22). Given medial neglect, the inability to cognize our contemporaneous cognizing, we are bound to intuit the order as ‘there’ (as ‘lateral’) even when we, like Dennett, should know better. Environmentalization is, as Hume observed, the persistent reflex, the sufficiency effect explaining our default tendency to report medial artifacts, features belonging to the signal, as genuine environmental phenomena, or features belonging to the source.

As a heuristic device, an assumption circumventing the brute fact of medial neglect, the environmentalization heuristic possesses an adaptive problem ecology—or as Dennett would put it, ‘normal’ and ‘abnormal’ applications. The environmentalization heuristic, in other words, possesses adaptive application conditions. What Dennett would want to argue, I’m sure, is that ‘representations’ are no more or less heuristic than ‘centres of gravity,’ and that we are no more justified in impugning the reality of the one than the reality of the other. “I don’t see why my critics think their understanding about what really exists is superior to mine,” he complains at one point in From Bacteria to Bach and Back, “so I demur” (224). And he’s entirely right on this score: no one has a clue as to what attributing reality amounts to. As he writes regarding the reality of beliefs in “Real Patterns”:

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. 29

Heuristic Neglect Theory, however, actually puts us in a position to make a great deal of sense of ‘reality.’ We can see, rather plainly, I think, the disanalogy between ‘centres of gravity’ and ‘beliefs,’ the disanalogy that leaps out as soon as we consider how only the latter patterns require the intentional stance (or more accurately, intentional cognition) to become salient. Both are heuristic, certainly, but in quite different ways.

We can also see the environmentalization heuristic at work in the debate between whether ‘centres of gravity’ are real or merely instrumental, and Dennett’s claim that they lie somewhere in-between. Do ‘centres of gravity’ belong to the order which is there, or do we simply project them in useful ways? Are they discoveries, or impositions? Why do we find it so natural to assume either the one or the other, and so difficult to imagine Dennett’s in-between or ‘intermediate’ realism? Why is it so hard conceiving of something half-real, half-instrumental?

The fundamental answer lies in the combination of frame and medial neglect. Our blindness to the enabling dimension of cognition renders cognition, from the standpoint of metacognition, an all but ethereal exercise. ‘Transparency’ is but one way of thematizing the rank incapacity generally rendering environmentalization such a good trick. “Of course, centres of gravity lie out there!” We are more realists than instrumentalists. The more we focus on the machinery of cognition, however, the more dimensional the medial becomes, the more efficacious, and the more artifactual whatever we’re focusing on begins to seem. Given frame neglect, however, we fail to plug this higher-dimensional artifactuality into the superordinate systems encompassing all instances of cognition, thus transforming gears into tools, fetishizing those instances, in effect. “Of course, centres of gravity organize out there!” We become instrumentalists.

If these incompatible intuitions are all that the theoretician has to go on, then Dennett’s middle way can only seem tendentious, an attempt to have it both ways. What makes Dennett’s ‘mild or intermediate’ realism so difficult to imagine is nothing less than Cartesian gravity, which is to say, the compelling nature of the cognitive illusions driving our metacognitive intuitions either way. Squares viewed on this angle become circles viewed on that. There’s no in-between! This is why Dennett, like so many revolutionary philosophical thinkers before him, is always quick to reference the importance of imagination, of envisioning how things might be otherwise. He’s always bumping against the limits of our shackles, calling attention to the rattle in the dark. Implicitly, he understands the peril that neglect, by way of sufficiency, poses to our attempts to puzzle through these problems.

But only implicitly, and as it turns out (given tools so blunt and so complicit as the intentional stance), imperfectly. On Heuristic Neglect Theory, the practical question of what’s real versus what’s not is simply one of where and when the environmentalization heuristic applies, and the theoretical question of what’s ‘really real’ and what’s ‘merely instrumental’ is simply an invitation to trip into what is obviously (given the millennial accumulation of linguistic wreckage) metacognitive crash space. When it comes to ‘centres of gravity,’ environmentalization—or the modifier ‘real’—applies because of the way the posit economizes otherwise available, as opposed to unavailable, information. Heuristic posits centres of gravity might be, but ones entirely compatible with the scientific examination of deep information environments.

Such is famously not the case with posits like ‘belief’ or ‘representation’—or for that matter, ‘real’! The heuristic mechanisms underwriting environmentalization are entirely real, as is the fact that these heuristics do not simply economize otherwise available information, but rather compensate for structurally unavailable information. To this extent, saying something is ‘real’—acknowledging the applicability of the environmentalization heuristic—involves the order here as much as the order there, so far as it compensates for structural neglect, rather than mere ignorance or contingent unavailability. ‘Reality’ (like ‘truth’) communicates our way of selecting and so sorting environmental interactions while remaining almost entirely blind to the nature of those environmental interactions, which is to say, neglecting our profound continuity with those environments.

At least as traditionally (intentionally) conceived, reality does not belong to the real, though reality-talk is quite real, and very useful. It pays to communicate the applicability of environmentalization, if only to avoid the dizzying cognitive challenges posed by the medial, enabling dimensions of cognition. Given the human circuit, truth-talk can save lives. The apparent paradox of such declarations—such as saying, for instance, that it’s true that truth does not exist—can be seen as a direct consequence of frame and medial neglect, one that, when thought carefully through step by empirically tractable step, was pretty much inevitable. We find ourselves dumbfounding for good reason!

The unremarkable fact is that the heuristic systems we resort to when communicating and trouble-shooting cognition are just that: heuristic systems we resort to when communicating and trouble-shooting cognition. And what’s more, they possess no real theoretical power. Intentional idioms are all adapted to shallow information ecologies. They comprise the communicative fraction of compensatory heuristic systems adapted not simply to solve astronomically complicated systems on the cheap, but absent otherwise instrumental information belonging to our deep information environments. Applying those idioms to theoretical problems amounts to using shallow resources to solve the natural deeps. The history of philosophy screams underdetermination for good reason! There’s no ‘fundamental ontology’ beneath, no ‘transcendental functions’ above, and no ‘language-games’ or ‘intentional stances’ between, just the machinations of meat, which is why strokes and head injuries and drugs produce the boggling cognitive effects they do.

The point to always keep in mind is that every act of cognition amounts to a systematic meeting of at least two functionally distinct systems, the one cognized, the other cognizing. The cognitive facts of life entail that all cognition remains, in some fundamental respect, insensitive to the superordinate system explaining the whole, let alone the structure and activity of cognition. This inability to cognize our position within superordinate systems (frame neglect) or to cognize our contemporaneous cognizing (medial neglect) is what renders the so-called first-person (intentional stance) homuncular, blind to its own structure and dynamics, which is to say, oblivious to the role ‘here’ plays in ordering ‘there.’ This is what cognitive science needs to internalize, the way our intentional and phenomenal idioms steer us blindly, absent any high-dimensional input, toward solutions that, when finally mapped, will bear scant resemblance to the metacognitive shadows parading across our cave walls. And this is what philosophy needs to internalize as well, the way its endless descriptions and explanations, all the impossible figures—squircles—comprising the great bestiary of traditional reflection upon the nature of the soul, are little more than illusory artifacts of its inability to see its inability to see. To say something is ‘real’ or ‘true’ or ‘factual’ or ‘represents,’ or what have you is to blindly cue blind orientations in your fellows, to lock them into real but otherwise occluded systems, practically and even experimentally efficacious circuits, not to invoke otherworldly functions or pick out obscure-but-real patterns like ‘qualia’ or ‘representations.’

The question of ‘reality’ is itself a heuristic question. As horribly counter-intuitive as all this must sound, we really have no way of cognizing the high-dimensional facts of our environmental orientation, and so no choice but to problem-solve those facts absent any inkling of them. The issue of ‘reality,’ for us, is a radically heuristic one. As with all heuristic matters, the question of application becomes paramount: where does externalization optimize, and where does it crash? It optimizes where the cues relied upon generalize, provide behavioural handles that can be reverse-engineered—‘reduced’—absent reverse-engineering us. It optimizes, in other words, wherever frame and medial neglect do not matter. It crashes, however, where the cues relied upon compensate, provide behavioural handles that can only be reverse-engineered by reverse-engineering ourselves.

And this explains the ‘gobsmacking fact’ with which we began, how we can source the universe all the way back to the first second, and yet remain utterly confounded by our ability to do so. Short cognitive science, compensatory heuristics were all that we possessed when it came to the question of ourselves. Only now do we find ourselves in a position to unravel the nature of the soul.

The crazy thing to understand, here, the point Dennett continually throws himself toward in From Bacteria to Bach and Back only to be drawn back out on the Cartesian tide, is that there is no first-person. There is no original or manifest or even scientific ‘image’: these all court ‘imaginative distortion’ because they, like the intentional stance, are shallow ecological artifacts posturing as deep information truths. It is not the case that, “[w]e won’t have a complete science of consciousness until we can align our manifest-image identifications of mental states by their contents with scientific-image identifications of the subpersonal information structures and events that are causally responsible for generating the details of the user-illusion we take ourselves to operate in” (367)—and how could it be, given our abject inability to even formulate ‘our manifest-image identifications,’ to agree on the merest ‘detail of our user-illusion’? There’s a reason Tom Clark emphasizes this particular passage in his defense of qualia! If it’s the case that Dennett believes a ‘complete science of consciousness’ requires the ‘alignment’ of metacognitive reports with subpersonal mechanisms then he is as much a closet mysterian as any other intentionalist. There’s simply too many ways to get lost in the metacognitive labyrinth, as the history of intentional philosophy amply shows.

Dennett needs only continue following the heuristic tracks he’s started down in From Bacteria to Bach and Back—and perhaps recall his own exhortation to imagine—to see as much. Imagine how it was as a child, living blissfully unaware of philosophers and scientists and their countless confounding theoretical distinctions and determinations. Imagine the naïveté, not of dwelling within this or that ‘image,’ but within an ancestral shallow information ecology, culturally conditioned to be sure, but absent the metacognitive capacity required to run afoul sufficiency effects. Imagine thinking without ‘having thoughts,’ knowing without ‘possessing knowledge,’ choosing without ‘exercising freedom.’ Imagine this orientation and how much blinkered metacognitive speculation and rationalization is required to transform it into something resembling our apparent ‘first-person perspective’—the one that commands scarcely any consensus beyond exceptionalist conceit.

Imagine how much blinkered metacognitive speculation and rationalization is required to transform it into the intentional stance.

So, what, then, is the intentional stance? An illusory artifact of intentional cognition, understood in the high-dimensional sense of actual biological mechanisms (both naturally and neurally selected), not the low-dimensional, contentious sense of an ‘attitude’ or ‘perspective.’ The intentional stance represents an attempt to use intentional cognition to fundamentally explain intentional cognition, and in this way, it is entirely consonant with the history of philosophy as a whole. It differs—perhaps radically so—in the manner it circumvents the metacognitive tendency to report intentional phenomena as intrinsic (self-sufficient), but it nevertheless remains a way to theorize cognition and experience via, as Dennett himself admits, resources adapted to their practical troubleshooting.

The ‘Cartesian wound’ is no more than theatrical paint, stage make-up, and so something to be wiped away, not healed. There is no explanatory gap because there is no first-person—there never has been, apart from the misapplication of radically heuristic, practical problem-solving systems to the theoretical question of the soul. Stripped of the first-person, consciousness becomes a natural phenomenon like any other, baffling only for its proximity, for overwriting the very page it attempts to read. Heuristic Neglect Theory, in other words, provides a way for us to grasp what we are, what we always have been: a high-dimensional physical system possessing selective sensitivities and capacities embedded in other high-dimensional physical systems. This is what you’re experiencing now, only so far as your sensitivities and capacities allow. This, in other words, is this… You are fundamentally inscrutable unto yourself outside practical problem-solving contexts. Everything else, everything apparently ‘intentional’ or ‘phenomenal’ is simply ‘seems upon reflection.’ There is no ‘manifest image,’ only a gallery of competing cognitive illusions, reflexes to report leading to the crash space we call intentional philosophy. The only ‘alignment’ required is that between our shallow information ecology and our deep information environments: the ways we do much with little, both with reference to each other and with ourselves. This is what you reference when describing a concert to your buddies. This is what you draw on when you confess your secrets, your feelings, your fears and aspirations. Not a ‘mind,’ not a ‘self-model,’ nor even a ‘user illusion,’ but the shallow cognitive ecology underwriting your brain’s capacity to solve and report itself and others.

There’s a positively vast research project buried in this outlook, and as much would become plain, I think, if enough souls could bring themselves to see past the fact that it took an institutional outsider to discover it. The resolutely post-intentional empirical investigation of the human has scarcely begun.