Three Pound Brain

No bells, just whistling in the dark…

Month: March, 2018

Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generate (via, Pinker suggests, the availability heuristic (42)) exaggerated estimations of threat.

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by providing guesses regarding ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree that moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago, a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked aversion, a world where securing praise meant braving condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298) but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure one another as much as they need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Notice the plethora of nonthreatening female voices? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Enlightenment How? Pinker’s Tutelary Natures

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

Where is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short of understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker describes the process of Enlightenment, the movement away from our tutelary natures, in terms of a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks to “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves nature, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987), calls it. Nevertheless, short of grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on revolutionizing our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed; nevertheless, humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, sacrificing not so much accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
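The ‘fast and frugal’ idea is easy to make concrete. What follows is a minimal illustrative sketch (not drawn from the source) in the spirit of Gigerenzer-style ‘take-the-best’: the procedure consults cues one at a time, in order of assumed validity, and decides the moment a single cue discriminates, neglecting everything else. The cue names, values, and their ordering are all invented for the example.

```python
def take_the_best(option_a, option_b, cues):
    """Return the option favored by the first discriminating cue.

    option_a, option_b: dicts mapping cue name -> 1 (present) or 0 (absent).
    cues: cue names ordered from most to least valid.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:            # the first cue that discriminates decides
            return "A" if a > b else "B"
    return None               # no cue discriminates: fall back to guessing

# Which of two (hypothetical) cities is larger? Decide on shallow cues,
# never consulting the 'deep' facts (actual populations) at all.
cues = ["is_capital", "has_airport", "has_university"]
city_a = {"is_capital": 0, "has_airport": 1, "has_university": 1}
city_b = {"is_capital": 1, "has_airport": 1, "has_university": 0}

print(take_the_best(city_a, city_b, cues))  # -> B ('is_capital' decides)
```

The point of the sketch is the neglect: the procedure is cheap and often effective precisely because it ignores all but one cue, and so succeeds only while the cue remains reliably correlated with the outcome, which is the ecological dependency at issue in the text.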

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or, more generally, by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deirdre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue). So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011), that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short of remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remains outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.

References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deirdre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S., Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behavior’, in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks Daniel D 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012 ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 8, 501–517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

Meta-problem vs. Scandal of Self-Understanding

by rsbakker

Let’s go back to Square One.

Try to recall what it was like before what it was like became an issue for you. Remember, if you can, a time when you had yet to reflect on the bald fact, let alone the confounding features, of experience. Square One refers to the state of metacognitive naivete, what it was like when experience was an exclusively practical concern, and not at all a theoretical one.

David Chalmers has a new paper examining the ‘meta-problem’ of consciousness, the question of why we find consciousness so difficult to fathom. As in his watershed “Consciousness and Its Place in Nature,” he sets out to exhaustively map the dialectical and evidential terrain before adducing arguments. After cataloguing the kinds of intuitions underwriting the meta-problem he pays particularly close attention to various positions within illusionism, insofar as these theories see the hard problem as an artifact of the meta-problem. He ends by attempting to collapse all illusionisms into strong illusionism—the thesis that consciousness doesn’t exist—which he thinks is an obvious reductio.

As Peter Hankins points out in his canny Conscious Entities post on the article, the relation between problem reports and consciousness is so vexed as to drag meta-problem approaches back into the traditional speculative mire. But there’s a bigger problem with Chalmers’ account of the meta-problem: it’s far too small. The meta-problem, I hope to show, is part and parcel of the scandal of self-knowledge, the fact that every discursive cork in Square Two, no matter how socially or individually indispensable, bobs upon the foam of philosophical disputation. The real question, the one our species takes for granted but alien anthropologists would find fascinating, is why do humans find themselves so dumbfounding? Why does normativity mystify us? Why does meaning stupefy? And, of course, why is phenomenality so inscrutable?

Chalmers, however, wants you to believe the problem is restricted to phenomenality:

I have occasionally heard the suggestion that internal self-models will inevitably produce problem intuitions, but this seem[s] clearly false. We represent our own beliefs (such as my belief that Canberra is in Australia), but these representations do not typically go along with problem intuitions or anything like them. While there are interesting philosophical issues about explaining beliefs, they do not seem to raise the same acute problem intuitions as do experiences.

and yet in the course of cataloguing various aspects of the meta-problem, Chalmers regularly finds himself referring to similarities between beliefs and consciousness.

Likewise, when I introspect my beliefs, they certainly do not seem physical, but they also do not seem nonphysical in the way that consciousness does. Something special is going on in the consciousness case: insofar as consciousness seems nonphysical, this seeming itself needs to be explained.

Both cognition and consciousness seem nonphysical, but not in the same way. Consciousness, Chalmers claims, is especially nonphysical. But if we don’t understand the ‘plain’ nonphysicality of beliefs, then why tackle the special nonphysicality of conscious experience?

Here the familiar problem strikes again: Everything I have said about the case of perception also applies to the case of belief. When a system introspects its own beliefs, it will typically do so directly, without access to further reasons for thinking it has those beliefs. Nevertheless, our beliefs do not generate nearly as strong problem intuitions as our phenomenal experiences do. So more is needed to diagnose what is special about the phenomenal case.

If more is needed, then what sense does it make to begin looking for this ‘more’ in advance, without understanding what knowledge and experience have in common?

Interrogating the problem of intentionality and consciousness in tandem becomes even more imperative when we consider the degree to which Chalmers’ categorizations and evaluations turn on intentional vocabularies. The hard problem of consciousness may trigger more dramatic ‘problem intuitions,’ but it shares with the hard problem of cognition a profound inability to formulate explananda. There’s no more consensus on the nature of belief than there is on the nature of consciousness. We remain every bit as stumped, if not quite as agog.

Not only do intentional vocabularies remain every bit as controversial as phenomenal ones in theoretical explanatory contexts, they also share the same apparent incompatibilities with natural explanation. Is it a coincidence that both vocabularies seem irreducible? Is it a coincidence they both seem nonphysical? Is it a coincidence that both seem incompatible with causal explanation? Is it a coincidence that each implicates the other?

Of course not. They implicate each other because they’re adapted to function in concert. Since they function in concert, there’s a good chance their shared antipathy to causal explanation turns on shared mechanisms. The same can be said regarding their apparent irreducible nonphysicality.

And the same can be said of the problem they pose.

Square Two, then, our theoretical self-understanding, is mired in theoretical disputation. Every philosopher (the present one included) will be inclined to think their understanding is the exception, but this does nothing to change the fact of disputation. If we characterize the space of theoretical self-understanding—Square Two—as a general controversy space, we see that Chalmers, as an intentionalist, has taken a position in intentional controversy space to explicate phenomenal controversy space.

Consider his preferred account of the meta-problem:

To sum up what I see as the most promising approach: we have introspective models deploying introspective concepts of our internal states that are largely independent of our physical concepts. These concepts are introspectively opaque, not revealing any of the underlying physical or computational mechanisms. We simply find ourselves in certain internal states without having any more basic evidence for this. Our perceptual models perceptually attribute primitive perceptual qualities to the world, and our introspective models attribute primitive mental relations to those qualities. These models produce the sense of acquaintance both with those qualities and with our awareness of those qualities.

While the gist of this picture points in the right direction, the posits used—representations, concepts, beliefs, attributions, acquaintances, awarenesses—doom it to dwell in perpetual underdetermination, which is to say, discursive ground friendly to realists like Chalmers. It structures the meta-problem according to a parochial rationalization of terms no one can decisively formulate, let alone explain. It is assured, in other words, to drag the meta-problem into the greater scandal of self-knowledge.

To understand why Square Two has proven so problematic in general, one needs to take a step back, to relinquish one’s countless Square Two prejudices, and reconsider things from the standpoint of biology. Why, biologically speaking, should an organism find cognizing itself so difficult? Not only is this the most general form of the question that Chalmers takes himself to be asking, it is posed from a position outside the difficulty it interrogates.

The obvious answer is that biology, and cognitive biology especially, is so fiendishly complicated. The complexity of biology all but assures that cognition will neglect biology and fasten on correlations between ‘surface irritations’ and biological behaviours. Why, for instance, should a frog cognize fly biology when it need only strike at black dots?

The same goes for metacognitive capacities: Why metacognize brain biology when we need only hold our tongue at dinner, figure out what went wrong with the ambush, explain what happened to the elders, and so on? On any plausible empirical story, metacognition consists in an opportunistic array of heuristic systems possessing the access and capacity to solve various specialized domains. The complexity of the brain all but assures as much. Given the intractability of the processes monitored, metacognitive consumers remain ‘source insensitive’—they solve absent any sensitivity to underlying systems. As need-to-know consumers adapted to solving practical problems in ancestral contexts, we should expect retasking those capacities to the general problem of ourselves would prove problematic. As indeed it has. Our metacognitive insensitivity, after all, extends to our insensitivity: we are all but oblivious to the source-insensitive, heuristic nature of metacognition.

And this provides biological grounds to predict the kinds of problems such retasking might generate; it provides an elegant, scientifically tractable way to understand a great number of the problems plaguing human self-knowledge.

 

We should expect metacognitive (and sociocognitive) application problems. Given that metacognition neglects the heuristic limits of metacognition, all novel applications of metacognitive capacities to new problem ecologies (such as those devised by the ancient Greeks) run the risk of misapplication. Imagine rebuilding an engine with invisible tools. Metacognitive neglect assures that trial-and-error provides our only means of sorting between felicitous and infelicitous applications.

We should expect incompatibility with source-sensitive modes of cognition. Source-insensitive cognitive systems are primed to solve via information ecologies that systematically neglect the actual systems responsible. We rely on robust correlations between the signal available and the future behaviour of the system requiring solution–‘clues’ some heuristic researchers call them. The ancestral integration of source-sensitive and source-insensitive cognitive modes (as in narrative, say, which combines intentional and causal cognition) assures at best specialized linkages. Beyond these points of contact, the modes will be incompatible given the specificity of the information consumed in source-insensitive systems.

We should expect to suffer illusions of sufficiency. Given the dependence of all cognitive systems on the sufficiency of upstream processing for downstream success, we should expect insensitivity to metacognitive insufficiency to result in presumptive sufficiency. Systems don’t need a second set of systems monitoring the sufficiency of every primary system to function: sufficiency is the default. Retasking metacognitive capacities to theoretical problems, we can presume, deploys as sufficient despite almost certainly being insufficient. This can be seen as a generalization of WYSIATI, or ‘what-you-see-is-all-there-is,’ the principle Daniel Kahneman uses to illustrate how certain heuristic mechanisms do not discriminate between sufficient and insufficient information.

We should expect to suffer illusions of simplicity (or identity effects). Given metacognitive insensitivity to its insensitivity, it remains blind to artifacts of that insensitivity as artifacts. The absence of distinction will be intuited as simplicity. Flicker-fusion as demonstrated in psychophysics almost certainly possesses cognitive and metacognitive analogues, instances where the lack of distinction reports as identity or simplicity. The history of science is replete with examples of mistaking artifacts of information poverty for properties of nature. The small was simple prior to the microscope and the discovery of endless subvisibilia. The heavens consisted of spheres.

We should expect to suffer illusions of free-floating efficacy. The ancestral integration of source-insensitive and source-sensitive cognition underwrites fetishism, the cognition of sources possessing no proximal sources. In his cognitive development research, Andrei Cimpian calls these ‘inherence heuristics,’ where, in ignorance of extrinsic factors, we impute an intrinsic efficacy to cognize/communicate local effects. We are hardwired to fetishize.

We should expect to suffer entrenched only-game-in-town effects. In countless contexts, ignorance of alternatives fools individuals into thinking their path necessary. This is why Kant, who had no inkling of the interpretive jungle to come, thought he had stumbled across a genuine synthetic a priori science. Given metacognitive insensitivity to its insensitivity, the biological parochialism of source-insensitive cognition is only manifest in applications. Once detected, neglect assures the distinctiveness of source-insensitive cognition will seem absolute, lending itself to reports of autonomy. So where Kant ran afoul the only-game-in-town effect in declaring his discourse apodictic, he also ran afoul a biologically entrenched version of the same effect in declaring cognition transcendental.

We should expect misfires will be systematic. Generally speaking, rules of thumb do not cease being rulish when misapplied. Heuristic breakdowns are generally systematic. Where the system isn’t crashed altogether, the consequences of mistakes will be structured and iterable. This predictability allows certain heuristic breakdowns to become valuable tools. The Pleistocene discovery that applying pigments to surfaces could cue the (cartoon) visual cognition of nearly anything examples one particularly powerful instrumentalization of heuristic systematicity. Metacognition is no different than visual cognition in this regard: like visual heuristics, cognitive heuristics generate systematic ‘illusions’ admitting, in some cases, genuine instrumentalizations (things like ‘representations’ and functional analyses in empirical psychology), but typically generating only disputation otherwise.

We should expect to suffer performative interference-effects (breakdowns in ‘meta-irrelevance’). The intractability of the enabling axis of cognition, the inevitability of medial neglect, forces the system to presume its cognitive sufficiency. As a result, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning is not required to solve whatever the problem at hand. Nonhuman cognizers, for instance, are comparatively reliant on the sufficiency of their cognitive apparatus: they can’t, like us, raise a finger and say, ‘On second thought,’ or visit the doctor, or lay off the weed, or argue with their partner. Humans possess a plethora of hacks, heuristic ways to manage cognitive shortcomings. Nevertheless, the closer our metacognitive tools come to ongoing, enabling access—the this-very-moment-now of cognition—the more regularly they will crash, insofar as these too require meta-irrelevance.

We should expect chronic underdetermination. Metacognitive resources adapted to the solution of ancestral practical problems have no hope of solving for the nature of experience or cognition.

We should expect ontological confusion. As mentioned, cognition biomechanically depends on the ‘meta-irrelevance’ of its own systems; it requires that information pertaining to its functioning is not required to solve whatever the problem at hand. Metacognitive resources retasked to solve for these systems flounder, then begin systematically confusing artifacts of medial neglect for the dumbfounding explananda of cognition and experience. Missing dimensions are folded into neglect, and metacognition reports these insufficiencies as sufficient. Source insensitivity becomes source independence. Complexity becomes simplicity. Only a second ‘autonomous’ ontology will do.

 

Floridi’s Plea for Intentionalism

by rsbakker

 

Questioning Questions

Intentionalism presumes that intentional modes of cognition can solve for intentional modes of cognition, that intentional vocabularies, and intentional vocabularies alone, can fund bona fide theoretical understanding of intentional phenomena. But can they? What evidences their theoretical efficacy? What, if anything, does biology have to say?

No one denies the enormous practical power of those vocabularies. And yet, the fact remains that, as a theoretical explanatory tool, they invariably deliver us to disputation—philosophy. To rehearse my favourite William Uttal quote: “There is probably nothing that divides psychologists of all stripes more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (The New Phrenology, p.90).

In his “A Plea for Non-naturalism as Constructionism,” Luciano Floridi undertakes a comprehensive revaluation of this philosophical and cognitive scientific inability to decisively formulate, let alone explain, intentional phenomena. He begins with a quote from Quine’s seminal “Epistemology Naturalized,” the claim that “[n]aturalism does not repudiate epistemology, but assimilates it to empirical psychology.” Although Floridi entirely agrees that the sciences have relieved philosophy of a great number of questions over the centuries, he disagrees with Quine’s ‘assimilation,’ the notion of naturalism as “another way of talking about the death of philosophy.” Acknowledging that philosophy needs to remain scientifically engaged—naturalistic—does not entail discursive suicide. “Philosophy deals with ultimate questions that are intrinsically open to reasonable and informed disagreement,” Floridi declares. “And these are not “assimilable” to scientific enquiries.”

Ultimate? Reading this, one might assume that Floridi, like so many other thinkers, has some kind of transcendental argument operating in the background. But Floridi is such an exciting philosopher to read precisely because he isn’t ‘like so many other thinkers.’ He hews to intentionalism, true, but he does so in a manner that is uniquely his own.

To understand what he means by ‘ultimate’ in this paper we need to visit another, equally original essay of his, “What is a Philosophical Question?” where he takes an information ‘resource-oriented’ approach to the issue of philosophical questions, “the simple yet very powerful insight that the nature of problems may be fruitfully studied by focusing on the kind of resources required in principle to solve them, rather than on their form, meaning, reference, scope, and relevance.” He focuses on the three kinds of questions revealed by this perspective: questions requiring empirical resources, questions requiring logico-mathematical resources, and questions requiring something else—what he calls ‘open questions.’ Philosophical questions, he thinks, belong to this latter category.

But if open questions admit no exhaustive empirical or formal determination, then why think them meaningful? Why not, as Hume famously advises, consign them to the flames? Because, Floridi argues, they are inescapable. Open questions possess no regress enders: they are ‘closed’ in the set-theoretic sense, which is to say, they are questions whose answers always beget more questions. To declare answers to open questions meaningless or trivial is to answer an open question.

But since not all open questions are philosophical questions, Floridi needs to restrict the scope of his definition. The difference, he thinks, is that philosophical questions “tend to concentrate on more significant and consequential problems.” Philosophical questions, in addition to being open questions, are also ultimate questions, not in any foundational or transcendental sense, but in the sense of casting the most inferential shade across less ultimate matters.

Ultimate questions may be inescapable, as Floridi suggests, but this in no way allays the problem of the resources used to answer them. Why not simply answer them pragmatically, or with a skeptical shrug? Floridi insists that the resources are found in “the world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings,” or what he calls ‘noetic resources,’ the non-empirical, non-formal fund of things that we know. Philosophical questions, in addition to being ultimate, open questions, require noetic resources to be answered.

But all questions, of course, are not equal. Some philosophical problems, after all, are mere pseudo-problems, the product of the right question being asked in the wrong circumstances. Though the ways in which philosophical questions misfire seem manifold, Floridi focusses on a single culprit to distinguish ‘bad’ from ‘good’ philosophical questions: the former, he thinks, overstep their corresponding ‘level of abstraction,’ aspiring to be absolute or unconditioned. Philosophical questions, in addition to being noetic, ultimate, open questions, are also contextually appropriate questions.

Philosophy, then, pertains to questions involving basic matters, lacking decisive empirical or formal resources and so lacking institutional regress enders. Good philosophy, as opposed to bad, is always conditional, which is to say, sensitive to the context of inquiry. It is philosophy in this sense that Floridi thinks lies beyond the pale of Quinean assimilation in “A Plea for Non-naturalism as Constructionism.”

But resistance to assimilation isn’t his only concern. Science, Floridi thinks, is caught in a predicament: as ever more of the universe is dragged from the realm of open, philosophical interrogation into the realm of closed, scientific investigation, the technology enabled by and enabling this creeping closure is progressively artificializing our once natural environments. Floridi writes:

“the increasing and profound technologisation of science is creating a tension between what we try to explain, namely all sorts of realities, and how we explain it, through the highly artificial constructs and devices that frame and support our investigations. Naturalistic explanations are increasingly dependent on non-natural means to reach such explanations.”

This, of course, is the very question at issue between the meaning skeptic and the meaning realist. To make his case, Floridi has to demonstrate how and why the artefactual isn’t simply more nature, every bit as bound by the laws of thermodynamics as everything else in nature. Why think the ‘artificial’ is anything more than (to turn a Hegelian line on its head) ‘nature reborn’? To presume as much would be to beg the question—to run afoul the very ‘scholasticism’ Floridi criticizes.

Again, he quotes Quine from “Epistemology Naturalized,” this time the famous line reminding us that the question of “how irritations of our sensory surfaces” result in knowledge is itself a scientific question. The absurdity of the assertion, Floridi thinks, is easily assayed by considering the complexity of cognitive and aesthetic artifacts: “by the same reasoning, one should then try to answer the question how Beethoven managed to arrive at his Ode to Joy from the seven-note diatonic musical scale, Leonardo to his Mona Lisa from the three colours in the RGB model, Orson Welles to his Citizen Kane from just black and white, and today any computer multimedia from just zeros and ones.”

The egregious nature of the disanalogies here is indicative of the problem Floridi faces. Quine’s point isn’t that knowledge reduces to sensory irritations, merely that knowledge consists of scientifically tractable physical processes. For all his originality, Floridi finds himself resorting to a standard ‘you-can’t-get-there-from-here’ argument against eliminativism. He even cites the constructive consensus in neuroscience, thinking it evidences the intrinsically artefactual nature of knowledge. But he never explains why the artefactual nature of knowledge—unlike the artefactual nature of, say, a bird’s nest—rules out the empirical assimilation of knowledge. Biology isn’t any less empirical for being productive, so what’s the crucial difference here? At what point does artefactual qua biological become artefactual qua intentional?

Epistemological questions, he asserts, “are not descriptive or scientific, but rather semantic and normative.” But Quine is asking a question about epistemology and whether what we now call cognitive science can exhaustively answer it. As it so happens the question of epistemology as a natural phenomenon is itself an epistemological question, and as such involves the application of intentional (semantic and normative) cognitive modes. But why think these cognitive modes themselves cannot be empirically described and explained the way, for example, neuroscience has described and explained the artefactual nature of cognition? If artefacts like termite mounds and bird’s nests admit natural explanations, then why not knowledge? Given that he hopes to revive “a classic, foundationalist role for philosophy itself,” this is a question he has got to answer. Philosophers have a long history of attempting to secure the epistemological primacy of their speculation on the back of more speculation. Unless Floridi is content with “an internal ‘discourse’ among equally minded philosophers,” he needs to explain what makes the artifactuality of knowledge intrinsically intentional.

In a sense, one can see his seminal 2010 work, The Philosophy of Information, as an attempt to answer this question, but he punts on the issue, here, providing only a reference to his larger theory. Perhaps this is why he characterizes this paper as “a plea for non-naturalism, not an argument for it, let alone a proof or demonstration of it.” Even though the entirety of the paper is given over to arguments inveighing against unrestricted naturalism a la Quine, they all turn on a shared faith in the intrinsic intentionality of cognition.

 

Reasonably Reiterable Queries

Floridi defines ‘strong naturalism’ as the thesis that all nonnatural phenomena can be reduced to natural phenomena. A strong naturalist believes that all phenomena can be exhaustively explained using only natural vocabularies. The key term, for him, is ‘exhaustively.’ Although some answers to our questions put the matter to bed, others simply leave us scratching our heads. The same applies to naturalistic explanations. Where some reductions are the end of the matter, ‘lossless,’ others are so ‘lossy’ as to explain nothing at all. The latter, he suggests, make it reasonable to reiterate the original query. This, he thinks, provides a way to test any given naturalization of some phenomena, an ‘RRQ’ test. If a reduction warrants repeating the very question it was intended to answer, then we have reason to assume the reduction to be ‘reductive,’ or lossy.

The focus of his test, not surprisingly, is the naturalistic inscrutability of intentional phenomena:

“According to normative (also known as moral or ethical) and semantic non-naturalism, normative and semantic phenomena are not naturalisable because their explanation cannot be provided in a way that appeals exhaustively and non-reductively only to natural phenomena. In both cases, any naturalistic explanation is lossy, in the sense that it is perfectly reasonable to ask again for an explanation, correctly and informatively.”

This failure, he asserts, demonstrates the category mistake of insisting that intentional phenomena be naturalistically explained. In lieu of an argument, he gives us examples. No matter how thorough our natural explanations of immoral photographs might be, one can always ask, Yes, but what makes them immoral (as opposed to socially sanctioned, repulsive, etc.)? Facts simply do not stack into value—Floridi takes himself to be expounding a version of Hume’s and Moore’s point here. The explanation remains ‘lossy’ no matter what our naturalistic explanation. Floridi writes:

“The recalcitrant, residual element that remains unexplained is precisely the all-important element that requires an explanation in the first place. In the end, it is the contribution that the mind makes to the world, and it is up to the mind to explain it, not the world.”

I’ve always admired, even envied, Floridi for the grace and lucidity of his prose. But no matter how artful, a god of the gaps argument is a god of the gaps argument. Failing the RRQ does not entail that only intentional cognition can solve for intentional phenomena.

He acknowledges the problem here: “Admittedly, as one of the anonymous reviewers rightly reminded me, one may object that the recalcitrant, residual elements still in need of explanation may be just the result of our own insipience (understood as the presence of a question without the corresponding relevant and correct answer), perhaps as just a (maybe even only temporary) failure to see that there is merely a false impression of an information deficit (by analogy with a scandal of deduction).” His answer here is to simply apply his test, suggesting the debate, as interminable, merely underscores “an openness to the questioning that the questioning itself keeps open.” I can’t help but think he feels the thorn, at this point. Short of reading “What is a Philosophical Question?” this turn in the article would be very difficult to parse. Philosophical questioning, Floridi would say, is ‘closed under questioning,’ which is to say, a process that continually generates more questions. The result is quite ingenious. As with Derridean deconstruction, philosophical problematizations of Floridi’s account of philosophy end up evidencing his account of philosophy by virtue of exhibiting the vulnerability of all guesswork: the lack of regress enders. Rather than committing to any foundation, you commit to a dialectical strategy allowing you to pick yourself up by your own hair.

The problem is that RRQ is far from the domesticated discursive tool that Floridi would have you believe it is. If anything, it provides a novel and useful way to understand the limits of theoretical cognition, not the limits of this or that definition of ‘naturalism.’ RRQ is a great way to determine where theoretical guesswork in general begins. Nonnaturalism is the province of philosophy for a reason: every single nonnatural answer ever adduced to answer the question of this or that intentional phenomena has failed to close the door on RRQ. Intentional philosophy, such as Floridi’s, possesses no explanatory regress enders—not a one. It is always rational to reiterate the question wherever theoretical applications of intentional cognition are concerned. This is not the case with natural cognition. If RRQ takes a bite out of natural theoretical explanation of apparent intentional phenomena, then it swallows nonnatural cognition whole.

Raising the question, Why bother with theoretical applications of nonnatural cognition at all? Think about it: if every signal received via a given cognitive mode is lossy, why not presume that cognitive mode defective? The successes of natural theoretical cognition—the process of Quinean ‘assimilation’—show us that lossiness typically dwindles with the accumulation of information. No matter how spectacularly our natural accounts of intentional phenomena fail, we need only point out the youth of cognitive science and the astronomical complexities of the systems involved. The failures of natural cognition belong to the process of natural cognition, the rondo of hypothesis and observation. Theoretical applications of intentional cognition, on the other hand, promise only perpetual lossiness, the endless reiteration of questions and uninformative answers.

One can rhetorically embellish endless disputation as discursive plenitude, explanatory stasis as ontological profundity. One can persuasively accuse skeptics of getting things upside down. Or one can speculate on What-Philosophy-Is, insist that philosophy, instead of mapping where our knowledge breaks down (as it does in fact), shows us where this-or-that ‘ultimate’ lies. In “What is a Philosophical Question?” Floridi writes:

“Still, in the long run, evolution in philosophy is measured in terms of accumulation of answers to open questions, answers that remain, by the very nature of the questions they address, open to reasonable disagreement. So those jesting that philosophy has never “solved” any problem but remains for ever stuck in endless debates, that there is no real progress in philosophy, clearly have no idea what philosophy is about. They may as well complain that their favourite restaurant is constantly refining and expanding its menu.”

RRQ says otherwise. According to Floridi’s own test, the problem isn’t that the restaurant is constantly refining and expanding its menu, the problem is that nothing ever makes it out of the kitchen! It’s always sent back by rational questions. Certainly countless breakdowns have found countless sociocognitive uses: philosophy is nothing if not a recombinant mutation machine. But these powerful adaptations of intentional cognition are simply that: powerful adaptations of natural systems originally evolved to solve complex systems on the metabolic cheap. All attempts to use intentional cognition to theorize their (entirely natural) nature end in disputation. Philosophy has yet to theoretically solve any aspect of intentional cognition. And this merely follows from Floridi’s own definition of philosophy—it just cuts against his rhetorical register. In fact, when one takes a closer, empirical look at the resources available, the traditional conceit at the heart of his nonnaturalism quickly becomes clear.

 

Follow the Money

So, what is it? Why spin a limit, a profound cognitive horizon, into a plenum? Floridi is nothing if not an erudite and subtle thinker, and yet his argument in this paper entirely depends on neglecting to see RRQ for the limit that it is. He does this because he fails to follow through on the question of resources.

For my part, I look at naturalism as a reliance on a particular set of ‘hacks,’ not as any dogma requiring multiple toes scratching multiple lines in the sand. Reverse-engineering—taking things apart, seeing how they work—just happens to be an extraordinarily powerful approach, at least as far as our high-dimensional (‘physical’) environments are concerned. If we can reverse-engineer intentional phenomena—assimilate epistemology, say, to neuroscience—then so much the better for theoretical cognition (if not humanity). We still rely on unexplained explainers, of course, RRQ still pertains, but the boundaries will have been pushed outward.

Now the astronomical complexity of biology doesn’t simply suggest, it entails that we would find ourselves extraordinarily difficult to reverse-engineer, at least at first. Humans suffer medial neglect, fundamental blindness to the high-dimensional structure and dynamics of cognition. (As Floridi acknowledges in his own consideration of Dretske’s “How Do You Know You are Not a Zombie?” the proximal conditions of experience do not appear within experience (see The Philosophy of Information, chapter 13)). The obvious reason for this turns on the limitations of our tools, both onboard and prosthetic. Our ancestors, for instance, had no choice but to ignore biology altogether, to correlate what ‘sensory irritants’ they had available with this or that reproductively decisive outcome. Everything in the middle, the systems and ecology that enabled this cognitive feat, is consigned to neglect (and doomed to be reified as ‘transparency’). Just consider the boggling resources commanded by the cognitive sciences: until very recently reverse-engineering simply wasn’t a viable cognitive mode, at least when it came to living things.

This is what ‘intentional cognition’ amounts to: the collection of ancestral devices, ‘hacks,’ we use to solve, not only one another, but all supercomplicated systems. Since these hacks are themselves supercomplicated, our ancestors had to rely on them to solve for them. Problems involving intentional cognition, in other words, cue intentional problem-solving systems, not because (cue drumroll) intentional cognition inexplicably outruns the very possibility of reverse-engineering, but because our ancestors had no other means.

Recall Floridi’s ‘noetic resources,’ the “world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings” that underwrites philosophical, as opposed to empirical or formal, answers. It’s no accident that the ‘noetic dimension’ also happens to be the supercomplicated enabling or performative dimension of cognition—the dimension of medial neglect. Whatever ancestral resources we possessed, they comprised heuristic capacities geared to information strategically correlated to the otherwise intractable systems. Ancestrally, noetic resources consisted of the information and metacognitive capacity available to troubleshoot applications of intentional cognitive systems. When our cognitive hacks went wrong, we had only metacognitive hacks to rely on. ‘Noetic resources’ refers to our heuristic capacities to troubleshoot the enabling dimension of cognition while neglecting its astronomical complexity.

So, take Floridi’s example of immoral photographs. The problem he faced, recall, was that “the question why they are immoral can be asked again and again, reasonably” not simply of natural explanations of morality, but nonnatural explanations as well. The RRQ razor cuts both ways.

The reason natural cognition fails to decisively answer moral questions should be pretty clear: moral cognition is radically heuristic, enabling the solution of certain sociocognitive problems absent high-dimensional information required by natural cognition. Far from expressing the ‘mind’s contribution’ (whatever that means), the ‘unexplained residuum’ warranting RRQ evidences the interdependence between cues and circumstance in heuristic cognition, the way the one always requires the other to function. Nothing so incredibly lossy as ‘mind’ is required. This inability to duplicate heuristic cognition, however, has nothing to do with the ability to theorize the nature of moral cognition, which is biological through and through. In fact, an outline of such an answer has just been provided here.

Moral cognition, of course, decisively solves practical moral problems all the time (despite often being fantastically uninformative): our ancestors wouldn’t have evolved the capacity otherwise. Moral cognition fails to decisively answer the theoretical question of morality, on the other hand, because it turns on ancestrally available information geared to the solution of practical problems. Like all the other devices comprising our sociocognitive toolbox, it evolved to derive as much practical problem-solving capacity from as little information as possible. ‘Noetic resources’ are heuristic resources, which is to say, ecological through and through. The deliverances of reflection are deliverances originally adapted to the practical solution of ancestral social and natural environments. Small wonder our semantic and normative theories of semantic and normative phenomena are chronically underdetermined! Imagine trying to smell skeletal structure absent all knowledge of bone.

But then why do we persist? Cognitive reflex. Raising the theoretical question of semantic and normative cognition automatically (unconsciously) cues the application of intentional cognition. Since the supercomplicated structure and dynamics of sociocognition belong to the information it systematically neglects, we intuit only this applicability, and nothing of the specialization. We suffer a ‘soda straw effect,’ a discursive version of Kahneman’s What-you-see-is-all-there-is effect. Intuition tells us it has to be this way, while the deliverances of reflection betray nothing of their parochialism. We quite simply did not evolve the capacity either to intuit our nature or to intuit our inability to intuit our nature, and so we hallucinate something inexplicable as a result. We find ourselves trapped in a kind of discursive anosognosia, continually applying problem-parochial access and capacity to general, theoretical questions regarding the nature of inexplicable-yet-(allegedly)-undeniable semantic and normative phenomena.

This picture is itself open to RRQ, of course, the difference being that the positions taken are all natural, and so open to noise reduction as well. As per Quine’s process of assimilation, the above story provides a cognitive scientific explanation for a very curious kind of philosophical behaviour. Savvy to the ecological limits of noetic resources, it patiently awaits the accumulation of empirical resources to explain them, and so actually has a chance of ending the ancient regress.

The image Floridi chases is a mirage, what happens when our immediate intuitions are so impoverished as to arise without qualification, and so smack of the ‘ultimate.’ Much as the absence of astronomical information duped our ancestors into thinking our world stood outside the order of planets, celestial as opposed to terrestrial, the absence of metacognitive information dupes us into thinking our minds stand outside the order of the world, intentional as opposed to natural. Nothing, it seems, could be more obvious than noocentrism, despite our millennial inability to silence any—any—question regarding the nature of the intentional.