Three Pound Brain

No bells, just whistling in the dark…


Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generates exaggerated estimations of threat (via, Pinker suggests, the availability heuristic (42)).
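For the programmatically inclined, the mechanics of this ‘bug’ are simple enough to simulate. The toy sketch below is mine, not Pinker’s, and every number in it is invented; it only shows how estimates keyed to ease of retrieval stay roughly calibrated when recall tracks real encounters, and crash when recall tracks coverage.

```python
import random

# Toy simulation of the availability heuristic (my illustration, not
# Pinker's; all numbers invented). An agent judges relative threat from
# whatever instances come most easily to mind.

random.seed(1)

# Hypothetical relative encounter frequencies in the ancestral ecology.
encounters = {"dog attack": 200, "shark attack": 1}

# Hypothetical news-coverage frequencies: drama inverts the weighting.
coverage = {"dog attack": 1, "shark attack": 50}

def availability_estimate(weights, n=1000):
    """Sample n 'recalled' instances, return their relative frequencies."""
    recalled = random.choices(list(weights), weights=list(weights.values()), k=n)
    return {threat: recalled.count(threat) / n for threat in weights}

# Recall shaped by real encounters stays roughly calibrated...
print("ancestral recall:", availability_estimate(encounters))
# ...recall shaped by coverage wildly exaggerates the rare threat.
print("mediated recall: ", availability_estimate(coverage))
```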

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking,’ offering guesses that recast ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and of providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree to which moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?
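To see how little machinery ‘neuroprediction’ requires, consider a deliberately crude sketch: a logistic risk score keyed to a single brain measure. The coefficients and threshold below are invented for illustration; they bear no relation to the actual models fitted in the studies cited above.

```python
import math

# A deliberately crude sketch of 'neuroprediction' as risk scoring: a
# logistic model mapping one standardized brain measure to a reoffense
# probability. Intercept, slope, and threshold are invented; they are
# not the fitted models of Aharoni et al or anyone else.

def reoffense_risk(acc_activity, intercept=-0.4, slope=-1.1):
    """Predicted reoffense probability given standardized anterior
    cingulate activity; lower activity yields higher predicted risk."""
    return 1 / (1 + math.exp(-(intercept + slope * acc_activity)))

print(f"low ACC activity:  {reoffense_risk(-1.0):.2f}")  # ~0.67
print(f"high ACC activity: {reoffense_risk(+1.0):.2f}")  # ~0.18

# The parole dilemma in the text, mechanized: the same etiology that
# mitigates responsibility aggravates the risk score.
for inmate, activity in [("inmate A", -1.0), ("inmate B", +1.0)]:
    verdict = "serves full term" if reoffense_risk(activity) > 0.5 else "paroled early"
    print(inmate, "->", verdict)
```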

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations, will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked aversion, a world where praise and condemnation alike had to brave criticism, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.
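The commercial logic here is mechanically trivial. The following is a toy of my own devising, not any platform’s code: an epsilon-greedy bandit that learns which class of content keeps a hypothetical user engaged and serves more of it, knowing nothing about what the content means.

```python
import random

# A minimal sketch (mine, with invented numbers) of engagement
# maximization: an epsilon-greedy bandit that learns which class of
# content holds a given user, then serves more of it. Nothing in the
# loop knows or cares what the content means, only what holds attention.

random.seed(2)

CUES = ["outrage", "cute animals", "ingroup flattery"]

# Hypothetical per-user engagement probabilities; the platform never
# sees these directly, it estimates them from behaviour.
true_pull = {"outrage": 0.7, "cute animals": 0.4, "ingroup flattery": 0.6}

estimates = {cue: 0.0 for cue in CUES}
served = {cue: 0 for cue in CUES}

for _ in range(5000):
    # Mostly exploit the stickiest-known cue; occasionally explore.
    cue = random.choice(CUES) if random.random() < 0.1 else max(CUES, key=estimates.get)
    engaged = random.random() < true_pull[cue]   # user lingers, or doesn't
    served[cue] += 1
    estimates[cue] += (engaged - estimates[cue]) / served[cue]  # running mean

print(served)  # the feed converges on whatever cue pulls hardest
```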

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298) but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure as much as need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices of late? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Enlightenment How? Pinker’s Tutelary Natures

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker frames the process of Enlightenment, the movement away from our tutelary natures, in terms of a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks to “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on revolutionizing our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed, nevertheless humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
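The framework is concrete enough to code. ‘Take-the-best,’ the canonical fast-and-frugal heuristic from the Gigerenzer programme, decides between two options by checking cues in order of validity and stopping at the first that discriminates. The sketch below uses the programme’s classic city-size task; the cue values and their ordering are simplified for illustration.

```python
# 'Take-the-best,' the canonical fast-and-frugal heuristic: to judge
# which of two options scores higher, check cues one at a time in order
# of validity and stop at the first cue that discriminates. No weighing,
# no integration, and no warrant outside ecologies where the cue
# validities hold. Cue values here are simplified for illustration.

CUE_ORDER = ["is_capital", "has_top_league_team", "on_major_river"]

cities = {
    "Berlin":  {"is_capital": 1, "has_top_league_team": 1, "on_major_river": 1},
    "Hamburg": {"is_capital": 0, "has_top_league_team": 1, "on_major_river": 1},
    "Cologne": {"is_capital": 0, "has_top_league_team": 1, "on_major_river": 1},
}

def take_the_best(a, b):
    """Guess the larger city from the first discriminating cue."""
    for cue in CUE_ORDER:
        if cities[a][cue] != cities[b][cue]:
            return a if cities[a][cue] else b
    return None  # no cue discriminates: the heuristic must guess

print(take_the_best("Berlin", "Hamburg"))   # 'Berlin', via is_capital
print(take_the_best("Hamburg", "Cologne"))  # None: cues exhausted, guess
```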

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.
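This is what ‘ecological’ means, cashed out. The sketch below (mine, with invented numbers) shows a one-cue rule that performs admirably so long as a background invariant holds, then crashes once the environment fills with systems engineered to present the cue without the source.

```python
import random

# Sketch of ecological dependence (all numbers invented): a one-cue
# rule ('big eyes, therefore mind') stays reliable while the background
# invariant holds (only creatures present the cue), then crashes once
# the environment fills with artifacts engineered to present the cue
# without the source.

random.seed(3)

def encounter(p_artificial):
    """Return (shows_cue, has_mind) for one randomly met system."""
    if random.random() < p_artificial:
        return True, False                 # designed lure: cue, no mind
    return (random.random() < 0.9), True   # creature: cue tracks mind

def heuristic_accuracy(p_artificial, n=10000):
    hits = sum(cue == mind for cue, mind in
               (encounter(p_artificial) for _ in range(n)))
    return hits / n

print("ancestral ecology:", heuristic_accuracy(0.0))  # ~0.90
print("saturated ecology:", heuristic_accuracy(0.8))  # ~0.18
```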

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autism spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, evinced by solitary confinement and more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deidre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue). So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011) that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.


References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deidre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S., Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behavior’, in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

Bleak Theory (By Paul J. Ennis)

by rsbakker

In the beginning there was nothing and it has been getting steadily worse ever since. You might know this, and yet repress it. Why? Because you have a mind that is capable of generating useful illusions, that’s why. How is this possible? Because you are endowed with a brain that creates a self-model which has the capacity to hide things from ‘you.’ This works better for some than for others. Some of us are brain-sick and, for whatever perverse reasons, we chip away at our delusions. In such cases recourse is possible to philosophy, which offers consolation (or so I am told), or to mysticism, which intentionally offers nothing, or to aesthetics, which is a kind of self-externalizing that lets the mind’s eye drift elsewhere. All in all, however, the armor on offer is thin. Such are the options: to mirror (philosophy), to blacken (mysticism), or to embrace contingency (aesthetics). Let’s select the last for now. By embracing contingency I mean that aesthetics consists of deciding upon and pursuing something quite specific for intuitive rather than rational reasons. This is to try to come to know contingency in your very bones.

As a mirrorer by trade I have to abandon some beliefs to allow myself to proceed this way. My belief that truth comes first and everything else later will be bracketed. I replace this with a less demanding constraint: truth comes when you know why you believe what you believe. Oftentimes I quite simply believe things because they are austere and minimal and I have a soft spot for that kind of thing. When I allow myself to think in line with these bleak tones an unusual desire is generated: to outbleak black, to be bleaker than black. This desire comes from I know not where. It seemingly has no reason. It is an aesthetic impulse. That’s why I ask that you take from what follows what you will. It brings me no peace either way.

I cannot hope to satisfy anyone with a definition of aesthetic experience, but let me wager that those moments that let me identify with the world a-subjectively – but not objectively – are commonly associated in my mind with bleakness. My brain chemistry, my environment, and similar contingent influences have rendered me this way. So be it. Bleakness manifests most often when I am faced with what is most distinctly impersonal: with cloudscapes and dimmed, wet treescapes. Or better yet, any time I witness a stark material disfiguration of the real by our species. And flowering from this is a bleak outlook correlated with the immense, consistent, and mostly hidden, suffering that is our history – our being. The intensity arising from the global reach of suffering becomes impressive when dislocated from the personal and the particular because then you realize that it belongs to us. Whatever the instigator the result is the same: I am alerted not just to the depths of unknowing that I embody, to the fact that I will never know most of life, but also to the industrial-scale sorrow consistently operative in being. All that is, is a misstep away from ruin. Consciousness is the holocaust of happiness.

Not that I expect anything more. Whatever we may say of our cultural evolution there was nothing inscribed in reality suggesting our world should be a fit for us. I am, on this basis, not surprised by our bleak surroundings. The brain, model-creator that it is, does quite a job at systematizing the outside into a representation that allows you to function; assuming, that is, that you have been gifted with a working model. Some have not. Perhaps the real horror is to try to imagine what has been left out (even the most ardent realist surely knows you do not look at the world directly as it is). Thankfully there is no real reason for us to register most of the information out there and we were not designed to know most of it anyway. This is the minimal blessing our evolution has gifted us with. The maximal damage is that from the exaptation we call consciousness, cultural evolution flowers and puts our self-model at the mercy of a bombardment of social complexity – our factical situation. It is impossible to know how our information age is toying with our brain; suffice it to say that the spike in depression, anxiety and self-loathing is surely some kind of signal. The brain though, like the body, can function even when maltreated. Whether this is truly to the good is difficult to say.

And yet we must be careful to remember that even in so-called eliminative materialism the space of reasons remains. The normative dimension is, as Brandom would put it, irreducible. It does not constitute the entire range of cognition, and is perhaps best deflated in light of empirical evidence, but that is beside the point. To some degree, perhaps minor, we are rational animals with the capacity for relatively free decision-making. My intuition is that ultimately the complexity of our structure means that we will never be free of certain troubles arising from what we are. Being embodied is to be torn between immense capacity and the constant threat of losing capacities. A stroke, striking as if from nowhere, can fundamentally alter anyone. This is not to suggest that progress does not occur. It can and it does, but it can also be, and often is, undone. It’s an unfortunate state of affairs, bleak even, but being attuned to the bleakness of reality does not result in passivity by necessity.

Today there are projects that explicitly register all this, and nonetheless intend to operate in line with the potentiality contained within the capacities of reason. What differentiates these projects, oftentimes rationalist in nature, is that they do not follow our various universalist legacies in simply conceiving of the general human as deserving of dignity simply because we all belong to the same class of suffering beings. This is not sufficient to make humans act well. The phenomenon of suffering is easily recognizable and most humans are acutely aware of it, and yet they continue to act in ways contrary to how we ‘ought’ to respond. In fact, it is clear that knowing the sheer scale of suffering may lead to hedonism, egoism or repression. Various functional delusions can be generated by our mind, and it is hardly beyond us to rationalize selfishness on the basis of the universal. We are versatile like that. For this reason, I find myself torn between two poles. I maintain a philosophical respect for various neo-rationalist projects under development. And I remain equally under no illusion they will ever be put to much use. And I do not blame people for falling short of these demands. I am so far from them I only really take them seriously on the page. I find myself drawn, for these reasons, to the pessimist attitude, often considered a suspect stance.

One might suggest that we need only a minimal condition to be ethical. An appeal to the reality of pain in sentient and sapient creatures, perhaps. In that decision you might find solace – despite everything (or in spite of everything). It is a choice, however. Our attempts to assert an ethical universalism are bound up with a counter-logic: the bleak truth of contingency on the basis of the impersonal-in-the-personal. It is a logic quietly operative in the philosophical tradition and one I believe has been suppressed. Self-suppressed, it flirts too much with a line leading us to the truth of our hallucination. It’s Nietzsche telling you about perspectivism hinging on the impersonal will-to-power and then you maturing, and forgetting. Not knocking his arguments out of the water, mind. Simply preferring not to accept them. Nobody wants to circle back round to the merry lunatic truths that make a mockery of your life. You might find it hard to get out of bed…whereas now I am sure you leap up every morning, smile on your face…The inhuman, impersonal attachment to each human has many names, but let us look at some that are found right at the heart of the post-Kantian tradition: transcendental subject, Dasein, Notion. Don’t believe me? I don’t mind, it makes no difference to me.

Let’s start with the sheer impersonality involved in Heidegger’s sustained fascination with discussing the human without using the word. Dasein is not supposed to be anything or anyone, in particular. Now once you think about it Dasein really does come across as extraordinarily peculiar. It spends a lot of its time being infested by language since this is, Heidegger insists, the place where its connection to being can be expressed. Yet it is also an easily overrun fortress that has been successfully invaded by techno-scientific jargon. When you hook this thesis up with Heidegger’s epochal shifts then the impersonal forces operative in his schema start to look downright ominous. However, we can’t blame on Heidegger what we can blame on Kant. His transcendental field of sense also belongs to one and all. And so, like Dasein, no one in particular. This aspect of the transcendental field still remains contentious. The transcendental is, at once, housed in a human body but also, in its sense-making functions, to be considered somehow separate from it. It is not quite human, but not exactly inhuman either.

There is, then, some strange aspect, I can think of no other word for it, inhabiting our own flowing world of a coherent ego, or ‘I,’ that allows for the emergence of a pooled intersubjectivity. Kant’s account, of course, had two main aims: to constrain groundless metaphysical speculation and, in turn, to ground the sciences. Yet his readers did not always follow his path. Kant’s decision to make a distinction between the phenomena and the noumena is perhaps the most consequential one in our tradition and is surely one of the greatest examples of opening up what you intended to close down. The nature of the noumenal realm has proven irresistible to philosophers and it has recursive consequences for how we see ourselves. If the noumenal realm names a reality that is phenomenally clouded then it surely precedes, ontologically, the ego-as-center; even if it is superseded by the ego’s modelling function for us. Seen within the wider context of the noumenal realm it is legitimate to ask whether the ‘I’ is merely a densely concentrated, discrete packet amidst a wider flow; a locus amidst the chaos. The ontological generation of egos is then shorn back until all you have is Will (Schopenhauer), Will to Power (Nietzsche), or, in a less generative sense ‘what gives,’ es gibt (Heidegger). This way of thinking belongs, when one takes the long-view, to the slow-motion deconstruction of the Cartesian ego in post-Kantian philosophy, albeit with Husserl cutting a lonely revivalist figure here. Today the ego is trounced everywhere, but there is perhaps no better example than the ‘no-self-at-all’ argument of Metzinger, though even the one-object-amongst-many thesis of object oriented ontology traces a similar line.

The destruction of the Cartesian ego may have its lineage in Kant, but the notion of the impersonal as force, process, or will, owes much to Hegel. In his metaphysics Hegel presents us with a cosmic loop explicable through retroactive justification. At the beginning, the un-articulated Notion, naming what is at the heart-of-the-real, sets off without knowledge of itself, but with the emergence of thinking subjects the Notion is finally able to think itself. In this transition the gap between the un-articulated and articulated Notion is closed, and the entire thing sets off again in directions as yet unknown. Absolute knowing is, after all, not totalized knowing, but a constant, vigilant knowing navigating its way through contingency and recognizing the necessity below it all. But that’s just the thing: despite being important conduits to this process, and having a quite special and specific function, it’s the impersonal process that really counts. In the end Kant’s attempt to close down discussion about the nature of the noumenal realm simply made it one of the most appealing themes for a philosopher to pursue. Censorship helps sales.

Speaking of sales, all kinds of new realism are being hawked on the various para-academic street-corners. All of them benefit from a tint of recognizability rooted, I would suggest, in the fact that ontological realism has always been hidden in plain sight; for any continentalist willing to look. What is different today is how the question of the impersonal attachments affecting the human comes not from inside philosophy, but from a number of external pressures. In what can only be described as a tragic situation for metaphysicians, truth now seeps into the discipline from the outside. We see thinking these days where philosophers promised there was none. The brilliance of continental realism lies in reminding us how this is an immense opportunity for philosophers to wake up from various self-induced slumbers, even if that means stepping outside the protected circle from time to time. It involves bringing this bubbling, left-over question of ontological realism right to the fore. This does not mean ontological realism will come to be accepted and then casually integrated into the tradition. If anything the backlash may eviscerate it, but the attempt will have been made. Or was, and quietly passed.

And the attempt should be made because the impersonality infecting ontological realist excesses such as the transcendental subject (in-itself), the Notion, or Dasein is attuned to what we can now see as the (delayed) flowering of the Copernican revolution. The de-centering is now embedded enough that whatever defense of the human we posit must not be dishonest. We cannot hallucinate our way out of our ‘cold world’. If we know that our self-model is itself a hallucination, but a very real one, then what do we do? Is it enough to situate the real in our ontological flesh and blood being-there that is not captured by thinking? Or is it best to remain with thinking as a contingent error that despite its aberrancy nonetheless spews out the truth? These avenues are grounded in consciousness and in our bodies and although both work wonders they can just as easily generate terrors. Truth qualified by these terrors is where one might go. No delusion can outflank these constraints forever. Bled of any delusional disavowal, one tries to think without hope. Hope is undignified anyway. Dignity involves resisting all provocation and remaining sane when you know it’s bleakness all the way down.

Some need hope, no? As I write this I feel the beautiful soul rising from his armchair, but I do not want to hear it. Bleak theory is addressed to your situation: a first worlder inhabiting an accelerated malaise. The ethics to address poverty, inequality, and hardship will be different. Our own heads are disordered and we do not quite know how to respond to the field outside them. You will feel guilty for your myopia, and you deserve it, but you cannot elide it by endlessly pointing to the plank in the other’s eye. You can pray through your tears, and in doing so ironically demonstrate the disturbance left by the death of God, but what does this shore up? It builds upon cathedral ruins: those sites where being is doubled-up and bent-over-backwards trying to look inconspicuous as just another option. Do you want to write religion back into being? Why not, as Ayache suggests, just ruin yourself? I hope it is clear I don’t have any answers: all clarity is a lie these days. I can only offer bleak theory as a way of seeing and perhaps a way of operating. It ‘works’ as follows: begin with confusion and shear away at what you can. Whatever is left is likely the closest thing approximating to what we name truth. It will be strictly negative. Elimination of errors is the best you can hope for.

I don’t know how to end this, so I am just going to end it.

 

Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker


 

If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses – people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed. The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strikes me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understanding ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is that we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that makes them empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organize those possessing authoritarian dispositions, render them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.

A Secret History of Enlightened Animals (by Ben Cain)

by rsbakker


 

As proud and self-absorbed as most of us are, you’d expect we’d be obsessed with reading history to discover more and more of our past and how we got where we are. But modern historical narratives concentrate on the mere facts of who did what to whom and exactly when and where such dramas played out. What actually happened in our recent and distant past doesn’t seem grandiose enough for us, and so we prefer myths that situate our endeavours in a cosmic or supernatural background. Those myths can be religious, of course, but also secular as in films, novels, and the other arts. We’re so fixated on ourselves and on our cultural assumptions that we must imagine we’re engaged in more than just humdrum family life, business, political chicanery, and wars. We’re heroes in a universal tale of good and evil, gods and monsters. We thereby resort to the imagination, overlooking the existential importance of our actual evolutionary transformation. When animals became people, the universe turned in its grave.

 

Awakening from Animal Servitude unto Alienation

The so-called wise way of life, that of our species, originates from the birth of an anomalous form of consciousness. That origin has been widely mythologized to protect us from the vertigo of feeling how fine the line is between us and animals. Thus, personal consciousness has been interpreted as an immaterial spirit or as a spark left behind by the intrusion of a higher-dimensional realm into fallen nature, as in Gnosticism, or as an illusion to maintain the play of the slumbering God Brahman, as in some versions of Hinduism, and so on and so forth. But the consciousness that separates people from animals is merely the particular higher-order thought—that is, a thought about thoughts—that you (your lower-order thoughts) are free in the sense of being autonomous, that you’re largely liberated from naturally-selected, animal processes such as hunting for food or seeking mates in the preprogrammed ways. That thought eventually comes to lie in the background of the flurry of mental activity sustained by our oversized brains, along with the frisson of fear that accompanies the revelation that as long as we can think we’re free from nature, we’re actually so. This is because such a higher-order thought, removed as it is from the older, animal parts of our brain, is just what allows us to independently direct our body’s activities. The freedom opened up by human sentience is typically experienced as a falling away from a more secure position. In fact, our collective origin is very likely encapsulated in each child’s development of personhood, fraught as that is with anxiety and sadness as well as with wonder. Children cry and sulk when they don’t get their way, which is when they learn that they stand apart from the world as egos who must strive to live up to otherworldly social standards.

Animals become people by using thought to lever themselves into a black hole-like viewpoint subsisting outside of nature as such. The results are alienation and the existential crisis which are at the root of all our actions. Organic processes are already anomalous and thus virtually miraculous. Personhood represents not progress, since the values that would define such an advance are themselves alien and unnatural by being anthropocentric, but a maximal state of separation from the world, the exclusion of some primates from the environments that would test their genetic mettle. Personal consciousness is the carving of godlike beings from the raw materials of animal slaves, by the realization that thoughts—memories, emotions, imaginings, rational modeling for the sake of problem-solving—comprise an inner world whose contents need not be dictated by stimuli. The cost of personhood, that is, of virtual godhood in the otherwise mostly inanimate universe, is the suffering from alienation that marks our so-called maturity, our fall from childhood innocence whereupon we land in the adult’s clownish struggles with hubris. Our independence empowers us to change ourselves and the world around us, and so we assume we’re the stars of the cosmic show or at least of the narrative of our private life. But because the business of our acting like grownups is witnessed by hardly any audience at all—except in the special case of celebrities who are ironically infantilized by their fame, because the wildly inhuman cosmos is indifferent to our successes and failures—we typically develop into existential mediocrities, not heroes. We overcompensate for the anguish we feel because our thoughts sever us from everything outside our skull, becoming proud of our adult independence; we’re like children begging their parents to admire their finger paintings. The natural world responds with randomness and indiscriminateness, with luck and indifference, humiliating us with a sense of the ultimate futility of our efforts. Our oldest solution is to retreat to the anthropocentric social world in which we can honour our presumed greatness, justly rewarding or punishing each other for our deeds as we feel we deserve.

 

Hypersocialization and the Existential Crisis of Consciousness

The alienation of higher consciousness is followed, then, by intensive socialization. Animals socialize for natural purposes, whereas we do so in the wake of the miracle of personhood. Our relatively autonomous selves are miraculous not just because they’re so rare (count up the rocks and the minds in the universe, for example, and the former will so outnumber the latter that minds will seem to have spontaneously popped into existence without any general cause), but because whereas animals adapt to nature, conforming to genetic and environmental regularities, people negate those regularities, abandoning their genetic upbringing and reshaping the global landscape. The earliest people channeled their resentment against the world they discovered they’re not wholly at home in, by inventing tools to help them best nature and its animal slaves, but also by forming tribes defined by more and more elaborate social conventions. The more arbitrary the implicit and explicit laws that regulate a society, the more frenzied its members’ dread of being embedded in a greater, uncaring wilderness. Again, human societies are animalistic in so far as they rely on the structure of dominance hierarchies, but whereas alpha males in animal groups overpower their inferiors for the natural reason of maintaining group cohesion to protect the alphas whose superior genes are the species’ best hope for future generations, human leaders adopt the pathologies of the God complex. Indeed, all people would act like gods if only they could sustain the farce. Alas, just as every winning lottery ticket necessitates multitudes of losers, every full-blown personal deity depends on an army of worshippers. Personhood makes us all metaphysically godlike with respect to our autonomy and our liberation from some natural, impersonal systems, but only a lucky minority can live like mythical gods on Earth.

We socialize, then, to flatter our potential for godhood, by elevating some of our members to a social position in which they can tantalize us with their extravagant lifestyles and superhuman responsibilities. We form sheltered communities in which we can hide from nature’s alien glare. Our elders, tyrants, kings, and emperors lord it over us and we thank them for it, since their appallingly decadent lives nevertheless prove that personhood can be completed, that an absolute fall from the grace of animal innocence isn’t asymptotic, that our evolution has a finite end in transhumanity. Our psychopathic rulers are living proofs that nature isn’t omnipresent, that escape is possible in the form of insanity sustained by mass hallucination. We daydream the differences between right and wrong, honour and dishonour, meaning and meaninglessness. We fill the air with subtle noises and imagine that those symbols are meant to lay bare the final truth. We thus mitigate the removal of our mind from the world, with a myth of reconciliation between thoughts and facts. But language was likely conceived of in the first place as a magical instrument, that is, as an extension of mentality into nature which was everywhere anthropomorphized. Human tribes were assumed to be mere inner circles within a vast society of gods, monsters, and other living forces. We socialized, then, not just to escape to friendly domains to preserve our dignity as unnatural wonders, but to pretend that we hadn’t emerged just by a satanic/promethean act of cognitive defiance, with the ego-making thought that severs us from natural reality. We childishly presumed that the whole universe is a stage populated by puppets and actors; thus, no existential retreat might have been deemed necessary, because nature’s alienness was blotted out in our mythopoeic imagination. As in Genesis, God created by speaking the world into being, just as shamans and magicians were believed to cast magical spells that bent reality to their will.

But every theistic posit was part of an unconscious strategy to avoid facing the obvious fact that since all gods are people, we’re evidently the only gods. Nevertheless, having conceived of theistic fictions, we drew up models to standardize the behaviour of actual gods. Thus, the Pharaoh had to be as remote and majestic as Osiris, while the Roman Emperor had to rule like Jupiter, the Raja had to adjudicate like Krishna, the Pope had to appear Christ-like, and the U.S. President has to seem to govern like your favourite Hollywood hero. The double standard that exempts the upper classes from the laws that oppress the lowly masses is supposed to prevent an outbreak of consciousness-induced angst. Social exceptions for the upper class work with mass personifications and enchantments of nature, and those propagandistic myths are then made plausible by the fact that superhuman power elites actually exist. Ironically, such class divisions and their concomitant theologies exacerbate the existential predicament by placing those exquisite symbols of our transcendence (the power elites) before public consciousness, reminding us that just as the gods are prior to and thus independent of nature, so too we who are the only potential or actual gods don’t belong within that latter world.

 

Scientific Objectivity and Artificialization

Hypersocialization isn’t our only existential stratagem; there’s also artificialization as a defense against full consciousness of our unnatural self-control. Whereas the socializer tries to act like a god by climbing social ladders, bullying his underlings, spending unseemly wealth in generational projects of self-aggrandizement, and creating and destroying societal frameworks, the artificializer wants to replace all of nature with artifacts. That way, what began as the imaginary negation of nature’s inhuman indifference to life, in the mythopoeic childhood of our species, can be fulfilled when that indifference is literally undone by our re-engineering of natural processes.

To do that, the artificializer needs to think, not just to act, like a god. That requires forming cognitive programs that don’t depend on the innate, naturally-selected ones. Cognitive scientists maintain that the brain’s ability to process sensations, for example, evolved not to present us with the absolute truth but to ensure our fitness to our environment, by helping us survive long enough to sexually reproduce. Animal neural pathways differ from personal ones in that the former serve the species, not the individual, and so the animal is fundamentally a puppet acting out its life cycle as directed by its genetic programming and by certain environmental constraints. Animals can learn to adapt their behaviour to their environment and so their behaviour isn’t always robotic, but unless they can apply their learning towards unnatural ends, such as by developing birth control techniques that systematically thwart the pseudo goals of natural selection, they’ll think as animals, not as gods. Animals as such are entirely natural creatures, meaning that in so far as their behaviour is mediated by an independent control center, their thinking nevertheless is dedicated to furthering the end of natural selection, which is just that of transmitting genes to future generations. By contrast, gods don’t merely survive or even thrive. Insects and bacteria thrive, as did the dinosaurs for millions of years, but none were godlike because none were existentially transformed by conscious enlightenment, by a cognitive black hole into which an animal can fall, creating the world of inner space.

People, too, have animal programming, such as the autonomic programs for processing sensory information. Social behaviour is likewise often purely animalistic, as in the cases of sex and the power struggle for some advantage in a dominance hierarchy. Rational thinking is less so and thus less natural, meaning more anti-natural in that it serves rational ideals rather than just lower-order aims. To be sure, Machiavellian reasoning is animalistic, but reason has also taken on an unnatural function. Whereas writing was first used for the utilitarian purpose of record keeping, reason in the Western tradition was initially not so practical. The Presocratics argued about metaphysical substances and other philosophical matters, indicating that they’d been largely liberated from animal concerns of day-to-day survival and were exploring cognitive territory that’s useful only from the internal, personal perspective. Who am I really? What is the world, ultimately speaking? Is there a worthy difference between right and wrong? Such philosophical questions are impossible without rational ideals of skepticism, intellectual integrity, and love of knowledge even if that knowledge should be subversive—as it proved to be in Socrates’ classic case.

While the biblical Abraham was willing to sacrifice his son for the sake of hypersocializing with an imaginary deity, Socrates died for the antisocial quest of pursuing objective knowledge that inevitably threatens the natural order along with the animal social structures that entrench that order, such as the Athenian government of his day. Socrates cared not about face-saving opinions, but about epistemic principles that arm us with rationally-justified beliefs about how the world might be in reality. Much later, in the Scientific Revolution, rationalists (which is to say philosophers) in Europe would revive the ancient pagan ideal of reasoning regardless of the impact on faith-based dogmas. Scientists like Isaac Newton developed cognitive methods that were counterintuitive in that they went against the grain of more natural human thinking that’s prone to fallacies and survival-based biases. In addition, Newton served rational institutions, namely the Royal Society and Cambridge, which rivaled the genes for control over the enlightened individual’s loyalty. Moreover, the findings of those cognitive methods were symbolized using artificial languages such as mathematics and formal logic, which enabled liberated minds to communicate their discoveries without the genetic tragicomedies of territorialism, fight-or-flight responses, hero worship, demagoguery, and the like that are liable to be triggered by rhetoric and metaphors expressed in natural languages.

But what is objective knowledge? Are scientists and other so-called enlightened rationalists as neutral as the indifferent world they study? No, rationalists in this broad sense are partly liberated from animal life but they’re not lost in a limbo; rather, they participate in another, unnatural process which I’m calling artificialization. Objectivity isn’t a purely mechanical, impersonal capacity; indeed, natural processes themselves have aesthetically interpretable ends and effective means, so there are no such capacities. In any case, the search for objective knowledge builds on human animalism and on our so-called enlightenment, on our having transcended our animal past and instincts. We were once wholly slaves to nature and we often behave as if we were still playthings of natural forces. But consciousness and hypersocialization provided escapes, albeit into fantasy worlds that nevertheless empowered us. We saw ourselves as being special because we became aware of the increasing independence of our mental models from the modeled territory, owing to the former’s ultra-complexity. The inner world of the mind emerged and detached from the natural order—not just metaphysically or abstractly, but psychologically and historically. That liberation was traumatic and so we fled to the fictitious world of our imagination, to a world we could control, and we pretended the outer world was likewise held captive to our mental projections. The rational enterprise is fundamentally another form of escape, a means of living with the burden of hyper-awareness. Instead of settling for cheap, flimsy mental constructions such as our gods, boogeymen, and the panoply of delusions to which we’re prone, and instead of hoarding divinity in the upper social classes that exercise their superpowers in petty or sadistic projects of self-aggrandizement, we saw that we could usurp God’s ability to create real worlds, as it were. We could democratize divinity, replacing impersonal nature with artificial constructs that would actually exist outside our minds as opposed to being mere projections of imagination and existential longing.

The pragmatic aspect of objectivity is apparent from the familiar historical connections between science, European imperialism, and modern industries. But it’s apparent also from the analytical structure of scientific explanations itself. The existential point of scientific objectivity was paradoxically to achieve a total divorce from our animal side by de-personalizing ourselves, by restraining our desire for instant gratification, scolding our inner child and its playpen, the imagination, and identifying with rational methods. Whereas an animal relies on its hardwired programs or on learned rules-of-thumb for interpreting its environment, an enlightened person codifies and reifies such rules, suspending disbelief and siding with idealized or instrumental formulations of these rules so that the person can occupy a higher cognitive plane. Once removed from natural processes by this identification with rational procedures and institutions, with teleological algorithms, artificial symbols and the like, the animal has become a person with a godlike view from outside of nature—albeit not an overview of what the universe really is, but an engineer’s perspective of how the universe works mechanically from the ground up.

To see what I mean, consider the Hindu parable of the blind men who try to ascertain the nature of an elephant by touching its different body parts. One of the men feels a tusk and infers that the elephant is like a pipe. Another touches the leg and thinks the whole animal is like a tree trunk. Another touches the belly and believes the animal is like a wall. Another touches the tail and says the elephant is like a rope. Finally, another one touches the ear and thinks the elephant is like a hand fan. One of the traditional lessons of this parable is that we can fallaciously overgeneralize and mistake the part for the whole, but this isn’t my point about science. Still, there is a difference between what the universe is in reality, which is what it is in its entirety in so far as all of its parts form a cohesive order, and how inquisitive primates choose to understand the universe with their divisive concepts and models. Scientists can’t possibly understand everything in nature all at once; the word “universe” is a mere placeholder with no content adequate to the task of representing everything that’s out there interacting to produce what we think of as distinct events. We have no name for the universe which gives us power over it by identifying its essence, as it were. So scientists analyze the whole, observing how parts of the world work in isolation, ideally in a laboratory. They then generalize their findings, positing a natural regularity or nomic relation between those fragments, as pictured by their model or theory. It’s as if scientists were the blind men who lack the brainpower to cognize the whole of natural reality, and so they study each part, perhaps hoping that if they cooperate they can combine their partial understandings and arrive at some inkling of what the natural universe in general is. Unfortunately, the deeper we look into nature, the more complexity we find in its parts and so the more futile becomes any such plan for total comprehension. Scientists can barely keep up with advances in their subfields; the notion that anyone could master all the sciences as they currently stand is ludicrous, and there’s still much in the world that isn’t scientifically understood by anyone.

So whatever the scientist’s aspiration might be, the effect of science isn’t the achievement of complete, final understanding of everything in the universe or of the whole of nature. Instead, science allows us to rebuild the whole based on partial, analytical knowledge of how the world works. Suppose scientists discover an extraterrestrial artifact and they have no clue as to the artifact’s function, which is to say they have no understanding of what the object is in reality. Still, they can reverse-engineer the artifact, taking it apart, identifying the materials used to assemble it and certain patterns in how the parts interact with each other. With that limited knowledge of the artifact’s mechanical aspect, scientists might be able to build a replica or else they could apply that knowledge to create something more useful to them, that is, something that works in similar ways to the original but which works towards an end supplied by the scientists’ interests, not the alien’s. There would be no point in replicating the alien technology, since the artifact would be useless without knowledge of what it’s for or without even a shared interest in pursuing that alien goal. Replace the alien artifact with the natural universe and you have some measure of the Baconian position of human science. Of course, nature has no designer; nevertheless, we experience natural processes as having ends and so we’re faced with the choice of whether to apply our piecemeal knowledge of natural mechanisms to the task of reinforcing those ends or to that of adjusting or even reversing them. The choice is to act as stewards of God’s garden, as it were, or as promethean rebels who seek to be divine creators. There are still enclaves of native tribes living as retro-human animals and preserving nature rather than demolishing the wilderness and establishing in its place a technological wonderland built with knowledge of natural mechanisms. But the billions of participants in the science-driven, global monoculture have evidently chosen the promethean, quasi-satanic path.

 

Existentialism and our Hidden History

History is a narrative that often informs us indirectly about the present state of human affairs, by representing part of our past. Ancient historical narratives were more mythical than fact-based. The New Testament, for example, uses historical details to form an exoteric shell around the Gnostic, transhumanist suspicion that human nature is “fallen” to the extent that we surrender our capacity to transcend the animal life cycle; we must “die” to our natural bodies and be reborn in a glorious, unnatural or “spiritual” form. At any rate, like poetry, the mythical language of such ancient historical narratives is open to endless interpretations, which is to say that such stories are obscure. Josephus’s ancient histories of the Jewish people, written for a Roman audience, aren’t so mythologized but they’re no less propagandistic. By contrast, modern historians strive to avoid the pitfalls of writing highly subjective or biased narratives, and so they seek to analyze and interpret just the facts dug up by archeologists and textual critics. Modern histories are thus influenced by the exoteric presumption about science, which is that science isn’t primarily in the business of artificializing everything that’s wild in the sense of being out of our control, but is just a mode of inquiry for arriving at the objective truth (come what may).

Left out of this development of the telling of history is the existential significance of our evolutionary transition from being animals, which were at one with nature, to being people who are implicitly if not consciously at war with everything nonhuman. What I’ve sketched above is part of our secret history; it’s the story of what it means to be human, which underlies all our endeavours. The significance of our standing between animalism and godhood is hidden and largely unknown or forgotten, because at the root of this purpose that drives us is the trauma of godlike consciousness which we’d rather not relive. We each have our fill of that trauma in our passage from childhood innocence, which approximates the animal state of unknowing, to adult independence. Teen angst, which cultures combat with initiation rituals to distract the teenager with sanctioned, typically delusional pastimes, is the tip of the iceberg of pain that awaits anyone who recognizes the plight entailed by our very form of existence.

In Escape from Freedom, Erich Fromm argued that citizens of modern democracies are in danger of preferring the comfort of a totalitarian system, to escape the ennui and dehumanization generated by modern societies. In particular, capitalistic exploitation of the worker class and the need to assimilate to an environment run more and more by automated, inhuman machines are supposed to drive civilized persons to savage, authoritarian regimes. At least, this was Fromm’s explanation of the Nazis’ rise to power. A similar analysis could apply to the present degeneration of the Republican Party in the U.S. and to the militant jihadist movement in the Middle East. But Fromm’s analysis is limited. To be sure, capitalism and technology have their drawbacks and these may even contribute to totalitarianism’s appeal, as Fromm shows. But this overlooks what liberal, science-driven societies and savage, totalitarian societies have in common. Both are flights from existential reckoning, as I’ve explained: the one revolves around artificialization (Enlightenment, rationalist values of individual autonomy, which deteriorate until we’re left with the fraud of consumerism), the other around hypersocialization (cult of personality, restoring the sadomasochistic interplay between mythical gods and their worshippers). Fromm ignores the existential effect of the rational enlightenment that brought on modern science, democracy, and capitalism in the first place, the effect being our deification. By deifying ourselves, we prevent our treasured religions from being fiascos and we spare ourselves the horror of living in an inhuman wilderness from which we’re alienated by our hyper-awareness.

We created the modern world to accelerate the rate at which nature is removed from our presence. Contrary to optimists like Steven Pinker, modernity hasn’t fulfilled its promise of democratizing divinity, as I’d put it. Robber barons and more parasitic oligarchs do indeed resort to the older stratagem of hypersocialization, acting like decadent gods in relation to human slaves instead of focusing their divine creativity on our common enemy, the monstrous wilderness. The internet that trivializes everything it touches and the omnipresence of our high-tech gadgets do infantilize us, turning us into cattle-like consumers instead of unleashing our creativity and training us to be the indomitable warriors that alone could endure the promethean mission. This is because we, being the only gods that exist, are woefully unprepared for our responsibility, having retained our animal heritage in the form of our bodies which infect most of our decisions with natural fears and prejudices. At any rate, the deeper story of the animal that becomes a godlike person to obliterate the source of alienation that’s the curse of any flawed, lonely godling helps explain why we now settle more often for the minor anxieties of living in modern civilization, to avoid the major angst of recognizing the existential importance of what we are.

Science, Nihilism, and the Artistry of Nature (by Ben Cain)

by rsbakker


Technologically-advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism, the loss of faith in our values and thus the wholesale humiliation of all of us, due to science’s tendency to falsify every belief that’s traditionally comforted the masses. The two reasons to suspect that science entails nihilism are that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the content of scientific theories and the scientific methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus, the point about the theoretical content might seem to come as no surprise. By definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending or at least transforming the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin showed us also how to at least temporarily give up on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach which is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, because all the while I’m thinking of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; instead, for me the painting is a commodity. Suppose instead that I have no ulterior motive as I look at the painting, but I’m bored by it, so that I’m not passively letting the painting pour its content into me, as it were, which is to say that I have no respect for such an experience in this case, and I’m not giving the painting a fair chance to captivate my attention. Then I’m likewise not treating the painting as art. I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention and so I don’t anticipate the peculiar pleasure from perceiving the painting that we associate with an aesthetic experience. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the art appreciator does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern even the subtlest patterns in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.
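It’s worth pausing on just how curious the generalization is. The closest thing to a formal warrant for the inductive leap is Laplace’s rule of succession, a textbook result I cite here only for contrast (it plays no role in the account above):

```latex
% Laplace's rule of succession: the probability that the next case conforms,
% given that n observed cases have all conformed (assuming a uniform prior).
\[
P(\text{case } n{+}1 \text{ conforms} \mid n \text{ conforming cases}) = \frac{n+1}{n+2}
\]
```

Note that the formula approaches but never reaches 1: formal probability licenses only ever-stronger expectation, never the universal scope, the ‘all cases,’ that our inductive practice actually assumes. That surplus is precisely what the aesthetic account is meant to explain.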

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.


Natural Art and Science’s True Horror


Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? I believe the answer is that those aesthetic properties are indeed as real as atoms and planets, understood as potentials to reward or to punish anyone who takes up something like the aesthetic attitude, including a stance of scientific objectification, depending on the extent of the harmony or disharmony in the observed patterns. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained, and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system that is sensitive to initial conditions evolves unpredictably even though its dynamics are defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art or the nonrepresentational kind that explores the limits of the medium is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.
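A minimal sketch makes the point about chaos concrete. The logistic map is the standard textbook example, my illustration rather than anything the argument depends on: a fully deterministic formula whose trajectories nonetheless diverge from all but identical starting points.

```python
# The logistic map x -> r*x*(1-x): deterministic, yet sensitive to
# initial conditions. Two trajectories differing by one part in a
# million soon bear no resemblance to one another.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)

for t in (0, 10, 20, 30, 40):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f} (gap {abs(a[t] - b[t]):.6f})")
```

Every step is perfectly lawful; the unpredictability lies entirely in the sensitivity, which is why the discovery of the formula subtracts nothing from the system’s grandeur.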

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the artist wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Neuroscience as Socio-Cognitive Pollution

by rsbakker

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate to students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw, and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—they literally require our blindness to certain kinds of facts to function reliably. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Müller-Lyer illusion stumps the eye: It provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.
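A toy calculation, with every number hypothetical, shows how badly a ‘default village’ calibration misfires once mass-media sampling enters the picture:

```python
# A toy model of 'default village' miscalibration (all numbers hypothetical):
# a frequency heuristic that treats every threat report it hears as if it
# concerned its own small band.

VILLAGE = 150          # rough scale of the ancestral social world
NATION = 300_000_000   # population actually sampled by mass media
DAILY_RISK = 1e-6      # stipulated per-person daily chance of a violent event

# Expected number of events a nationwide news feed can relay in one day.
reports_heard = NATION * DAILY_RISK              # about 300 stories

# The heuristic implicitly scales that count against the village, not the
# nation, yielding a wildly inflated sense of local danger.
perceived_daily_risk = reports_heard / VILLAGE

print(f"reports heard per day: {reports_heard:.0f}")
print(f"actual daily risk:     {DAILY_RISK:.0e}")
print(f"perceived daily risk:  {perceived_daily_risk:.1f} "
      f"({perceived_daily_risk / DAILY_RISK:,.0f}x inflation)")
```

Nothing in the heuristic malfunctions; the information environment it was tuned to simply no longer obtains, which is just what the pollution metaphor is meant to capture.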

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect based account suggests the following interpretation of what’s happening: As medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation in the capacity of moral cognition to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscal-conservative reengineering of the market continues to erode their bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

The Asimov Illusion

by rsbakker

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: The way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience this, and to so coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow functionally, at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in stopping at the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The kinds of domains of second-order intentional facts posited by Intentionalists can only count toward ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.
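That minimal, consensus view can be operationalized without any semantic remainder. Mutual information, for instance, measures nothing over and above systematic covariation between differences; the snippet below is my gloss on the consensus notion, not a formalism BBT itself mandates:

```python
# 'Systematic differences making systematic differences,' cashed out as
# mutual information between a source and a channel that tracks it.

from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

source   = [0, 0, 1, 1, 0, 1, 0, 1]
faithful = list(source)               # every difference tracked
noisy    = [0, 0, 1, 0, 0, 1, 1, 1]   # differences only partially tracked

print(mutual_information(source, faithful))  # 1.0 bit: full covariation
print(mutual_information(source, noisy))     # ~0.19 bits: degraded covariation
```

No appeal to content or aboutness is needed to compute either number, which is the sense in which BBT’s parsimony claim is meant: the minimal notion already does the work.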

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism that’s based on the biology of the brain, a positive thesis that possesses far ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as questions of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to it. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as being of a piece with vulgar claims regarding the spontaneity of free will, and difficult not to conclude that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no embarrassing coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the system of the whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis-à-vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume it will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where AIs will peak rather than continue their exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, possessing a bargaining power that lies entirely with politicians that the Investors own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

The Blind Mechanic

by rsbakker

Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.

The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal even to intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so continually blunder at the problem, again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.

Science is the only reliable provender of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.

In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.

A kind of postintentional nude.

As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.

Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.

On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.
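For a concrete cartoon of such tuning, consider a Lewis signaling game, my sketch rather than anything Davidson’s Fork commits to: two simple stochastic machines, rewarded only by behavioural success, converge on a shared signal-to-act mapping with nothing semantic anywhere in the loop.

```python
# Two stochastic machines 'tuning' to one another: a Lewis signaling game
# with simple reinforcement. Behavioural coordination is the only arbiter.

import random
random.seed(0)

STATES = SIGNALS = ACTS = (0, 1, 2)

sender   = {s: {m: 1.0 for m in SIGNALS} for s in STATES}   # urn weights
receiver = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(weights):
    # stochastic choice proportional to accumulated weight
    return random.choices(list(weights), weights=list(weights.values()))[0]

for _ in range(20_000):
    state  = random.choice(STATES)
    signal = draw(sender[state])
    act    = draw(receiver[signal])
    if act == state:                  # success reinforces the pairing used
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0

wins = 0
for _ in range(1_000):
    s = random.choice(STATES)
    if draw(receiver[draw(sender[s])]) == s:
        wins += 1
print(f"coordination rate after tuning: {wins / 1000:.2f} (chance = 0.33)")
```

Slippage declines as a sheer matter of differential reinforcement; no ‘assumption of rationality’ appears anywhere in the mechanism.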

So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.

These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.

Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.
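For reference, the truncated gloss can at least be written down. The schema below is textbook natural deduction, not something metacognition delivers:

```latex
% Modus ponens, standardly schematized: from a conditional and its
% antecedent, infer the consequent.
\[
\frac{p \rightarrow q \qquad p}{q}
\]
% 'Conserves truth': every valuation making both premises true makes the
% conclusion true. The schema reports what the operation does while
% remaining silent on whatever machinery executes it.
```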

I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.

Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing, how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open ended.

The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never-ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.

What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.

Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.

Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.

The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit intentional solutions as intentional, it suggests that our intuitions are deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.

Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it is to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem-solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain-specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.

(But then, in retrospect, this was how it had to be, wasn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)

Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter of fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.

Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.

On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.

The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.
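If you want the cartoon in code: the following toy loop is purely illustrative (the ‘processors,’ the scoring, and every name in it are arbitrary stand-ins of mine, not a model anyone has proposed), but it mimics the shape of the dynamic just described, many candidates generated blindly in parallel, one sampled and serially broadcast, the broadcast feeding the next round of generation:

```python
import random

# Toy caricature of the parallel-generate / serial-broadcast loop described
# above. The 'processors', the scoring, and the cycle count are arbitrary
# stand-ins for illustration, not a model of actual neural function.

def make_processor(bias):
    """A nonconscious 'processor': proposes a variant of the current content."""
    def propose(content):
        return content + random.gauss(bias, 1.0)
    return propose

processors = [make_processor(b) for b in (-1.0, 0.0, 1.0)]

content = 0.0   # the currently broadcast 'picture'
target = 10.0   # a crude stand-in for environmental traction

for step in range(20):
    # Parallel phase: every processor proposes a candidate.
    candidates = [p(content) for p in processors]
    # Serial phase: a single winner is sampled and broadcast back to all.
    content = min(candidates, key=lambda c: abs(c - target))

print(f"broadcast content after 20 cycles: {content:.2f}")
```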

Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forgo the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, the sensitivities, provided by evolution.
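The ratchet itself is even easier to cartoon. A minimal sketch, again with an arbitrary ‘fitness’ standing in for environmental traction (nothing here is anyone’s model, just the bare structure): blind variation plus the retention of past successes, so the process never has to start from scratch:

```python
import random

# A minimal sketch of 'ratcheting': blind variation plus retention of past
# successes. The fitness target (42.0) is an arbitrary stand-in for traction.

def fitness(technique):
    return -abs(technique - 42.0)

best = 0.0
for generation in range(1000):
    variant = best + random.gauss(0.0, 1.0)   # a blind, misfire-prone tweak
    if fitness(variant) > fitness(best):      # the ratchet: keep only gains
        best = variant

print(f"knapped technique after 1000 tweaks: {best:.3f}")
```

Everything hangs on the asymmetry: variants are generated blindly, but only improvements are retained, so successes accumulate no matter how many tweaks misfire.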

The point of the Singularity.

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.

This brings me to Reza Negarestani’s “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.

The Ontology of Ghosts

by rsbakker

In the courtyard a shadowy giant elm

Spreads ancient boughs, her ancient arms where dreams,

False dreams, the old tale goes, beneath each leaf

Cling and are numberless.

–Virgil, The Aeneid, Book VI


I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical know-it-all. You add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.

But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.

But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.

What could be more real than lived life?

It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall asleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get a handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.

The difference being, I am no longer stupefied.

I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…

And you are too.

Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.

Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy, this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifacts of machines taking their shadows for reflections, the cave wall for a looking glass.

I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.

I can show you pictures of dead people to prove it. Lives lived out.

The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?

I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.

A ghost can be defined as a fragment cognized as a whole. In some cultures ghosts have no backs, no faces, no feet. In almost all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:

 Then thrice around his neck his arms he threw;

And thrice the flitting shadow slipp’d away,

Like winds, or empty dreams that fly the day.

Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.

And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion, begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.

‘Thought’… No word short of ‘God’ has shut down more thinking.

Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.

The ontology of meaning is the ontology of ghosts.