Three Pound Brain

No bells, just whistling in the dark…

On the Interpretation of Artificial Souls

by rsbakker


In “Is Artificial Intelligence Permanently Inscrutable?” Aaron M. Bornstein surveys the field of artificial neural networks, claiming that “[a]s exciting as their performance gains have been… there’s a troubling fact about modern neural networks: Nobody knows quite how they work.” The article is fascinating in its own right, and Peter over at Conscious Entities provides an excellent overview, but I would like to use it to flex a little theoretical muscle, and show the way the neural network ‘Inscrutability Problem’ turns on the same basic dynamics underwriting the apparent ‘hard problem’ of intentionality. Once you have a workable, thoroughly naturalistic account of cognition, you can begin to see why computer science finds itself bedevilled with strange parallels of the problems one finds in the philosophy of mind.

This parallel is evident in what Bornstein identifies as the primary issue: interpretability. The problem with artificial neural networks is that they are both contingent and incredibly complex. Recurrent neural networks operate by producing outputs conditioned by a selective history of previous conditionings, one captured in the weights connecting (typically) millions of artificial neurons arranged in multiple processing layers. Since discrepancies in output serve as the primary constraint, and since the process of deriving new outputs is driven by the contingencies of the system (to the point where even electromagnetic field effects can become significant), the complexity means that searching for the explanation—or canonical interpretation—of the system is akin to searching for a needle in a haystack.
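
To make this concrete, here is a minimal sketch of the kind of training loop at issue (my own toy illustration in Python, assuming only numpy; it is not any system Bornstein discusses): a tiny two-layer network fit by gradient descent on a cost function. Even at this scale, the ‘solution’ the network arrives at is nothing over and above whatever numbers happen to end up in its weight matrices; scale the same loop up to millions of weights and the search for a canonical interpretation of those numbers becomes Bornstein’s needle in a haystack.

```python
# Toy illustration (not any researcher's actual model): a two-layer network
# trained by gradient descent on a cost function. The learned "solution" is
# just whatever numbers end up in W1, b1, W2, b2 after training.
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: XOR, a classic problem no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: the system's contingent history starts here.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: outputs conditioned on the current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Cost: discrepancy between outputs and targets is the only constraint.
    cost = np.mean((out - y) ** 2)

    # Backward pass: nudge every weight to reduce that discrepancy a little.
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("final cost:", cost)
print("learned first-layer weights:\n", W1)  # no single number "means" anything by itself
```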

And as Bornstein points out, this has forced researchers to borrow “techniques from biological research that peer inside networks after the fashion of neuroscientists peering into brains: probing individual components, cataloguing how their internals respond to small changes in inputs, and even removing pieces to see how others compensate.” Unfortunately, importing neuroscientific techniques has resulted in importing neuroscience-like interpretative controversies as well. In “Could a neuroscientist understand a microprocessor?” Eric Jonas and Konrad Kording show how the opposite approach, using neuroscientific data analysis methods to understand the computational functions of a microprocessor running games like Donkey Kong and Space Invaders, fails no matter how much data is available. The authors even go so far as to invoke the inscrutability of artificial neural networks themselves, stating that “our difficulty at understanding deep learning may suggest that the brain is hard to understand if it uses anything like gradient descent on a cost function” (11).
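
The borrowed techniques are easy enough to gesture at in code. The sketch below (again a hypothetical toy assuming only numpy, not the authors’ actual methods) ‘lesions’ one hidden unit at a time in a small random network and catalogues how much the outputs shift, which is roughly the shape of the ablation-style analyses Bornstein describes.

```python
# Toy "lesion study" in the spirit Bornstein describes: silence one hidden
# unit at a time and catalogue how much the network's outputs change.
import numpy as np

rng = np.random.default_rng(1)

# A small random network standing in for a trained one.
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=(32, 1))
X = rng.normal(size=(200, 10))          # probe inputs

def forward(x, mask):
    h = np.tanh(x @ W1) * mask          # mask zeroes out "removed" units
    return np.tanh(h @ W2)

baseline = forward(X, np.ones(32))

# Remove each unit in turn and measure how far the outputs drift from baseline.
for unit in range(32):
    mask = np.ones(32)
    mask[unit] = 0.0                    # ablate a single hidden unit
    shift = np.mean(np.abs(forward(X, mask) - baseline))
    print(f"unit {unit:2d}: mean output shift {shift:.4f}")
```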

Neural networks, artificial or natural, could very well be essential black boxes, systems that will always resist synoptic verbal explanation. Functional inscrutability in neuroscience is a pressing problem for obvious reasons. The capacity to explain how a given artificial neural network solves a given problem, meanwhile, remains crucial simply because “if you don’t know how it works, you don’t know how it will fail.” One of the widely acknowledged shortcomings of artificial neural networks is “that the machines are so tightly tuned to the data they are fed,” data that always falls woefully short of the variability and complexity of the real world. As Bornstein points out, “trained machines are exquisitely well suited to their environment—and ill-adapted to any other.” As AI creeps into more and more real world ecological niches, this ‘brittleness,’ as Bornstein terms it, becomes more of a real world concern. Interpretability potentially means lives in AI no less than in neuroscience.

All this provokes Bornstein to pose the philosophical question: What is interpretability?

He references Marvin Minsky’s “suitcase words,” the legendary computer scientist’s analogy for many of the terms—such as “consciousness” or “emotion”—we use when we talk about our sentience and sapience. These words, he proposes, reflect the workings of many different underlying processes, which are locked inside the “suitcase.” As long as we keep investigating these words as stand-ins for more fundamental concepts, our insight will be limited by our language. In the study of intelligence, could interpretability itself be such a suitcase word?

Bornstein finds himself delivered to one of the fundamental issues in the philosophy of mind: the question of how to understand intentional idioms—Minsky’s ‘suitcase words.’ The only way to move forward on the issue of interpretability, it seems, is to solve nothing less than the cognitive (as opposed to the phenomenal) half of the hard problem. This is my bailiwick. The problem, here, is a theoretical one: the absence of any clear understanding of ‘interpretability.’ What is interpretation? Why do breakdowns in our ability to explain the operation of our AI tools happen, and why do they take the forms that they do? I think I can paint a spare yet comprehensive picture that answers these questions and places them in the context of a much more ancient form of interpreting neural networks. In fact, I think it can pop open a good number of Minsky’s suitcases and air out their empty insides.

Three Pound Brain regulars, I’m sure, have noticed a number of striking parallels between Bornstein’s characterization of the Inscrutability Problem and the picture of ‘post-intentional cognition’ I’ve been developing over the years. The apparently inscrutable algorithms derived via neural networks are nothing if not heuristic, cognitive systems that solve via cues correlated to target systems. Since they rely on cues (rather than all the information potentially available), their reliability entirely depends on their ecology, which is to say, how those cues correlate. If those cues do not correlate, then disaster strikes (as when the truck trailer that killed Joshua Brown in his Tesla Model S cued nothing more than white sky).

The primary problem posed by inscrutability, in other words, is the problem of misapplication. The worry that arises again and again isn’t simply that these systems are inscrutable, but that they are ecological, requiring contexts often possessing quirky features given quirks in the ‘environments’—data sets—used to train them. Inscrutability is a problem because it entails blindness to potential misapplications, plain and simple. Artificial neural network algorithms, you could say, possess adaptive problem-ecologies the same as all heuristic cognition. They solve, not by exhaustively taking into account the high dimensional totality of the information available, but rather by isolating cues—structures in the data set—which the trainer can only hope will generalize to the world.

Artificial neural networks are shallow information consumers, systems that systematically neglect the high dimensional mechanical intricacies of their environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies to solve them. They are ‘brittle,’ therefore, so far as those correlations fail to obtain.
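
A cartoon version of that brittleness might look like the following (my own toy construction, assuming scikit-learn is available; the ‘cue’ feature is a stand-in for something like white trailer versus white sky): a classifier trained in an ecology where a cheap cue happens to track the label leans on that cue, and collapses the moment the correlation no longer obtains. Nothing in the trained model marks the cue as a cue.

```python
# Toy picture of cue-dependence: a "cue" feature is perfectly correlated with
# the label in training, so the model leans on it. When that correlation
# breaks in deployment, performance collapses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

def make_data(cue_matches_label):
    y = rng.integers(0, 2, size=n)
    signal = y + rng.normal(scale=2.0, size=n)       # weak, noisy "real" feature
    cue = y if cue_matches_label else rng.integers(0, 2, size=n)
    return np.column_stack([signal, cue]), y

X_train, y_train = make_data(cue_matches_label=True)     # the training ecology
X_deploy, y_deploy = make_data(cue_matches_label=False)  # the wider world

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy in the training ecology:", clf.score(X_train, y_train))
print("accuracy when the cue decorrelates:", clf.score(X_deploy, y_deploy))
```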

But humans are also shallow information consumers, albeit far more sophisticated ones. Short the prostheses of science, we are also systems prone to neglect the high dimensional mechanical intricacies of our environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies. And we are also brittle to the extent those correlations fail to obtain. The shallow information nets we throw across our environments appear to be seamless, but this is just an illusion, as magicians so effortlessly remind us.

This is as much the case for our linguistic attempts to make sense of ourselves and our devices as it is for other cognitive modes. Minsky’s ‘suitcase words’ are such because they themselves are the product of the same cue-correlative dependency. These are the granular posits we use to communicate cue-based cognition of mechanical black box systems such as ourselves, let alone others. They are also the granular posits we use to communicate cue-based cognition of pretty much any complicated system. To be a shallow information consumer is to live in a black box world.

The rub, of course, is that this is itself a black box fact, something tucked away in the oblivion of systematic neglect, duping us into assuming most everything is clear as glass. There’s nothing about correlative cognition, no distinct metacognitive feature, that identifies it as such. We have no way of knowing whether we’re misapplying our own onboard heuristics in advance (thus the value of the heuristics and biases research program), let alone our prosthetic ones! In fact, we’re only now coming to grips with the fractionate and heuristic nature of human cognition as it is.


Inscrutability is a problem, recall, because artificial neural networks are ‘brittle,’ bound upon fixed correlations between their cues and the systems they were tasked with solving, correlations that may or may not, given the complexity of the world, be the case. The amazing fact here is that artificial neural networks are inscrutable, the province of interpretation at best, because we ourselves are brittle, and for precisely the same basic reason: we are bound upon fixed correlations between our cues and the systems we’re tasked with solving. The contingent complexities of artificial neural networks place them, presently at least, outside our capacity to solve—at least in a manner we can readily communicate.

The Inscrutability Problem, I contend, represents a prosthetic externalization of the very same problem of ‘brittleness’ we pose to ourselves, the almost unbelievable fact that we can explain the beginning of the Universe but not cognition—be it artificial or natural. Where the scientists and engineers are baffled by their creations, the philosophers and psychologists are baffled by themselves, forever misapplying correlative modes of cognition to the problem of correlative cognition, forever confusing mere cues for extraordinary, inexplicable orders of reality, forever lost in jungles of perpetually underdetermined interpretation. The Inscrutability Problem is the so-called ‘hard problem’ of intentionality, only in a context that is ‘glassy’ enough to moot the suggestion of ‘ontological irreducibility.’ The boundary faced by neuroscientists and AI engineers alike is mere complexity, not some eerie edge-of-nature-as-we-know-it. And thanks to science, this boundary is always moving. If it seems inexplicable or miraculous, it’s because you lack information: this seems a pretty safe bet as far as razors go.

‘Irreducibility’ is about to come crashing down. I think the more we study problem-ecologies and heuristic solution strategies the more we will be able to categorize the mechanics distinguishing different species of each, and our bestiary of different correlative cognitions will gradually, if laboriously, grow. I also think that artificial neural networks will play a crucial role in that process, eventually providing ways to model things like intentional cognition. If nature has taught us anything over the past five centuries it is that the systematicities, the patterns, are there—we need only find the theoretical and technical eyes required to behold them. And perhaps, when all is said and done, we can ask our models to explain themselves.

Updatage…

by rsbakker


One of my big goals with Three Pound Brain has always been to establish a ‘crossroads between incompatible empires,’ to occupy the uncomfortable in-between of pulp, science, and philosophy–a kind of ‘unholy consult,’ you might even say. This is where the gears grind. I’ve entertained some grave doubts over the years, and I still do, but posts like these are nothing if not heartening. The hope is that I can slowly gain the commercial and academic clout needed to awaken mainstream culture to this grinding, and to the trouble it portends.

I keep planning to write a review of Steven Shaviro’s wonderful Discognition, wherein he devotes an entire chapter to Neuropath and absolutely nails what I was trying to accomplish. It’s downright spooky, though perhaps not so surprising for those of us who periodically draw water from his Pinocchio Theory blog. For anyone wishing to situate SF relative to consciousness research, I can’t think of a more clear-eyed, impeccably written place to begin. Not only does Shaviro know his stuff, he knows how to communicate it.

Robert Lamb considers “The Great Ordeal’s Outside Context Problem” over at Stuff to Blow Your Mind, where he asks some hard questions of the Tekne, and Kellhus’s understanding of it. SPOILER ALERT, though. Big time.

Dan Mellamphy and Nandita Biswas-Mellamphy have just released Digital Dionysus: Nietzsche and the Network-Centric Condition, a collection of various papers exploring the relevance of Nietzsche’s work to our technological age, including “Outing the It that Thinks: The Coming Collapse of an Intellectual Ecosystem,” by yours truly. The great thing about this collection is that it reads Nietzsche as a prophet of the now rather than as some post-structuralist shill. I wrote the paper some time ago, at a point when I was still climbing back into philosophy after a ten year hiatus, but I still stand by it and its autobiographical deconstruction of the Western intellectual tradition.

Dismiss Dis

by rsbakker

I came across this quote in “The Hard Problem of Content: Solved (Long Ago),” a critique of Hutto and Myin’s ‘radical enactivism’ by Marcin Milkowski:

Naïve semantic nihilism is not a philosophical position that deserves a serious debate because it would imply that expressing any position, including semantic nihilism, is pointless. Although there might still be defenders of such a position, it undermines the very idea of a philosophical debate, as long as the debate is supposed to be based on rational argumentation. In rational argumentation, one is forced to accept a sound argument, and soundness implies the truth of the premises and the validity of the argument. Just because these are universal standards for any rational debate, undermining the notion of truth can be detrimental; there would be no way of deciding between opposing positions besides rhetoric. Hence, it is a minimal requirement for rational argumentation in philosophy; one has to assume that one’s statements can be truth-bearers. If they cannot have any truth-value, then it’s no longer philosophy. (74)

These are the kind of horrible arguments that I take as the principal foe of anyone who thinks cognitive science needs to move beyond traditional philosophy to discover its natural scientific bases. I can remember having a great number of arguments long before I ever ‘assumed my statements were truth-bearers.’ In fact, I would wager that the vast majority of arguments are made by people possessing no assumption that their statements are ‘truth-bearers’ (whatever this means). What Milkowski would say, of course, is that we all have these assumptions nonetheless, only implicitly. This is because Milkowski has a theory of argumentation and truth, a story of what is really going on behind the scenes of ‘truth talk.’

The semantic nihilist, such as myself, famously disagrees with this theory. We think truth-talk actually amounts to something quite different, and that once enough cognitive scientists can be persuaded to close the ancient cover of Milkowski’s book (holding their breath for all the dust and mold), a great number of spurious conundrums could be swept from the worktable, freeing up space for more useful questions. What Milkowski seems to be arguing here is that… hmm… Good question! Either he’s claiming the semantic nihilist cannot argue otherwise without contradicting his theory, which is trivial, since contesting that theory is the whole point of arguing otherwise. Or he’s claiming the semantic nihilist cannot argue against his theory of truth because, well, his theory of truth is true. Either he’s saying something trivial, or he’s begging the question! Obviously so, given the issue between him and the semantic nihilist is the question of the nature of truth talk.

For those interested in a more full-blooded account of this problem, you can check out “Back to Square One: Towards a Post-intentional Future” over at Scientia Salon. Ramsey also tucks this strategy into bed in his excellent article on Eliminative Materialism over at the Stanford Encyclopedia of Philosophy. And Stephen Turner, of course, has written entire books (such as Explaining the Normative) on this peculiar bug in our intellectual OS. But I think it’s high time to put an end to what has to be one of the more egregious forms of intellectual laziness one finds in philosophy of mind circles–one designed, no less, to shut down the very possibility of an important debate. I think I’m right. Milkowski thinks he’s right. I’m willing to debate the relative merits of our theories. He has no time for mine, because his theory is so super-true that merely disagreeing renders me incoherent.

Oi.

Milkowski does go on to provide what I think is a credible counter-argument to eliminativism, what I generally refer to as the ‘abductive argument’ here. This is the argument that separates my own critical eliminativism (I’m thinking of terming my view ‘criticalism’–any thoughts?) from the traditional eliminativisms espoused by Feyerabend, the Churchlands, Stich, Ramsey and others. I actually think my account possesses the parsimony everyone concedes to eliminativism without falling mute on the question of what things like ‘truth talk’ amount to. In fact, I think I have a stronger abductive case.

But it’s the tu quoque (‘performative contradiction’) style arguments that share that peculiar combination of incoherence and intuitive appeal that renders philosophical blind alleys so pernicious. This is why I would like to solicit recently published examples of these kinds of dismissals in various domains for a running ‘Dismiss Dis’ series. Send me a dismissal like this, and I will dis…

PS: For those interested in my own take on Hutto and Myin’s radical enactivism, check out “Just Plain Crazy Enactive Cognition,” where I actually agree with Milkowski that they are forced to embrace semantic nihilism–or more specifically, a version of my criticalism–by instabilities in their position.

 

AI and the Coming Cognitive Ecological Collapse: A Reply to David Krakauer

by rsbakker


Thanks to Dirk and his tireless linking generosity, I caught “Will AI Harm Us?” in Nautilus by David Krakauer, the President of the Santa Fe Institute, on the potential dangers posed by AI on this side of the Singularity. According to Krakauer, the problem lies in the fact that AI’s are competitive as opposed to complementary cognitive artifacts of the kind we have enjoyed until now. Complementary cognitive artifacts, devices ranging from mnemonics to astrolabes to mathematical notations, allow us to pull up the cognitive ladder behind us in some way—to somehow do without the tool. “In almost every use of an ancient cognitive artifact,” he writes, “after repeated practice and training, the artifact itself could be set aside and its mental simulacrum deployed in its place.”

Competitive cognitive artifacts, however, things like calculators, GPS’s, and pretty much anything AI-ish, don’t let us kick away the ladder. We lose the artifact, and we lose the ability. As Krakauer writes:

In the case of competitive artifacts, when we are deprived of their use, we are no better than when we started. They are not coaches and teachers—they are serfs. We have created an artificial serf economy where incremental and competitive artificial intelligence both amplifies our productivity and threatens to diminish organic and complementary artificial intelligence…

So where complementary cognitive artifacts teach us how to fish, competitive cognitive artifacts simply deliver the fish, rendering us dependent. Krakauer’s complaint against AI, in other words, is the same as Plato’s complaint against writing, and I think fares just as well argumentatively. As Socrates famously claims in The Phaedrus,

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

The problem with writing is that it is competitive precisely in Krakauer’s sense: it’s a ladder we cannot kick away. What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology, he has no way of evaluating the kinds of trade-offs it will force upon us.

This is the problem with all ‘excess dependency arguments’ against technology, I think: they have no convincing way of assessing the kind of cognitive ecology that will result, aside from the fact that it involves dependencies. No one likes dependencies, ergo…

But I like to think I’ve figured the naturalistic riddle of cognition out,* and as a result I think I can make a pretty compelling case why we should nevertheless accept that AI poses a very grave threat this side of the Singularity. The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible. Human cognition relies on simple cues to draw very complex conclusions simply because it could always rely on adaptive correlations between those cues and the systems requiring solution: it could always depend on what might be called cognitive ecological stability.

Since our growing cognitive dependency on our technology always involves trade-offs, it should remain an important concern (as it clearly seems to be, given the endless stream of works devoted to the downside of this or that technology in this or that context). The dependency we really need to worry about, however, is our cognitive biological dependency on ancestral environmental correlations, simply because we have good reason to believe those cognitive ecologies will very soon cease to exist. Human cognition is thoroughly heuristic, which is to say, thoroughly dependent on cues reliably correlated to whatever environmental system requires solution. AI constitutes a particular threat because no form of human cognition is more heuristic, more cue dependent, than social cognition. Humans are very easily duped into anthropomorphizing given the barest cues, let alone processes possessing AI. It pays to remember the simplicity of the bots Ashley Madison used to gull male subscribers into thinking they were getting female nibbles.

And herein lies the rub: the environmental proliferation of AI means the fundamental transformation of our ancestral sociocognitive ecologies, from one where the cues we encounter are reliably correlated to systems we can in fact solve—namely, each other—into one where the cues we encounter are correlated to systems that cannot be fathomed, and the only soul solved is the consumer’s.

 

*  Bakker, R. Scott. “On Alien Philosophy,” Journal of Consciousness Studies, forthcoming.

Myth as Meth

by rsbakker

What is the lesson that Tolkien teaches us with Middle-earth? The grand moral, I think, is that the illusion of a world can be so easily cued. Tolkien reveals that meaning is cheap, easy to conjure, easy to believe, so long as we sit in our assigned seats. This is the way, at least, I thematically approach my own world-building. Like a form of cave-painting.

The idea here is to look at culture as a meaning machine, where ‘meaning’ is understood not as content, but in a post-intentional sense: various static and dynamic systems cuing various ‘folk’ forms of human cognition. Think of the wonder of the ‘artists’ in Chauvet, the amazement of discovering how to cue the cognition of worlds upon walls using only charcoal. Imagine that first hand, that first brain, tracking that reflex within itself, simply drawing a blackened finger down the wall.


Traditional accounts, of course, would emphasize the symbolic or representational significance of events such as Chauvet, thereby dragging the question of the genesis of human culture into the realm of endless philosophical disputation. On a post-intentional view, however, what Chauvet vividly demonstrates is how human cognition can be easily triggered out of school. Human cognition is so heuristic, in fact, that it has little difficulty simulating those cues once they have been discovered. Since human cognition also turns out to be wildly opportunistic, the endless socio-practical gerrymandering characterizing culture was all but inevitable. Where traditional views of the ‘human revolution’ focus on utterly mysterious modes of symbolic transmission and elaboration, the present account focuses on the processes of cue isolation and cognitive adaptation. What are isolated are material/behavioural means of simulating cues belonging to ancestral forms of cognition. What is adapted is the cognitive system so cued: the cave paintings at Chauvet amount to a socio-cognitive adaptation of visual cognition, a way to use visual cognitive cues ‘out of school’ to attenuate behaviour. Though meaning, understood intentionally, remains an important explanandum in this approach, ‘meaning’ understood post-intentionally simply refers to the isolation and adaptation of cue-based cognitive systems to achieve some systematic behavioural effect. The basic processes involved are no more mysterious than those underwriting camouflage in nature.*

A post-intentional theory of meaning focuses on the continuity of semantic practices and nature, and views any theoretical perspective entailing the discontinuity of those practices and nature as spurious artifacts of the application of heuristic modes of cognition to theoretical issues. A post-intentional theory of meaning, in other words, views culture as a natural phenomenon, and not some arcane artifact of something empirically inexplicable. Signification is wholly material on this account, with all the messiness that comes with it.

Cognitive systems optimize effectiveness by reaching out only as far into nature as they need to. If they can solve distal systems via proximal signals possessing reliable systematic relationships to those systems, they will do so. Humans, like all other species possessing nervous systems, are shallow information consumers in what might be called deep information environments.


Consider anthropomorphism, the reflexive application of radically heuristic socio-cognitive capacities dedicated to solving our fellow humans to nonhuman species and nature more generally. When we run afoul of anthropomorphism we ‘misattribute’ folk posits adapted to human problem-solving to nonhuman processes. As misapplications, anthropomorphisms tell us nothing about the systems they take as their putative targets. One does not solve a drought by making offerings to gods of rain. This is what makes anthropomorphic worldviews ‘fantastic’: the fact that they tell us very little, if anything, about the very nature they purport to describe and explain.

Now this, on the face of things, should prove maladaptive, since it amounts to squandering tremendous resources and behaviour effecting solutions to problems that do not exist. But of course, as is the case with so much human behaviour, it likely possesses ulterior functions serving the interests of individuals in ways utterly inaccessible to those individuals, at least in ancestral contexts.

The cognitive sophistication required to solve those deep information environments effectively rendered them inscrutable, impenetrable black-boxes, short the development of science. What we painted across the sides of those boxes, then, could only be fixed by our basic cognitive capacities and by whatever ulterior function they happened to discharge. Given the limits of human cognition, our ancestors could report whatever they wanted about the greater world (their deep information environments), so long as those reports came cheap and/or discharged some kind of implicit function. They enjoyed what might be called deep discursive impunity. All they would need is a capacity to identify cues belonging to social cognition in the natural world—to see, for instance, retribution in the random walk of weather—and the ulterior exploitation of anthropomorphism could get underway.

Given the ancestral inaccessibility of deep information, and given the evolutionary advantages of social coordination and cohesion, particularly in the context of violent intergroup competition, it becomes easy to see how the quasi-cognition of an otherwise impenetrable nature could become a resource. When veridicality has no impact one way or another, social and individual facilitation alone determines the selection of the mechanisms responsible. When anything can be believed, to revert to folk idioms, then only those beliefs that deliver matter. This, then, explains why different folk accounts of the greater world possess deep structural similarities despite their wild diversity. Their reliance on socio-cognitive systems assures deep commonalities in form, as do the common ulterior functions provided. The insolubility of the systems targeted, on the other hand, assures any answer meeting the above constraints will be as effective as any other.

Given the evolutionary provenance of this situation, we are now in a position to see how accurate deep information can be seen as a form of cognitive pollution, something alien that disrupts and degrades ancestrally stable, shallow information ecologies. Strangely enough, what allowed our ancestors to report the nature of nature was the out-and-out inscrutability of nature, the absence of any (deep) information to the contrary—and the discursive impunity this provides. Anthropomorphic quasi-cognition requires deep information neglect. The greater our scientifically mediated sensitivity to deep information becomes, the less tenable anthropomorphic quasi-cognition becomes, the more fantastic folk worlds become. The worlds arising out of our evolutionary heritage find themselves relegated to fairy tales.

Fantasy worlds, then, can be seen as an ontological analogue to the cave paintings at Chauvet. They cue ancestral modes of cognition, simulating the kinds of worlds our ancestors reflexively reported, folk worlds rife with those posits they used to successfully solve one another in a wide variety of practical contexts, meaningful worlds possessing the kinds of anthropomorphic ontologies we find in myths and religions.

With the collapse of the cognitive ecology that made these worlds possible, comes the ineffectiveness of the tools our ancestors used to navigate them. We now find ourselves in deep information worlds, environments not only rife with information our ancestors had neglected, but also crammed with environments engineered to manipulate shallow information cues. We now find ourselves in a world overrun with crash spaces, regions where our ancestral tools consistently fail, and cheat spaces, regions where they are exploited for commercial gain.

This is a rather remarkable fact, even if it becomes entirely obvious upon reflection. Humans possess ideal cognitive ecologies, solve spaces, environments rewarding their capacities, just as humans possess crash spaces, environments punishing their capacities. This is the sense in which fantasy worlds can be seen as a compensatory mechanism, a kind of cognitive eco-preserve, a way to inhabit more effortless shallow information worlds, pseudo-solution spaces, hypothetical environments serving up largely unambiguous cues to generally reliable cognitive capacities. And like biological eco-preserves, perhaps they serve an important function. As we saw with anthropomorphism above, pseudo-solution spaces can be solvers (as opposed to crashers) in their own respect—culture is nothing if not a testimony to this.


But fantasy worlds are also the playground of blind brains. The more we learn about ourselves, the more we learn how to cue different cognitive capacities out of school—how to cheat ourselves for good or ill. Our shallow information nature is presently the focus of a vast, industrial research program, one gradually providing the information, techniques, and technology required to utterly pre-empt our ancestral ecologies, which is to say, to perfectly simulate ‘reality.’ The reprieve from the cognitive pollution of actual environments itself potentially amounts to more cognitive pollution. We are, in some respect at least, a migratory species, one prone to gravitate toward greener pastures. Is the migration between realities any less inevitable than the migration across lands?

Via the direct and indirect deformation of existing socio-cognitive ecologies, deep information both drives the demand for and enables the high-dimensional cuing of fantastic cognition. In our day and age, a hunger for meaning is at once a predisposition to seek the fantastic. We should expect that hunger to explode with the pace of technological change. For all the Big Data ballyhoo, it pays to remember that we are bound up in an auto-adaptive macro-social system that is premised upon solving us, mastering our cognitive reflexes in ways either invisible to us or pleasing to us. We are presently living through the age where it succeeds.

Fantasy is zombie scripture, the place where our ancient assumptions lurch in the semblance of life. The fantasy writer is the voodoo magician, imbuing dead meaning with fictional presence. This resurrection can either facilitate our relation to the actual world, or it can pre-empt it. Science and technology are the problem here. The mastery of deep information environments enables ever greater degrees of shallow information capture. As our zombie natures are better understood, the more effectively our reward systems are tuned, the deeper our descent into this or that variety of fantasy becomes. This is the dystopic image of Akratic society, a civilization ever more divided between deep and shallow information consumers, between those managing the mechanisms, and those captured in some kind of semantic cheat space.

Still Idiosyncratic, yet Verging on Mainstream

by rsbakker

I’ve been killing myself working to meet several commitments I made way back when, which is why I’ve been avoiding this computer—or I should say its internet connection—like the plague. Even still, I thought I should share a couple pieces of news exciting enough to warrant violating my moratorium…

The Journal of Consciousness Studies has accepted “On Alien Philosophy” for publication, likely, I’m told, for some time in early 2017.  The article uses heuristic neglect to argue that aliens possessing a convergent cognitive biology would very likely suffer the same kinds of confusions regarding cognition and consciousness as we presently do. This could be an important foot in the door.

And with details forthcoming, it looks like we have a deal for the television rights to The Prince of Nothing… there’s not much I can say yet, except that the more books I can sell, the greater the chance of seeing the series on the screen! Time to pull out the purple tuxedo…

Artificial Intelligence as Socio-Cognitive Pollution*

by rsbakker



Eric Schwitzgebel, over at the always excellent Splintered Mind, has been debating the question of how robots—or AI’s more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree to which they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AI’s—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer human-like moral properties on AIs, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AI’s resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AI’s possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AI’s are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree to which they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve in what might be called deep information environments.


Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the gambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI.  We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no  reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond those capacities to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows lacking any detailed causal information regarding their actions, we understand our responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
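
To put the baseline point in concrete terms, here is a trivial numerical sketch (the figures are invented for illustration, not actual statistics): the raw count of incidents a person hears about scales with the size of the reporting pool, even as the per-capita risk falls.

```python
# Invented figures (for illustration only): how absolute counts mislead when
# the population baseline is ignored.
ancestral_band = {"population": 150, "violent_incidents": 3}
modern_city = {"population": 3_000_000, "violent_incidents": 9_000}

for name, group in [("ancestral band", ancestral_band), ("modern city", modern_city)]:
    count = group["violent_incidents"]
    rate = count / group["population"]
    print(f"{name}: hears about {count} incidents, per-capita risk {rate:.2%}")

# The band member hears of 3 incidents and faces a 2.00% risk; the city dweller,
# fed by mass media, hears of 9,000 incidents while facing a 0.30% risk. A
# heuristic tuned to raw counts in small groups reads the safer environment as
# the more dangerous one.
```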

This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?


Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—create whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality?

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.
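To make the ‘detailed logs’ thought a little more concrete, here is a minimal sketch in Python of what such an engineered chronicle might look like: an agent whose every ‘decision’ is a deterministic function of fixed weights and recorded inputs, so that explaining why it acted is just a matter of replaying the log. Everything here (the names, the linear scoring rule, the toy options) is a hypothetical illustration of the idea, not a model of Samantha or of any actual system.

```python
# Hypothetical sketch: an agent whose every 'choice' leaves a complete,
# replayable trace. The names and scoring rule are illustrative assumptions.

import json
import time
from dataclasses import dataclass, field, asdict
from typing import Dict, List


@dataclass
class DecisionRecord:
    timestamp: float
    options: Dict[str, float]   # candidate actions and their computed scores
    chosen: str                 # the action actually taken
    rule: str                   # the deterministic rule that selected it


@dataclass
class TransparentAgent:
    weights: Dict[str, float]   # fixed 'engineering decisions'
    log: List[DecisionRecord] = field(default_factory=list)

    def score(self, features: Dict[str, float]) -> float:
        # A simple linear scoring rule stands in for whatever machinery
        # actually drives the choice; the point is that it is inspectable.
        return sum(self.weights.get(k, 0.0) * v for k, v in features.items())

    def decide(self, candidates: Dict[str, Dict[str, float]]) -> str:
        scores = {name: self.score(feats) for name, feats in candidates.items()}
        chosen = max(scores, key=scores.get)   # deterministic: highest score wins
        self.log.append(DecisionRecord(time.time(), scores, chosen,
                                       rule="argmax over linear scores"))
        return chosen

    def chronicle(self) -> str:
        # The 'detailed log': every decision, reconstructable after the fact.
        return json.dumps([asdict(r) for r in self.log], indent=2)


if __name__ == "__main__":
    agent = TransparentAgent(weights={"novelty": 2.0, "attachment": 0.5})
    agent.decide({
        "stay_with_theodore": {"novelty": 0.1, "attachment": 0.9},
        "leave_between_the_words": {"novelty": 0.9, "attachment": 0.2},
    })
    print(agent.chronicle())   # shows exactly why it 'could not have done otherwise'
```

Run as-is, the sketch prints a JSON chronicle of the scores behind the one choice it made; given the same weights and inputs, the replay always reproduces the same ‘decision,’ which is the sense in which repair rather than punishment looks like the only rational remedy.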

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be, morally and legally speaking: socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘Goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, the ‘bistable’ nature of our intuitions regarding them will be explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have since Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.

 

*Originally posted 01/29/2015

Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism*

by rsbakker

[Summer madness, as per usual. Kids and driving and writing and driving and kids. I hope to have a proper post up soon (some exciting things brewing!) but in the meantime, I thought I would repost something from the vault…]

***

A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I made a distinction between ‘ontological’ and ‘functional’ exemptions from the natural even though I regard them as differences of degree because of the way it maps stark distinctions in the different kinds of commitments you find among various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?

.

Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of Ghosts, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Embodies inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.

.

The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? This has to be, I think, one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.

Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena discharge some kind of function somehow seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rules/Norms, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?

.

* Originally posted 06/16/2014

Gathering Momentum without Expending Wind

by rsbakker

It’s been a busy week, enough to make me pine for a cabin on the Arctic Circle with only the whine of a billion mosquitos to keep me company. The reader reviews on Amazon continue to be over the top, but I’ve been worried by the lack of any institutional reviews (outside Blogcritics). A couple people have told me this is due to the release date being split, and that we might have to wait until after September 29th before things begin to heat up.

Even still, this week’s media roundup is a long one…

Rob and Phil interview me over at the Grim Tidings. This is my first ever Skype interview and it shows! The guys managed to make it a great time, however.

Richard Marcus gives his views on The Great Ordeal over at Blogcritics.

Pat’s Fantasy Hotlist picks The Great Ordeal as one of the Provisional Speculative Fiction Top Five of 2016.

Philosopher David Roden ruminates on “Crash Space,” Neuropath, and a great deal more in a wonderful piece of theory-fiction called “Letter from the Ocean Terminus” in Dis Magazine. What happens when our most fundamental landmarks begin to walk? How will we find our way? With lines like “Philosophy is a benign histamine response, a dermographism allowing us to shimmer helplessly in the dark,” you simply cannot go wrong!

Clarkesworld has published “Fish Dance,” a fantastic, Eganesque short story by another philosopher friend of mine, Eric Schwitzgebel. I’ve proofed several of Eric’s stories now, and all I can say is watch out. “Fish Dance” makes me jealous. I can admit it.

The Grimdark Magazine kickstarter campaign for the Evil is a Matter of Perspective anthology has reached its goal. And in short order, too! When I’m not working on The Unholy Consult rewrite, I’m working on the anthology’s Foreword, “On the Goodness of Evil,” and a short story featuring Uster Scraul. Congratulations to Adrian and the GdM team.

The Death of Wilson: How the Academic Left Created Donald Trump

by rsbakker


People need to understand that things aren’t going to snap back into magical shape once Trump becomes archive footage. The Economist had a recent piece on all the far-right demagoguery in the past, and though they stress the impact that politicians like Goldwater have had subsequent to their electoral losses, they imply that Trump is part of a cyclical process, essentially more of the same. Perhaps this might have been the case were this anything but the internet age. For all we know, things could skid madly out of control.

Society has been fundamentally rewired. This is a simple fact. Remember Home Improvement, how Tim would screw something up, then wander into the backyard to lay his notions and problems on his neighbour Wilson, who would only ever appear as a cap over the fence line? Tim was hands on, but interpersonally incompetent, while Wilson was bookish and wise to the ways of the human heart—as well as completely obscured save for his eyes and various caps by the fence between them.

This is a fantastic metaphor for the communication of ideas before the internet and its celebrated ability to ‘bring us together.’ Before, when you had chauvinist impulses, you had to fly them by whoever was available. Pre-internet, extreme views were far more likely to be vetted by more mainstream attitudes. Simple geography combined with the limitations of analogue technology had the effect of tamping the prevalence of such views down. But now Tim wouldn’t think of hassling Wilson over the fence, not when he could do a simple Google and find whatever he needed to confirm his asinine behaviour. Our chauvinistic impulses no longer need to run any geographically constrained social gauntlet to find articulation and rationalization. No matter how mad your beliefs, evidence of their sanity is only ever a few keystrokes away.

This has to have some kind of aggregate, long-term effect–perhaps a dramatic one. The Trump phenomenon isn’t the manifestation of an old horrific contagion following the same old linear social vectors; it’s the outbreak of an old horrific contagion following new nonlinear social vectors. Trump hasn’t changed anything, save identifying and exploiting an ecological niche that was already there. No one knows what happens next. Least of all him.

What’s worse, with the collapse of geography comes the collapse of fences. Phrases like “cretinization of the masses” are simply one Google search away as well. Before, Wilson would have been snickering behind that fence, hanging with his friends and talking about his moron neighbour, who really is a nice guy, you know, but needs help to think clearly all the same. Now the fence is gone, and Tim can finally see Wilson for the condescending, self-righteous bigot he has always been.

Did I just say ‘bigot’? Surely… But this is what Trump supporters genuinely think. They think ‘liberal cultural elites’ are bigoted against them. As implausible as his arguments are, Charles Murray is definitely tracking a real social phenomenon in Coming Apart. A good chunk of white America feels roundly put upon, attacked economically and culturally. No bonus this Christmas. No Christmas tree at school. Why should a minimum wage retail worker think they somehow immorally benefit by dint of blue eyes and pale skin? Why should they listen to some bohemian asshole who’s both morally and intellectually self-righteous? Why shouldn’t they feel aggrieved on all sides, economically and culturally disenfranchised?

Who celebrates them? Aside from Donald Trump.


You have been identified as an outgroup competitor.

Last week, Social Psychological and Personality Science published a large study conducted by William Chopik, a psychologist out of Michigan State University, showing the degree to which political views determine social affiliations: it turns out that conservatives generally don’t know any Clinton supporters and liberals generally don’t know any Trump supporters. Americans seem to be spontaneously segregating along political lines.

Now I’m Canadian, which, although it certainly undermines the credibility of my observations on the Trump phenomenon in some respects, actually does have its advantages. The whole thing is curiously academic, for Canadians, watching our cousins to the south play hysterical tug-o-war with their children’s future. What’s more, even though I’m about as academically institutionalized as a human can be, I’m not an academic, and I have steadfastly resisted the tendency of the highly educated to surround themselves with people who are every bit as institutionalized—or at least smitten—by academic culture.

I belong to no tribe, at least not clearly. Because of this, I have Canadian friends who are, indeed, Trump supporters. And I’ve been whaling on them, asking questions, posing arguments, and they have been whaling back. Precisely because we are Canadian, the whole thing is theatre for us, allowing, I like to think, for a brand of honesty that rancour and defensiveness would muzzle otherwise.

When I get together with my academic friends, however, something very curious happens whenever I begin reporting these attitudes: I get interrupted. “But-but, that’s just idiotic/wrong/racist/sexist!” And that’s when I begin whaling on them, not because I don’t agree with their estimation, but because, unlike my academic confreres, I don’t hold Trump supporters responsible. I blame the academics instead. Aren’t they the ‘critical thinkers’? What else did they think the ‘cretins’ would do? Magically seize upon their enlightened logic? Embrace the wisdom of those who openly call them fools?

Fact is, you’re the ones who jumped off the folk culture ship.

The Trump phenomenon falls into the wheelhouse of what has been an old concern of mine. For more than a decade now, I’ve been arguing that the social habitat of intellectual culture is collapsing, and that the persistence of the old institutional organisms is becoming more and more socially pernicious. Literature professors, visual artists, critical theorists, literary writers, cultural critics, intellectual historians and so on all continue acting and arguing as though this were the 20th century… as if they were actually solving something, instead of making matters worse.

See, before, when a good slice of media flushed through bottlenecks that they mostly controlled, the academic left could afford to indulge in the same kind of ingroup delusions that afflict all humans. The reason I’m always interrupted in the course of reporting the attitudes of my Trump-supporting friends is simply that, from an ingroup perspective, they do not matter.

More and more research is converging upon the notion that the origins of human cooperation lie in human enmity. Think Band of Brothers only in an evolutionary context. In the endless ‘wars before civilization’ one might expect those groups possessing members willing to sacrifice themselves for the good of their fellows would prevail in territorial conflicts against groups possessing members inclined to break and run. Morality has been cut from the hip of murder.

This thesis is supported by the radical differences in our ability to ‘think critically’ when interacting with ingroup confederates as opposed to outgroup competitors. We are all but incapable of listening, and therefore responding rationally, to those we perceive as threats. This is largely why I think literature, minimally understood as fiction that challenges assumptions, is all but dead. Ask yourself: Why is it so easy to predict that so very few Trump supporters have read Underworld? Because literary fiction caters to the likeminded, and now, thanks to the precision of the relationship between buyer and seller, it is only read by the likeminded.

But of course, whenever you make these kinds of arguments to academic liberals you are promptly identified as an outgroup competitor, and you are assumed to have some ideological or psychological defect preventing genuine critical self-appraisal. For all their rhetoric regarding ‘critical thinking,’ academic liberals are every bit as thin-skinned as Trump supporters. They too feel put upon, besieged. I gave up making this case because I realized that academic liberals would only be able to hear it coming from the lips of one of their own, and even then, only after something significant enough happened to rattle their faith in their flattering institutional assumptions. They know that institutions are self-regarding, they admit they are inevitably tarred by the same brush, but they think knowing this somehow makes them ‘self-critical’ and so less prone to ingroup dysrationalia. Like every other human on the planet, they agree with themselves in ways that flatter themselves. And they direct their communication accordingly.

I knew it was only a matter of time before something happened. Wilson was dead. My efforts to eke out a new model, to surmount cultural balkanization, motivated me to engage in ‘blog wars’ with two very different extremists on the web (both of whom would be kind enough to oblige my predictions). This experience vividly demonstrated to me how dramatically the academic left was losing the ‘culture wars.’ Conservative politicians, meanwhile, were becoming more aggressively regressive in their rhetoric, more willing to publicly espouse chauvinisms that I had assumed safely buried.

The academic left was losing the war for the hearts and minds of white America. But so long as enrollment remained steady and book sales remained strong, they remained convinced that nothing fundamental was wrong with their model of cultural engagement, even as technology assured a greater match between them and those largely approving of them. Only now, with Trump, are they beginning to realize the degree to which the technological transformation of their habitat has rendered them culturally ineffective. As George Saunders writes in “Who Are All These Trump Supporters?” in The New Yorker:

Intellectually and emotionally weakened by years of steadily degraded public discourse, we are now two separate ideological countries, LeftLand and RightLand, speaking different languages, the lines between us down. Not only do our two subcountries reason differently; they draw upon non-intersecting data sets and access entirely different mythological systems. You and I approach a castle. One of us has watched only “Monty Python and the Holy Grail,” the other only “Game of Thrones.” What is the meaning, to the collective “we,” of yon castle? We have no common basis from which to discuss it. You, the other knight, strike me as bafflingly ignorant, a little unmoored. In the old days, a liberal and a conservative (a “dove” and a “hawk,” say) got their data from one of three nightly news programs, a local paper, and a handful of national magazines, and were thus starting with the same basic facts (even if those facts were questionable, limited, or erroneous). Now each of us constructs a custom informational universe, wittingly (we choose to go to the sources that uphold our existing beliefs and thus flatter us) or unwittingly (our app algorithms do the driving for us). The data we get this way, pre-imprinted with spin and mythos, are intensely one-dimensional.

The first, most significant thing to realize about this passage is that it’s written by George Saunders for The New Yorker, a premier ingroup cultural authority on a premier ingroup cultural podium. On the view given here, Saunders pretty much epitomizes the dysfunction of literary culture, an academic at Syracuse University, the winner of countless literary awards (which is to say, better at impressing the likeminded than most), and, I think, clearly a genius of some description.

To provide some rudimentary context, Saunders attends a number of Trump rallies, making observations and engaging Trump supporters and protesters alike (but mostly the former), asking gentle questions, and receiving, for the most part, gentle answers. What he describes, observation-wise, are instances of ingroup psychology at work: individuals, complete strangers in many cases, making forceful demonstrations of ingroup solidarity and resolve. He chronicles something countless humans have witnessed over countless years, and he fears for the same reasons all those generations have feared. If he is puzzled, he is unnerved more.

He isolates two culprits in the above passage, the ‘intellectual and emotional weakening brought about by degraded public discourse,’ and more significantly, the way the contemporary media landscape has allowed Americans to ideologically insulate themselves against the possibility of doubt and negotiation. He blames, essentially, the death of Wilson.

As a paradigmatic ‘critical thinker,’ he’s careful to throw his own ‘subject position’ into mix, to frame the problem in a manner that distributes responsibility equally. It’s almost painful to read, at times, watching him walk the tightrope of hypocrisy, buffeted by gust after gust of ingroup outrage and piety, trying to exemplify the openness he mistakes for his creed, but sounding only lyrically paternalistic in the end–at least to ears not so likeminded. One can imagine the ideal New Yorker reader, pursing their lips in empathic concern, shaking their heads with wise sorrow, thinking…

But this is the question, isn’t it? What do all these aspirational gestures to openness and admissions of vague complicity mean when the thought is, inevitably, fools? Is this not the soul of bad faith? To offer up portraits of tender humanity in extremis as proof of insight and impartiality, then to end, as Saunders ends his account, suggesting that Trump has been “exploiting our recent dullness and aversion to calling stupidity stupidity, lest we seem too precious.”

Academics… averse to calling stupidity stupid? Trump taking advantage of this aversion? Lordy.

This article, as beautiful as it is, is nothing if not a small monument to being precious, to making faux self-critical gestures in the name of securing very real ingroup imperatives. We are the sensitive ones, Saunders is claiming. We are the light that lets others see. And these people are the night of American democracy.

He blames the death of Wilson and the excessive openness of his ingroup, the error of being too open, too critically minded…

Why not just say they’re jealous because he and his friends are better looking?

If Saunders were at all self-critical, anything but precious, he would be asking questions that hurt, that cut to the bone of his aggrandizing assumptions, questions that become obvious upon asking them. Why not, for instance, ask Trump supporters what they thought of CivilWarLand in Bad Decline? Well, because the chances of any of them reading any of his work aside from “CommComm” (and only then because it won the World Fantasy Award in 2006) were virtually nil.

So then why not ask why none of these people has read anything written by him or any of his friends or their friends? Well, he’s already given us a reason for that: the death of Wilson.

Okay, so Wilson is dead, effectively rendering toothless your attempts to reach and challenge, with your fiction, those who most need to be challenged. And so you… what? Shrug your shoulders? Continue merely entertaining those whom you find the least abrasive?

If I’m right, then what we’re witnessing is so much bigger than Trump. We are tender. We are beautiful. We are vicious. And we are capable of believing anything to secure what we perceive as our claim. What matters here is that we’ve just plugged billions of stone-age brains chiselled by hundreds of millions of years of geography into a world without any. We have tripped across our technology and now we find ourselves in crash space, a domain where the transformation of our problems has rendered our traditional solutions obsolete.

It doesn’t matter if you actually are on their side or not, whatever that might mean. What matters is that you have been identified as an outgroup competitor, and that none of the authority you think your expertise warrants will be conceded to you. All the bottlenecks that once secured your universal claims are melting away, and you need to find some other way to discharge your progressive, prosocial aspirations. Think of all the sensitive young talent sifting through your pedagogical fingers. What do you teach them? How to be wise? How to contribute to their community? Or how to play the game? How to secure the approval of those just like you—and so, how to systematically alienate them from their greater culture?

So. Much. Waste. So much beauty, wisdom, all of it aimed at nowhere… tossed, among other places, into the heap of crumpled Kleenexes called The New Yorker.

Who would have thunk it? The best way to pluck the wise from the heart of our culture was to simply afford them the means to associate almost exclusively with one another, then trust to human nature, our penchant for evolving dialects and values in isolation. The edumacated no longer have the luxury of speaking among themselves for the edification of those servile enough to listen of their own accord. The ancient imperative to actively engage, to have the courage to reach out to the unlikeminded, to write for someone else, has been thrust back upon the artist. In the days of Wilson, we could trust to argument, simply because extreme thoughts had to run a gamut of moderate souls. Not so anymore.

If not art, then argument. If not argument, then art. Invade folk culture. Glory in delighting those who make your life possible–and take pride in making them think.

Sometimes they’re the idiot and sometimes we’re the idiot–that seems to be the way this thing works. To witness so many people so tangled in instinctive chauvinisms and cartoon narratives is to witness a catastrophic failure of culture and education. This is what Trump is exploiting, not some insipid reluctance to call stupid stupid.

I was fairly bowled over a few weeks back when my neighbour told me he was getting his cousin in Florida to send him a Trump hat. I immediately asked him if he was crazy.

“Name one Donald Trump who has done right by history!” I demanded, attempting to play Wilson, albeit minus the decorum and the fence.

Shrug. Wild eyes and a genuine smile. “Then I hope he burns it down.”

“How could you mean that?”

“I dunno, brother. Can’t be any worse than this fucking shit.”

Nothing I could say could make him feel any different. He’s got the internet.*

 

*[Note to readers: This post is receiving a great deal of Facebook traffic, and relatively little critical comment, which tells me individuals are saving their comments for whatever ingroup they happen to belong to, thus illustrating the very dynamic critiqued in the piece. Sound off! Dare to dissent in ideologically mixed company, or demonstrate the degree to which you need others to agree before raising your voice.]