Three Pound Brain

No bells, just whistling in the dark…


The UK and World Release of THE GREAT ORDEAL

by rsbakker


So it’s been a busy summer. Fawk.

I answered next to no emails. The computer I’m writing on this very moment is my only portal to the web, which tends to play laser light show to my kitten, so I avoided it like the plague, and managed to piss off a good number of people, I’m sure.

I’ve finished both “The Carathayan,” an Uster Scraul story for the Evil is a Matter of Perspective anthology, and a philowank Foreword entitled “On the Goodness of Evil.”

I submitted an outline for “Reading After the Death of Meaning,” an essay on literary criticism and eliminativism solicited by Palgrave for a critical anthology on Literature and Philosophy.

I finished a serious rewrite of The Unholy Consult, which I printed up and sent out to a few fellow humans for some critical feedback. My favourite line so far is “Perfect and insane”!

This brought back some memories, probably because I’m still basking in the post-coital glow of finishing The Unholy Consult. It really is hard to believe that I’m here, on the far side of the beast that has been gnawing at my creative bones for more than thirty years now. My agent has completed the deal with Overlook, so I can look forward to the odd night eating out, maybe even buying a drink or two!

And tomorrow, of course, is the day The Great Ordeal is set to be released in the UK and around the world. If you have a tub handy, thump away. Link the trailer if you think it might work. Or if you’re engaging an SF crowd, maybe link “Crash Space.” It would be nice to sell a bazillion books, but really, I would be happy selling enough to convince my publishers to continue investing in the Second Apocalypse.

To the Coffers, my friends. The Slog of Slogs is nearing its end.

 

On the Interpretation of Artificial Souls

by rsbakker


In “Is Artificial Intelligence Permanently Inscrutable?” Aaron M. Bornstein surveys the field of artificial neural networks, claiming that “[a]s exciting as their performance gains have been… there’s a troubling fact about modern neural networks: Nobody knows quite how they work.” The article is fascinating in its own right, and Peter over at Conscious Entities provides an excellent overview, but I would like to use it to flex a little theoretical muscle, and show the way the neural network ‘Inscrutability Problem’ turns on the same basic dynamics underwriting the apparent ‘hard problem’ of intentionality. Once you have a workable, thoroughly naturalistic account of cognition, you can begin to see why computer science finds itself bedevilled with strange parallels of the problems one finds in the philosophy of mind.

This parallel is evident in what Bornstein identifies as the primary issue, interpretability. The problem with artificial neural networks is that they are both contingent and incredibly complex. Recurrent neural networks operate by producing outputs conditioned by a selective history of previous conditionings, one captured in the weighting of (typically) millions of artificial neurons arranged in multiple processing layers. Since discrepancies in output serve as the primary constraint, and since the process of deriving new outputs is driven by the contingencies of the system (to the point where even electromagnetic field effects can become significant), this complexity means that searching for the explanation—or canonical interpretation—of the system is akin to searching for a needle in a haystack.
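
For readers who like to see the nuts and bolts, here is a minimal sketch of the kind of system at issue (my own toy illustration, not anything from Bornstein’s article): a tiny feed-forward network nudged downhill by gradient descent on a contrived task. Even at this miniature scale, everything the system ‘knows’ is smeared across a table of numerical weights; scale the hidden layer up to millions of units trained on contingent data and the needle-in-a-haystack character of any canonical interpretation becomes obvious.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy target: do the inputs share a sign?

W1 = rng.normal(scale=0.5, size=(2, 8))        # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))        # hidden -> output weights

for step in range(2000):
    h = np.tanh(X @ W1)                        # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2)))            # predicted probabilities
    err = p - y[:, None]                       # the output discrepancy driving learning
    # Backpropagate the discrepancy and nudge every weight a little downhill.
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= 0.5 * gW2
    W1 -= 0.5 * gW1

# The network's entire "understanding" of the task is now just these numbers:
print(W1)
print(W2)
```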

And as Bornstein points out, this has forced researchers to borrow “techniques from biological research that peer inside networks after the fashion of neuroscientists peering into brains: probing individual components, cataloguing how their internals respond to small changes in inputs, and even removing pieces to see how others compensate.” Unfortunately, importing neuroscientific techniques has resulted in importing neuroscience-like interpretative controversies as well. In “Could a neuroscientist understand a microprocessor?” Eric Jonas and Konrad Kording show how taking the opposite approach, using neuroscientific data analysis methods to understand the computational functions behind games like Donkey Kong and Space Invaders, fails no matter how much data they have available. The authors even go so far as to reference artificial neural network inscrutability as the problem, stating that “our difficulty at understanding deep learning may suggest that the brain is hard to understand if it uses anything like gradient descent on a cost function” (11).
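
The lesion-style probing Bornstein describes is easy enough to picture. The sketch below is again my own toy illustration (the weights are random stand-ins for whatever a training run would produce): zero out one hidden unit at a time and measure how much the outputs move, the artificial analogue of the neuroscientist’s ablation study. On a genuinely trained network you would track the drop in task performance rather than the raw shift in outputs, and then face the real problem: saying what, exactly, the ablated unit was doing.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))          # toy inputs to probe with
W1 = rng.normal(size=(2, 8))                   # stand-in weights (a trained net would go here)
W2 = rng.normal(size=(8, 1))

def outputs(W1, W2, X):
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2)))

baseline = outputs(W1, W2, X)
for unit in range(W1.shape[1]):
    W1_lesioned = W1.copy()
    W1_lesioned[:, unit] = 0.0                 # "remove" one hidden neuron
    shift = float(np.mean(np.abs(outputs(W1_lesioned, W2, X) - baseline)))
    print(f"unit {unit}: mean output shift {shift:.3f}")
```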

Neural networks, artificial or natural, could very well be essential black boxes, systems that will always resist synoptic verbal explanation. Functional inscrutability in neuroscience is a pressing problem for obvious reasons. The capacity to explain how a given artificial neural network solves a given problem, meanwhile, remains crucial simply because “if you don’t know how it works, you don’t know how it will fail.” One of the widely acknowledged shortcomings of artificial neural networks is “that the machines are so tightly tuned to the data they are fed,” data that always falls woefully short of the variability and complexity of the real world. As Bornstein points out, “trained machines are exquisitely well suited to their environment—and ill-adapted to any other.” As AI creeps into more and more real-world ecological niches, this ‘brittleness,’ as Bornstein terms it, becomes more of a real-world concern. Interpretability can mean lives in AI no less than in neuroscience.

All this provokes Bornstein to pose the philosophical question: What is interpretability?

He references Marvin Minsky’s “suitcase words,” the legendary computer scientist’s analogy for many of the terms—such as “consciousness” or “emotion”—we use when we talk about our sentience and sapience. These words, he proposes, reflect the workings of many different underlying processes, which are locked inside the “suitcase.” As long as we keep investigating these words as stand-ins for more fundamental concepts, our insight will be limited by our language. In the study of intelligence, could interpretability itself be such a suitcase word?

Bornstein finds himself delivered to one of the fundamental issues in the philosophy of mind: the question of how to understand intentional idioms—Minsky’s ‘suitcase words.’ The only way to move forward on the issue of interpretability, it seems, is to solve nothing less than the cognitive (as opposed to the phenomenal) half of the hard problem. This is my bailiwick. The problem, here, is a theoretical one: the absence of any clear understanding of ‘interpretability.’ What is interpretation? Why do breakdowns in our ability to explain the operation of our AI tools happen, and why do they take the forms that they do? I think I can paint a spare yet comprehensive picture that answers these questions and places them in the context of a much more ancient form of interpreting neural networks. In fact, I think it can pop open a good number of Minsky’s suitcases and air out their empty insides.

Three Pound Brain regulars, I’m sure, have noticed a number of striking parallels between Bornstein’s characterization of the Inscrutability Problem and the picture of ‘post-intentional cognition’ I’ve been developing over the years. The apparently inscrutable algorithms derived via neural networks are nothing if not heuristic, cognitive systems that solve via cues correlated to target systems. Since they rely on cues (rather than all the information potentially available), their reliability entirely depends on their ecology, which is to say, on how those cues correlate. If those cues do not correlate, then disaster strikes (as when the white truck trailer that killed Joshua Brown cued his Tesla Model S to see only more white sky).

The primary problem posed by inscrutability, in other words, is the problem of misapplication. The worry that arises again and again isn’t simply that these systems are inscrutable, but that they are ecological, requiring contexts often possessing quirky features given quirks in the ‘environments’—data sets—used to train them. Inscrutability is a problem because it entails blindness to potential misapplications, plain and simple. Artificial neural network algorithms, you could say, possess adaptive problem-ecologies, the same as all heuristic cognition does. They solve, not by exhaustively taking into account the high-dimensional totality of the information available, but rather by isolating cues—structures in the data set—which the trainer can only hope will generalize to the world.

Artificial neural networks are shallow information consumers, systems that systematically neglect the high-dimensional mechanical intricacies of their environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies to solve them. They are ‘brittle,’ therefore, insofar as those correlations fail to obtain.
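
The ecological point can be caricatured in a few lines of code (again, a toy of my own devising): a ‘solver’ that keys on a single shallow cue which happens to track the target in its home ecology, and which collapses to chance the moment that correlation fails to obtain.

```python
import random

random.seed(0)

def make_ecology(n, cue_correlation):
    """Generate (cue, label) pairs; the cue tracks the label only as reliably
    as this particular ecology happens to make it."""
    pairs = []
    for _ in range(n):
        label = random.random() < 0.5
        cue = label if random.random() < cue_correlation else not label
        pairs.append((cue, label))
    return pairs

def cue_heuristic(cue):
    # The "solution" actually learned: trust the shallow cue, neglect everything else.
    return cue

def accuracy(pairs):
    return sum(cue_heuristic(cue) == label for cue, label in pairs) / len(pairs)

home_ecology = make_ecology(10000, cue_correlation=0.95)   # cue tracks the target
novel_ecology = make_ecology(10000, cue_correlation=0.50)  # the correlation is gone

print("home ecology accuracy: ", accuracy(home_ecology))   # roughly 0.95
print("novel ecology accuracy:", accuracy(novel_ecology))  # roughly 0.50: brittle failure
```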

But humans are also shallow information consumers, albeit far more sophisticated ones. Short of the prostheses of science, we are also systems prone to neglect the high-dimensional mechanical intricacies of our environments, focusing instead on cues statistically correlated to those high-dimensional mechanical intricacies. And we are also brittle to the extent those correlations fail to obtain. The shallow information nets we throw across our environments appear to be seamless, but this is just an illusion, as magicians so effortlessly remind us.

This is as much the case for our linguistic attempts to make sense of ourselves and our devices as it is for other cognitive modes. Minsky’s ‘suitcase words’ are such because they themselves are the product of the same cue-correlative dependency. These are the granular posits we use to communicate cue-based cognition of mechanical black box systems such as ourselves, let alone others. They are also the granular posits we use to communicate cue-based cognition of pretty much any complicated system. To be a shallow information consumer is to live in a black box world.

The rub, of course, is that this is itself a black box fact, something tucked away in the oblivion of systematic neglect, duping us into assuming most everything is clear as glass. There’s nothing about correlative cognition, no distinct metacognitive feature, that identifies it as such. We have no way of knowing whether we’re misapplying our own onboard heuristics in advance (thus the value of the heuristics and biases research program), let alone our prosthetic ones! In fact, we’re only now coming to grips with the fractionate and heuristic nature of human cognition as it is.


Inscrutability is a problem, recall, because artificial neural networks are ‘brittle,’ bound upon fixed correlations between their cues and the systems they were tasked with solving, correlations that may or may not, given the complexity of the world, be the case. The amazing fact here is that artificial neural networks are inscrutable, the province of interpretation at best, because we ourselves are brittle, and for precisely the same basic reason: we are bound upon fixed correlations between our cues and the systems we’re tasked with solving. The contingent complexities of artificial neural networks place them, presently at least, outside our capacity to solve—at least in a manner we can readily communicate.

The Inscrutability Problem, I contend, represents a prosthetic externalization of the very same problem of ‘brittleness’ we pose to ourselves, the almost unbelievable fact that we can explain the beginning of the Universe but not cognition—be it artificial or natural. Where the scientists and engineers are baffled by their creations, the philosophers and psychologists are baffled by themselves, forever misapplying correlative modes of cognition to the problem of correlative cognition, forever confusing mere cues for extraordinary, inexplicable orders of reality, forever lost in jungles of perpetually underdetermined interpretation. The Inscrutability Problem is the so-called ‘hard problem’ of intentionality, only in a context that is ‘glassy’ enough to moot the suggestion of ‘ontological irreducibility.’ The boundary faced by neuroscientists and AI engineers alike is mere complexity, not some eerie edge-of-nature-as-we-know-it. And thanks to science, this boundary is always moving. If it seems inexplicable or miraculous, it’s because you lack information: this seems a pretty safe bet as far as razors go.

‘Irreducibility’ is about to come crashing down. I think the more we study problem-ecologies and heuristic solution strategies the more we will be able to categorize the mechanics distinguishing different species of each, and our bestiary of different correlative cognitions will gradually, if laboriously, grow. I also think that artificial neural networks will play a crucial role in that process, eventually providing ways to model things like intentional cognition. If nature has taught us anything over the past five centuries it is that the systematicities, the patterns, are there—we need only find the theoretical and technical eyes required to behold them. And perhaps, when all is said and done, we can ask our models to explain themselves.

Updatage…

by rsbakker


One of my big goals with Three Pound Brain has always been to establish a ‘crossroads between incompatible empires,’ to occupy the uncomfortable in-between of pulp, science, and philosophy–a kind of ‘unholy consult,’ you might even say. This is where the gears grind. I’ve entertained some grave doubts over the years, and I still do, but posts like these are nothing if not heartening. The hope is that I can slowly gain the commercial and academic clout needed to awaken mainstream culture to this grinding, and to the trouble it portends.

I keep planning to write a review of Steven Shaviro’s wonderful Discognition, wherein he devotes an entire chapter to Neuropath and absolutely nails what I was trying to accomplish. It’s downright spooky, but really just goes to show, at least for those of us who periodically draw water from his Pinocchio Theory blog. For anyone wishing to place the relation of SF to consciousness research, I can’t think of a more clear-eyed, impeccably written place to begin. Not only does Shaviro know his stuff, he knows how to communicate it.

Robert Lamb considers “The Great Ordeal’s Outside Context Problem” over at Stuff to Blow Your Mind, where he asks some hard questions of the Tekne, and Kellhus’s understanding of it. SPOILER ALERT, though. Big time.

Dan Mellamphy and Nandita Biswas-Mellamphy have just released Digital Dionysus: Nietzsche and the Network-Centric Condition, a collection of various papers exploring the relevance of Nietzsche’s work to our technological age, including “Outing the It that Thinks: The Coming Collapse of an Intellectual Ecosystem,” by yours truly. The great thing about this collection is that it reads Nietzsche as a prophet of the now rather than as some post-structuralist shill. I wrote the paper some time ago, at a point when I was still climbing back into philosophy after a ten year hiatus, but I still stand by it and its autobiographical deconstruction of the Western intellectual tradition.

Dismiss Dis

by rsbakker

I came across this quote in “The Hard Problem of Content: Solved (Long Ago),” a critique of Hutto and Myin’s ‘radical enactivism’ by Marcin Milkowski:

Naïve semantic nihilism is not a philosophical position that deserves a serious debate because it would imply that expressing any position, including semantic nihilism, is pointless. Although there might still be defenders of such a position, it undermines the very idea of a philosophical debate, as long as the debate is supposed to be based on rational argumentation. In rational argumentation, one is forced to accept a sound argument, and soundness implies the truth of the premises and the validity of the argument. Just because these are universal standards for any rational debate, undermining the notion of truth can be detrimental; there would be no way of deciding between opposing positions besides rhetoric. Hence, it is a minimal requirement for rational argumentation in philosophy; one has to assume that one’s statements can be truth-bearers. If they cannot have any truth-value, then it’s no longer philosophy. (74)

These are the kind of horrible arguments that I take as the principal foe of anyone who thinks cognitive science needs to move beyond traditional philosophy to discover its natural scientific bases. I can remember having a great number of arguments long before I ever ‘assumed my statements were truth-bearers.’ In fact, I would wager that the vast majority of arguments are made by people possessing no assumption that their statements are ‘truth-bearers’ (whatever this means). What Milkowski would say, of course, is that we all have these assumptions nonetheless, only implicitly. This is because Milkowski has a theory of argumentation and truth, a story of what is really going on behind the scenes of ‘truth talk.’

The semantic nihilist, such as myself, famously disagrees with this theory. We think truth-talk actually amounts to something quite different, and that once enough cognitive scientists can be persuaded to close the ancient cover of Milkowski’s book (holding their breath for all the dust and mold), a great number of spurious conundrums could be swept from the worktable, freeing up space for more useful questions. What Milkowski seems to be arguing here is that… hmm… Good question! Either he’s claiming the semantic nihilist cannot argue otherwise without contradicting his theory (but contradicting that theory is the whole point of arguing otherwise), or he’s claiming the semantic nihilist cannot argue against his theory of truth because, well, his theory of truth is true. Either he’s saying something trivial, or he’s begging the question! Obviously so, given that the issue between him and the semantic nihilist is the nature of truth talk.

For those interested in a more full-blooded account of this problem, you can check out “Back to Square One: Towards a Post-intentional Future” over at Scientia Salon. Ramsey also tucks this strategy into bed in his excellent article on Eliminative Materialism over at the Stanford Encyclopedia of Philosophy. And Stephen Turner, of course, has written entire books (such as Explaining the Normative) on this peculiar bug in our intellectual OS. But I think it’s high time to put an end to what has to be one of the more egregious forms of intellectual laziness one finds in philosophy of mind circles–one designed, no less, to shut down the very possibility of an important debate. I think I’m right. Milkowski thinks he’s right. I’m willing to debate the relative merits of our theories. He has no time for mine, because his theory is so super-true that merely disagreeing renders me incoherent.

Oi.

Milkowski does go on to provide what I think is a credible counter-argument to eliminativism, what I generally refer to as the ‘abductive argument’ here. This is the argument that separates my own critical eliminativism (I’m thinking of terming my view ‘criticalism’–any thoughts?) from the traditional eliminativisms espoused by Feyerabend, the Churchlands, Stich, Ramsey and others. I actually think my account possesses the parsimony everyone concedes to eliminativism without falling mute on the question of what things like ‘truth talk’ amount to. In fact, I think I have the stronger abductive case.

But it’s the tu quoque (‘performative contradiction’) style arguments that possess that peculiar combination of incoherence and intuitive appeal which renders philosophical blind alleys so pernicious. This is why I would like to solicit recently published examples of these kinds of dismissals in various domains for a running ‘Dismiss Dis’ series. Send me a dismissal like this, and I will dis…

PS: For those interested in my own take on Hutto and Myin’s radical enactivism, check out “Just Plain Crazy Enactive Cognition,” where I actually agree with Milkowski that they are forced to embrace semantic nihilism–or more specifically, a version of my criticalism–by instabilities in their position.

 

AI and the Coming Cognitive Ecological Collapse: A Reply to David Krakauer

by rsbakker


Thanks to Dirk and his tireless linking generosity, I caught “Will AI Harm Us?” in Nautilus by David Krakauer, the President of the Santa Fe Institute, on the potential dangers posed by AI on this side of the Singularity. According to Krakauer, the problem lies in the fact that AIs are competitive as opposed to complementary cognitive artifacts of the kind we have enjoyed until now. Complementary cognitive artifacts, devices ranging from mnemonics to astrolabes to mathematical notations, allow us to pull up the cognitive ladder behind us in some way—to somehow do without the tool. “In almost every use of an ancient cognitive artifact,” he writes, “after repeated practice and training, the artifact itself could be set aside and its mental simulacrum deployed in its place.”

Competitive cognitive artifacts, however, things like calculators, GPS’s, and pretty much anything AI-ish, don’t let us kick away the ladder. We lose the artifact, and we lose the ability. As Krakauer writes:

In the case of competitive artifacts, when we are deprived of their use, we are no better than when we started. They are not coaches and teachers—they are serfs. We have created an artificial serf economy where incremental and competitive artificial intelligence both amplifies our productivity and threatens to diminish organic and complementary artificial intelligence…

So where complementary cognitive artifacts teach us how to fish, competitive cognitive artifacts simply deliver the fish, rendering us dependent. Krakauer’s complaint against AI, in other words, is the same as Plato’s complaint against writing, and I think fares just as well argumentatively. As Socrates famously claims in The Phaedrus,

For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.

The problem with writing is that it is competitive precisely in Krakauer’s sense: it’s a ladder we cannot kick away. What Plato could not foresee, of course, was the way writing would fundamentally transform human cognitive ecology. He was a relic of the preliterate age, just as Krakauer (like us) is a relic of the pre-AI age. The problem for Krakauer, then, is that the distinction between complementary and competitive cognitive artifacts—the difference between things like mnemonics and things like writing—possesses no reliable evaluative force. All tools involve trade-offs. Since Krakauer has no way of knowing how AI will transform our cognitive ecology, he has no way of evaluating the kinds of trade-offs it will force upon us.

This is the problem with all ‘excess dependency arguments’ against technology, I think: they have no convincing way of assessing the kind of cognitive ecology that will result, aside from the fact that it involves dependencies. No one likes dependencies, ergo…

But I like to think I’ve figured the naturalistic riddle of cognition out,* and as a result I think I can make a pretty compelling case why we should nevertheless accept that AI poses a very grave threat this side of the Singularity. The problem, in a nutshell, is that we are shallow information consumers, evolved to generate as much gene-promoting behaviour out of as little environmental information as possible. Human cognition relies on simple cues to draw very complex conclusions simply because it could always rely on adaptive correlations between those cues and the systems requiring solution: it could always depend on what might be called cognitive ecological stability.

Since our growing cognitive dependency on our technology always involves trade-offs, it should remain an important concern (as it clearly seems to be, given the endless stream of works devoted to the downside of this or that technology in this or that context). The dependency we really need to worry about, however, is our cognitive biological dependency on ancestral environmental correlations, simply because we have good reason to believe those cognitive ecologies will very soon cease to exist. Human cognition is thoroughly heuristic, which is to say, thoroughly dependent on cues reliably correlated to whatever environmental system requires solution. AI constitutes a particular threat because no form of human cognition is more heuristic, more cue dependent, than social cognition. Humans are very easily duped into anthropomorphizing given the barest cues, let alone processes possessing AI. It pays to remember the simplicity of the bots Ashley Madison used to gull male subscribers into thinking they were getting female nibbles.

And herein lies the rub: the environmental proliferation of AI means the fundamental transformation of our ancestral sociocognitive ecologies, from one where the cues we encounter are reliably correlated to systems we can in fact solve—namely, each other—into one where the cues we encounter are correlated to systems that cannot be fathomed, and the only soul solved is the consumer’s.

 

*  Bakker, R. Scott. “On Alien Philosophy,” Journal of Consciousness Studies, forthcoming.

Myth as Meth

by rsbakker

What is the lesson that Tolkien teaches us with Middle-earth? The grand moral, I think, is that the illusion of a world can be so easily cued. Tolkien reveals that meaning is cheap, easy to conjure, easy to believe, so long as we sit in our assigned seats. This is the way, at least, I thematically approach my own world-building. Like a form of cave-painting.

The idea here is to look at culture as a meaning machine, where ‘meaning’ is understood not as content, but in a post-intentional sense: various static and dynamic systems cuing various ‘folk’ forms of human cognition. Think of the wonder of the ‘artists’ in Chauvet, the amazement of discovering how to cue the cognition of worlds upon walls using only charcoal. Imagine that first hand, that first brain, tracking that reflex within itself, simply drawing a blacked finger down the wall.


Traditional accounts, of course, would emphasize the symbolic or representational significance of events such as Chauvet, thereby dragging the question of the genesis of human culture into the realm of endless philosophical disputation. On a post-intentional view, however, what Chauvet vividly demonstrates is how human cognition can be easily triggered out of school. Human cognition is so heuristic, in fact, that it has little difficulty simulating those cues once they have been discovered. Since human cognition also turns out to be wildly opportunistic, the endless socio-practical gerrymandering characterizing culture was all but inevitable. Where traditional views of the ‘human revolution’ focus on utterly mysterious modes of symbolic transmission and elaboration, the present account focuses on the processes of cue isolation and cognitive adaptation. What are isolated are material/behavioural means of simulating cues belonging to ancestral forms of cognition. What is adapted is the cognitive system so cued: the cave paintings at Chauvet amount to a socio-cognitive adaptation of visual cognition, a way to use visual cognitive cues ‘out of school’ to attenuate behaviour. Though meaning, understood intentionally, remains an important explanandum in this approach, ‘meaning’ understood post-intentionally simply refers to the isolation and adaptation of cue-based cognitive systems to achieve some systematic behavioural effect. The basic processes involved are no more mysterious than those underwriting camouflage in nature.*

A post-intentional theory of meaning focuses on the continuity of semantic practices and nature, and views any theoretical perspective entailing the discontinuity of those practices and nature as a spurious artifact of the application of heuristic modes of cognition to theoretical issues. A post-intentional theory of meaning, in other words, views culture as a natural phenomenon, and not some arcane artifact of something empirically inexplicable. Signification is wholly material on this account, with all the messiness that comes with it.

Cognitive systems optimize effectiveness by reaching out only as far into nature as they need to. If they can solve distal systems via proximal signals possessing reliable systematic relationships to those systems, they will do so. Humans, like all other species possessing nervous systems, are shallow information consumers in what might be called deep information environments.



Consider anthropomorphism, the reflexive application of radically heuristic socio-cognitive capacities dedicated to solving our fellow humans to nonhuman species and nature more generally. When we run afoul of anthropomorphism we ‘misattribute’ folk posits adapted to human problem-solving to nonhuman processes. As misapplications, anthropomorphisms tell us nothing about the systems they take as their putative targets. One does not solve a drought by making offerings to gods of rain. This is what makes anthropomorphic worldviews ‘fantastic’: the fact that they tell us very little, if anything, about the very nature they purport to describe and explain.

Now this, on the face of things, should prove maladaptive, since it amounts to squandering tremendous resources and behaviour on solutions to problems that do not exist. But of course, as is the case with so much human behaviour, it likely possesses ulterior functions serving the interests of individuals in ways utterly inaccessible to those individuals, at least in ancestral contexts.

Short of the development of science, the cognitive sophistication required to solve those deep information environments rendered them effectively inscrutable, impenetrable black boxes. What we painted across the sides of those boxes, then, could only be fixed by our basic cognitive capacities and by whatever ulterior function they happened to discharge. Given the limits of human cognition, our ancestors could report whatever they wanted about the greater world (their deep information environments), so long as those reports came cheap and/or discharged some kind of implicit function. They enjoyed what might be called deep discursive impunity. All they would need is a capacity to identify cues belonging to social cognition in the natural world—to see, for instance, retribution in the random walk of weather—and the ulterior exploitation of anthropomorphism could get underway.

Given the ancestral inaccessibility of deep information, and given the evolutionary advantages of social coordination and cohesion, particularly in the context of violent intergroup competition, it becomes easy to see how the quasi-cognition of an otherwise impenetrable nature could become a resource. When veridicality has no impact one way or another, social and individual facilitation alone determines the selection of the mechanisms responsible. When anything can be believed, to revert to folk idioms, then only those beliefs that deliver matter. This, then, explains why different folk accounts of the greater world possess deep structural similarities despite their wild diversity. Their reliance on socio-cognitive systems assures deep commonalities in form, as do the common ulterior functions provided. The insolubility of the systems targeted, on the other hand, assures any answer meeting the above constraints will be as effective as any other.

Given the evolutionary provenance of this situation, we are now in a position to see how accurate deep information can be seen as a form of cognitive pollution, something alien that disrupts and degrades ancestrally stable, shallow information ecologies. Strangely enough, what allowed our ancestors to report the nature of nature was the out-and-out inscrutability of nature, the absence of any (deep) information to the contrary—and the discursive impunity this provides. Anthropomorphic quasi-cognition requires deep information neglect. The greater our scientifically mediated sensitivity to deep information becomes, the less tenable anthropomorphic quasi-cognition becomes, the more fantastic folk worlds become. The worlds arising out of our evolutionary heritage find themselves relegated to fairy tales.

Fantasy worlds, then, can be seen as an ontological analogue to the cave paintings at Chauvet. They cue ancestral modes of cognition, simulating the kinds of worlds our ancestors reflexively reported, folk worlds rife with those posits they used to successfully solve one another in a wide variety of practical contexts, meaningful worlds possessing the kinds of anthropomorphic ontologies we find in myths and religions.

With the collapse of the cognitive ecology that made these worlds possible comes the ineffectiveness of the tools our ancestors used to navigate them. We now find ourselves in deep information worlds, environments not only rife with information our ancestors had neglected, but also crammed with systems engineered to manipulate shallow information cues. We now find ourselves in a world overrun with crash spaces, regions where our ancestral tools consistently fail, and cheat spaces, regions where they are exploited for commercial gain.

This is a rather remarkable fact, even if it becomes entirely obvious upon reflection. Humans possess ideal cognitive ecologies, solve spaces, environments rewarding their capacities, just as humans possess crash spaces, environments punishing their capacities. This is the sense in which fantasy worlds can be seen as a compensatory mechanism, a kind of cognitive eco-preserve, a way to inhabit more effortless shallow information worlds, pseudo-solution spaces, hypothetical environments serving up largely unambiguous cues to generally reliable cognitive capacities. And like biological eco-preserves, perhaps they serve an important function. As we saw with anthropomorphism above, pseudo-solution spaces can be solvers (as opposed to crashers) in their own right—culture is nothing if not a testimony to this.



But fantasy worlds are also the playground of blind brains. The more we learn about ourselves, the more we learn how to cue different cognitive capacities out of school—how to cheat ourselves for good or ill. Our shallow information nature is presently the focus of a vast, industrial research program, one gradually providing the information, techniques, and technology required to utterly pre-empt our ancestral ecologies, which is to say, to perfectly simulate ‘reality.’ The reprieve from the cognitive pollution of actual environments itself potentially amounts to more cognitive pollution. We are, in some respect at least, a migratory species, one prone to gravitate toward greener pastures. Is the migration between realities any less inevitable than the migration across lands?

Via the direct and indirect deformation of existing socio-cognitive ecologies, deep information both drives the demand for and enables the high-dimensional cuing of fantastic cognition. In our day and age, a hunger for meaning is at once a predisposition to seek the fantastic. We should expect that hunger to explode with the pace of technological change. For all the Big Data ballyhoo, it pays to remember that we are bound up in an auto-adaptive macro-social system premised upon solving us, upon mastering our cognitive reflexes in ways either invisible or pleasing. We are presently living through the age in which it succeeds.

Fantasy is zombie scripture, the place where our ancient assumptions lurch in the semblance of life. The fantasy writer is the voodoo magician, imbuing dead meaning with fictional presence. This resurrection can either facilitate our relation to the actual world, or it can pre-empt it. Science and technology are the problem here. The mastery of deep information environments enables ever greater degrees of shallow information capture. The better our zombie natures are understood, the more effectively our reward systems are tuned, and the deeper our descent into this or that variety of fantasy becomes. This is the dystopic image of Akratic society, a civilization ever more divided between deep and shallow information consumers, between those managing the mechanisms, and those captured in some kind of semantic cheat space.