Three Pound Brain

No bells, just whistling in the dark…

The Knowledge Illusion Illusion

by rsbakker



When academics encounter a new idea that doesn’t conform to their preconceptions, there’s often a sequence of three reactions: first dismiss, then reject, then finally declare it obvious. Steven Sloman and Philip Fernbach, The Knowledge Illusion, 255


The best example illustrating the thesis put forward in Steven Sloman and Philip Fernbach’s excellent The Knowledge Illusion: Why We Never Think Alone is one I’ve belaboured before, the bereft ‘well-dressed man’ in Byron Haskin’s 1953 version of The War of the Worlds, dismayed at his malfunctioning pile of money, unable to comprehend why it couldn’t secure him passage out of Los Angeles. So keep this in mind: if all goes well, we shall return to the well-dressed man.

The Knowledge Illusion is about a great many things, everything from basic cognitive science to political polarization to educational reform, but it all comes back to how individuals are duped by the ways knowledge outruns individual human brains. The praise for this book has been nearly universal, and deservedly so, given the existential nature of the ‘knowledge problematic’ in the technological age. Because of this consensus, however, I’ll play the devil’s advocate and focus on what I think are core problems. For all the book’s virtues, I think Steven Sloman, Professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, and Philip Fernbach, Assistant Professor at the University of Colorado, find themselves wandering the same traditional dead ends afflicting all philosophical and psychological discourses on the nature of human knowledge. The sad fact is nobody knows what knowledge is. They only think they do.

Sloman and Fernbach begin with a consideration of our universal tendency to overestimate our understanding. In a wide variety of tests, individuals regularly fail to provide first order evidence regarding second order reports of what they know. So for instance, they say they understand how toilets or bicycles work, yet find themselves incapable of accurately drawing the mechanisms responsible. Thus the ‘knowledge illusion,’ or the ‘illusion of explanatory depth,’ the consistent tendency to think our understanding of various phenomena and devices is far more complete than it in fact is.

This calves into two interrelated questions: 1) Why are we so prone to think we know more than we do? and 2) How can we know so little yet achieve so much? Sloman and Fernbach think the answer to both these questions lies in the way human cognition is embodied, embedded, and enactive, which is to say, the myriad ways it turns on our physical and social environmental interactions. They also hold the far more controversial position that cognition is extended, that ‘mind,’ understood as a natural phenomenon, just ain’t in our heads. As they write:

The main lesson is that we should not think of the mind as an information processor that spends its time doing abstract computation in the brain. The brain and the body and the external environment all work together to remember, reason, and make decisions. The knowledge is spread through the system, beyond just the brain. Thought does not take place on a stage inside the brain. Thought uses knowledge in the brain, the body, and the world more generally to support intelligent action. In other words, the mind is not in the brain. Rather, the brain is in the mind. The mind uses the brain and other things to process information. 105

The Knowledge Illusion, in other words, lies astride the complicated fault-line between cognitivism, the tendency to construe cognition as largely representational and brain-bound, and post-cognitivism, the tendency to construe cognition as constitutively dependent on the community and environment. Since the book is not only aimed at a general audience but also about the ways humans are so prone to confuse partial for complete accounts, it is more than ironic that Sloman and Fernbach fail to contextualize the speculative, and therefore divisive, nature of their project. Charitably, you could say The Knowledge Illusion runs afoul the very ‘curse of knowledge’ illusion it references throughout, the failure to appreciate the context of cognitive reception—the tendency to assume that others know what you know, and so will draw similar conclusions. Less charitably, the suspicion has to be that Sloman and Fernbach are actually relying on the reader’s ignorance to cement their case. My guess is that the answer lies somewhere in the middle, and that the authors, given their sensitivity to the foibles and biases built into human communication and cognition, would acknowledge as much.

But the problem runs deeper. The extended mind hypothesis is subject to a number of apparently decisive counter-arguments. One could argue a la Adams and Aizawa, for instance, and accuse Sloman and Fernbach of committing the so-called ‘causal-constitutive fallacy,’ mistaking causal influences on cognition for cognition proper. Even if we do accept that external factors are constitutive of cognition, the question becomes one of where cognition begins and ends. What is the ‘mark of the cognitive’? After all, ‘environment’ potentially includes the whole of the physical universe, and ‘community’ potentially reaches back to the origins of life. Should we take a page from Hegel and conclude that everything is cognitive? If our minds outrun our brains, then just where do they end?

So far, every attempt to overcome these and other challenges has only served to complicate the controversy. Cognitivism remains a going concern for good reason: it captures a series of powerful second-order intuitions regarding the nature of human cognition, intuitions that post-cognitivists like Sloman and Fernbach would have us set aside on the basis of incompatible second-order intuitions regarding that self-same nature. Where the intuitions milked by cognitivism paint an internalist portrait of knowledge, the intuitions milked by post-cognitivism sketch an externalist landscape. Back and forth the arguments go, each side hungry to recruit the latest scientific findings into their explanatory paradigms. At some point, the unspoken assumption seems to be, the abductive weight supporting either position will definitively tip in favour of either one or the other. By the time we return to our well-dressed man and his heap of useless money, I hope to show how and why this will never happen.

For the nonce, however, the upshot is that either way you cut it, knowledge, as the subject of theoretical investigation, is positively awash in illusions, intuitions that seem compelling, but just ain’t so. For some profound reason, knowledge and other so-called ‘intentional phenomena’ baffle us in ways distinct from all other natural phenomena with the exception of consciousness. This is the sense in which one can speak of the Knowledge Illusion Illusion.

Let’s begin with Sloman and Fernbach’s ultimate explanation for the Knowledge Illusion:

The Knowledge Illusion occurs because we live in a community of knowledge and we fail to distinguish the knowledge that is in our heads from the knowledge outside of it. We think the knowledge we have about how things work sits inside our skulls when in fact we’re drawing a lot of it from the environment and from other people. This is as much a feature of cognition as it is a bug. The world and our community house most of our knowledge base. A lot of human understanding consists simply of awareness that the knowledge is out there. 127-128

The reason we presume knowledge sufficiency, in other words, is that we fail to draw a distinction between individual knowledge and collective knowledge, between our immediate know-how and know-how requiring environmental and social mediation. Put differently, we neglect various forms of what might be called cognitive dependency, and so assume cognitive independence, the ability to answer questions and solve problems absent environmental and social interactions. We are prone to forget, in other words, that our minds are actually extended.

This seems elegant and straightforward enough: as any parent (or spouse) can tell you, humans are nothing if not prone to take things for granted! We take the contributions of our fellows for granted, and so reliably overestimate our own epistemic wherewithal. But something peculiar has happened. Framed in these terms, the knowledge illusion suddenly bears a striking resemblance to the correspondence or attribution error, our tendency to put our fingers on our side of the scales when apportioning social credit. We generally take ourselves to have more epistemic virtue than we in fact possess for the same reason we generally take ourselves to have more virtue than we in fact possess: because ancestrally, confabulatory self-promotion paid greater reproductive dividends than accurate self-description. The fact that we are more prone to overestimate epistemic virtue given accessibility to external knowledge sources, on this account, amounts to no more than the awareness that we have resources to fall back on, should someone ‘call bullshit.’

There’s a great deal that could be unpacked here, not the least of which is the way changing demonstrations of knowledge into demonstrations of epistemic virtue radically impacts the case for the extended mind hypothesis. But it’s worth considering, I think, how this alternative explanation illuminates an earlier explanation they give of the illusion:

So one way to conceive of the illusion of explanatory depth is that our intuitive system overestimates what it can deliberate about. When I ask you how a toilet works, your intuitive system reports, “No problem, I’m very comfortable with toilets. They are part of my daily experience.” But when your deliberative system is probed by a request to explain how they work, it is at a loss because your intuitions are only superficial. The real knowledge lies elsewhere. 84

In the prior explanation, the illusion turns on confusing our individual with our collective resources. We presume that we possess knowledge that other people have. Here, however, the illusion turns on the superficiality of intuitive cognition. “The real knowledge lies elsewhere” plays no direct explanatory role whatsoever. The culprit here, if anything, lies with what Daniel Kahneman terms WYSIATI, or ‘What-You-See-Is-All-There-Is,’ effects, the way subpersonal cognitive systems automatically presume the cognitive sufficiency of whatever information/capacity they happen to have at their disposal.

So, the question is, do we confabulate cognitive independence because subpersonal cognitive processing lacks the metacognitive monitoring capacity to flag problematic results, or because such confabulations facilitated ancestral reproductive success, or because our blindness to the extended nature of knowledge renders us prone to this particular type of metacognitive error?

The first two explanations, at least, can be combined. Given the divide and conquer structure of neural problem-solving, the presumptive cognitive sufficiency (WYSIATI) of subpersonal processing is inescapable. Each phase of cognitive processing turns on the reliability of the phases preceding (which is why we experience sensory and cognitive illusions rather than error messages). If those illusions happen to facilitate reproduction, as they often do, then we end up with biological propensities to commit things like epistemic attribution errors. We both think and declare ourselves more knowledgeable than we in fact are.

Blindness to the ‘extended nature of knowledge,’ on this account, doesn’t so much explain the knowledge illusion as follow from it.

The knowledge illusion is primarily a metacognitive and evolutionary artifact. This actually follows as an empirical consequence of the cornerstone commitment of Sloman and Fernbach’s own theory of cognition: the fact that cognition is fractionate and heuristic, which is to say, ecological. This becomes obvious, I think, but only once we see our way past the cardinal cognitive illusion afflicting post-cognitivism.

Sloman and Fernbach, like pretty much everyone writing popular accounts of embodied, embedded, and enactive approaches to cognitive science, provide the standard narrative of the rise and fall of GOFAI, standard computational approaches to cognition. Cognizing, on this approach, amounts to recapitulating environmental systems within universal computational systems, going through the enormous expense of doing in effigy in order to do in the world. Not only is such an approach expensive, it requires prior knowledge of what needs to be recapitulated and what can be ignored—tossing the project into the infamous jaws of the Frame Problem. A truly general cognitive system is omni-applicable, capable of solving any problem in any environment, given the requisite resources. The only way to assure that ecology doesn’t matter, however, is to have recapitulated that ecology in advance.

The question from a biological standpoint is simply one of why we need to go through all the bother of recapitulating a problem-solving ecology when that ecology is already there, challenging us, replete with regularities we can exploit without needing to know whatsoever. “This assumption that the world is behaving normally gives people a giant crutch,” as Sloman and Fernbach put it. “It means that we don’t have to remember everything because the information is stored in the world” (95). All cognition requires are reliable interactive systematicities—cognitive ecologies—to steer organisms through their environments. Heuristics are the product of cognitive systems adapted to the exploitation of the correlations between regularities available for processing and environmental regularities requiring solution. And since the regularities happened upon, cues, are secondary to the effects they enable, heuristic systems are always domain specific. They don’t travel well.

And herein lies the rub for Sloman and Fernbach: If the failure of cognitivism lies in its insensitivity to cognitive ecology, then the failure of post-cognitivism lies in its insensitivity to metacognitive ecology, the fact that intentional modes of theorizing cognition are themselves heuristic. Humans had need to troubleshoot claims, to distinguish guesswork from knowledge. But they possessed no access whatsoever to the high-dimensional facts of the matter, so they made do with what was available. Our basic cognitive intuitions facilitate this radically heuristic ‘making do,’ allowing us to debug any number of practical communicative problems. The big question is whether they facilitate anything theoretical. If intentional cognition turns on systems selected to solve practical problem ecologies absent information, why suppose it possesses any decisive theoretical power? Why presume, as post-cognitivists do, that the theoretical problem of intentional cognition lies within the heuristic purview of intentional cognition?

Its manifest inapplicability, I think, can be clearly discerned in The Knowledge Illusion. Consider Sloman and Fernbach’s contention that the power of heuristic problem-solving turns on the ‘deep’ and ‘abstract’ nature of the information exploited by heuristic cognitive systems. As they write:

Being smart is all about having the ability to extract deeper, more abstract information from the flood of data that comes into our senses. Instead of just reacting to the light, sounds, and smells that surround them, animals with sophisticated large brains respond to deep, abstract properties of the world that they are sensing. 46

But surely ‘being smart’ lies in the capacity to find, not abstracta, but tells, sensory features possessing reliable systematic relationships to deep environments. There’s nothing ‘deep’ or ‘abstract’ about the moonlight insects use to navigate at night—which is precisely why transverse orientation is so easily hijacked by bug-zappers and porch-lights. There’s nothing ‘deep’ or ‘abstract’ about the tastes triggering aversion in rats, which is why taste aversion is so easily circumvented by using chronic rodenticides. Animals with more complex brains, not surprisingly, can discover and exploit more tells, which can also be hijacked, cued ‘out of school.’ We bemoan the deceptive superficiality of political and commercial marketing for good reason! It’s unclear what ‘deeper’ or ‘more abstract’ add here, aside from millennial disputation. And yet Sloman and Fernbach continue, “[t]he reason that deeper, more abstract information is helpful is that it can be used to pick out what we’re interested in from an incredibly complex array of possibilities, regardless of how the focus of our interest presents itself” (46).

If a cue, or tell—be it a red beak or a prolonged stare or a scarlet letter—possesses some exploitable systematic relationship to some environmental problem, then nothing more is needed. Talk of ‘depth’ or ‘abstraction’ plays no real explanatory function, and invites no little theoretical mischief.

The term ‘depth’ is perhaps the biggest troublemaker, here. Insofar as human cognition is heuristic, we dwell in shallow information environments, ancestral need-to-know ecologies, remaining (in all the myriad ways Sloman and Fernbach describe so well) entirely ignorant of the deeper environment, and the super-complex systems comprising them. What renders tells so valuable is their availability, the fact that they are at once ‘superficial’ and systematically correlated to the neglected ‘deeps’ requiring solution. Tells possess no intrinsic mark of their depth or abstraction. It is not the case that “[a]s brains get more complex, they get better at responding to deeper, more abstract cues from the environment, and this makes them ever more adaptive to new situations” (48). What is the case is far more mundane: they get better at devising, combining, and collecting environmental tells.

And so, one finds Sloman and Fernbach at metaphoric war with themselves:

It is rare for us to directly perceive the mechanisms that create outcomes. We experience our actions and we experience the outcomes of those actions; only by peering inside the machine do we see the mechanism that makes it tick. We can peer inside when the components are visible. 73

As they go on to admit, “[r]easoning about social situations is like reasoning about physical objects: pretty shallow” (75).

The Knowledge Illusion is about nothing if not the superficiality of human cognition, and all the ways we remain oblivious to this fact because of this fact. “Normal human thought is just not engineered to figure out some things” (71), least of all the deep/fundamental abstracta undergirding our environment! Until the institutionalization of science, we were far more vulture than lion, information scavengers instead of predators. Only the scientific elucidation of our deep environments reveals how shallow and opportunistic we have always been, how reliant on ancestrally unfathomable machinations.

So then why do Sloman and Fernbach presume that heuristic cognition grasps things both abstract and deep?

The primary reason, I think, turns on the inevitably heuristic nature of our attempts to cognize cognition. We run afoul these heuristic limits every time we look up at the night sky. Ancestrally, light belonged to those systems we could take for granted; we had no reason to intuit anything about its deeper nature. As a result, we had no reason to suppose we were plumbing different pockets of the ancient past whenever we paused to gaze into the night sky. Our ability to cognize the medium of visual cognition suffers from what might be called medial neglect. We have to remind ourselves we’re looking across gulfs of time because the ecological nature of visual cognition presumes the ‘transparency’ of light. It vanishes into what it reveals, generating a simultaneity illusion.

What applies to vision applies to all our cognitive apparatuses. Medial neglect, in other words, characterizes all of our intuitive ways of cognizing cognition. At nearly every turn, the enabling dimension of our cognitive systems is consigned to oblivion, generating, upon reflection, the metacognitive impression of ‘transparency,’ or ‘aboutness’—intentionality in Brentano’s sense. So when Sloman and Fernbach attempt to understand the cognitive nature of heuristic selectivity, they cue the heuristic systems we evolved to solve practical epistemic problems absent any sensitivity to the actual systems responsible, and so run afoul a kind of ‘transparency illusion,’ the notion that heuristic cognition requires fastening onto something intrinsically simple and out there—a ‘truth’ of some description, when all our brain needs to do is identify some serendipitously correlated cue in its sensory streams.

This misapprehension is doubly attractive, I think, for the theoretical cover it provides their contention that all human cognition is causal cognition. As they write:

… the purpose of thinking is to choose the most effective action given the current situation. That requires discerning the deep properties that are constant across situations. What sets humans apart is our skill at figuring out what those deep, invariant properties are. It takes human genius to identify the key properties that indicate if someone has suffered a concussion or has a communicable disease, or that it’s time to pump up a car’s tires. 53

In fact, they go so far as to declare us “the world’s master causal thinkers” (52)—a claim they spend the rest of the book qualifying. As we’ve seen, humans are horrible at understanding how things work: “We may be better at causal reasoning than other kinds of reasoning, but the illusion of explanatory depth shows that we are still quite limited as individuals in how much of it we can do” (53).

So, what gives? How can we be both causal idiots and causal savants?

Once again, the answer lies in their own commitments. Time and again, they demonstrate the way the shallowness of human cognition prevents us from cognizing that shallowness as such. The ‘deep abstracta’ posited by Sloman and Fernbach constitute a metacognitive version of the very illusion of explanatory depth they’re attempting to solve. Oblivious to the heuristic nature of our metacognitive intuitions, they presume those intuitions deep, theoretically sufficient ways to cognize the structure of human cognition. Like the physics of light, the enabling networks of contingent correlations assuring the efficacy of various tells get flattened into oblivion—the mediating nature vanishes—and the connection between heuristic systems and the environments they solve becomes an apparently intentional one, with ‘knowing’ here, ‘known’ out there, and nothing in between. Rather than picking out strategically connected cues, heuristic cognition isolates ‘deep causal truths.’

How can we be both idiots and savants when it comes to causality? The fact is, all cognition is not causal cognition. Some cognition is causal, while other cognition—the bulk of it—is correlative. What Sloman and Fernbach systematically confuse are the kinds of cognitive efficacy belonging to the isolation of actual mechanisms with the kinds of cognitive efficacy belonging to the isolation of tells possessing unfathomable (‘deep’) correlations to those mechanisms. The latter cognition, if anything, turns on ignoring the actual causal regularities involved. This is what makes it both so cheap and so powerful (for both humans and AI): it relieves us of the need to understand the deeper nature of things, allowing us to focus on what happens next.

Although some predictions turn on identifying actual causes, those requiring the heuristic solution of complex systems turn on identifying tells, triggers that are systematically correlated precursors to various significant events. Given our metacognitive neglect of the intervening systems, we regularly fetishize the tells available, take them to be the causes of the kinds of effects we require. Sloman and Fernbach’s insistence on the causal nature of human cognition commits this very error: it fetishizes heuristic cues. (Or to use Klaus Fiedler’s terminology, it mistakes pseudocontingencies for genuine contingencies; or to use Andrei Cimpian’s, it fails to recognize a kind of ‘inherence heuristic’ as heuristic.)

The power of predictive reasoning turns on the plenitude of potential tells, our outright immersion in environmental systematicities. No understanding of celestial mechanics is required to use the stars to anticipate seasonal changes and so organize agricultural activities. The cost of this immersion, on the other hand, is the inverse problem, the problem of isolating genuine causes as opposed to mere correlations on the basis of effects. In diagnostic reasoning, the sheer plenitude of correlations is the problem: finding causes amounts to finding needles in haystacks, sorting systematicities that are genuinely counterfactual from those that are not. Given this difficulty, it should come as no surprise that problems designed to cue predictive deliberation tend to neglect the causal dimension altogether. Tells, even when fetishized, imbued with causal powers, stand entirely on their own.

Sloman and Fernbach’s explanation of ‘alternative cause neglect’ thoroughly illustrates, I think, the way cognitivism and post-cognitivism have snarled cognitive psychology in the barbed wire of incompatible intuitions. They also point out the comparative ease of predictive versus diagnostic reasoning. But where the above sketch explains this disparity in thoroughly ecological terms, their explanation is decidedly cognitivist: we recapitulate systems, they claim, run ‘mental simulations’ to explore the space of possible effects. Apparently, running these tapes backward to explore the space of possible causes is not something nature has equipped us to do, at least easily. “People ignore alternative causes when reasoning from cause to effect,” they contend, “because their mental simulations have no room for them, and because we’re unable to run mental simulations backward in time from effect to cause” (61).

Even setting aside the extravagant metabolic expense their cognitivist tack presupposes, it’s hard to understand how this explains much of anything, let alone how the difference between these two modes figures in the ultimate moral of Sloman and Fernbach’s story: the social intransigence of the knowledge illusion.

Toward the end of the book, they provide a powerful and striking picture of the way false beliefs seem to have little, if anything, to do with the access to scientific facts. The provision of reasons likewise has little or no effect. People believe what their group believes, thus binding generally narcissistic or otherwise fantastic worldviews to estimations of group membership and identity. For Sloman and Fernbach, this dovetails nicely with their commitment to extended minds, the fact that ‘knowing’ is fundamentally collective.

Beliefs are hard to change because they are wrapped up with our values and identities, and they are shared with our community. Moreover, what is actually in our own heads—our causal models—are sparse and often wrong. This explains why false beliefs are so hard to weed out. Sometimes communities get the science wrong, usually in ways supported by our causal models. And the knowledge illusion means that we don’t check our understanding often or deeply enough. This is a recipe for antiscientific thinking. 169

But it’s not simply the case that reports of belief signal group membership. One need only think of the ‘kooks’ or ‘eccentrics’ in one’s own social circles (and fair warning, if you can’t readily identify one, that likely means you’re it!) to bring home the cognitive heterogeneity one finds in every community, people who demonstrate reliability in some other way (like my wife’s late uncle who never once attended church, but who cut the church lawn every week all the same).

Like every other animal on this planet, we’ve evolved to thrive in shallow cognitive ecologies, to pick what we need when we need it from wherever we can, be it the world or one another. We are cooperative cognitive scavengers, which is to say, we live in communal shallow cognitive ecologies. The cognitive reports of ingroup members, in other words, are themselves powerful tells, correlations allowing us to predict what will happen next absent deep environmental access or understanding. As an outgroup commentator on these topics, I’m intimately acquainted with the powerful way the who trumps the what in claim-making. I could raise a pyramid with all the mud and straw I’ve accumulated! But this has nothing to do with the ‘intrinsically communal nature of knowledge,’ and everything to do with the way we are biologically primed to rely on our most powerful ancestral tools. It’s not simply that we ‘believe to belong,’ but because, ancestrally speaking, it provided an extraordinarily metabolically cheap way to hack our natural and social environments.

So cheap and powerful, in fact, we’ve developed linguistic mechanisms, ‘knowledge talk,’ to troubleshoot cognitive reports.

And this brings us back to the well-dressed man in The War of the Worlds, left stranded with his useless bills, dumbfounded by the sudden impotence of what had so reliably commanded the actions of others in the past. Paper currency requires vast systems of regularities to produce the local effects we all know and love and loathe. Since these local, or shallow, effects occur whether or not we possess any inkling of the superordinate, deep, systems responsible, we can get along quite well simply supposing, like the well-dressed man, that money possesses this power on its own, or intrinsically. Pressed to explain this intrinsic power, to explain why this paper commands such extraordinary effects, we posit a special kind of property, value.

What the well-dressed man illustrates, in other words, is the way shallow cognitive ecologies generate illusions of local sufficiency. We have no access to the enormous amount of evolutionary, historical, social, and personal stage-setting involved when our doctor diagnoses us with depression, so we chalk it up to her knowledge, not because any such thing exists in nature, but because it provides us a way to communicate and troubleshoot an otherwise incomprehensible local effect. How did your doctor make you better? Obviously, she knows her stuff!

What could be more intuitive?

But then along comes science, and lo, we find ourselves every bit as dumbfounded when asked to causally explain knowledge as (to use Sloman and Fernbach’s examples) when asked to explain toilets or bicycles or vaccination or climate warming or why incest possessing positive consequences is morally wrong. Given our shallow metacognitive ecology, we presume that the heuristic systems applicable to troubleshooting practical cognitive problems can solve the theoretical problem of cognition as well. When we go looking for this or that intentional formulation of ‘knowledge’ (because we cannot even agree on what it is we want to explain) in the head, we find ourselves, like the well-dressed man, even more dumbfounded. Rather than finding anything sufficient, we discover more and more dependencies, evidence of the way our doctor’s ability to cure our depression relies on extrinsic environmental and social factors. But since we remain committed to our fetishization of knowledge, we conclude that knowledge, whatever it is, simply cannot be in the head. Knowledge, we insist, must be nonlocal, reliant on natural and social environments. But of course, this cuts against the very intuition of local sufficiency underwriting the attribution of knowledge in the first place. Sure, my doctor has a past, a library, and a community, but ultimately, it’s her knowledge that cures my depression.

And so, cognitivism and post-cognitivism find themselves at perpetual war, disputing theoretical vocabularies possessing local operational efficacy in everyday or specialized experimental contexts, but perpetually deferring the possibility of any global, genuinely naturalistic understanding of human cognition. The strange fact of the matter is that there’s no such thing or function as ‘knowledge’ in nature, nothing deep to redeem our shallow intuitions, though knowledge talk (which is very real) takes us a long way to resolve a wide variety of practical problems. The trick isn’t to understand what knowledge ‘really is,’ but rather to understand the deep, supercomplicated systems underwriting the optimization of behaviour, and how they underwrite our shallow intuitive and deliberative manipulations. Insofar as knowledge talk forms a component of those systems, we must content ourselves with studying ‘knowledge’ as a term rather than an entity, leaving intentional cognition to solve what problems it can where it can. The time has come to leave both cognitivism and post-cognitivism behind, and to embrace genuinely post-intentional approaches, such as the ecological eliminativism espoused here.

The Knowledge Illusion, in this sense, provides a wonderful example of crash space, the way in which the introduction of deep, scientific information into our shallow cognitive ecologies is prone to disrupt or delude or simply fall flat altogether. Intentional cognition provides a way for us to understand ourselves and each other while remaining oblivious to any of the deep machinations actually responsible. To suffer ‘medial neglect’ is to be blind to one’s actual sources, to comprehend and communicate human knowledge, experience, and action via linguistic fetishes, irreducible posits possessing inexplicable efficacies, entities fundamentally incompatible with the universe revealed by natural science.

For all the conceits Sloman and Fernbach reveal, they overlook and so run afoul of perhaps the greatest, most astonishing conceit of them all: the notion that we should have evolved the basic capacity to intuit our own deepest nature, that hunches belonging to our shallow ecological past could show us the way into our deep nature, rather than lead us, on pain of systematic misapplication, into perplexity. The time has come to dismantle the glamour we have raised around traditional philosophical and psychological speculation, to stop spinning abject ignorance into evidence of glorious exception, and to see our millennial dumbfounding as a symptom, an artifact of a species that has stumbled into the trap of interrogating its heuristic predicament using shallow heuristic tools that have no hope of generating deep theoretical solutions. The knowledge illusion illusion.


On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate

by rsbakker

I hate people. Or so I used to tell myself in the thick of this or that adolescent crowd. Like so many other teens, my dawning social awareness occasioned not simply anxiety, but agony. Everyone else seemed to have the effortless manner, the well-groomed confidence, that I could only pretend to have. Lord knows I would try to tell amusing anecdotes, to make rooms boom with humour and admiration, but my voice would always falter, their attention would always wither, and I would find myself sitting alone with my butterflies. I had no choice but to hate other people: I needed them too much, and they needed me not at all. Never in my life have I felt so abandoned, so alone, as I did those years. Rarely have I felt such keen emotional pain.

Only later would I learn that I was anything but alone, that a great number of my peers felt every bit as alienated as I did. Adolescence represents a crucial juncture in the developmental trajectory of the human brain, the time when the neurocognitive tools required to decipher and navigate the complexities of human social life gradually come online. And much as the human immune system requires real-world feedback to discriminate between pathogens and allergens, human social cognition requires the pain of social failure to learn the secrets of social success.

Humans, like all other forms of life on this planet, require certain kinds of ecologies to thrive. As so-called ‘feral children’ dramatically demonstrate, the absence of social feedback at various developmental junctures can have catastrophic consequences.

So what happens when we introduce artificial agents into our social ecology? The pace of development is nothing short of boggling. We are about to witness a transformation in human social ecology without evolutionary let alone historical precedent. And yet the debate remains fixated on jobs or the prospects of apocalyptic superintelligences.

The question we really need to be asking is what happens when we begin talking to our machines more than to each other. What does it mean to dwell in social ecologies possessing only the appearance of love and understanding?

“Hell,” as Sartre famously wrote, “is other people.” Although the sentiment strikes a chord in most everyone, the facts of the matter are somewhat more complex. The vast majority of those placed in prolonged solitary confinement, it turns out, suffer a mixture of insomnia, cognitive impairment, depression, and even psychosis. The effects of social isolation are so dramatic, in fact, that the research has occasioned a worldwide condemnation of punitive segregation. Hell, if anything, would seem to be the absence of other people.

The reason for this is that we are a fundamentally social species, ‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed. To understand just how social we are, you need only watch the famous Heider-Simmel illusion, a brief animation portraying the movements of a small circle, a small triangle, and a larger triangle, in and about a motionless, hollow square. Objectively speaking, all one sees are a collection of shapes moving relative one another and the hollow square. But despite the radical absence of information, nearly everyone watching the animation sees a little soap opera, usually involving the big triangle attempting to prevent the union of the small triangle and circle.

This leap from shapes to soap operas reveals, in dramatic fashion, just how little information we require to draw enormous social conclusions. Human social cognition is very easy to trigger out of school, as our ancient tendency to ‘anthropomorphize’ our natural surroundings shows. Not only are we prone to see faces in things like flaking paint or water stains, we’re powerfully primed to sense minds as well—so much so that segregated inmates often begin perceiving them regardless. As Brian Keenan, who was held by Islamic Jihad from 1986 to 1990, says of the voices he heard, “they were in the room, they were in me, they were coming from me but they were audible to no one else but me.”

What does this have to do with the impact of AI? More than anyone has yet imagined.


The problem, in a nutshell, is that other people aren’t so much heaven or hell as both. Solitary confinement, after all, refers to something done to people by other people. The argument to redefine segregation as torture finds powerful support in evidence showing that social exclusion activates the same regions of the brain as physical pain. At some point in our past, it seems, our social attachment systems coopted the pain system to motivate prosocial behaviors. As a result, the mere prospect of exclusion triggers analogues of physical suffering in human beings.

But as significant as this finding is, the experimental props used to derive these findings are even more telling. The experimental paradigm typically used to neuroimage social rejection turns on a strategically deceptive human-computer interaction, or HCI. While entombed in an fMRI, subjects are instructed to play an animated three-way game of catch—called ‘Cyberball’—with what they think are two other individuals on the internet, but which is in fact a program designed to initially include, then subsequently exclude, the subject. As the other ‘players’ begin throwing more and more passes to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.

As one might imagine, Silicon Valley has taken notice.

The HCI field finds its roots in the 1960s with the research of Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Even given the rudimentary computing power at his disposal, his ‘Eliza’ program, which relied on simple matching and substitution protocols to generate questions, was able to cue strong emotional reactions in many subjects. As it turns out, people regularly exhibit what the late Clifford Nass called ‘mindlessness,’ the reliance on automatic scripts, when interacting with artificial agents. Before you scoff at the notion, recall the 2015 Ashley Madison hack, and the subsequent revelation that the site had deployed more than 70,000 bots to conjure the illusion of endless extramarital possibility. These bots, like Eliza, were simple, mechanical affairs, but given the context of Ashley Madison, their behaviour apparently convinced millions of men that some kind of (promising) soap opera was afoot.
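To see just how mechanical such a system can be, here is a minimal sketch of the general matching-and-substitution idea behind Eliza-style programs: a handful of regular-expression rules, plus a pronoun swap so the echoed fragment sounds addressed back to the speaker. The specific rules and word-swaps below are illustrative placeholders, not Weizenbaum’s actual script.

```python
import re

# Swap first- and second-person words so the echoed fragment
# sounds addressed to the user ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Each rule pairs a pattern with a question template; {0} receives
# the reflected captured fragment. Rules are tried in order.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Apply the pronoun swap word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's question, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    return "Please go on."
```

So `respond("I am sad about my job.")` yields “How long have you been sad about your job?”: no model of sadness or jobs anywhere, just string substitution, yet enough, in the right context, to cue the feeling of being heard.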

The great paradox, of course, is that those automatic scripts belong to the engine of ‘mindreading,’ our ability to predict, explain, and manipulate our fellow human beings, not to mention ourselves. They only stand revealed as mechanical, ‘mindless,’ when tasked to cognize something utterly without evolutionary precedent: an artificial agent. Our power to peer into one another’s souls, in other words, becomes little more than a grab-bag of exploitable reflexes in the presence of AI.

The claim boggles, I admit, but from a Darwinian perspective, it’s hard to see how things could be otherwise. Our capacity to solve one another is largely a product of our hunter-gatherer past, which is to say, environments where human intelligence was the only game in town. Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources? The cues underwriting human social cognition may seem robust, but this is an artifact of ecological stability, the fact that our blind trust in our shared social biology has served so far. We always presume our environments indestructible. As the species responsible for the ongoing Anthropocene extinction, we have a long history of recognizing ecological peril only after the fact.

Sherry Turkle, MIT professor and eminent author of Alone Together, has been warning of what she calls “Darwinian buttons” for over a decade now. Despite the explosive growth in Human-Computer Interaction research, her concerns remain, at best, a passing consideration. As part of our unconscious, automatic cognitive systems, we have no conscious awareness that such buttons even exist. They are, to put it mildly, easy to overlook. Add to this the overwhelming institutional and economic incentive to exploit these cues, and the AI community’s failure to consider Turkle’s misgivings seems all but inevitable.

Like almost all scientists, researchers in the field harbor only the best of intentions, and the point of AI, as they see it, is to empower consumers, to give them what they want. The vast bulk of ongoing research in Human-Computer Interaction is aimed at “improving the user experience,” identifying what cues trust instead of suspicion, attachment instead of avoidance. Since trust requires competence, a great deal of the research remains focused on developing the core cognitive competencies of specialized AI systems—and recent advances on this front have been nothing if not breathtaking. But the same can be said regarding interpersonal competencies as well—enough to inspire Clifford Nass and Corina Yen to write The Man Who Lied to His Laptop, a book touted as the How to Win Friends and Influence People of the 21st century. In the course of teaching machines how to better push our buttons, we’re learning how to better push them as well.

Precisely because it is so easily miscued, human social cognition depends on trust. Shapes, after all, are cheap, while soap operas represent a potential goldmine. This explains our powerful, hardwired penchant for tribalism: the intimacy of our hunter-gatherer past all but assured trustworthiness, providing a cheap means of nullifying our vulnerability to social deception. When Trump decries ‘fake news,’ for instance, what he’s primarily doing is signaling group membership. He understands, the instinctive way we all understand, that the best way to repudiate damaging claims is to circumvent them altogether, and focus on the group membership of the claimer. Trust, the degree we can take one another for granted, is the foundation of cooperative interaction.

We are about to be deluged with artificial friends. In a recent roundup of industry forecasts, Forbes reports that AI related markets are already growing, and expected to continue growing, by more than 50% per annum. Just last year, Microsoft launched its Bot Framework service, a public platform for creating ‘conversational user interfaces’ for a potentially endless variety of commercial purposes, all of it turning on Microsoft’s rapidly advancing AI research. “Build a great conversationalist,” the site urges. “Build and connect intelligent bots to interact with your users naturally wherever they are…” Of course, the term “naturally,” here, refers to the seamless way these inhuman systems cue our human social cognitive systems. Learning how to tweak, massage, and push our Darwinian buttons has become an out-and-out industrial enterprise.

As mentioned above, Human-Human Interaction consists of pushing these buttons all the time, prompting automatic scripts that prompt further automatic scripts, with only the rare communicative snag giving us pause for genuine conscious deliberation. It all works simply because our fellow humans comprise the ancestral ecology of social cognition. As it stands, cuing social cognitive reflexes out of school is largely the province of magicians, con artists, and political demagogues. Seen in this light, the AI revolution looks less a cornucopia of marvels than the industrialized unleashing of endless varieties of invasive species—an unprecedented overthrow of our ancestral social cognitive habitats.

A habitat that, arguably, is already under severe duress.

In 2006, Maki Fukasawa coined the term ‘herbivore men’ to describe the rising number of Japanese males expressing disinterest in marital or romantic relationships with women. And the numbers have only continued to rise. A 2016 National Institute of Population and Social Security Research survey reveals that 42 percent of Japanese men between the ages of 18 and 34 remain virgins, up six percentage points from a mere five years earlier. For Japan, a nation already struggling with the economic consequences of depopulation, such numbers are disastrous.

And Japan is not alone. In Man, Interrupted: Why Young Men are Struggling and What We Can Do About It, Philip Zimbardo (of Stanford Prison Experiment fame) and Nikita Coulombe provide a detailed account of how technological transformations—primarily online porn, video-gaming, and virtual peer groups—are undermining the ability of American boys to academically achieve as well as maintain successful relationships. They see phenomena such as the growing MGTOW (‘men going their own way’) movement as the product of the way exposure to virtual, technological environments leaves them ill-equipped to deal with the rigours of genuine social interaction.

More recently, Jean Twenge, a psychologist at San Diego State University, has sounded the alarm on the catastrophic consequences of smartphone use for post-Millennials, arguing that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever.” The primary culprit: loneliness. “For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.” Social media, in other words, seem to be playing the same function as the Cyberball game used by researchers to neuroimage the pain of social rejection. Only this time the experiment involves an entire generation of kids, and the game has no end.

The list of curious and troubling phenomena apparently turning on the ways mere connectivity has transformed our social ecology is well-nigh endless. Merely changing how we push one another’s Darwinian buttons, in other words, has impacted the human social ecology in historically unprecedented ways. And by all accounts, we find ourselves becoming more isolated, more alienated, than at any other time in human history.

So what happens when we change the who? What happens when the heaven of social belonging goes on sale?

Good question. There is no “Centre for the Scientific Study of Human Meaning” in the world. Within the HCI community, criticism is primarily restricted to the cognitivist/post-cognitivist debate, the question of whether cognition is intrinsically independent of or dependent on an agent’s ongoing environmental interactions. As the preceding should make clear, numerous disciplines find themselves wandering this or that section of the domain, but we have yet to organize any institutional pursuit of the questions posed here. Human social ecology, the study of human interaction in biologically amenable terms, remains the province of storytellers.

We quite literally have no clue as to what we are about to do.

Consider Mark Zuckerberg’s and Elon Musk’s recent ‘debate’ regarding the promise and threat of AI. Musk, of course, has garnered headlines for quite some time with fears of artificial superintelligence. He’s famously called AI “our biggest existential threat,” openly referring to Skynet and the prospect of robots mowing down civilians on the streets. On a Sunday this past July, Zuckerberg went live in his Palo Alto backyard while smoking meats to host an impromptu Q&A. At the fifty-minute mark, he fields a question regarding Musk’s fears, responding, “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it. It’s really negative and in some ways I think it’s pretty irresponsible.”

On the Tuesday following, Musk tweeted in response: “I’ve talked to Mark about this. His understanding of the subject is limited.”

To the extent that human interaction is ecological (and how could it be otherwise?), both can be accused of irresponsibility and limited understanding. The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to easily dismiss as a bogeyman. The same can be said regarding “peak human” arguments predicting mass unemployment. The threat of economic disruption, though potentially dire, is counter-balanced by the promise of new, unforeseen economic opportunity. This leaves the countless ways AI will almost certainly improve our lives: fewer car crashes, fewer misdiagnoses, and so on. As a result, one can predict how all such exchanges will end.

The contemporary AI debate, in other words, is largely a pseudo-debate.

The futurist Richard Yonck’s account of ‘affective computing’ somewhat redresses this problem in his recently released Heart of the Machine, but since he begins with the presupposition that AI represents a natural progression, that the technological destruction of ancestral social habitats is the ancestral habitat of humanity, he remains largely blind to the social ecological consequences of his subject matter. Espousing a kind of technological fatalism (or worse, fundamentalism), he characterizes AI as the culmination of a “buddy movie” as old as humanity itself. The oxymoronic, if not contradictory, prospect of ‘artificial friends’ simply does not dawn on him.

Neil Lawrence, a professor of machine learning at the University of Sheffield and technology columnist at The Guardian, is the rare expert who recognizes the troubling ecological dimensions of the AI revolution. Borrowing the distinction between System Two, or conscious, ‘mindful’ problem-solving, and System One, or unconscious, ‘mindless’ problem-solving, from cognitive psychology, he warns of what he calls System Zero, what happens when the market—via Big Data, social media, and artificial intelligence—all but masters our Darwinian buttons. As he writes,

“The actual intelligence that we are capable of creating within the next 5 years is an unregulated System Zero. It won’t understand social context, it won’t understand prejudice, it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.”

To the extent that modern marketing (and propaganda) techniques already seek to cue emotional as opposed to rational responses, however, there’s a sense in which ‘System Zero’ and consumerism are coeval. Also, economics comprises but a single dimension of human social ecology. We have good reason to fear that Lawrence’s doomsday scenario, one where market and technological forces conspire to transform us into ‘consumer Borg,’ understates the potential catastrophe that awaits.

The closest one gets to a genuine analysis of the interpersonal consequences of AI lies in movies such as Spike Jonze’s science-fiction masterpiece, Her, or the equally brilliant HBO series, Westworld, which counts novelist Charles Yu among its writers. ‘Science fiction,’ however, happens to be the blanket term AI optimists use to dismiss their critical interlocutors.

When it comes to assessing the prospect of artificial intelligence, natural intelligence is failing us.

The internet was an easy sell. After all, what can be wrong with connecting likeminded people?

The problem, of course, is that we are the evolutionary product of small, highly interdependent, hunter-gatherer communities. Historically, those disposed to be permissive had no choice but to continually negotiate with those disposed to be authoritarian. Each party disliked the criticism of the other, but the daily rigors of survival forced them to get along. No longer. Only now, a mere two decades later, are we discovering the consequences of creating a society that systematically segregates permissives and authoritarians. The election of Donald Trump has, if nothing else, demonstrated the degree to which technology has transformed human social ecology in novel, potentially disastrous ways.

AI has also been an easy sell—at least so far. After all, what can be wrong with humanizing our technological environments? Imagine a world where everything is ‘user friendly,’ compliant to our most petulant wishes. What could be wrong with that?

Well, potentially everything, insofar as ‘humanizing our environments’ amounts to dehumanizing our social ecology, replacing the systems we are adapted to solve, our fellow humans, with systems possessing no evolutionary precedent whatsoever, machines designed to push our buttons in ways that optimize hidden commercial interests. Social pollution, in effect.

Throughout the history of our species, finding social heaven has required risking social hell. Human beings are as prone to be demanding, competitive, hurtful—anything but ‘user friendly’—as otherwise. Now the industrial giants of the early 21st century are promising to change all that, to flood the spaces between us with machines designed to shoulder the onerous labour of community, citizenship, and yes, even love.

Imagine a social ecology populated by billions upon billions of junk intelligences. Imagine the solitary confinement of an inhuman crowd. How will we find one another? How will we tolerate the hypersensitive infants we now seem doomed to become?

Unkempt Nation, Disheveled Soul

by rsbakker

So this has been a mad summer in pretty much every respect. The first week of May, my hard-drive died, and I lost pretty much everything I had written the previous six months. My wife was in Venezuela at the time, marching, so I had a hard time wrapping my head around the psychological enormity of the event. It’s not every day you turn on the news to watch events embroiling your loved ones.

Anyway, I’m still pulling the pieces together. I had occasion to revisit some of my first blog posts, and I thought I would post a few snippets from way back in 2010, when we could still pretend technology wasn’t driving the world insane. Rather than get angry all over again at the lack of reviews, or fret for the future of democratic society in the technological age, I thought I would let my younger, less well-groomed self do the ranting.

I’ll be back with things more substantial soon.


September 14, 2010 – So why are so many writers heroes? Aside from good old human psychology, I blame it on the old ‘Write What You Know’ literary maxim.

Like so many literary maxims it sounds appealing at first blush. After all, how can you be honest–authentic–unless you write ‘what you know’? But like all maxims it has a flip side: telling practitioners what they should do is at once telling them what they should not do. Telling writers to only write what they know is telling them to studiously avoid all the things their lives lack–adventure, romance, spectacle–which is to say, the very things that regular people crave.

So this maxim has the happy side-effect of policing who gets to communicate to whom, and so securing the institutional boundaries of the literary specialist. Not only is real culture left to its own naive devices, it becomes the unflagging foil, a kind of self-congratulatory resource, one that can be tapped over and over again to confirm the literary writer’s sense of superiority. Thus all the writerly heroes, stranded in seas of absurdity.

September 16, 2010 – The pigeonhole has no bottom, believe you me. I used to be so naive as to think I could climb out, but now I’m starting to think that it swallows everyone in the end. I wonder about all the other cranks and crackpots out there, about all the other sparks that have been snuffed by relentless inattention. It’s no accident that eulogies are so filled with cliches.

After all, it’s neurophysiology that I’m up against more than any passing cultural bigotry. The brain pigeonholes everything it encounters to better lower its caloric load, to economize. We sort far more than we ponder. Novelty, when we encounter it, is either confused for something old and stupid or comes across as errant noise. Things were this way long before corporations and capital.

So I find myself wondering what I should do. Maybe I should just resign myself to my fate, numb the pain, mellow those revenge fantasies. Become a fatalist.

But then there’s nothing like bitterness to keep that fire scorching your belly. And there’s nothing I fear more than becoming old and complacent. Only the well-groomed don’t have chips on their shoulders.

September 18, 2010 – What really troubles me is the way this hypocrisy has been institutionalized. So long as you treat ‘culture’ as a what, which is to say, as an abstract construct, a formalism, then you can congratulate yourself for all the myriad ways in which your abstractions disrupt those abstractions. But as soon as you treat ‘culture’ as a who, which is to say, as a cartoon we use to generalize over millions of living, breathing people, the notion of ‘disruption’ becomes pretty ridiculous pretty quick. All it takes is one simple question: “Who is disrupted?” and the illusion of criticality is dispelled. One little question.

The conceit is so weak. And yet somehow we’ve managed to raise a veritable landfill of illusory subversion upon it. ‘Literature,’ we call it.

Says a lot about the power of vanity, if you think about it.

As well as why I’m probably doomed to fail.

September 20, 2010 – But our culture has become frightfully compartmentalized. The web, which was supposed to blow open the doors of culture–to ‘flatten everything’–seems to have had the opposite effect. Since we’re hardwired to reflexively seek out affirmation and confirmation, rendering everything equally available has meant our paths of least resistance no longer take us across unfamiliar territory. We can get what we want and need without taking detours through things we didn’t realize we wanted or needed. We can make an expedient bastion out of our parochial tastes.

February 27, 2011 – These people, it seems to me, have to be engaged, have to be challenged, if only so that the masses don’t succumb to their own weaknesses for self-serving chauvinism. These people are appealing simply because they are so adept at generating ‘reasons’ for self-serving intuitions that we all share. That we and our ways are special, exempt, and that Others are a threat to us. That our high-school is, like, really the greatest high-school on the planet. Confirmation bias, my-side bias, the list goes on. And given that humans have evolved to be easily and almost irrevocably programmed, it seems to me that the most important place to wage this battle is in the classroom. To begin teaching doubt as the highest virtue, as opposed to the madness of belief.

The prevailing madness.

Funny, huh? It’s the lapse in belief that these guys typically see as symptomatic of modern societal decline. But really what they’re talking about is a lapse in agreement. Belief is as pervasive as ever, but as a principle rather than any specific consensual canon. It stands to reason that the lack of ‘moral and cognitive solidarity’ would make us uncomfortable, considering the kinds of scarcity and competition faced by our ancestors.

January 13, 2011 – The problem is that human nature is adapted to environments where access to information was geographically indexed, where its accumulation exacted a significant caloric toll. We don’t call private investigators ‘gumshoes’ for no reason. We are adapted to environments where the info-gathering workload continually forced us to ‘settle,’ which is to say, make do with something other than what we originally desired, when it comes to information.

This is what makes the ‘global village’ such a deceptive misnomer. In the preindustrial village, where everyone depended upon one another, our cognitive selfishness made quite a bit of adaptive sense: in environments where scarcity and interdependency force cognitive compromise, you can see how cognitive selfishness–finding ways to justify oneself while impugning potential competitors–might pay real dividends in terms of in-group prestige. Where the circumstantial leash is tight, it pays to pull and pull, and perhaps reach those morsels that escape others.

In the industrial village, however, the leash is far longer. But even still, if you want to pursue your views, geographical constraints force you to engage individuals who do not share them. Who knows what Bob across the road believes? (My Bob was an evangelical Christian, and I count myself lucky for having endlessly argued with him).

In the information village the leash is cut altogether. The likeminded can effortlessly congregate in innumerable echo chambers. Of course, they can effortlessly congregate with those they disagree with as well, but… The tendency, by and large, is not only to seek confirmation, but to confuse it with intelligence and truth–which is why right-wingers tend to watch more Fox than PBS.

Now, enter all these specialized programs, which are bent on moulding your information environment into something as pleasing as possible. Don’t like the N-word? Well, we can make sure you never need to encounter it again–ever.

The world is sycophantic, and it’s becoming more so all the time. This, I think, is a far better cartoon generalization than ‘flat,’ insofar as it references the user, the intermediary, as well as the information environment.

The contemporary (post-posterity) writer has to incorporate this radically different social context into their practice (if that practice is to be considered even remotely self-critical). If you want to produce literary effects, then you have to write for a sycophantic world, find ways not simply to subvert the ideological defences of readers, but to trick the inhuman, algorithmic gate-keepers as well.

This means being strategically sycophantic. To give people what they want, sure, but with something more as well.


Visions of the Semantic Apocalypse: James Andow and Dispositional Metasemantics

by rsbakker

The big problem faced by dispositionalist accounts of meaning lies in their inability to explain the apparent normativity of meaning. The claim that the meaning of X turns on the disposition to utter ‘X’ requires some way to explain the pragmatic dimensions of meaning, the fact that ‘X’ can be both shared and misapplied. Every attempt to pin meaning to natural facts, even ones so fine-grained as dispositions, runs aground on the external relationality of the natural, the fact that things in the world just do not stand in relations of rightness or wrongness relative to one another. No matter how many natural parameters you pile onto your dispositions, you will still have no way of determining the correctness of any given application of X.

This problem falls into the wheelhouse of heuristic neglect. If we understand that human cognition is fractionate, then the inability of dispositions to solve for correctness pretty clearly indicates a conflict between cognitive subsystems. But if we let metacognitive neglect, our matter-of-fact blindness to our own cognitive constitution, dupe us into thinking we possess one big happy cognition, this conflict is bound to seem deeply mysterious, a clash of black cows in the night. And as history shows us, mysterious problems beget mysterious answers.

So for normativists, this means that only intentional cognition, those systems adapted to solve problems via articulations of ‘right or wrong’ talk, can hope to solve the theoretical nature of meaning. For dispositionalists, however, this amounts to leaving whole domains of nature hostage to perpetual philosophical disputation. The only alternative, they think, is to collect and shuffle the cards yet again, in the hope that some articulation of natural facts will somehow lay correctness bare. The history of science, after all, is a history of uncovering hidden factors—a priori intuitions be damned. Even still, it remains very hard to understand how to stack external relations into normative relations. Ignorant of the structure of intentional cognition, and the differences between it and natural (mechanical) cognition, the dispositionalist assumes that meaning is real, and that since all real things are ultimately natural, meaning must have a natural locus and function. Both approaches find themselves stalled in different vestibules of the same crash space.

For me, the only way to naturalize meaning is to understand it not as something ‘real out there’ but as a component of intentional cognition, biologically understood. The trick lies in stacking external relations into the mirage of normative relations: laying out the heuristic misapplications generating traditional philosophical crash spaces. The actual functions of linguistic communication turn on the vast differential systems implementing it. We focus on the only things we apparently see. Given the intuition of sufficiency arising out of neglect, we assume these form autonomous systems. And so tools that allow conscious cognition to blindly mediate the function of vast differential systems—histories, both personal and evolutionary—become an ontological nightmare.

In “Zebras, Intransigence & Semantic Apocalypse: Problems for Dispositional Metasemantics,” James Andow considers the dispositionalist attempt to solve for normativity via the notion of ‘complete information.’ The title alone had me hooked (for obvious reasons), but the argument Andow lays out is a wry and fascinating one. Where dispositions to apply terms are neither right nor wrong, dispositions to apply terms given all relevant information seems to enable the discrimination of normative discrepancies between performances. The problem arises when one asks what counts as ‘all relevant information.’ Offloading determinacy onto relevant information simply raises the question of determinacy at the level of relevant information. What constrains ‘relevance’? What about future relevance? Andow chases this inability to delimit complete information to the most extreme case:

It seems pretty likely that there is information out there which would radically restructure the nature of human existence, make us abandon technologies, reconsider our values and place in nature, information that would lead us to restructure the political organization of our species, reconsider national boundaries, and the ‘artificial divisions’ which having distinct languages impose on us. The likely effect of complete information is semantic apocalypse. (Just to be clear—my claim here is not that it is likely we will undergo such a shift. Who is to say what volume of information humankind will become aware of before extinction? Rather, the claim is that the probable result of being exposed to all information which would alter one’s dispositions, i.e., complete information, would involve a radical overhaul in semantic dispositions).

This paragraph is brilliant, especially given the grand way it declares the semantic apocalypse only to parenthetically take it all back! For my money, though, Andow’s throwaway question, “Who is to say what volume of information humankind will become aware of before extinction?” is far and away the most pressing one. But then I see these issues in light of a far different theory of meaning.

What is the information threshold of semantic apocalypse?

Dispositionalism entails the possibility of semantic apocalypse to the degree the tendencies of biological systems are ecologically dependent, and so susceptible to gradual or catastrophic change. This draws out the importance of the semantic apocalypse as distinct from other forms of global catastrophe. A zombie apocalypse, for instance, might also count as a semantic apocalypse, but only if our dispositions to apply terms were radically transformed. It’s possible, in other words, to suffer a zombie apocalypse without suffering a semantic apocalypse. The physical systems underwriting meaning are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Meaning, in other words, can survive radical ecological destruction. (This is one of the reasons we remain, despite all our sophistication, largely blind to the issue of cognitive ecology: so far it’s been with us through thick and thin). The advantage of dispositionalist approaches, Andow thinks, lies in the way they anchor meaning in our nature. One may dispute how ‘meanings’ find themselves articulated in intentional cognition more generally, while agreeing that intentional cognition is biological: a suite of sensitivities attuned to very specific sets of cues, leveraging reliable predictions. One can be agnostic on the ontological status of ‘meaning,’ in other words, and still agree that meaning talk turns on intentional cognition, which turns on heuristic capacities whose development we can track through childhood. So long as a catastrophe leaves those cues and their predictive power intact, it will not precipitate a semantic apocalypse.

So the question of the threshold of the semantic apocalypse becomes the question of the stability of a certain biological system of specialized sensitivities and correlations. Whatever collapses this system engenders the semantic apocalypse (which for Andow means the global indeterminacy of meanings, and for me the global unreliability of intentional cognition more generally). The thing to note here, however, is the ease with which such systems do collapse once the correlations between sensitivities and outcomes cease to be reliable. Meaning talk, in other words, is ecological, which is to say it requires its environments be a certain way to discharge ancestral functions.

Suddenly the summary dismissal of the genuine possibility of a semantic apocalypse becomes ill-advised. Ecologies can collapse in a wide variety of ways. The form any such collapse takes turns on the ‘pollutants’ and the systems involved. We have no assurance that human cognitive ecology is robust in all respects. Meaning may be able to survive a zombie apocalypse, but as an ecological artifact, it is bound to be vulnerable somehow.

That vulnerability, on my account, is cognitive technology. We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travellers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts. The list goes on.

The semantic apocalypse isn’t simply possible: it’s happening.

No results found for “cognitive psychology of philosophy”.

by rsbakker

That is, until today.

The one thing I try to continuously remind people is that philosophy is itself a data point, a telling demonstration of what has to be one of the most remarkable facts of our species. We don’t know ourselves for shit. We have been stumped since the beginning. We’ve unlocked the mechanism for aging for Christ’s sake: there’s a chance we might become immortal without having the faintest clue as to what ‘we’ amounts to.

There has to be some natural explanation for that, some story explaining why it belongs to our nature to be theoretically mystified by our nature, to find ourselves unable to even agree on formulations of the explananda. So what is it? Why all the apparent paradoxes?

Why, for instance, the fascination with koans?

Take the famous, “What is the sound of one hand clapping?” Apparently, the point of pondering this lies in realizing the koan is at once the questioning and the questioned, and coming to see oneself as the sound. For many, the pedagogical function of koans lies in revealing one’s Buddha nature, breaking down the folk reasoning habits barring the apprehension of the identity of subject and object.

Strangely enough, the statement I gave you in the previous post could be called a koan, of sorts:

It is true there is no such thing as truth.

But the idea wasn’t so much to break folk reasoning habits as to alert readers to an imperceptible complication belonging to discursive cognition: a complication that breaks the reliability of our folk-reasoning habits. The way deliberative cognition unconsciously toggles between applications and ontologizations of truth talk can generate compelling cognitive illusions—illusions so compelling, in fact, as to hold the whole of humanity in their grip for millennia.

Wittgenstein and the pragmatists glimpsed the fractionate specialization of cognition, how it operated relative to various practical contexts. They understood the problem in terms of concrete application, which for them was pragmatic application, a domain generally navigated via normative cognition. Impressed by the inability of mechanical cognition to double as normative cognition, they decided that only normative cognition could explain cognition, and so tripped into a different version of the ancient trap: that of using intentional cognition to theoretically solve intentional cognition.

Understanding cognition in terms of heuristic neglect lets us frame the problem subpersonally, to look at what’s going on in statements like the above in terms of the possible neurobiological systems recruited. The fact that human cognition is heuristic, fractionate, and combinatory means that we should expect koans, puzzles, paradoxes, aporias, and the like. We should expect that different systems possessing overlapping domains will come into conflict. We should expect them in the same way and for the same reason we should expect to encounter visual, auditory, and other kinds of systematic illusions: because the brain picks out only the correlations it needs to predict its environments, it relies on cues predicting the systems requiring solution only in the ways they need to be predicted to be solved. Given this, we should begin looking at traditional philosophy as a rich, discursive reservoir of pathologies, breakdowns providing information regarding the systems and misapplications involved. Like all corpses, meaning will provide a feast for worms.

In a sense, then, a koan demonstrates what a great many seem to think it’s meant to demonstrate: a genuine limit to some cognitive modality, a point where our automatic applications fail us, alerting us both to their automaticity and their specialized nature. And this, the idea would be, draws more of the automaticity (and default universal application) of the subject/object (aboutness) heuristic into deliberative purview, leading to… Enlightenment?

Does Heuristic Neglect Theory suggest a path to the Absolute?

I suppose… so long as we keep in mind that ‘Absolute’ means ‘abject stupidity.’ I think we’re better served looking at these kinds of things as boundaries rather than destinations.

The Point Being…

by rsbakker

Louie Savva has our podcast interview up over at Everything is Pointless. It was fun stuff, despite the fact that this one time farm boy has devolved into a complete technical bumbleclad.

It also really got me thinking about the most challenging whirlpool at the heart of my theory, and how to best pilot understanding around it. Say the human brain possessed two cognitive systems A and X, the one dedicated to prediction absent access to sources, the other dedicated to prediction via access to sources. And say the brain had various devious ways of combining these systems to solve even more problems. Now imagine the conscious subsystem mediating these systems is entirely insensitive to this structure, so that toggling between them leaves no trace in experience.

Now consider the manifest absurdity:

It is true that there is no such thing as truth.

If truth talk belonged to system A, and such thing talk belonged to system X, then it really could be true that there’s no such thing as truth. But given conscious insensitivity to this, we would have no way of discerning the distinct cognitive ecologies involved, and so presume One Big Happy Cognition by default. If there is no such thing as truth, we would cry, then no statement could be true.

How does one argue against that, short of knowledge of the heuristic, fractionate structure of human cognition? Small wonder we’ve been so baffled by our attempts to make sense of ourselves! Our intuitions walk us into the same traps over and over.
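The A/X toy model above is concrete enough to sketch in a few lines of Python. Everything in this snippet is my own invention for illustration (the names `truth_talk`, `thing_talk`, `conscious_report`, and the toy inventories are all hypothetical, not any formal model from the post); the point is simply that the ‘absurd’ statement comes out consistent once the two systems are distinguished:

```python
# Toy illustration: two cognitive subsystems with overlapping vocabularies,
# and a mediator that reports verdicts without any trace of which system ran.

ENDORSED = {"there is no such thing as truth"}   # claims system A endorses
INVENTORY = {"bicycle", "toilet", "money"}       # 'things' system X tracks

def truth_talk(claim: str) -> bool:
    """System A: prediction absent access to sources; endorses or rejects claims."""
    return claim in ENDORSED

def thing_talk(term: str) -> bool:
    """System X: prediction via access to sources; checks an inventory of things."""
    return term in INVENTORY

def conscious_report() -> bool:
    """Mediator: toggles between A and X, leaving no trace of the toggle."""
    no_such_thing = not thing_talk("truth")  # X: 'truth' has no inventory entry
    endorsed = truth_talk("there is no such thing as truth")  # A: endorsed
    return no_such_thing and endorsed  # both hold once the systems are split
```

Collapse the two functions into one evaluator and the contradiction reappears; that, on this picture, is precisely the predicament of a mediator insensitive to its own structure.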



April Fool’s Update

by rsbakker

Jorge linked this, and for transparent reasons I thought it worth linking again. I can almost see the idol of Ajokli, laughing.

Life was so much simpler back when children could just pull the legs off insects.

I’m doing a Q&A on Reddit Fantasy beginning next Monday morning, April 3rd. With The Unholy Consult completed I’m looking forward to talking more freely about the World (trying to be mindful, of course, of any potential spoilers). Spread the word. The organizers recommended I keep the intro jaunty and light, so I decided to begin with, “If God is dead, then fantasy is His grave.”

I’m also scheduled to do a couple podcast interviews, one for Everything is Pointless, and another for Stuff to Blow Your Mind. My hope is to keep doing as many interviews, media pieces, as I can running up to the release of The Unholy Consult. Ideas are always appreciated.

I’ve also accumulated a fair number of book-related links, thanks to emails sent and comments posted. Barnes and Noble had a readout of Book Three, The Great Ordeal, which The Fantasy Faction selected for their Best of 2016 list (weird, isn’t it, the way everything ‘pre-Trump’ seems ancient and naive). The Great Ordeal was also given a rave review for SFF Den by silentroamer, who can see the narrative lens drawing into tighter focus. JP Gowdner offers an excellent aesthetic assessment of the series so far, though he finds himself morally troubled by many of my apparent decisions. For those hemming and hawing about starting The Aspect-Emperor, I heartily recommend Leona Henry’s eloquent review of The Judging Eye. I’ve noticed, lately, that almost all reviews of my books concede that they may be inaccessible to the tastes of some readers, and even though this is undoubtedly true, the whole point of writing fantasy, for me, is to challenge actual readers as opposed to ‘ideal philistines,’ to confront folks with an unfamiliar (and probably uncomfortable) story-telling sensibility. If the election has taught us anything, I think, it’s that we desperately need to create a culture dedicated to spanning ingroup boundaries. We need to be urging one another to take risks, to drink from strangers’ glasses instead of hogging the same old straw for the entirety of our lives. We need to shame our most talented communicators back into honest dialogue with the communities that make their ingroup luxury possible.

Next up for TPB, someone not only stumbles across the semantic apocalypse in complete independence from my work, they even end up calling it the ‘semantic apocalypse.’


The Truth Behind the Myth of Correlationism

by rsbakker

A wrong turn lies hidden in the human cultural code, an error that has scuttled our every attempt to understand consciousness and cognition. So much philosophical activity reeks of dead ends: we try and we try, and yet we find ourselves mired in the same ancient patterns of disputation. The majority of thinkers believe the problem is local, that they need only tinker with the tools they’ve inherited. They soldier on, arguing that this or that innovative modification will overcome our confusion. Some, however, believe the problem lies deeper. I’m one of those thinkers, as is Meillassoux. I think the solution lies in speculation bound to the hip of modern science, in something I call ‘heuristic neglect.’ For me, the wrong turn lies in the application of intentional cognition to solve the theoretical problem of intentional cognition. Meillassoux thinks it lies in what he calls ‘correlationism.’

Since I’ve been accused of ‘correlationism’ on a couple of occasions now, I thought it worthwhile tackling the issue in more detail. This will not be an institutional critique a la Golumbia’s, who manages to identify endless problems with Meillassoux’s presentation, while somehow entirely missing his skeptical point: once cognition becomes artifactual, it becomes very… very difficult to understand. Cognitive science is itself fractured about Meillassoux’s issue.

What follows will be a constructive critique, an attempt to explain the actual problem underwriting what Meillassoux calls ‘correlationism,’ and why his attempt to escape that problem simply collapses into more interminable philosophy. The problem that artifactuality poses to the understanding of cognition is very real, and it also happens to fall into the wheelhouse of Heuristic Neglect Theory (HNT). For those souls growing disenchanted with Speculative Realism, but unwilling to fall back into the traditional bosom, I hope to show that HNT not only offers the radical break with tradition that Meillassoux promises, it remains inextricably bound to the details of this, the most remarkable age.

What is correlationism? The experts explain:

Correlation affirms the indissoluble primacy of the relation between thought and its correlate over the metaphysical hypostatization or representational reification of either term of the relation. Correlationism is subtle: it never denies that our thoughts or utterances aim at or intend mind-independent or language-independent realities; it merely stipulates that this apparently independent dimension remains internally related to thought and language. Thus contemporary correlationism dismisses the problematic of scepticism, and or epistemology more generally, as an antiquated Cartesian hang-up: there is supposedly no problem about how we are able to adequately represent reality; since we are ‘always already’ outside ourselves and immersed in or engaging with the world (and indeed, this particular platitude is constantly touted as the great Heideggerean-Wittgensteinian insight). Note that correlationism need not privilege “thinking” or “consciousness” as the key relation—it can just as easily replace it with “being-in-the-world,” “perception,” “sensibility,” “intuition,” “affect,” or even “flesh.” Ray Brassier, Nihil Unbound, 51

By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined. Consequently, it becomes possible to say that every philosophy which disavows naive realism has become a variant of correlationism. Quentin Meillassoux, After Finitude, 5

Correlationism rests on an argument as simple as it is powerful, and which can be formulated in the following way: No X without givenness of X, and no theory about X without a positing of X. If you speak about something, you speak about something that is given to you, and posited by you. Consequently, the sentence: ‘X is’, means: ‘X is the correlate of thinking’ in a Cartesian sense. That is: X is the correlate of an affection, or a perception, or a conception, or of any subjective act. To be is to be a correlate, a term of a correlation . . . That is why it is impossible to conceive an absolute X, i.e., an X which would be essentially separate from a subject. We can’t know what the reality of the object in itself is because we can’t distinguish between properties which are supposed to belong to the object and properties belonging to the subjective access to the object. Quentin Meillassoux, “Time without Becoming”

The claim of correlationism is the corollary of the slogan that ‘nothing is given’ to understanding: everything is mediated. Once knowing becomes an activity, then the objects insofar as they are known become artifacts in some manner: reception cannot be definitively sorted from projection and as a result no knowledge can be said to be absolute. We find ourselves trapped in the ‘correlationist circle,’ trapped in artifactual galleries, never able to explain the human-independent reality we damn well know exists. Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions. Any theory unable to decisively explain objectivity is a theory that cannot explain cognition. Ergo, correlationism names a failed (cognitivist) philosophical endeavour.

It’s a testament to the power of labels in philosophy, I think, because as Meillassoux himself acknowledges there’s nothing really novel about the above sketch. Explaining the ‘cognitive difference’ was my dissertation project back in the 90’s, after all, and as smitten as I was with my bullshit solution back then, I didn’t think the problem itself was anything but ancient. Given this whole website is dedicated to exploring and explaining consciousness and cognition, you could say it remains my project to this very day! One of the things I find so frustrating about the ‘critique of correlationism’ is that the real problem—the ongoing crisis—is the problem of meaning. If correlationism fails because correlationism cannot explain cognition, then the problem of correlationism is an expression of a larger problem, the problem of cognition—or in other words, the problem of intentionality.

Why is the problem of meaning an ongoing crisis? In the past six fiscal years, from 2012 to 2017, the National Institutes of Health will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. [1] And this is just one public institution in one nation involving health related research. If you include the cognitive sciences more generally—research into everything from consumer behaviour to AI—you could say that solving the human soul commands more resources than any other domain in history. The reason all this money is being poured into the sciences rather than philosophy departments is that the former possesses real world consequences: diseases cured, soap sold, politicians elected. As someone who tries to keep up with developments in Continental philosophy, I already find the disconnect stupendous, how whole populations of thinkers continue discoursing as if nothing significant has changed, bitching about traditional cutlery in the shadow of the cognitive scientific tsunami.

Part of the popularity of the critique of correlationism derives from anxieties regarding the growing overlap of the sciences of the human and the humanities. All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science. What some neuroscientists term ‘internal models,’ which monopolize our access to ourselves and the world, is nothing if not a theoretical correlation of environments and cognition, trapping us in models of models. The very science that Meillassoux thinks argues against correlationism in one context, explicitly turns on it in another. The mediation of knowledge is the domain of cognitive science—full stop. A naturalistic understanding of cognition is a biological understanding is an artifactual understanding: this is why the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit.

A kind of arche-fossil.

If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast sums of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.

Only more philosophy, Meillassoux thinks, can overcome the ‘scandal of philosophy.’ But how is mere opinion supposed to provide bona fide knowledge of knowledge? Speculation on mathematics does nothing to ameliorate this absurdity: even though paradigmatic of objectivity, mathematics remains as inscrutable as knowledge itself. Perhaps there is some sense to be found in the notion of interrogating/theorizing objects in a bid to understand objectivity (cognition), but given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.

I just don’t know how to make the ‘critique of correlationism’ workable, short of ignoring the very science it takes as its motivation, or, just as bad, subordinating empirical discoveries to some school of ‘fundamental ontological’ speculation. If you’re willing to take such a leap of theoretical faith, you can be assured that no one in the vicinity of cognitive science will take it with you—and that you will make no difference in the mad revolution presently crashing upon us.

We know that knowledge is somehow an artifact of neural function—full stop. Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?

After all, we’ve learned a tremendous amount about this how in the past decades: the idea of dismissing all this detail on the basis of a priori guesswork seems more than a little suspect. The track record would suggest extreme caution. As the boggling scale of the cognitive scientific project should make clear, everything turns on the biological details of cognition. We now know, for instance, that the brain employs legions of special purpose devices to navigate its environments. We know that cognition is thoroughly heuristic, that it turns on cues, bits of available information statistically correlated to systems requiring solution.

Almost all systems in our environment shed information enabling the prediction of subsequent behaviours absent the mechanical particulars of that information. The human brain is exquisitely tuned to identify and exploit the correlation of information available and subsequent behaviours. The artifactuality of biology is an evolutionary one, and as such geared to the thrifty solution of high impact problems. To say that cognition (animal or human) is heuristic is to say it’s organized according to the kinds of problems our ancestors needed to solve, and not according to those belonging to academics. Human cognition consists of artifactualities, subsystems dedicated to certain kinds of problem ecologies. Moreover, it consists of artifactualities selected to answer questions quite different from those posed by philosophers.

These two facts drastically alter the landscape of the apparent problem posed by ‘correlationism.’ We have ample theoretical and empirical reasons to believe that mechanistic cognition and intentional cognition comprise two quite different cognitive regimes, the one dedicated to explanation via high-dimensional (physical) sourcing, the other dedicated to explanation absent that sourcing. As an intentional phenomenon, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information? Since intentional cognition turns on specific cues to leverage solutions, and since those cues appear sufficient (to be the only game in town where that behaviour is concerned), the high-dimensional sourcing of that same behaviour generates a philosophical crash space—and a storied one at that! What seems sourceless and self-evident becomes patently impossible.

Short of magic, cognitive systems possess the environmental relationships they do thanks to super-complicated histories of natural and neural selection—evolution and learning. Let’s call this their orientation, understood as the nonintentional (‘zombie’) correlate of ‘perspective.’ The human brain is possibly the most complex thing we know of in the universe (a fact which should render any theory of the human neglecting that complexity suspect). Our cognitive systems, in other words, possess physically intractable orientations. How intractable? Enough that billions of dollars in research[1] have merely scratched the surface.

Any capacity to cognize this relationship will perforce be radically heuristic, which is to say, provide a means to solve some critical range of problems—a problem ecology—absent natural historical information. The orientation heuristically cognized, of course, is the full-dimensional relationship we actually possess, only hacked in ways that generate solutions (repetitions of behaviour) while neglecting the physical details of that relationship.

Most significantly, orientation neglects the dimension of mediation: thought and perception (whatever they amount to) are thoroughly blind to their immediate sources. This cognitive blindness to the activity of cognition, or medial neglect, amounts to a gross insensitivity to our physical continuity with our environments, the fact that we break no thermodynamic laws. Our orientation, in other words, is characterized by a profound, structural insensitivity to its own constitution—its biological artifactuality, among other things. This auto-insensitivity, not surprisingly, includes insensitivity to the fact of this insensitivity, and thus the default presumption of sufficiency. Specialized sensitivities are required to flag insufficiencies, after all, and like all biological devices, they do not come for free. Not only are we blind to our position within the superordinate systems comprising nature, we’re blind to our blindness, and so, unable to distinguish table-scraps from a banquet, we are duped into affirming inexplicable spontaneities.

‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect. You could say it’s a way to advertise clockwork positioning (functional sufficiency) absent any inkling of the clock. ‘Objectivity,’ the term denoting the supposed general property of being true apart from individual perspectives, is a deliberative contrivance derived from practical applications of ‘truth’—the product of ‘philosophical reflection.’ The problem with objectivity as a phenomenon (as opposed to ‘objectivity’ as a component of some larger cognitive articulation) is that the sufficiency of iterable orientations within superordinate systems is always a contingent affair. Whether ‘truth’ occasions sufficiency is always an open question, since the system provides, at best, a rough and ready way to communicate and/or troubleshoot orientation. Unpredictable events regularly make liars of us all. The notion of facts ‘being true’ absent the mediation of human cognition, ‘objectivity,’ also provides a rough and ready way to communicate and/or troubleshoot orientation in certain circumstances. We regularly predict felicitous orientations without the least sensitivity to their artifactual nature, absent any inkling how their pins lie in intractable high-dimensional coincidences between buzzing brains. This insensitivity generates the illusion of absolute orientation, a position outside natural regularities—a ‘view from nowhere.’ We are a worm in the gut of nature convinced we possess disembodied eyes. And so long as the consequences of our orientations remain felicitous, our conceit need not be tested. Our orientations might as well ‘stand nowhere’ absent cognition of their limits.

Thus can ‘truth’ and ‘objectivity’ be naturalized and their peculiarities explained.

The primary cognitive moral here is that lacking information has positive cognitive consequences, especially when it comes to deliberative metacognition, our attempts to understand our nature via philosophical reflection alone. Correlationism evidences this in a number of ways.

As soon as the problem of cognition is characterized as the problem of thought and being, it becomes insoluble. Intentional cognition is heuristic: it neglects the nature of the systems involved, exploiting cues correlated to the systems requiring solution instead. The application of intentional cognition to theoretical explanation, therefore, amounts to the attempt to solve natures using a system adapted to neglect natures. A great deal of traditional philosophy is dedicated to the theoretical understanding of cognition via intentional idioms—via applications of intentional cognition. Thus the morass of disputation. We presume that specialized problem-solving systems possess general application. Lacking the capacity to cognize our inability to cognize the theoretical nature of cognition, we presume sufficiency. Orientation, the relation between neural systems and their proximal and distal environments—between two systems of objects—becomes perspective, the relation between subjects (or systems of subjects) and systems of objects (environments). If one mistakes the manifest artifactual nature of orientation for the artifactual nature of perspective (subjectivity), then objectivity itself becomes a subjective artifact, and therefore nothing objective at all. Since orientation characterizes our every attempt to solve for cognition, conflating it with perspective renders perspective inescapable, and objectivity all but inexplicable. Thus the crash space of traditional epistemology.

Now I know from hard experience that the typical response to the picture sketched above is to simply insist on the conflation of orientation and perspective, to assert that my position, despite its explanatory power, simply amounts to more of the same, another perspectival Klein Bottle distinctive only for its egregious ‘scientism.’ Only my intrinsically intentional perspective, I am told, allows me to claim that such perspectives are metacognitive artifacts, a consequence of medial neglect. But asserting perspective before orientation on the basis of metacognitive intuitions alone not only begs the question, it also beggars explanation, delivering the project of cognizing cognition to never-ending disputation—an inability to even formulate explananda, let alone explain anything. This is why I like asking intentionalists how many centuries of theoretical standstill we should expect before that oft-advertised and never-delivered breakthrough finally arrives. The sin Meillassoux attributes to correlationism, the inability to explain cognition, is really just the sin belonging to intentional philosophy as a whole. Thanks to medial neglect, metacognition, blind to both its sources and its source blindness, insists we stand outside nature. Tackling this intuition with intentional idioms leaves our every attempt to rationalize our connection underdetermined, a matter of interminable controversy. The Scandal dwells on, eternal.

I think orientation precedes perspective—and obviously so, having watched loved ones dismantled by brain disease. I think understanding the role of neglect in orientation explains the peculiarities of perspective, provides a parsimonious way to understand the apparent first-person in terms of the neglect structure belonging to the third. There’s no problem with escaping the dream tank and touching the world simply because there’s no ontological distinction between ourselves and the cosmos. We constitute a small region of a far greater territory, the proximal attuned to the distal. Understanding the heuristic nature of ‘truth’ and ‘objectivity,’ I restrict their application to adaptive problem-ecologies, and simply ask those who would turn them into something ontologically exceptional why they would trust low-dimensional intuitions over empirical data, especially when those intuitions pretty much guarantee perpetual theoretical underdetermination. Far better trust to our childhood presumptions of truth and reality, in the practical applications of these idioms, than in any one of the numberless theoretical misapplications ‘discovering’ this trust fundamentally (as opposed to situationally) ‘naïve.’

The cognitive difference, what separates the consequences of our claims, has never been about ‘subjectivity’ versus ‘objectivity,’ but rather intersystematicity, the integration of ever more sensitive, ever more effective orientations into the superordinate systems encompassing us all. Physically speaking, we’ve long known that this has to be the case. Short of actual difference-making differences, be they photons striking our retinas or compression waves striking our eardrums, no difference is made. Even Meillassoux acknowledges the necessity of physical contact. What we’ve lacked is a way of seeing how our apparently immediate intentional intuitions, be they phenomenological, ontological, or normative, fit into this high-dimensional—physical—picture.

Heuristic Neglect Theory not only provides this way, it also explains why it has proven so elusive over the centuries. HNT explains the wrong turn mentioned above. The question of orientation immediately cues the systems our ancestors developed to circumvent medial neglect. Solving for our behaviourally salient environmental relationships, in other words, automatically formats the problem in intentional terms. The automaticity of the application of intentional cognition renders it apparently ‘self-evident.’

The reason the critique of correlationism and speculative realism suffer all the problems of underdetermination their proponents attribute to correlationism is that they take this very same wrong turn. How is Meillassoux’s ‘hyper-chaos,’ yet another adventure in a priori speculation, anything more than another pebble tossed upon the heap of traditional disputation? Novelty alone recommends them. Otherwise they leave us every bit as mystified, every bit as unable to accommodate the torrent of relevant scientific findings, and therefore every bit as irrelevant to the breathtaking revolutions even now sweeping us and our traditions out to sea. Like the traditions they claim to supersede, they peddle cognitive abjection, discursive immobility, in the guise of fundamental insight.

Theoretical speculation is cheap, which is why it’s so frightfully easy to make any philosophical account look bad. All you need do is start worrying definitions, then let the conceptual games begin. This is why the warrant of any account is always a global affair, why the power of Evolutionary Theory, for example, doesn’t so much lie in the immunity of its formulations to philosophical critique, but in how much it explains on nature’s dime alone. The warrant of Heuristic Neglect Theory likewise turns on the combination of parsimony and explanatory power.

Anyone arguing that HNT necessarily presupposes some X, be it ontological or normative, is simply begging the question. Doesn’t HNT presuppose the reality of intentional objectivity? Not at all. HNT certainly presupposes applications of intentional cognition, which, given medial neglect, philosophers posit as functional or ontological realities. On HNT, a theory can be true even though, high-dimensionally speaking, there is no such thing as truth. Truth talk possesses efficacy in certain practical problem-ecologies, but because it participates in solving something otherwise neglected, namely the superordinate systematicity of orientations, it remains beyond the pale of intentional resolution.

Even though sophisticated critics of eliminativism acknowledge the incoherence of the tu quoque, I realize this remains a hard twist for many (if not most) to absorb, let alone accept. But this is exactly as it should be, both because something has to explain why isolating the wrong turn has proven so stupendously difficult, and because this is precisely the kind of trap we should expect, given the heuristic and fractionate nature of human cognition. ‘Knowledge’ provides a handle on the intersection of vast, high-dimensional histories, a way to manage orientations without understanding the least thing about them. To know knowledge, we will come to realize, is to know there is no such thing, simply because ‘knowing’ is a resolutely practical affair, almost certainly inscrutable to intentional cognition. When you’re in the intentional mode, this statement simply sounds preposterous—I know it once struck me as such! It’s only when you appreciate how far your intuitions have strayed from those of your childhood, back when your only applications of intentional cognition were practical, that you can see the possibility of a more continuous, intersystematic way to orient ourselves to the cosmos. There was a time before you wandered into the ancient funhouse of heuristic misapplication, when you could not distinguish between your perspective and your orientation. HNT provides a theoretical way to recover that time and take a radically different path.

As a bona fide theory of cognition, HNT provides a way to understand our spectacular inability to understand ourselves. HNT can explain ‘aporia.’ The metacognitive resources recruited for the purposes of philosophical reflection possess alarm bells—sensitivities to their own limits—relevant only to their ancestral applications. The kinds of cognitive aporias (crash spaces) characterizing traditional philosophy are precisely those we might expect, given the sudden ability to exercise specialized metacognitive resources out of school, to apply, among other things, the problem-solving power of intentional cognition to the question of intentional cognition.

As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now.

As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional or contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself.

Cognition is biomechanical. The ‘correlation of thought and being,’ on my account, is the correlation of being and being. The ontology of HNT is resolutely flat. Once we understand that we only glimpse as much of our orientations as our ancestors required for reproduction, and nothing more, we can see that ‘thought,’ whatever it amounts to, is material through and through.

The evidence of this lies strewn throughout the cognitive wreckage of speculation, the alien crash site of philosophy.



[1] This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegenerative (10.183 billion). 21/01/2017


Occult Wellness versus Being Philosophical

by rsbakker

This is a picture of where (I think) my books belong, somewhere in the blurry boundary between these folk/commercial and scholarly/non-commercial genres of intentional confusion.

It’s been a mad couple of months, but the final draft of The Unholy Consult is out the door. Now I can only wring my hands while pretending to crack my knuckles… My seventeen-year-old self stands agog.

My schedule is still brimming, but it now includes finishing off a number of posts left hanging by the arrival of the manuscript. First up will be a piece on heuristic neglect and correlationism.

Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just another interpretation heaped onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial in these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.