Three Pound Brain

No bells, just whistling in the dark…

Month: February, 2016

The Zombie Enlightenment

by rsbakker


Understanding what comes next depends on understanding what’s going on now, which is to say, cognizing modernity. The premise, recall, is that, due to metacognitive myopia, traditional intentional vocabularies lock us into perpetual conundrums. This means understanding modernity requires some kind of post-intentional explanatory framework—we need some way to understand it in naturalistic terms. Since cognizing modernity requires cognizing the Enlightenment, this puts us on the hook for an alternative, post-intentional explanation of the processes at work—a zombie Enlightenment story.

I say ‘zombie,’ of course, as much to keep the horror of the perspective in view as to underscore the naturalistic character of the explanations. What follows is a dry-run of sorts, an attempt to sketch what has brought about this extraordinary era of accelerating transformation. Keep in mind the ludicrous speculative altitudes involved, but also remember that all such attempts to theorize macrosocial phenomena suffer this liability. I don’t think it’s so important that the case be made as that some alternative be proposed at this point. For one, the mere existence of such an account, the bare fact of its plausibility, requires the intentionalist to account for the superiority of their approach, and this, as we shall see below, can have a transformative effect on cognitive ecologies.

In zombie terms, the Enlightenment, as we think we know it, had nothing to do with the ‘power of reason’ to ‘emancipate,’ to free us from the tyranny of Kant’s ‘tutelary natures.’ This is the Myth. Likewise, Nietzsche’s Gegenaufklärung had nothing to do with somehow emancipating us from the tyrannical consequences of this emancipation. The so-called Counter-Enlightenment, or ‘postmodernism’ as it has come to be called, was a completion, or a consummation, if you wish. The antagonism is merely a perspectival artifact. Postmodernism, if anything, represents the processes characteristic of the zombie Enlightenment colonizing and ultimately overcoming various specialized fields of cultural endeavour.

To understand this one needs to understand something crucial about human nature, namely, the way understanding, all understanding, is blind understanding. The eye cannot be seen. Olfaction has no smell, just as touch has no texture. To enable knowledge, in other words, is to stand outside the circuit of what is known. A great many thinkers have transformed this observation into something both extraordinary and occult, positing all manner of inexplicable things by way of explanation, everything from transparencies to transcendentals to trace structures. But the primary reason is almost painfully mundane: the seeing eye cannot be seen simply because it is mechanically indisposed.

Human beings suffer ‘cognitive indisposition,’ or, as I like to call it, medial neglect, a ‘brain blindness’ so profound as to escape them altogether, to convince them, at every stage of their ignorance, that they could see pretty much everything they needed to see.

Now according to the Myth, the hundred million odd souls populating Europe in the 18th century shuffled about in unconscious acquiescence to authority, each generation blindly repeating the chauvinisms of the generation prior. The Enlightenment institutionalized inquiry, the asking of questions, and the asking of questions, far from merely setting up ‘choice situations’ between assertions, makes cognitive incapacity explicit. The Enlightenment, in other words, institutionalized the erosion of traditional authority, thus ‘freeing’ individuals to pursue other possible answers. The great dividend of the Enlightenment was nothing less than autonomy, the personal, political, and material empowerment of the individual via knowledge. They were blind, but now they could see, or at least so they thought.

Postmodernism, on the other hand, arose out of the recognition that inquiry has no end, that the apparent rational verities of the Enlightenment were every bit as vulnerable to delegitimization (‘deconstruction’) as the verities of the tradition that it swept away. Enlightenment critique was universally applicable, every bit as toxic to successor as to traditional claims. Enlightenment reason, therefore, could not itself be the answer, a conviction that the increasingly profound technical rationalization of Western society only seemed to confirm. The cognitive autonomy promised by Kant and his contemporaries had proven too radical, missing the masses altogether, and stranding intellectuals in the humanities, at least, with relativistic guesses. The Enlightenment deconstruction of religious narrative—the ‘death of God’—was at once the deconstruction of all absolute narratives, all foundations. Autonomy had collapsed into anomie.

This is the Myth of the Enlightenment, at least in cartoon thumbnail.

But if we set aside our traditional fetish for ‘reason’ and think of post-Medieval European society as a kind of information processing system, a zombie society, the story actually looks quite different. Far from the death of authority and the concomitant birth of a frightening, ‘postmodern autonomy,’ the ‘death of God’ becomes the death of supervision. Supervised learning, of course, refers to one of the dominant learning paradigms in artificial neural networks, one where training converges on known targets, as opposed to unsupervised learning, where training converges on unknown targets. So long as supervised cognitive ecologies monopolized European society, European thinkers were bound to run afoul of the ‘only-game-in-town effect,’ the tendency to assume claims true for the simple want of alternatives. There were gains in cognitive efficiency, certainly, but they arose adventitiously, and had to brave selection in generally unforgiving social ecologies. Pockets of unsupervised learning appear in every supervised society, in fact, but in the European case, the economic and military largesse provided by these isolated pockets assured they would be reproduced across the continent. The process was gradual, of course. What we call the ‘Enlightenment’ doesn’t so much designate the process as the point when the only-game-in-town effect could no longer be sustained among the learned classes. In all corners of society, supervised optima found themselves competing more and more with unsupervised optima—and losing. What Kant and his contemporaries called ‘Enlightenment’ simply made explicit an ecology that European society had been incubating for centuries, one that rendered cognitive processes responsive to feedback via empirical and communicative selection.
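
The supervised/unsupervised distinction invoked here can be made concrete with a toy sketch in Python; everything below (the data, names like `classify`) is illustrative rather than anything from the post itself. A supervised learner converges on known targets (labels), while an unsupervised learner must discover the structure, here two clusters, from the data alone.

```python
import random

random.seed(0)
# Toy one-dimensional data drawn from two regimes.
data = [random.gauss(0.0, 0.5) for _ in range(50)] + \
       [random.gauss(5.0, 0.5) for _ in range(50)]

# Supervised learning: training converges on KNOWN targets (labels).
labels = [0] * 50 + [1] * 50
centroid = {c: sum(x for x, y in zip(data, labels) if y == c) / 50
            for c in (0, 1)}

def classify(x):
    """Assign x to the labelled centroid it sits closest to."""
    return min(centroid, key=lambda c: abs(x - centroid[c]))

# Unsupervised learning: no labels; the two-cluster structure must be
# discovered from the data alone (here, one-dimensional k-means).
m0, m1 = min(data), max(data)
for _ in range(20):
    g0 = [x for x in data if abs(x - m0) <= abs(x - m1)]
    g1 = [x for x in data if abs(x - m0) > abs(x - m1)]
    m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)

print(classify(4.8), sorted(round(m, 1) for m in (m0, m1)))
```

Both learners end up with roughly the same two centroids; the difference is only whether the targets were handed over in advance or converged upon blindly.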

On an information processing view, in other words, the European Enlightenment did not so much free up individuals as cognitive capacity. Once again, we need to appreciate the zombie nature of this view, how it elides ethical dimensions. On this view, traditional chauvinisms represent maladaptive optima, old fixes that now generate more problems than they solve. Groups were not so much oppressed, on this account, as underutilized. What we are prone to call ‘moral progress’ in folk political terms amounts to the optimization of collective neurocomputational resources. These problematic ethical and political consequences, of course, have no bearing on the accuracy of the view. Any cultural criticism that makes ideological orthodoxy a condition of theoretical veracity is nothing more than apologia in the worst sense, self-serving rationalization. In fact, since naturalistic theories are notorious for the ways they problematize our moral preconceptions, you might even say this kind of problematization is precisely what we should expect. Pursuing hard questions can only be tendentious if you cannot countenance hard answers.

The transition from a supervised to an unsupervised learning ecology was at once a transition from a slow selecting to a rapid selecting ecology. One of the great strengths of unsupervised learning, it turns out, is blind source separation, something your brain wonderfully illustrates for you every time you experience the famed ‘cocktail party effect.’ Artificial unsupervised learning algorithms, of course, allow for the causal sourcing of signals in a wide variety of scientific contexts. Causal sourcing amounts to identifying causes, which is to say, mechanical cognition, which in turn amounts to behavioural efficacy, the ability to remake environments. So far as behavioural efficacy cues selection, then, we suddenly find ourselves with a social ecology (‘science’) dedicated to the accumulation of ever more efficacies—ever more power over ourselves and our environments.
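
The ‘cocktail party’ capacity can itself be sketched. Below is a minimal, brute-force blind source separation in Python with NumPy, a toy two-source cousin of kurtosis-based algorithms like FastICA; the signals, mixing matrix, and all names are invented for illustration. Two ‘microphone’ mixtures are whitened, then rotated to the angle that maximizes non-Gaussianity, which recovers the original sources up to sign, order, and scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two independent "speakers": a square wave and uniform noise.
s1 = np.sign(np.sin(np.linspace(0.0, 40.0, n)))
s2 = rng.uniform(-1.0, 1.0, n)
S = np.vstack([s1, s2])

# Two "microphones" each record a different weighted sum of both.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

# Whiten the mixtures: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# After whitening, separation is just a rotation; scan for the angle
# that makes both components maximally non-Gaussian.
best = max(np.linspace(0.0, np.pi / 2, 200),
           key=lambda t: sum(abs(excess_kurtosis(y))
                             for y in rotation(t) @ Z))
Y = rotation(best) @ Z  # recovered sources, up to sign/order/scale

# Each recovered component should track exactly one original source.
corr = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
print(corr.round(2))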

Power begets power; efficiency, efficiency. Human ecologies were not only transformed, they were transformed in ways that facilitated transformation. Each new optimization, once selected and incorporated, generated ecological changes, social or otherwise, changes bearing on the efficiency of previous optimizations. And so the shadow of maladaptation, or obsolescence, fell across all existing adaptations, be they behavioural or technological.

The inevitability of maladaptation, of course, merely expresses the contingency of ecology, the fact that all ecologies change over time. In ancestral (slow selecting) ecologies, the information required to cognize this process was scarce to nonexistent: the only game in town effect—the assumption of sufficiency in the absence of alternatives—was all but inevitable. Given the way cognitive invariance cues cognitive stability, the fact that we can trust our inheritance, the spectre of accelerating obsolescence could only represent a threat.

“Expect the unexpected,” a refrain that only modernity could abide, wonderfully recapitulates, I think, the inevitability of postmodernism. Cognitive instability became the only cognitive stability, the only humanistic ‘principle’ remaining. And thus the great (perhaps even perverse) irony of philosophical modernity: the search for stability in difference, and the development, across the humanities, of social behaviours (aesthetic or theoretical) bent on rendering prior forms obsolete.

Rather than wait for obsolescence to arise out of ecological transformation, many began forcing the issue, isolating instances of the only-game-in-town effect in various domains of aesthetic and theoretical behaviour, and adducing alternatives in an attempt to communicate their obsolescence. Supervised or ‘traditional’ ecologies readily broke down. Unsupervised learning ecologies quickly became synonymous with cognitive stability—and more attractive for it. The scientific fetish for innovation found itself replicated in humanistic guise. Despite the artificial nature of this process, the lack of any alternative account of semantic instability gave rise to a new series of only-game-in-town effects. What had begun as an unsupervised exploration of solution spaces quickly lapsed into another supervised ecology. Avant-garde and post-structuralist zombies adapted to exploit the microsocial ecologies they themselves had fashioned.

The so-called ‘critique of Enlightenment reason,’ whether implicit in aesthetic behaviour or explicit in theoretical behaviour, demonstrates the profundity of medial neglect, the blindness of zombie components to the greater machinery compelling them. The Gegenaufklärung merely followed through on the actual processes of ‘ratcheting ecological innovation’ responsible, undermining, as it did, the myths that had been attached to those processes in lieu of actual understanding. In communicating the performative dimension of ‘reason’ and the irrationality of Enlightenment rationality, postmodernism cleared a certain space for post-intentional thinking, but little more. Otherwise it is best viewed as an inadvertent consummation of a logic it can only facilitate and never ‘deconstruct.’

Our fetish for knowledge and innovation remains. We have been trained to embrace an entirely unknown eventuality, and that training has been supervised.

The Discursive Meanie

by rsbakker

So I went to see Catherine Malabou speak on the relation between deep history, consciousness and neuroscience last night. As she did in her Critical Inquiry piece, she argued that some new conceptuality was required to bridge the natural historical and the human, a conceptuality that neuroscience could provide. When I introduced myself to her afterward, she recognized my name, said that she had read my post, “Malabou, Continentalism, and New Age Philosophy.” When I asked her what she thought, she blushed and told me that she thought it was mean.

I tried to smooth things over, but for most people, I think, expressing aggression in interpersonal exchanges is like throwing boulders tied to their waist. Hard words rewrite communicative contexts, and it takes the rest of the brain several moments to catch up. Once she tossed her boulder it was only a matter of time before the rope yanked her away. Discussion over.

I appreciate that I’m something of an essayistic asshole, and that academics, adapted to genteel communicative contexts as they are, generally have little experience with, let alone stomach for, the more bruising environs of the web. But then the near universal academic tendency to take the path of least communicative resistance, to foster discursive ingroups, is precisely the tendency Three Pound Brain is dedicated to exposing. The problem, of course, is that cuing people to identify you as a threat pretty much guarantees they will be unable to engage you rationally, as was the case here. Malabou had dismissed me, and so my arguments simply followed.

How does one rattle ingroup assumptions as an outgroup competitor, short of disguising oneself as an ingroup sympathizer, that is? Interesting conundrum, that. I suppose if I had more notoriety, they would feel compelled to engage me…

Is it time to rethink my tactics?

The Dim Future of Human Brilliance

by rsbakker

Moths to a flame

Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect that human cognitive biology is every bit as vulnerable to ecological change as any other biological system.

Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here at Three Pound Brain for quite some time. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, of how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and the systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).

So the questions I think we need to be asking are:

What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.

What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the ways human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis-à-vis streetlights, porch lights, and so on, causing them to spiral directly into them. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.

Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are semantically overdetermined otherwise. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of simply how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.

And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.

You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there are so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.