Three Pound Brain

No bells, just whistling in the dark…

The Zombie Enlightenment

by rsbakker


Understanding what comes next depends on understanding what’s going on now, which is to say, cognizing modernity. The premise, recall, is that, due to metacognitive myopia, traditional intentional vocabularies lock us into perpetual conundrums. This means understanding modernity requires some kind of post-intentional explanatory framework—we need some way to understand it in naturalistic terms. Since cognizing modernity requires cognizing the Enlightenment, this puts us on the hook for an alternative, post-intentional explanation of the processes at work—a zombie Enlightenment story.

I say ‘zombie,’ of course, as much to keep the horror of the perspective in view as to underscore the naturalistic character of the explanations. What follows is a dry-run of sorts, an attempt to sketch what has brought about this extraordinary era of accelerating transformation. Keep in mind the ludicrous speculative altitudes involved, but also remember that all such attempts to theorize macrosocial phenomena suffer this liability. I don’t think it’s so important that the case be made as that some alternative be proposed at this point. For one, the mere existence of such an account, the bare fact of its plausibility, requires that intentionalists account for the superiority of their approach, and this, as we shall see below, can have a transformative effect on cognitive ecologies.

In zombie terms, the Enlightenment, as we think we know it, had nothing to do with the ‘power of reason’ to ‘emancipate,’ to free us from the tyranny of Kant’s ‘tutelary natures.’ This is the Myth. Likewise, Nietzsche’s Gegenaufklärung had nothing to do with somehow emancipating us from the tyrannical consequences of this emancipation. The so-called Counter-Enlightenment, or ‘postmodernism’ as it has come to be called, was a completion, or a consummation, if you wish. The antagonism is merely a perspectival artifact. Postmodernism, if anything, represents the processes characteristic of the zombie Enlightenment colonizing and ultimately overcoming various specialized fields of cultural endeavour.

To understand this one needs to understand something crucial about human nature, namely, the way understanding, all understanding, is blind understanding. The eye cannot be seen. Olfaction has no smell, just as touch has no texture. To enable knowledge, in other words, is to stand outside the circuit of what is known. A great many thinkers have transformed this observation into something both extraordinary and occult, positing all manner of inexplicable things by way of explanation, everything from transparencies to transcendentals to trace structures. But the primary reason is almost painfully mundane: the seeing eye cannot be seen simply because it is mechanically indisposed.

Human beings suffer ‘cognitive indisposition,’ or, as I like to call it, medial neglect, a ‘brain blindness’ so profound as to escape them altogether, to convince them, at every stage of their ignorance, that they could see pretty much everything they needed to see.

Now according to the Myth, the hundred-million-odd souls populating Europe in the 18th century shuffled about in unconscious acquiescence to authority, each generation blindly repeating the chauvinisms of the generation prior. The Enlightenment institutionalized inquiry, the asking of questions, and the asking of questions, far from merely setting up ‘choice situations’ between assertions, makes cognitive incapacity explicit. The Enlightenment, in other words, institutionalized the erosion of traditional authority, thus ‘freeing’ individuals to pursue other possible answers. The great dividend of the Enlightenment was nothing less than autonomy, the personal, political, and material empowerment of the individual via knowledge. They were blind, but now they could see—or at least so they thought.

Postmodernism, on the other hand, arose out of the recognition that inquiry has no end, that the apparent rational verities of the Enlightenment were every bit as vulnerable to delegitimization (‘deconstruction’) as the verities of the tradition that it swept away. Enlightenment critique was universally applicable, every bit as toxic to successor as to traditional claims. Enlightenment reason, therefore, could not itself be the answer, a conviction that the increasingly profound technical rationalization of Western society only seemed to confirm. The cognitive autonomy promised by Kant and his contemporaries had proven too radical, missing the masses altogether, and stranding intellectuals in the humanities, at least, with relativistic guesses. The Enlightenment deconstruction of religious narrative—the ‘death of God’—was at once the deconstruction of all absolute narratives, all foundations. Autonomy had collapsed into anomie.

This is the Myth of the Enlightenment, at least in cartoon thumbnail.

But if we set aside our traditional fetish for ‘reason’ and think of post-Medieval European society as a kind of information processing system, a zombie society, the story actually looks quite different. Far from the death of authority and the concomitant birth of a frightening, ‘postmodern autonomy,’ the ‘death of God’ becomes the death of supervision. Supervised learning, of course, refers to one of the dominant learning paradigms in artificial neural networks, one where training converges on known targets, as opposed to unsupervised learning, where training converges on unknown targets. So long as supervised cognitive ecologies monopolized European society, European thinkers were bound to run afoul of the ‘only-game-in-town effect,’ the tendency to assume claims true for the simple want of alternatives. There were gains in cognitive efficiency, certainly, but they arose adventitiously, and had to brave selection in generally unforgiving social ecologies. Pockets of unsupervised learning appear in every supervised society, in fact, but in the European case, the economic and military largesse provided by these isolated pockets assured they would be reproduced across the continent. The process was gradual, of course. What we call the ‘Enlightenment’ doesn’t so much designate the process as the point when the only-game-in-town effect could no longer be sustained among the learned classes. In all corners of society, supervised optima found themselves competing more and more with unsupervised optima—and losing. What Kant and his contemporaries called ‘Enlightenment’ simply made explicit an ecology that European society had been incubating for centuries, one that rendered cognitive processes responsive to feedback via empirical and communicative selection.
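
Since the argument leans on this machine-learning distinction, a minimal sketch might help pin it down. The following Python fragment is purely illustrative, not anything drawn from the literature: the supervised learner (a perceptron-style classifier) is corrected against labels a ‘supervisor’ hands down, while the unsupervised learner (k-means) converges on cluster centres nobody specified in advance.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))               # 200 two-dimensional samples

    # Supervised learning: training converges on KNOWN targets.
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # labels handed down by a 'supervisor'
    w = np.zeros(2)
    for _ in range(100):
        pred = (X @ w > 0).astype(float)
        w += 0.01 * (X.T @ (y - pred))          # error against given answers drives learning

    # Unsupervised learning: training converges on UNKNOWN targets.
    # k-means finds cluster centres that no one specified in advance.
    centres = X[rng.choice(len(X), size=2, replace=False)]
    for _ in range(100):
        nearest = ((X[:, None, :] - centres) ** 2).sum(-1).argmin(1)
        centres = np.array([X[nearest == k].mean(0) for k in range(2)])

The asymmetry is the one the paragraph above trades on: the first loop can never do better than its supervisor’s answers, while the second answers only to the structure of its inputs.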

On an information processing view, in other words, the European Enlightenment did not so much free up individuals as cognitive capacity. Once again, we need to appreciate the zombie nature of this view, how it elides ethical dimensions. On this view, traditional chauvinisms represent maladaptive optima, old fixes that now generate more problems than they solve. Groups were not so much oppressed, on this account, as underutilized. What we are prone to call ‘moral progress’ in folk political terms amounts to the optimization of collective neurocomputational resources. These problematic ethical and political consequences, of course, have no bearing on the accuracy of the view. Any cultural criticism that makes ideological orthodoxy a condition of theoretical veracity is nothing more than apologia in the worst sense, self-serving rationalization. In fact, since naturalistic theories are notorious for the ways they problematize our moral preconceptions, you might even say this kind of problematization is precisely what we should expect. Pursuing hard questions can only be tendentious if you cannot countenance hard answers.

The transition from a supervised to an unsupervised learning ecology was at once a transition from a slow selecting to a rapid selecting ecology. One of the great strengths of unsupervised learning, it turns out, is blind source separation, something your brain wonderfully illustrates for you every time you experience the famed ‘cocktail party effect.’ Artificial unsupervised learning algorithms, of course, allow for the causal sourcing of signals in a wide variety of scientific contexts. Causal sourcing amounts to identifying causes, which is to say, mechanical cognition, which in turn amounts to behavioural efficacy, the ability to remake environments. So far as behavioural efficacy cues selection, then, we suddenly find ourselves with a social ecology (‘science’) dedicated to the accumulation of ever more efficacies—ever more power over ourselves and our environments.
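
A toy illustration of blind source separation, for the curious: the sketch below uses scikit-learn’s FastICA, a standard unsupervised algorithm for the job, on synthetic stand-ins for two overlapping ‘voices’ (the waveforms and mixing matrix are arbitrary choices, nothing canonical).

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two independent 'voices', heard only as a pair of blended mixtures,
    # the way two ears hear overlapping speakers at a cocktail party.
    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # synthetic stand-ins
    mixing = np.array([[1.0, 0.5],
                       [0.6, 1.0]])
    mixed = sources @ mixing.T                              # all the 'ears' receive

    # Unmix with no labels and no knowledge of the mixing matrix.
    recovered = FastICA(n_components=2, random_state=0).fit_transform(mixed)

Given only the blended mixtures, the algorithm recovers the hidden sources (up to scale and ordering), with no labels and no model of the room: causal sourcing in miniature.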

Power begets power; efficiency, efficiency. Human ecologies were not only transformed, they were transformed in ways that facilitated transformation. Each new optimization, once selected and incorporated, generated ecological changes, social or otherwise, changes bearing on the efficiency of previous optimizations. And so the shadow of maladaptation, or obsolescence, fell across all existing adaptations, be they behavioural or technological.

The inevitability of maladaptation, of course, merely expresses the contingency of ecology, the fact that all ecologies change over time. In ancestral (slow selecting) ecologies, the information required to cognize this process was scarce to nonexistent: the only game in town effect—the assumption of sufficiency in the absence of alternatives—was all but inevitable. Given the way cognitive invariance cues cognitive stability, the fact that we can trust our inheritance, the spectre of accelerating obsolescence could only represent a threat.

“Expect the unexpected,” a refrain that only modernity could abide, wonderfully recapitulates, I think, the inevitability of postmodernism. Cognitive instability became the only cognitive stability, the only humanistic ‘principle’ remaining. And thus the great (perhaps even perverse) irony of philosophical modernity: the search for stability in difference, and the development, across the humanities, of social behaviours (aesthetic or theoretical) bent on making things obsolete.

Rather than wait for obsolescence to arise out of ecological transformation, many began forcing the issue, isolating instances of the only game in town effect in various domains of aesthetic and theoretical behaviour, and adducing alternatives in an attempt to communicate their obsolescence. Supervised or ‘traditional’ ecologies readily broke down. Unsupervised learning ecologies quickly became synonymous with cognitive stability—and more attractive for it. The scientific fetish for innovation found itself replicated in humanistic guise. Despite the artificial nature of this process, the lack of any alternative account of semantic instability gave rise to a new series of only game in town effects. What had begun as an unsupervised exploration of solution spaces quickly lapsed into another supervised ecology. Avant-garde and post-structuralist zombies adapted to exploit microsocial ecologies they themselves had fashioned.

The so-called ‘critique of Enlightenment reason,’ whether implicit in aesthetic behaviour or explicit in theoretical behaviour, demonstrates the profundity of medial neglect, the blindness of zombie components to the greater machinery compelling them. The Gegenaufklärung merely followed through on the actual processes of ‘ratcheting ecological innovation’ responsible, undermining, as it did, the myths that had been attached to those processes in lieu of actual understanding. In communicating the performative dimension of ‘reason’ and the irrationality of Enlightenment rationality, postmodernism cleared a certain space for post-intentional thinking, but little more. Otherwise it is best viewed as an inadvertent consummation of a logic it can only facilitate and never ‘deconstruct.’

Our fetish for knowledge and innovation remains. We have been trained to embrace an entirely unknown eventuality, and that training has been supervised.

The Discursive Meanie

by rsbakker

So I went to see Catherine Malabou speak on the relation between deep history, consciousness and neuroscience last night. As she did in her Critical Inquiry piece, she argued that some new conceptuality was required to bridge the natural historical and the human, a conceptuality that neuroscience could provide. When I introduced myself to her afterward, she recognized my name, said that she had read my post, “Malabou, Continentalism, and New Age Philosophy.” When I asked her what she thought, she blushed and told me that she thought it was mean.

I tried to smooth things over, but for most people, I think, expressing aggression in interpersonal exchanges is like throwing boulders tied to their waists. Hard words rewrite communicative contexts, and it takes the rest of the brain several moments to catch up. Once she tossed her boulder it was only a matter of time before the rope yanked her away. Discussion over.

I appreciate that I’m something of an essayistic asshole, and that academics, adapted to genteel communicative contexts as they are, generally have little experience with, let alone stomach for, the more bruising environs of the web. But then the near universal academic tendency to take the path of least communicative resistance, to foster discursive ingroups, is precisely the tendency Three Pound Brain is dedicated to exposing. The problem, of course, is that cuing people to identify you as a threat pretty much guarantees they will be unable to engage you rationally, as was the case here. Malabou had dismissed me, and so my arguments simply followed.

How does one rattle ingroup assumptions as an outgroup competitor, short of disguising oneself as an ingroup sympathizer, that is? Interesting conundrum, that. I suppose if I had more notoriety, they would feel compelled to engage me…

Is it time to rethink my tactics?

The Dim Future of Human Brilliance

by rsbakker

Moths to a flame

Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect that human cognitive biology is every bit as vulnerable to ecological change as any other biological system.

Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here for quite some time at Three Pound Brain. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).

So the questions I think we need to be asking are:

What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.

What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the ways human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis-à-vis streetlights, porch lights, and so on, causing them to spiral directly into them. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.
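
The geometry of the moth’s failure is simple enough to simulate. The Python sketch below (all parameters arbitrary) implements the transverse-orientation rule itself: hold the light at a fixed angular offset at every step. With a light at optical infinity, like the moon, the rule yields a straight path; aimed at a nearby porch light, the very same rule generates an equiangular spiral terminating at the bulb.

    import numpy as np

    def fly(light, start, offset=np.radians(60), step=0.05, n=2000):
        """Transverse orientation: keep the light at a fixed angular offset."""
        pos = np.array(start, dtype=float)
        path = [pos.copy()]
        for _ in range(n):
            dx, dy = light - pos
            heading = np.arctan2(dy, dx) + offset   # re-aim at every step
            pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
            path.append(pos.copy())
        return np.array(path)

    # Against a nearby point source, the moon-calibrated rule spirals inward.
    spiral = fly(light=np.array([0.0, 0.0]), start=[5.0, 0.0])

Nothing in the rule changes between the two cases; only the ecology does.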

Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are semantically overdetermined otherwise. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of simply how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.

And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’

The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.

This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.

You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there are so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.

Flashlight Philosophy

by rsbakker

I want to believe

Imagine you’re shopping for groceries and this thick, impenetrable fog rolls into town, and the power goes out, and a chorus of screams rings out from the surrounding town, until finally, everything goes eerily quiet. Then people begin disappearing, somehow sucked into the fog roiling just outside the windows. You and the surviving customers rush to the flashlight section, arm yourselves with visibility, in effect, then take turns probing the fog with your lights.

Everyone agrees that something is out there, and that whatever that something is, it’s grabbing shoppers one by one. And lo, almost everyone, peering into the noxious fume, claims they can see what they are up against. But the problem is that no one agrees—everyone sees something completely different. Some see winged creatures, others terrestrial, but everyone insists they see only that type of creature, and that the others must be wrong.

The survivors begin sorting themselves according to the affinities in their views, and soon we find ourselves with three different ‘flashlight tribes,’ those convinced the threat is airborne (though they disagree on morphological specifics), those convinced the threat is terrestrial (though they also disagree on the morphological specifics), and those that think something fishy is going on. People disappear one by one, and the aerial partisans say, “Yes, I saw it! Something swooped down from above and carried them off,” while the terrestrial partisans say, “Yes, I saw it! Something reared up from the ground and carried them off,” and the skeptics say, “C’mon, guys, obviously something fishy is going on here!”

So they alone begin running experiments, rolling beach-balls out into the fog, setting up cameras, doing everything they can to gather more information.

Now consider what Levi Bryant has to say about the “methodology of philosophy”:

Put in Heideggerian terms, we could say that a philosophy of biology interrogates the “alethetic field” through which the bios is open as an object that is given to the investigating biologist. This, of course, requires some knowledge of the field of biology and its present state of knowledge. Often philosophers forget that they need to acquaint themselves with the other disciplines they investigate and therefore end up proceeding on the basis of doxa or the prejudices of folk biology. A philosophy of biology must be familiar with the field that it takes as an object. However, it does something quite different than what is done in this discipline. In making the concepts of this alethetic field its object, it tries to bring these concepts before reflective consciousness, to explore their interdependence, to uncover what is unspoken in them, and it perpetually shuttles back and forth between those beings we refer to as living and this space of conceptuality. In doing so, philosophy often discovers something unspoken in these concepts.

This is about as concise a description of the Myth of Making Explicit as I’ve come across, the comforting idea that philosophy somehow sheds light on what comes before scientific theoretical cognition. We solve things all the time, we humans, but thanks to medial neglect, we have no intuitive means of solving our solving, no way of sourcing our thoughts or behaviours. So what do we do? We invent sources, sets of systematic constraints that rationalize our thoughts and behaviours; we posit things like ‘language games,’ ‘grammars,’ ‘norms,’ ‘conceptual schemes,’ ‘conditions of possibility,’ ‘alethic fields,’ and so on. The problem, however, is that the deliverances of ‘reflective consciousness,’ as Levi calls it, never suffice to arbitrate between any of these formulations. Everybody is left swearing by their own flashlight. More than one hundred generations on, everyone is still arguing posits. As a partisan of this methodology, Levi assumes its efficacy, the ability to theoretically cognize the darkness that comes before human thought and behaviour. On the strength of his flashlight, he believes that something terrestrial and/or aerial has to inhabit the impenetrable mists. He literally believes that he and others are making something explicit, as opposed to merely making something up.

And this is the real question behind any question of methodology: How do you know? How do you know you’re making things explicit rather than making things up?

The thing to note, of course, is that Bryant’s answer is no answer. Claiming that philosophy tackles the darkness that comes before cognition in no way answers the question of how philosophy tackles the darkness that comes before cognition. Referencing controversial posits such as ‘concepts,’ or factually unreliable cognitive modes like ‘reflective consciousness’ simply underscores the theoretical plight that he and other traditional philosophers find themselves in. It amounts to saying, “We just aim our flashlights and squint real, real hard.”

But the bigger problem plaguing Bryant’s answer is that it is simply not the case that biology runs into some kind of fundamental limit when it comes to the question of itself. In fact, the one thing we know for sure is that brain function does come before thought and behaviour. Thus the billions being plowed into cognitive scientific research. The image of the ontologically/conceptually blind scientist being led by the ontologically/conceptually sighted philosopher is becoming an ever more preposterous one, an increasingly obvious example of prescientific conceit. With every passing year, it becomes more a matter of the empirically sighted scientist leaving the empirically blind philosopher behind.

“If,” Bryant writes, “it is hopeless to seek a philosophical methodology, then this is because philosophy is a form of thought that precedes anything like the givenness of an object that could then be investigated empirically.” The domain of philosophy, he would have us believe, lies in the darkness that comes before cognition. And yet all across the cognitive sciences one finds researchers tackling this very domain, not simply ‘theorizing,’ but reverse-engineering innumerable cognitive capacities (thus launching us into an engineering future we can scarce imagine). Biology isn’t something passed down from on high, something somehow outside (above, beyond, before) the biological. Biology is itself biological, the physical expression of capacities turning on evolution.

The high-dimensional story of biology, the theory or motley of theories arising out of all the data amassed, is the story of the darkness that comes before. It will be the story that sources our thought and our behaviour in an ever complicating (ever empowering) picture. The “something quite different” that sets philosophy apart, when all is said and done, is the reliance on sparse and ambiguous information (the deliverances of ‘reflective consciousness’) to make theoretical claims without hope of arbitration.

And this leaves us with a far different way to understand what Bryant calls the ‘philosophical situation.’ He refers to the famous quote from the Sophist that Heidegger uses as an epigraph for Being and Time, where the Eleatic stranger reconstructs grounds for demanding some clarification of being, referring to the paradox of knowing how to use the term ‘being’ without understanding being. This ground of perplexity, and the corresponding need for clarification, are what Bryant identifies as the ‘before’ of biological thought. The darkness requiring illumination.

This epigraph so wonderfully illustrates the crisis now embroiling traditional, preemptive philosophical modes. On the one hand it underscores how nothing has been resolved since Plato. Twenty-four centuries of futile inquiry, in my humble opinion, out and out screams that the ‘philosophical situation’ is a kind of cognitive crash space, a place where systems (like intentional cognition) adapted to neglect what’s going on are asked to tell us what’s going on. On the other hand it demonstrates the profundity of our metacognitive innocence, the fact that we are so blind to ourselves as to be everywhere perplexed by what we already know, to be perpetually baffled by the apparent miracle of our understanding.

What are we? The philosopher wants to convince you that only he gets to answer this question in its most fundamental form. Of course, since no philosopher can agree on the answer, this is tantamount to declaring that no one gets to answer this question. And this borders on the farcical, as do all claims to authority (conceptual or otherwise) where no authority is recognized.

Bottom line? Philosophy only has post hoc guesses, and nothing more.

The science, meanwhile, is turning us inside out as you read.

Maybe it’s time to get real, to come to grips with the ugly, as opposed to the flattering.

Orbital Corpses

by rsbakker

Speaking of dead worlds…


It’s hard to express how cool it is to map out the final corners of the World.


To finally ink in Golgotterath, where it lies waiting.

If we don’t know how it ends, then at least we know where.

Dead World (by Paul J. Ennis)

by rsbakker

Futility

What’s it like to really give up on philosophy? I don’t mean to give up on a specific brand of philosophy or even to tune out and churn out something akin to it. I mean embracing the knowledge that philosophy is no longer worth doing. I can only answer with a response I would have chastised a student for saying: I can only speak for myself. At some point I came to fully own up to the impossibility that I might work something out about this world that was positive. That I might find a niche in philosophy that I could latch onto and develop, bit by bit. Maybe it might even impress someone at a conference (assuming anyone would even be listening at a conference, they never are). All I really learned from philosophy is that it is very unlikely a bunch of people might reason their way toward an understanding of how it goes with the world. Except in that quirky round-about way where philosophy demonstrates the limitations of reasoning stripped of any lead. You need a bit of lead to weigh things down. But what happens when you realise you could just describe the lead and leave it at that?

If philosophy has a bunch of questions it grapples with, perhaps the only decent one left is consciousness. It’s got this edge that apparently makes it resistant to reduction to neurobiological processes such that, contrary to everything we know about reality, it is somehow distinct from nature. Now, there is an entire botnet of thinkers that will, for a fee, find a way to say ‘well it’s both in nature and distinct from it,’ but better them than me. It’s a lot of fuss with little reward. Unless you really think future generations are going to care that you defended the Real or objects or invented the future, all positions currently on offer at discounted prices. My point is that philosophy is not just weird, but doing it is weirder. Even better, it has some hilariously entertaining group dynamics. Philosophy is a discipline where you can have a guy defending the necessity of diversity whilst railing against another group doing the exact same. The kind of place where one bully shouts over another about just how damned intolerant the other fellow is. Lots of fellows too. The kind of discipline, to be sure, where men will chastise other men for how their group of men has too many men.

Since there is no common ground anymore, outside the mainstays of security and other mundane issues, we end up with little more than a situation of jockeying for status. Assuming, that is, one is comfortable enough to do so. There are marginalised groups everywhere, but unless you’ve just decided to volunteer or something, I’m going to take the oh-so-bold wager you are mostly in it so that others know you are a really good guy. Or, on the flipside, a rogue. Either way it’s ugly. Whether it’s wilful intellectual censorship or calculated trolling, it’s mostly a clamour for the goods. ‘Life is a war of all against all,’ as the eminently reasonable Hobbes once said. That’s not a bad place to start from. Why? Because it’s honest. It has a ring of truth to it. We organise ourselves for peace, security, and the path of least resistance. In doing so we operate from a suite of facts, of how it goes with the world, and find niches where we might take on a few adventures, like improving our lot. And if this sounds like what you say when with friends that’s because it’s the one group you don’t lie so often to.

I think that’s more or less what consciousness looks like when viewed without romance. As ultra-sophisticated animals we have evolved in a certain direction. There’s a lot in there about just getting on and, indeed, getting along through empathy. This is not entirely neat. Empathy is limited, associated with bonds and kinship often, and it flows into protection. And we know all this exists at least partially because of the threat of other humans and their groups. Even in our own groups the pact is partially rooted in the knowledge that there is a violent streak in us. I say partially because I’m appeasing. Because I don’t want those who dislike such readings to be upset. I’m signalling I’m not so bad. The things we learned and shared were also, and here is a word I know other groups will dislike, arrived at through trade. Our cultural evolution is intimately bound up with the traders who moved between the semi-settled and the settled. Information, tactics, methods, goods, means and ends. Traded. Enough that trade, alongside the embodied sovereign interests we call nations, is intrinsic to our species.

If capitalism is evil then so too are humans. Capitalism is such a clear-eyed ordering of how we are in the world it is no wonder that it no longer has any serious competitors (this, in itself, was always a game of some players at the table operating with one hand tied behind their backs). It captures our mixed feelings about being here at all. It offers the possibility, no matter how remote, of generating a social force field known as wealth. It includes risk in the chase for that wealth. And also every grimy, awful aspect of what our species will do when reward is high enough. It is so essential that those who manage to truly move beyond it take on a holy sheen. It can even present you with the vilest caricature of a human and make you ponder what you would do in their shoes. Most important of all, it’s nothing more than a powerful idea. Like its chief representative, fiat currency, it’s a cognitive agreement. This is worth something and it is worth something because that’s the agreed upon organisational field one is in. But this organisational field is not arbitrary. It’s an expression of what humans need to function. It came about because it worked. Not from the ether.

It worked. Humans and heuristics, peas in a pod. The thing about heuristics is that when you try to grapple with them you are trying to retroactively explain something your brain pushed toward for ends that may not have been all that clear during the push. But that is how we have tended to make discoveries. We do first and fail. Eat berries and die. Try again (well, someone else still alive would) and live. Then as time passes, not even deep time, mind, it seems it has always been so. Since we are especially good at this we might even seem special, bearing an almost supernatural ability to adapt, except, of course, it only looks this way because most of the time we have very limited information about what is going on. Leaving aside the very natural deceptions humans practise as they go about their business there is the much weirder structural fact that the brain, as Bakker has shown, is pretty good at hiding information about its own operations from…well, itself, or us, or whatever tangle of words you prefer.

Heuristically it’s better not to know too much. As is well known it is easier to do something when you are not thinking too much about it than it is when you do. Consider that for a moment. Although we value reason as one of our highest virtues, when it comes to doing something it’s best not to think about it too much. You can practise, get better, learn, be trained, and so on, but ultimately your ambition is to perform the action without cognition throwing you off. Now, let’s apply this to self-reflection. By its very nature self-reflection, since it involves thinking too much about something (in this case, thinking), is bound to be tricky. Humans, nonetheless, have engaged in this practice for quite a while. We celebrate Socrates precisely for his ability to force others to trip over themselves as they try. Unless you are Nietzsche and you call this out as ugly. Famously, this two-thousand-year-old practice has yielded pretty much no clues about the true nature of consciousness. Indeed, the only reason that sentence even matters is because almost every other discipline philosophy concerned itself with, with the exception of maybe ethics, is now analysed by specialists elsewhere. No wonder philosophers are so precious about it.

Precious because nobody likes to have spent a long time working on some thinker or another and then have to admit they have learned very little beyond a few historical curios. That’s pretty much the state of play in any contemporary philosophy department. Ashen-faced at thirty and defending a tiny set of ideas to maybe thirty other specialists across the entire planet, the overworked philosophy academic has basically ceased original production in favour of the repetition of a few notes they know by heart. On this score I’m not even railing against those who at least keep zipping around searching. Rather, what stuns me, and I’m not stunned easily these days, is how someone in a discipline dedicated to dropping bad ideas when faced with better ones spends their entire time building defences to ensure they never have to.

Years and years ago at some god-awful leftist event someone told me that Trotsky had said something like, ‘imagine all the Aristotles that have gone unnoticed amongst the working class?’ I’ve always liked this quote, but I guess my point is imagine all the Aristotles that have been lost to organised philosophy (and yes, I do want you to make that association)? I’m going to stop here because, as Nick Land once said, concluding is ugly.


A Secret History of Enlightened Animals (by Ben Cain)

by rsbakker

Stair of Being


As proud and self-absorbed as most of us are, you’d expect we’d be obsessed with reading history to discover more and more of our past and how we got where we are. But modern historical narratives concentrate on the mere facts of who did what to whom and exactly when and where such dramas played out. What actually happened in our recent and distant past doesn’t seem grandiose enough for us, and so we prefer myths that situate our endeavours in a cosmic or supernatural background. Those myths can be religious, of course, but also secular as in films, novels, and the other arts. We’re so fixated on ourselves and on our cultural assumptions that we must imagine we’re engaged in more than just humdrum family life, business, political chicanery, and wars. We’re heroes in a universal tale of good and evil, gods and monsters. We thereby resort to the imagination, overlooking the existential importance of our actual evolutionary transformation. When animals became people, the universe turned in its grave.


Awakening from Animal Servitude unto Alienation

The so-called wise way of life, that of our species, originates from the birth of an anomalous form of consciousness. That origin has been widely mythologized to protect us from the vertigo of feeling how fine the line is between us and animals. Thus, personal consciousness has been interpreted as an immaterial spirit or as a spark left behind by the intrusion of a higher-dimensional realm into fallen nature, as in Gnosticism, or as an illusion to maintain the play of the slumbering God Brahman, as in some versions of Hinduism, and so on and so forth. But the consciousness that separates people from animals is merely the particular higher-order thought—that is, a thought about thoughts—that you (your lower-order thoughts) are free in the sense of being autonomous, that you’re largely liberated from naturally-selected, animal processes such as hunting for food or seeking mates in the preprogrammed ways. That thought eventually comes to lie in the background of the flurry of mental activity sustained by our oversized brains, along with the frisson of fear that accompanies the revelation that as long as we can think we’re free from nature, we’re actually so. This is because such a higher-order thought, removed as it is from the older, animal parts of our brain, is just what allows us to independently direct our body’s activities. The freedom opened up by human sentience is typically experienced as a falling away from a more secure position. In fact, our collective origin is very likely encapsulated in each child’s development of personhood, fraught as that is with anxiety and sadness as well as with wonder. Children cry and sulk when they don’t get their way, which is when they learn that they stand apart from the world as egos who must strive to live up to otherworldly social standards.

Animals become people by using thought to lever themselves into a black hole-like viewpoint subsisting outside of nature as such. The results are alienation and the existential crisis which are at the root of all our actions. Organic processes are already anomalous and thus virtually miraculous. Personhood represents not progress, since the values that would define such an advance are themselves alien and unnatural by being anthropocentric, but a maximal state of separation from the world, the exclusion of some primates from the environments that would test their genetic mettle. Personal consciousness is the carving of godlike beings from the raw materials of animal slaves, by the realization that thoughts—memories, emotions, imaginings, rational modeling for the sake of problem-solving—comprise an inner world whose contents need not be dictated by stimuli. The cost of personhood, that is, of virtual godhood in the otherwise mostly inanimate universe, is the suffering from alienation that marks our so-called maturity, our fall from childhood innocence whereupon we land in the adult’s clownish struggles with hubris. Our independence empowers us to change ourselves and the world around us, and so we assume we’re the stars of the cosmic show or at least of the narrative of our private life. But because the business of our acting like grownups is witnessed by hardly any audience at all—except in the special case of celebrities who are ironically infantilized by their fame, because the wildly inhuman cosmos is indifferent to our successes and failures—we typically develop into existential mediocrities, not heroes. We overcompensate for the anguish we feel because our thoughts sever us from everything outside our skull, becoming proud of our adult independence; we’re like children begging their parents to admire their finger paintings. The natural world responds with randomness and indiscriminateness, with luck and indifference, humiliating us with a sense of the ultimate futility of our efforts. Our oldest solution is to retreat to the anthropocentric social world in which we can honour our presumed greatness, justly rewarding or punishing each other for our deeds as we feel we deserve.


Hypersocialization and the Existential Crisis of Consciousness

The alienation of higher consciousness is followed, then, by intensive socialization. Animals socialize for natural purposes, whereas we do so in the wake of the miracle of personhood. Our relatively autonomous selves are miraculous not just because they’re so rare (count up the rocks and the minds in the universe, for example, and the former will so outnumber the latter that minds will seem to have spontaneously popped into existence without any general cause), but because whereas animals adapt to nature, conforming to genetic and environmental regularities, people negate those regularities, abandoning their genetic upbringing and reshaping the global landscape. The earliest people channeled their resentment against the world they discovered they’re not wholly at home in, by inventing tools to help them best nature and its animal slaves, but also by forming tribes defined by more and more elaborate social conventions. The more arbitrary the implicit and explicit laws that regulate a society, the more frenzied its members’ dread of being embedded in a greater, uncaring wilderness. Again, human societies are animalistic in so far as they rely on the structure of dominance hierarchies, but whereas alpha males in animal groups overpower their inferiors for the natural reason of maintaining group cohesion to protect the alphas whose superior genes are the species’ best hope for future generations, human leaders adopt the pathologies of the God complex. Indeed, all people would act like gods if only they could sustain the farce. Alas, just as every winning lottery ticket necessitates multitudes of losers, every full-blown personal deity depends on an army of worshippers. Personhood makes us all metaphysically godlike with respect to our autonomy and our liberation from some natural, impersonal systems, but only a lucky minority can live like mythical gods on Earth.

We socialize, then, to flatter our potential for godhood, by elevating some of our members to a social position in which they can tantalize us with their extravagant lifestyles and superhuman responsibilities. We form sheltered communities in which we can hide from nature’s alien glare. Our elders, tyrants, kings, and emperors lord it over us and we thank them for it, since their appallingly decadent lives nevertheless prove that personhood can be completed, that an absolute fall from the grace of animal innocence isn’t asymptotic, that our evolution has a finite end in transhumanity. Our psychopathic rulers are living proofs that nature isn’t omnipresent, that escape is possible in the form of insanity sustained by mass hallucination. We daydream the differences between right and wrong, honour and dishonour, meaning and meaninglessness. We fill the air with subtle noises and imagine that those symbols are meant to lay bare the final truth. We thus mitigate the removal of our mind from the world, with a myth of reconciliation between thoughts and facts. But language was likely conceived of in the first place as a magical instrument, that is, as an extension of mentality into nature which was everywhere anthropomorphized. Human tribes were assumed to be mere inner circles within a vast society of gods, monsters, and other living forces. We socialized, then, not just to escape to friendly domains to preserve our dignity as unnatural wonders, but to pretend that we hadn’t emerged just by a satanic/promethean act of cognitive defiance, with the ego-making thought that severs us from natural reality. We childishly presumed that the whole universe is a stage populated by puppets and actors; thus, no existential retreat might have been deemed necessary, because nature’s alienness was blotted out in our mythopoeic imagination. As in Genesis, God created by speaking the world into being, just as shamans and magicians were believed to cast magical spells that bent reality to their will.

But every theistic posit was part of an unconscious strategy to avoid facing the obvious fact that since all gods are people, we’re evidently the only gods. Nevertheless, having conceived of theistic fictions, we drew up models to standardize the behaviour of actual gods. Thus, the Pharaoh had to be as remote and majestic as Osiris, while the Roman Emperor had to rule like Jupiter, the Raj had to adjudicate like Krishna, the Pope had to appear Christ-like, and the U.S. President has to seem to govern like your favourite Hollywood hero. The double standard that exempts the upper classes from the laws that oppress the lowly masses is supposed to prevent an outbreak of consciousness-induced angst. Social exceptions for the upper class work with mass personifications and enchantments of nature, and those propagandistic myths are then made plausible by the fact that superhuman power elites actually exist. Ironically, such class divisions and their concomitant theologies exacerbate the existential predicament by placing those exquisite symbols of our transcendence (the power elites) before public consciousness, reminding us that just as the gods are prior to and thus independent of nature, so too we who are the only potential or actual gods don’t belong within that latter world.


Scientific Objectivity and Artificialization

Hypersocialization isn’t our only existential stratagem; there’s also artificialization as a defense against full consciousness of our unnatural self-control. Whereas the socializer tries to act like a god by climbing social ladders, bullying his underlings, spending unseemly wealth in generational projects of self-aggrandizement, and creating and destroying societal frameworks, the artificializer wants to replace all of nature with artifacts. That way, what began as the imaginary negation of nature’s inhuman indifference to life, in the mythopoeic childhood of our species, can be fulfilled when that indifference is literally undone by our re-engineering of natural processes.

To do that, the artificializer needs to think, not just to act, like a god. That requires forming cognitive programs that don’t depend on the innate, naturally-selected ones. Cognitive scientists maintain that the brain’s ability to process sensations, for example, evolved not to present us with the absolute truth but to ensure our fitness to our environment, by helping us survive long enough to sexually reproduce. Animal neural pathways differ from personal ones in that the former serve the species, not the individual, and so the animal is fundamentally a puppet acting out its life cycle as directed by its genetic programming and by certain environmental constraints. Animals can learn to adapt their behaviour to their environment and so their behaviour isn’t always robotic, but unless they can apply their learning towards unnatural ends, such as by developing birth control techniques that systematically thwart the pseudo-goals of natural selection, they’ll think as animals, not as gods. Animals as such are entirely natural creatures, meaning that in so far as their behaviour is mediated by an independent control center, their thinking nevertheless is dedicated to furthering the end of natural selection, which is just that of transmitting genes to future generations. By contrast, gods don’t merely survive or even thrive. Insects and bacteria thrive, as did the dinosaurs for millions of years, but none were godlike because none were existentially transformed by conscious enlightenment, by a cognitive black hole into which an animal can fall, creating the world of inner space.

People, too, have animal programming, such as the autonomic programs for processing sensory information. Social behaviour is likewise often purely animalistic, as in the cases of sex and the power struggle for some advantage in a dominance hierarchy. Rational thinking is less so and thus less natural, meaning more anti-natural in that it serves rational ideals rather than just lower-order aims. To be sure, Machiavellian reasoning is animalistic, but reason has also taken on an unnatural function. Whereas writing was first used for the utilitarian purpose of record keeping, reason in the Western tradition was initially not so practical. The Presocratics argued about metaphysical substances and other philosophical matters, indicating that they’d been largely liberated from animal concerns of day-to-day survival and were exploring cognitive territory that’s useful only from the internal, personal perspective. Who am I really? What is the world, ultimately speaking? Is there a worthy difference between right and wrong? Such philosophical questions are impossible without rational ideals of skepticism, intellectual integrity, and love of knowledge even if that knowledge should be subversive—as it proved to be in Socrates’ classic case.

While the biblical Abraham was willing to sacrifice his son for the sake of hypersocializing with an imaginary deity, Socrates died for the antisocial quest of pursuing objective knowledge that inevitably threatens the natural order along with the animal social structures that entrench that order, such as the Athenian government of his day. Socrates cared not about face-saving opinions, but about epistemic principles that arm us with rationally-justified beliefs about how the world might be in reality. Much later, in the Scientific Revolution, rationalists (which is to say philosophers) in Europe would revive the ancient pagan ideal of reasoning regardless of the impact on faith-based dogmas. Scientists like Isaac Newton developed cognitive methods that were counterintuitive in that they went against the grain of more natural human thinking that’s prone to fallacies and survival-based biases. In addition, he served rational institutions, namely the Royal Society and Cambridge, which rivaled the genes for control over the enlightened individual’s loyalty. Moreover, the findings of those cognitive methods were symbolized using artificial languages such as mathematics and formal logic, which enabled liberated minds to communicate their discoveries without the genetic tragicomedies of territorialism, fight-or-flight responses, hero worship, demagoguery, and the like that are liable to be triggered by rhetoric and metaphors expressed in natural languages.

But what is objective knowledge? Are scientists and other so-called enlightened rationalists as neutral as the indifferent world they study? No, rationalists in this broad sense are partly liberated from animal life but they’re not lost in a limbo; rather, they participate in another, unnatural process which I’m calling artificialization. Objectivity isn’t a purely mechanical, impersonal capacity; indeed, natural processes themselves have aesthetically interpretable ends and effective means, so there are no such capacities. In any case, the search for objective knowledge builds on human animalism and on our so-called enlightenment, on our having transcended our animal past and instincts. We were once wholly slaves to nature and we often behave as if we were still playthings of natural forces. But consciousness and hypersocialization provided escapes, albeit into fantasy worlds that nevertheless empowered us. We saw ourselves as being special because we became aware of the increasing independence of our mental models from the modeled territory, owing to the former’s ultra-complexity. The inner world of the mind emerged and detached from the natural order—not just metaphysically or abstractly, but psychologically and historically. That liberation was traumatic and so we fled to the fictitious world of our imagination, to a world we could control, and we pretended the outer world was likewise held captive to our mental projections. The rational enterprise is fundamentally another form of escape, a means of living with the burden of hyper-awareness. Instead of settling for cheap, flimsy mental constructions such as our gods, boogeymen, and the panoply of delusions to which we’re prone, and instead of hoarding divinity in the upper social classes that exercise their superpowers in petty or sadistic projects of self-aggrandizement, we saw that we could usurp God’s ability to create real worlds, as it were. We could democratize divinity, replacing impersonal nature with artificial constructs that would actually exist outside our minds as opposed to being mere projections of imagination and existential longing.

The pragmatic aspect of objectivity is apparent from the familiar historical connections between science, European imperialism, and modern industry. But it's apparent also from the analytical structure of scientific explanation itself. The existential point of scientific objectivity was, paradoxically, to achieve a total divorce from our animal side by de-personalizing ourselves, by restraining our desire for instant gratification, scolding our inner child and its playpen, the imagination, and identifying with rational methods. Whereas an animal relies on its hardwired programs or on learned rules-of-thumb for interpreting its environment, an enlightened person codifies and reifies such rules, suspending disbelief and siding with idealized or instrumental formulations of them so as to occupy a higher cognitive plane. Once removed from natural processes by this identification with rational procedures and institutions, with teleological algorithms, artificial symbols and the like, the animal becomes a person with a godlike view from outside of nature—albeit not an overview of what the universe really is, but an engineer's perspective on how the universe works mechanically from the ground up.

To see what I mean, consider the Hindu parable of the blind men who try to ascertain the nature of an elephant by touching its different body parts. One of the men feels a tusk and infers that the elephant is like a pipe. Another touches a leg and thinks the whole animal is like a tree trunk. Another touches the belly and believes the animal is like a wall. Another touches the tail and says the elephant is like a rope. Finally, another touches an ear and thinks the elephant is like a hand fan. One of the traditional lessons of this parable is that we can fallaciously overgeneralize, mistaking the part for the whole, but that isn't my point about science. My point is rather that there is a difference between what the universe is in reality, which is what it is in its entirety in so far as all of its parts form a cohesive order, and how inquisitive primates choose to understand the universe with their divisive concepts and models. Scientists can't possibly understand everything in nature all at once; the word "universe" is a mere placeholder with no content adequate to the task of representing everything that's out there interacting to produce what we think of as distinct events. We have no name for the universe that gives us power over it by identifying its essence, as it were. So scientists analyze the whole, observing how parts of the world work in isolation, ideally in a laboratory. They then generalize their findings, positing a natural regularity or nomic relation between those fragments, as pictured by their model or theory. It's as if scientists were the blind men who lack the brainpower to cognize the whole of natural reality, and so study each part, perhaps hoping that by cooperating they can combine their partial understandings and arrive at some inkling of what the natural universe in general is. Unfortunately, the deeper we look into nature, the more complexity we find in its parts, and so the more futile any such plan for total comprehension becomes. Scientists can barely keep up with advances in their own subfields; the notion that anyone could master all the sciences as they currently stand is ludicrous, and there's still much in the world that isn't scientifically understood by anyone.

So whatever the scientist’s aspiration might be, the effect of science isn’t the achievement of complete, final understanding of everything in the universe or of the whole of nature. Instead, science allows us to rebuild the whole based on partial, analytical knowledge of how the world works. Suppose scientists discover an extraterrestrial artifact and they have no clue as to the artifact’s function, which is to say they have no understanding of what the object is in reality. Still, they can reverse-engineer the artifact, taking it apart, identifying the materials used to assemble it and certain patterns in how the parts interact with each other. With that limited knowledge of the artifact’s mechanical aspect, scientists might be able to build a replica or else they could apply that knowledge to create something more useful to them, that is, something that works in similar ways to the original but which works towards an end supplied by the scientists’ interests, not the alien’s. There would be no point in replicating the alien technology, since the artifact would be useless without knowledge of what it’s for or without even a shared interest in pursuing that alien goal. Replace the alien artifact with the natural universe and you have some measure of the Baconian position of human science. Of course, nature has no designer; nevertheless, we experience natural processes as having ends and so we’re faced with the choice of whether to apply our piecemeal knowledge of natural mechanisms to the task of reinforcing those ends or to that of adjusting or even reversing them. The choice is to act as stewards of God’s garden, as it were, or as promethean rebels who seek to be divine creators. There are still enclaves of native tribes living as retro-human animals and preserving nature rather than demolishing the wilderness and establishing in its place a technological wonderland built with knowledge of natural mechanisms. But the billions of participants in the science-driven, global monoculture have evidently chosen the promethean, quasi-satanic path.

 

Existentialism and our Hidden History

History is a narrative that often informs us indirectly about the present state of human affairs, by representing part of our past. Ancient historical narratives were more mythical than fact-based. The New Testament, for example, uses historical details to form an exoteric shell around the Gnostic, transhumanist suspicion that human nature is “fallen” to the extent that we surrender our capacity to transcend the animal life cycle; we must “die” to our natural bodies and be reborn in a glorious, unnatural or “spiritual” form. At any rate, like poetry, the mythical language of such ancient historical narratives is open to endless interpretations, which is to say that such stories are obscure. Josephus’s ancient histories of the Jewish people, written for a Roman audience, aren’t so mythologized but they’re no less propagandistic. By contrast, modern historians strive to avoid the pitfalls of writing highly subjective or biased narratives, and so they seek to analyze and interpret just the facts dug up by archeologists and textual critics. Modern histories are thus influenced by the exoteric presumption about science, which is that science isn’t primarily in the business of artificializing everything that’s wild in the sense of being out of our control, but is just a mode of inquiry for arriving at the objective truth (come what may).

Left out of this development in the telling of history is the existential significance of our evolutionary transition from being animals, which were at one with nature, to being people, who are implicitly if not consciously at war with everything nonhuman. What I've sketched above is part of our secret history; it's the story of what it means to be human, the story underlying all our endeavours. The significance of our standing between animalism and godhood is hidden and largely unknown or forgotten, because at the root of the purpose that drives us is the trauma of godlike consciousness, which we'd rather not relive. We each have our fill of that trauma in our passage from childhood innocence, which approximates the animal state of unknowing, to adult independence. Teen angst, which cultures combat with initiation rituals designed to distract the teenager with sanctioned, typically delusional pastimes, is the tip of the iceberg of pain that awaits anyone who recognizes the plight entailed by our very form of existence.

In Escape from Freedom, Erich Fromm argued that citizens of modern democracies are in danger of preferring the comfort of a totalitarian system as an escape from the ennui and dehumanization generated by modern societies. In particular, capitalistic exploitation of the working class and the need to assimilate to an environment run more and more by automated, inhuman machines are supposed to drive civilized persons into the arms of savage, authoritarian regimes. At least, this was Fromm's explanation of the Nazis' rise to power. A similar analysis could apply to the present degeneration of the Republican Party in the U.S. and to the militant jihadist movement in the Middle East. But Fromm's analysis is limited. To be sure, capitalism and technology have their drawbacks, and these may even contribute to totalitarianism's appeal, as Fromm shows. But this overlooks what liberal, science-driven societies and savage, totalitarian societies have in common. Both are flights from existential reckoning, as I've explained: the one revolves around artificialization (the Enlightenment's rationalist values of individual autonomy, which deteriorate until we're left with the fraud of consumerism), the other around hypersocialization (the cult of personality, restoring the sadomasochistic interplay between mythical gods and their worshippers). Fromm ignores the existential effect of the rational enlightenment that brought on modern science, democracy, and capitalism in the first place, that effect being our deification. By deifying ourselves, we prevent our treasured religions from being fiascos and we spare ourselves the horror of living in an inhuman wilderness from which our hyper-awareness alienates us.

We created the modern world to accelerate the rate at which nature is removed from our presence. Contrary to optimists like Steven Pinker, modernity hasn't fulfilled its promise of democratizing divinity, as I'd put it. Robber barons and more parasitic oligarchs do indeed resort to the older escape of hypersocialization, acting like decadent gods in relation to human slaves instead of focusing their divine creativity on our common enemy, the monstrous wilderness. The internet that trivializes everything it touches and the omnipresence of our high-tech gadgets do infantilize us, turning us into cattle-like consumers instead of unleashing our creativity and training us to be the indomitable warriors who alone could endure the promethean mission. This is because we, being the only gods that exist, are woefully unprepared for our responsibility, having retained our animal heritage in the form of our bodies, which infect most of our decisions with natural fears and prejudices. At any rate, the deeper story, of the animal that becomes a godlike person in order to obliterate the alienation that curses any flawed, lonely godling, helps explain why we now settle more often for the minor anxieties of living in modern civilization, to avoid the major angst of recognizing the existential importance of what we are.

On the Inapplicability of Philosophy to the Future

by rsbakker

By way of continuing the excellent conversation started in Lingering: The problem is that we evolved to be targeted, shallow information consumers in unified, deep information environments. As targeted, shallow information consumers we require two things: 1) certain kinds of information hygiene, and 2) certain kinds of background invariance. (1) is already in a state of free-fall, I think, and (2) is on the technological cusp. I don't see any plausible way of reversing the degradation of either ecological condition, so I see the prospects for traditional philosophical discourses only diminishing. The only way forward that I can see is being honest about the preposterous enormity of the problem. The thought that rebranding old tools, tools that never delivered even back when (1) and (2) were only beginning to erode, will suffice now that both are beginning to collapse strikes me as implausible.

The Lingering of Philosophy

by rsbakker


The 'Death of Philosophy' is something that circulates through the arterial underbelly of culture with quite some regularity, a theme periodically goosed whenever high-profile scientific figures bother to express their attitudes on the subject. Scholars in the humanities react the same way stakeholders in any institution react when their authority and privilege are called into question: they muster rationalizations, counterarguments, and pejoratives. They rally the troops with whooping war-cries of "positivism" or "scientism," list all the fields of inquiry where science holds no sway, and within short order the whole question of whether philosophy is dead begins to look very philosophical, and the debate itself becomes evidence that philosophy is alive and well—in some respects at least.

The problem with this pattern, of course, is that terms like 'philosophy' and 'science' are so overdetermined that no one ends up talking about the same thing. For physicists like Stephen Hawking or Lawrence Krauss or Neil deGrasse Tyson, the death of philosophy is obvious insofar as the institution has become almost entirely irrelevant to their debates. There are other debates, they understand, debates where scientists are the hapless ones, but they see the process of science as an inexorable, and yes, imperialistic one. More and more debates fall within its purview as the technical capacities of science improve. They presume the institution of philosophy will become irrelevant to more and more debates as this process continues. For them, philosophy has always been something to chase away. Since the presence of philosophers in a given domain of inquiry reliably indicates scientific ignorance of important features of that domain, the relevance of philosophers is inversely related to the maturity of a science.

They have history on their side.

There will always be speculation—science is our only reliable purveyor of theoretical cognition, after all. The question of the death of philosophy cannot be the question of the death of theoretical speculation. The death of philosophy as I see it is the death of a particular institution, a discourse anchored in the tradition of using intentional idioms and metacognitive deliverances to provide theoretical solutions. I think science is killing that philosophy as we speak.

The argument is surprisingly direct, and, I think, fatal to intentionalism, but as always, I would love to hear dissenting opinions.

 

1) Human cognition only has access to the effects of the systems cognized.

2) The mechanical structure of our environments is largely inaccessible.

3) Cognition exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

4) Cognition is heuristic.

5) Metacognition is a form of cognition.

6) Metacognition also exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

7) Metacognition is also heuristic.

8) Metacognition is the product of adventitious adaptations exploiting onboard information in various reproductively decisive ways.

9) The applicability of that ancestral information to second order questions regarding the nature of experience is highly unlikely.

10) The inability of intentionalism to agree on formulations, let alone resolve issues, evidences as much.

11) Intentional cognition is a form of cognition.

12) Intentional cognition also exploits systematic correlations—‘cues’—between those effects that can be accessed and the systems engaged to solve for those systems.

13) Intentional cognition is also heuristic.

14) Intentional cognition is the product of adventitious adaptations exploiting available onboard information in various reproductively decisive ways.

15) The applicability of that ancestral information to second order questions regarding the nature of meaning is highly unlikely.

16) The inability of intentionalism to agree on formulations, let alone resolve issues, evidences as much.

Intentional Philosophy as the Neuroscientific Explananda Problem

by rsbakker

The problem is basically that the machinery of the brain has no way of tracking its own astronomical dimensionality; it can at best track problem-specific correlational activity, various heuristic hacks. We lack not only the metacognitive bandwidth, but the metacognitive access required to formulate the explananda of neuroscientific investigation.

A curious consequence of the neuroscientific explananda problem is the glaring way it reveals our blindness to ourselves, our medial neglect. The mystery has always been one of understanding constraints, the question of what comes before we do. Plans? Divinity? Nature? Desires? Conditions of possibility? Fate? Mind? We've always been grasping for ourselves, I sometimes think, such was the strategic value of metacognitive capacity in linguistic social ecologies. The thing to realize is that grasping, the process of developing the capacity to report on our experience, was bootstrapped out of nothing, and so comprised the sum of all there was to the 'experience of experience' at any given stage of our evolution. Our ancestors had to be both implicitly obvious and explicitly impenetrable to themselves, past various degrees of questioning.

We’re just the next step.

What is it we think we want as our neuroscientific explananda? The various functions of cognition. What are the various functions of cognition? Nobody can seem to agree, thanks to medial neglect, our cognitive insensitivity to our cognizing.

Here’s what I think is a productive way to interpret this conundrum.

Generally what we want is a translation between the manipulative and the communicative. It is the circuit between these two general cognitive modes that forms the cornerstone of what we call scientific knowledge. A finding that cannot be communicated is not a finding at all. The thing is, this—knowledge itself—all functions in the dark. We are effectively black boxes to ourselves. In all math and science—all of it—the understanding communicated is a black box understanding, one lacking any natural understanding of that understanding.

Crazy but true.

What neuroscience is after, of course, is a natural understanding of understanding, a way to peer into the black box. Neuroscientists want manipulations they can communicate, actionable explanations of explanation. The problem is that they have only heuristic, low-dimensional cognitive access to themselves: they quite simply lack the metacognitive access required to resolve interpretive disputes, and so remain incapable of formulating the explananda of neuroscience in any consensus-commanding way. In fact, a great many remain convinced, on intuitive grounds, that the explananda sought, even if they could be canonically formulated, would necessarily remain beyond the pale of neuroscientific explanation. Heady stuff, given the historical track record of the institutions involved.

People need to understand that the fact of a neuroscientific explananda problem is the fact of our outright ignorance of ourselves. We quite simply lack the information required to decide what it is we’re explaining. What we call ‘philosophy of mind’ is a kind of metacognitive ‘crash space,’ a point where our various tools seem to function, but nothing ever comes of it.

The low-dimensionality of the information begets underdetermination, underdetermination begets philosophy, philosophy begets overdetermination. The idioms involved become ever more plastic, more difficult to sort and arbitrate. Crash space bloats. In a sense, intentional philosophy simply is the neuroscientific explananda problem, the florid consequence of our black box souls.

The thing that can purge philosophy is the thing that can tell you what it is.
