Three Pound Brain

No bells, just whistling in the dark…


Bleak Theory (By Paul J. Ennis)

by rsbakker

In the beginning there was nothing and it has been getting steadily worse ever since. You might know this, and yet repress it. Why? Because you have a mind that is capable of generating useful illusions, that’s why. How is this possible? Because you are endowed with a brain that creates a self-model which has the capacity to hide things from ‘you.’ This works better for some than for others. Some of us are brain-sick and, for whatever perverse reasons, we chip away at our delusions. In such cases recourse is possible to philosophy, which offers consolation (or so I am told), or to mysticism, which intentionally offers nothing, or to aesthetics, which is a kind of self-externalizing that lets the mind’s eye drift elsewhere. All in all, however, the armor on offer is thin. Such are the options: to mirror (philosophy), to blacken (mysticism), or to embrace contingency (aesthetics). Let’s select the last for now. By embracing contingency I mean that aesthetics consists of deciding upon and pursuing something quite specific for intuitive rather than rational reasons. This is to try to come to know contingency in your very bones.

As a mirrorer by trade I have to abandon some beliefs to allow myself to proceed this way. My belief that truth comes first and everything else later will be bracketed. I replace this with a less demanding constraint: truth comes when you know why you believe what you believe. Oftentimes I quite simply believe things because they are austere and minimal and I have a soft spot for that kind of thing. When I allow myself to think in line with these bleak tones an unusual desire is generated: to outbleak black, to be bleaker than black. This desire comes from I know not where. It seemingly has no reason. It is an aesthetic impulse. That’s why I ask that you take from what follows what you will. It brings me no peace either way.

I cannot hope to satisfy anyone with a definition of aesthetic experience, but let me wager that those moments that let me identify with the world a-subjectively – but not objectively – are commonly associated in my mind with bleakness. My brain chemistry, my environment, and similar contingent influences have rendered me this way. So be it. Bleakness manifests most often when I am faced with what is most distinctly impersonal: with cloudscapes and dimmed, wet treescapes. Or better yet, any time I witness a stark material disfiguration of the real by our species. And flowering from this is a bleak outlook correlated with the immense, consistent, and mostly hidden, suffering that is our history – our being. The intensity arising from the global reach of suffering becomes impressive when dislocated from the personal and the particular because then you realize that it belongs to us. Whatever the instigator the result is the same: I am alerted not just to the depths of unknowing that I embody, to the fact that I will never know most of life, but also to the industrial-scale sorrow consistently operative in being. All that is, is a misstep away from ruin. Consciousness is the holocaust of happiness.

Not that I expect anything more. Whatever we may say of our cultural evolution there was nothing inscribed in reality suggesting our world should be a fit for us. I am, on this basis, not surprised by our bleak surroundings. The brain, model-creator that it is, does quite a job at systematizing the outside into a representation that allows you to function; assuming, that is, that you have been gifted with a working model. Some have not. Perhaps the real horror is to try to imagine what has been left out (even the most ardent realist surely knows you do not look at the world directly as it is). Thankfully there is no real reason for us to register most of the information out there and we were not designed to know most of it anyway. This is the minimal blessing our evolution has gifted us with. The maximal damage is that from the exaptation we call consciousness, cultural evolution flowers and puts our self-model at the mercy of a bombardment of social complexity – our factical situation. It is impossible to know how our information age is toying with our brain, suffice it to say that the spike in depression, anxiety and self-loathing is surely some kind of signal. The brain though, like the body, can function even when maltreated. Whether this is truly to the good is difficult to say.

And yet we must be careful to remember that even in so-called eliminative materialism the space of reasons remains. The normative dimension is, as Brandom would put it, irreducible. It does not constitute the entire range of cognition, and is perhaps best deflated in light of empirical evidence, but that is beside the point. To some degree, perhaps minor, we are rational animals with the capacity for relatively free decision-making. My intuition is that ultimately the complexity of our structure means that we will never be free of certain troubles arising from what we are. To be embodied is to be torn between immense capacity and the constant threat of losing capacities. A stroke, striking as if from nowhere, can fundamentally alter anyone. This is not to suggest that progress does not occur. It can and it does, but it can also be, and often is, undone. It’s an unfortunate state of affairs, bleak even, but being attuned to the bleakness of reality does not result in passivity by necessity.

Today there are projects that explicitly register all this, and nonetheless intend to operate in line with the potentiality contained within the capacities of reason. What differentiates these projects, oftentimes rationalist in nature, is that they do not follow our various universalist legacies in simply conceiving of the general human as deserving of dignity simply because we all belong to the same class of suffering beings. This is not sufficient to make humans act well. The phenomenon of suffering is easily recognizable and most humans are acutely aware of it, and yet they continue to act in ways contrary to how we ‘ought’ to respond. In fact, it is clear that knowing the sheer scale of suffering may lead to hedonism, egoism or repression. Various functional delusions can be generated by our mind, and it is hardly beyond us to rationalize selfishness on the basis of the universal. We are versatile like that. For this reason, I find myself torn between two poles. I maintain a philosophical respect for various neo-rationalist projects under development. And I remain equally under no illusion they will ever be put to much use. And I do not blame people for falling short of these demands. I am so far from them I only really take them seriously on the page. I find myself drawn, for these reasons, to the pessimist attitude, often considered a suspect stance.

One might suggest that we need only a minimal condition to be ethical. An appeal to the reality of pain in sentient and sapient creatures, perhaps. In that decision you might find solace – despite everything (or in spite of everything). It is a choice, however. Our attempts to assert an ethical universalism are bound up with a counter-logic: the bleak truth of contingency on the basis of the impersonal-in-the-personal. It is a logic quietly operative in the philosophical tradition and one I believe has been suppressed. Self-suppressed it flirts too much with a line leading us to the truth of our hallucination. It’s Nietzsche telling you about perspectivism hinging on the impersonal will-to-power and then you maturing, and forgetting. Not knocking his arguments out of the water, mind. Simply preferring not to accept it. Nobody wants to circle back round to the merry lunatic truths that make a mockery of your life. You might find it hard to get out of bed…whereas now I am sure you leap up every morning, smile on your face…The inhuman, impersonal attachment to each human has many names, but let us look at some that are found right at the heart of the post-Kantian tradition: transcendental subject, Dasein, Notion. Don’t believe me? I don’t mind, it makes no difference to me.

Let’s start with the sheer impersonality involved in Heidegger’s sustained fascination with discussing the human without using the word. Dasein is not supposed to be anything or anyone in particular. Now once you think about it Dasein really does come across as extraordinarily peculiar. It spends a lot of its time being infested by language since this is, Heidegger insists, the place where its connection to being can be expressed. Yet it is also an easily overrun fortress that has been successfully invaded by techno-scientific jargon. When you hook this thesis up with Heidegger’s epochal shifts then the impersonal forces operative in his schema start to look downright ominous. However, we can’t blame Heidegger for what we can blame on Kant. His transcendental field of sense also belongs to one and all. And so, like Dasein, no one in particular. This aspect of the transcendental field still remains contentious. The transcendental is, at once, housed in a human body but also, in its sense-making functions, to be considered somehow separate from it. It is not quite human, but not exactly inhuman either.

There is, then, some strange aspect, I can think of no other word for it, inhabiting our own flowing world of a coherent ego, or ‘I,’ that allows for the emergence of a pooled intersubjectivity. Kant’s account, of course, had two main aims: to constrain groundless metaphysical speculation and, in turn, to ground the sciences. Yet his readers did not always follow his path. Kant’s decision to make a distinction between the phenomena and the noumena is perhaps the most consequential one in our tradition and is surely one of the greatest examples of opening up what you intended to close down. The nature of the noumenal realm has proven irresistible to philosophers and it has recursive consequences for how we see ourselves. If the noumenal realm names a reality that is phenomenally clouded then it surely precedes, ontologically, the ego-as-center; even if it is superseded by the ego’s modelling function for us. Seen within the wider context of the noumenal realm it is legitimate to ask whether the ‘I’ is merely a densely concentrated, discrete packet amidst a wider flow; a locus amidst the chaos. The ontological generation of egos is then shorn back until all you have is Will (Schopenhauer), Will to Power (Nietzsche), or, in a less generative sense ‘what gives,’ es gibt (Heidegger). This way of thinking belongs, when one takes the long view, to the slow-motion deconstruction of the Cartesian ego in post-Kantian philosophy, albeit with Husserl cutting a lonely revivalist figure here. Today the ego is trounced everywhere, but there is perhaps no better example than the ‘no-self-at-all’ argument of Metzinger, though even the one-object-amongst-many thesis of object-oriented ontology traces a similar line.

The destruction of the Cartesian ego may have its lineage in Kant, but the notion of the impersonal as force, process, or will, owes much to Hegel. In his metaphysics Hegel presents us with a cosmic loop explicable through retroactive justification. At the beginning, the un-articulated Notion, naming what is at the heart-of-the-real, sets off without knowledge of itself, but with the emergence of thinking subjects the Notion is finally able to think itself. In this transition the gap between the un-articulated and articulated Notion is closed, and the entire thing sets off again in directions as yet unknown. Absolute knowing is, after all, not totalized knowing, but a constant, vigilant knowing navigating its way through contingency and recognizing the necessity below it all. But that’s just the thing: despite being important conduits to this process, and having a quite special and specific function, it’s the impersonal process that really counts. In the end Kant’s attempt to close down discussion about the nature of the noumenal realm simply made it one of the most appealing themes for a philosopher to pursue. Censorship helps sales.

Speaking of sales, all kinds of new realism are being hawked on the various para-academic street-corners. All of them benefit from a tint of recognizability rooted, I would suggest, in the fact that ontological realism has always been hidden in plain sight, for any continentalist willing to look. What is different today is how the question of the impersonal attachments affecting the human comes not from inside philosophy, but from a number of external pressures. In what can only be described as a tragic situation for metaphysicians, truth now seeps into the discipline from the outside. We see thinking these days where philosophers promised there was none. The brilliance of continental realism lies in reminding us how this is an immense opportunity for philosophers to wake up from various self-induced slumbers, even if that means stepping outside the protected circle from time to time. It involves bringing this bubbling, left-over question of ontological realism right to the fore. This does not mean ontological realism will come to be accepted and then casually integrated into the tradition. If anything the backlash may eviscerate it, but the attempt will have been made. Or was, and quietly passed.

And the attempt should be made because the impersonality infecting ontological realist excesses such as the transcendental subject (in-itself), the Notion, or Dasein is attuned to what we can now see as the (delayed) flowering of the Copernican revolution. The de-centering is now embedded enough that whatever defense of the human we posit must not be dishonest. We cannot hallucinate our way out of our ‘cold world’. If we know that our self-model is itself a hallucination, but a very real one, then what do we do then? Is it enough to situate the real in our ontological flesh and blood being-there that is not captured by thinking? Or is it best to remain with thinking as a contingent error that despite its aberrancy nonetheless spews out the truth? These avenues are grounded in consciousness and in our bodies and although both work wonders they can just as easily generate terrors. Truth qualified by these terrors is where one might go. No delusion can outflank these constraints forever. Bled of any delusional disavowal, one tries to think without hope. Hope is undignified anyway. Dignity involves resisting all provocation and remaining sane when you know it’s bleakness all the way down.

Some need hope, no? As I write this I feel the beautiful soul rising from his armchair, but I do not want to hear it. Bleak theory is addressed to your situation: a first worlder inhabiting an accelerated malaise. The ethics to address poverty, inequality, and hardship will be different. Our own heads are disordered and we do not quite know how to respond to the field outside it. You will feel guilty for your myopia, and you deserve it, but you cannot evade it by endlessly pointing to the plank in the other’s eye. You can pray through your tears, and in doing so ironically demonstrate the disturbance left by the death of God, but what does this shore up? It builds upon cathedral ruins: those sites where being is doubled-up and bent-over-backwards trying to look inconspicuous as just another option. Do you want to write religion back into being? Why not, as Ayache suggests, just ruin yourself? I hope it is clear I don’t have any answers: all clarity is a lie these days. I can only offer bleak theory as a way of seeing and perhaps a way of operating. It ‘works’ as follows: begin with confusion and shear away at what you can. Whatever is left is likely the closest approximation to what we name truth. It will be strictly negative. Elimination of errors is the best you can hope for.

I don’t know how to end this, so I am just going to end it.

 


Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker


 

If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses – people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed. The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strike me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understanding ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that makes them empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organize those possessing authoritarian dispositions, render them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.

A Secret History of Enlightened Animals (by Ben Cain)

by rsbakker


 

As proud and self-absorbed as most of us are, you’d expect we’d be obsessed with reading history to discover more and more of our past and how we got where we are. But modern historical narratives concentrate on the mere facts of who did what to whom and exactly when and where such dramas played out. What actually happened in our recent and distant past doesn’t seem grandiose enough for us, and so we prefer myths that situate our endeavours in a cosmic or supernatural background. Those myths can be religious, of course, but also secular as in films, novels, and the other arts. We’re so fixated on ourselves and on our cultural assumptions that we must imagine we’re engaged in more than just humdrum family life, business, political chicanery, and wars. We’re heroes in a universal tale of good and evil, gods and monsters. We thereby resort to the imagination, overlooking the existential importance of our actual evolutionary transformation. When animals became people, the universe turned in its grave.

 

Awakening from Animal Servitude unto Alienation

The so-called wise way of life, that of our species, originates from the birth of an anomalous form of consciousness. That origin has been widely mythologized to protect us from the vertigo of feeling how fine the line is between us and animals. Thus, personal consciousness has been interpreted as an immaterial spirit or as a spark left behind by the intrusion of a higher-dimensional realm into fallen nature, as in Gnosticism, or as an illusion to maintain the play of the slumbering God Brahman, as in some versions of Hinduism, and so on and so forth. But the consciousness that separates people from animals is merely the particular higher-order thought—that is, a thought about thoughts—that you (your lower-order thoughts) are free in the sense of being autonomous, that you’re largely liberated from naturally-selected, animal processes such as hunting for food or seeking mates in the preprogrammed ways. That thought eventually comes to lie in the background of the flurry of mental activity sustained by our oversized brains, along with the frisson of fear that accompanies the revelation that as long as we can think we’re free from nature, we’re actually so. This is because such a higher-order thought, removed as it is from the older, animal parts of our brain, is just what allows us to independently direct our body’s activities. The freedom opened up by human sentience is typically experienced as a falling away from a more secure position. In fact, our collective origin is very likely encapsulated in each child’s development of personhood, fraught as that is with anxiety and sadness as well as with wonder. Children cry and sulk when they don’t get their way, which is when they learn that they stand apart from the world as egos who must strive to live up to otherworldly social standards.

Animals become people by using thought to lever themselves into a black hole-like viewpoint subsisting outside of nature as such. The results are alienation and the existential crisis which are at the root of all our actions. Organic processes are already anomalous and thus virtually miraculous. Personhood represents not progress, since the values that would define such an advance are themselves alien and unnatural by being anthropocentric, but a maximal state of separation from the world, the exclusion of some primates from the environments that would test their genetic mettle. Personal consciousness is the carving of godlike beings from the raw materials of animal slaves, by the realization that thoughts—memories, emotions, imaginings, rational modeling for the sake of problem-solving—comprise an inner world whose contents need not be dictated by stimuli. The cost of personhood, that is, of virtual godhood in the otherwise mostly inanimate universe, is the suffering from alienation that marks our so-called maturity, our fall from childhood innocence whereupon we land in the adult’s clownish struggles with hubris. Our independence empowers us to change ourselves and the world around us, and so we assume we’re the stars of the cosmic show or at least of the narrative of our private life. But because the business of our acting like grownups is witnessed by hardly any audience at all—except in the special case of celebrities who are ironically infantilized by their fame, because the wildly inhuman cosmos is indifferent to our successes and failures—we typically develop into existential mediocrities, not heroes. We overcompensate for the anguish we feel because our thoughts sever us from everything outside our skull, becoming proud of our adult independence; we’re like children begging their parents to admire their finger paintings. 
The natural world responds with randomness and indiscriminateness, with luck and indifference, humiliating us with a sense of the ultimate futility of our efforts. Our oldest solution is to retreat to the anthropocentric social world in which we can honour our presumed greatness, justly rewarding or punishing each other for our deeds as we feel we deserve.

 

Hypersocialization and the Existential Crisis of Consciousness

The alienation of higher consciousness is followed, then, by intensive socialization. Animals socialize for natural purposes, whereas we do so in the wake of the miracle of personhood. Our relatively autonomous selves are miraculous not just because they’re so rare (count up the rocks and the minds in the universe, for example, and the former will so outnumber the latter that minds will seem to have spontaneously popped into existence without any general cause), but because whereas animals adapt to nature, conforming to genetic and environmental regularities, people negate those regularities, abandoning their genetic upbringing and reshaping the global landscape. The earliest people channeled their resentment against the world they discovered they’re not wholly at home in, by inventing tools to help them best nature and its animal slaves, but also by forming tribes defined by more and more elaborate social conventions. The more arbitrary the implicit and explicit laws that regulate a society, the more frenzied its members’ dread of being embedded in a greater, uncaring wilderness. Again, human societies are animalistic in so far as they rely on the structure of dominance hierarchies, but whereas alpha males in animal groups overpower their inferiors for the natural reason of maintaining group cohesion to protect the alphas whose superior genes are the species’ best hope for future generations, human leaders adopt the pathologies of the God complex. Indeed, all people would act like gods if only they could sustain the farce. Alas, just as every winning lottery ticket necessitates multitudes of losers, every full-blown personal deity depends on an army of worshippers. Personhood makes us all metaphysically godlike with respect to our autonomy and our liberation from some natural, impersonal systems, but only a lucky minority can live like mythical gods on Earth.

We socialize, then, to flatter our potential for godhood, by elevating some of our members to a social position in which they can tantalize us with their extravagant lifestyles and superhuman responsibilities. We form sheltered communities in which we can hide from nature’s alien glare. Our elders, tyrants, kings, and emperors lord it over us and we thank them for it, since their appallingly decadent lives nevertheless prove that personhood can be completed, that an absolute fall from the grace of animal innocence isn’t asymptotic, that our evolution has a finite end in transhumanity. Our psychopathic rulers are living proofs that nature isn’t omnipresent, that escape is possible in the form of insanity sustained by mass hallucination. We daydream the differences between right and wrong, honour and dishonour, meaning and meaninglessness. We fill the air with subtle noises and imagine that those symbols are meant to lay bare the final truth. We thus mitigate the removal of our mind from the world, with a myth of reconciliation between thoughts and facts. But language was likely conceived of in the first place as a magical instrument, that is, as an extension of mentality into nature which was everywhere anthropomorphized. Human tribes were assumed to be mere inner circles within a vast society of gods, monsters, and other living forces. We socialized, then, not just to escape to friendly domains to preserve our dignity as unnatural wonders, but to pretend that we hadn’t emerged just by a satanic/promethean act of cognitive defiance, with the ego-making thought that severs us from natural reality. We childishly presumed that the whole universe is a stage populated by puppets and actors; thus, no existential retreat might have been deemed necessary, because nature’s alienness was blotted out in our mythopoeic imagination. 
As in Genesis, God created by speaking the world into being, just as shamans and magicians were believed to cast magical spells that bent reality to their will.

But every theistic posit was part of an unconscious strategy to avoid facing the obvious fact that since all gods are people, we’re evidently the only gods. Nevertheless, having conceived of theistic fictions, we drew up models to standardize the behaviour of actual gods. Thus, the Pharaoh had to be as remote and majestic as Osiris, while the Roman Emperor had to rule like Jupiter, the raja had to adjudicate like Krishna, the Pope had to appear Christ-like, and the U.S. President has to seem to govern like your favourite Hollywood hero. The double standard that exempts the upper classes from the laws that oppress the lowly masses is supposed to prevent an outbreak of consciousness-induced angst. Social exceptions for the upper class work with mass personifications and enchantments of nature, and those propagandistic myths are then made plausible by the fact that superhuman power elites actually exist. Ironically, such class divisions and their concomitant theologies exacerbate the existential predicament by placing those exquisite symbols of our transcendence (the power elites) before public consciousness, reminding us that just as the gods are prior to and thus independent of nature, so too we who are the only potential or actual gods don’t belong within that latter world.

 

Scientific Objectivity and Artificialization

Hypersocialization isn’t our only existential stratagem; there’s also artificialization as a defense against full consciousness of our unnatural self-control. Whereas the socializer tries to act like a god by climbing social ladders, bullying his underlings, spending unseemly wealth in generational projects of self-aggrandizement, and creating and destroying societal frameworks, the artificializer wants to replace all of nature with artifacts. That way, what began as the imaginary negation of nature’s inhuman indifference to life, in the mythopoeic childhood of our species, can be fulfilled when that indifference is literally undone by our re-engineering of natural processes.

To do that, the artificializer needs to think, not just to act, like a god. That requires forming cognitive programs that don’t depend on the innate, naturally-selected ones. Cognitive scientists maintain that the brain’s ability to process sensations, for example, evolved not to present us with the absolute truth but to ensure our fitness to our environment, by helping us survive long enough to sexually reproduce. Animal neural pathways differ from personal ones in that the former serve the species, not the individual, and so the animal is fundamentally a puppet acting out its life cycle as directed by its genetic programming and by certain environmental constraints. Animals can learn to adapt their behaviour to their environment and so their behaviour isn’t always robotic, but unless they can apply their learning towards unnatural ends, such as by developing birth control techniques that systematically thwart the pseudo-goals of natural selection, they’ll think as animals, not as gods. Animals as such are entirely natural creatures, meaning that in so far as their behaviour is mediated by an independent control center, their thinking nevertheless is dedicated to furthering the end of natural selection, which is just that of transmitting genes to future generations. By contrast, gods don’t merely survive or even thrive. Insects and bacteria thrive, as did the dinosaurs for millions of years, but none were godlike because none were existentially transformed by conscious enlightenment, by a cognitive black hole into which an animal can fall, creating the world of inner space.

People, too, have animal programming, such as the autonomic programs for processing sensory information. Social behaviour is likewise often purely animalistic, as in the cases of sex and the power struggle for some advantage in a dominance hierarchy. Rational thinking is less so and thus less natural, meaning more anti-natural in that it serves rational ideals rather than just lower-order aims. To be sure, Machiavellian reasoning is animalistic, but reason has also taken on an unnatural function. Whereas writing was first used for the utilitarian purpose of record keeping, reason in the Western tradition was initially not so practical. The Presocratics argued about metaphysical substances and other philosophical matters, indicating that they’d been largely liberated from animal concerns of day-to-day survival and were exploring cognitive territory that’s useful only from the internal, personal perspective. Who am I really? What is the world, ultimately speaking? Is there a worthy difference between right and wrong? Such philosophical questions are impossible without rational ideals of skepticism, intellectual integrity, and love of knowledge even if that knowledge should be subversive—as it proved to be in Socrates’ classic case.

While the biblical Abraham was willing to sacrifice his son for the sake of hypersocializing with an imaginary deity, Socrates died for the antisocial quest of pursuing objective knowledge that inevitably threatens the natural order along with the animal social structures that entrench that order, such as the Athenian government of his day. Socrates cared not about face-saving opinions, but about epistemic principles that arm us with rationally-justified beliefs about how the world might be in reality. Much later, in the Scientific Revolution, rationalists (which is to say philosophers) in Europe would revive the ancient pagan ideal of reasoning regardless of the impact on faith-based dogmas. Scientists like Isaac Newton developed cognitive methods that were counterintuitive in that they went against the grain of more natural human thinking that’s prone to fallacies and survival-based biases. In addition, Newton served rational institutions, namely the Royal Society and Cambridge, which rivaled the genes for control over the enlightened individual’s loyalty. Moreover, the findings of those cognitive methods were symbolized using artificial languages such as mathematics and formal logic, which enabled liberated minds to communicate their discoveries without the genetic tragicomedies of territorialism, fight-or-flight responses, hero worship, demagoguery, and the like that are liable to be triggered by rhetoric and metaphors expressed in natural languages.

But what is objective knowledge? Are scientists and other so-called enlightened rationalists as neutral as the indifferent world they study? No, rationalists in this broad sense are partly liberated from animal life but they’re not lost in a limbo; rather, they participate in another, unnatural process which I’m calling artificialization. Objectivity isn’t a purely mechanical, impersonal capacity; indeed, natural processes themselves have aesthetically interpretable ends and effective means, so there are no such capacities. In any case, the search for objective knowledge builds on human animalism and on our so-called enlightenment, on our having transcended our animal past and instincts. We were once wholly slaves to nature and we often behave as if we were still playthings of natural forces. But consciousness and hypersocialization provided escapes, albeit into fantasy worlds that nevertheless empowered us. We saw ourselves as being special because we became aware of the increasing independence of our mental models from the modeled territory, owing to the former’s ultra-complexity. The inner world of the mind emerged and detached from the natural order—not just metaphysically or abstractly, but psychologically and historically. That liberation was traumatic and so we fled to the fictitious world of our imagination, to a world we could control, and we pretended the outer world was likewise held captive to our mental projections. The rational enterprise is fundamentally another form of escape, a means of living with the burden of hyper-awareness. Instead of settling for cheap, flimsy mental constructions such as our gods, boogeymen, and the panoply of delusions to which we’re prone, and instead of hoarding divinity in the upper social classes that exercise their superpowers in petty or sadistic projects of self-aggrandizement, we saw that we could usurp God’s ability to create real worlds, as it were.
We could democratize divinity, replacing impersonal nature with artificial constructs that would actually exist outside our minds as opposed to being mere projections of imagination and existential longing.

The pragmatic aspect of objectivity is apparent from the familiar historical connections between science, European imperialism, and modern industries. But it’s apparent also from the analytical structure of scientific explanations itself. The existential point of scientific objectivity was paradoxically to achieve a total divorce from our animal side by de-personalizing ourselves, by restraining our desire for instant gratification, scolding our inner child and its playpen, the imagination, and identifying with rational methods. Whereas an animal relies on its hardwired programs or on learned rules-of-thumb for interpreting its environment, an enlightened person codifies and reifies such rules, suspending disbelief and siding with idealized or instrumental formulations of these rules so that the person can occupy a higher cognitive plane. Once removed from natural processes by this identification with rational procedures and institutions, with teleological algorithms, artificial symbols and the like, the animal has become a person with a godlike view from outside of nature—albeit not an overview of what the universe really is, but an engineer’s perspective of how the universe works mechanically from the ground up.

To see what I mean, consider the Hindu parable of the blind men who try to ascertain the nature of an elephant by touching its different body parts. One of the men feels a tusk and infers that the elephant is like a pipe. Another touches the leg and thinks the whole animal is like a tree trunk. Another touches the belly and believes the animal is like a wall. Another touches the tail and says the elephant is like a rope. Finally, another one touches the ear and thinks the elephant is like a hand fan. One of the traditional lessons of this parable is that we can fallaciously overgeneralize and mistake the part for the whole, but this isn’t my point about science. Still, there is a difference between what the universe is in reality, which is what it is in its entirety in so far as all of its parts form a cohesive order, and how inquisitive primates choose to understand the universe with their divisive concepts and models. Scientists can’t possibly understand everything in nature all at once; the word “universe” is a mere placeholder with no content adequate to the task of representing everything that’s out there interacting to produce what we think of as distinct events. We have no name for the universe which gives us power over it by identifying its essence, as it were. So scientists analyze the whole, observing how parts of the world work in isolation, ideally in a laboratory. They then generalize their findings, positing a natural regularity or nomic relation between those fragments, as pictured by their model or theory. It’s as if scientists were the blind men who lack the brainpower to cognize the whole of natural reality, and so they study each part, perhaps hoping that if they cooperate they can combine their partial understandings and arrive at some inkling of what the natural universe in general is. Unfortunately, the deeper we look into nature, the more complexity we find in its parts and so the more futile becomes any such plan for total comprehension. 
Scientists can barely keep up with advances in their subfields; the notion that anyone could master all the sciences as they currently stand is ludicrous, and there’s still much in the world that isn’t scientifically understood by anyone.

So whatever the scientist’s aspiration might be, the effect of science isn’t the achievement of complete, final understanding of everything in the universe or of the whole of nature. Instead, science allows us to rebuild the whole based on partial, analytical knowledge of how the world works. Suppose scientists discover an extraterrestrial artifact and they have no clue as to the artifact’s function, which is to say they have no understanding of what the object is in reality. Still, they can reverse-engineer the artifact, taking it apart, identifying the materials used to assemble it and certain patterns in how the parts interact with each other. With that limited knowledge of the artifact’s mechanical aspect, scientists might be able to build a replica or else they could apply that knowledge to create something more useful to them, that is, something that works in similar ways to the original but which works towards an end supplied by the scientists’ interests, not the alien’s. There would be no point in replicating the alien technology, since the artifact would be useless without knowledge of what it’s for or without even a shared interest in pursuing that alien goal. Replace the alien artifact with the natural universe and you have some measure of the Baconian position of human science. Of course, nature has no designer; nevertheless, we experience natural processes as having ends and so we’re faced with the choice of whether to apply our piecemeal knowledge of natural mechanisms to the task of reinforcing those ends or to that of adjusting or even reversing them. The choice is to act as stewards of God’s garden, as it were, or as promethean rebels who seek to be divine creators. There are still enclaves of native tribes living as retro-human animals and preserving nature rather than demolishing the wilderness and establishing in its place a technological wonderland built with knowledge of natural mechanisms. 
But the billions of participants in the science-driven, global monoculture have evidently chosen the promethean, quasi-satanic path.

 

Existentialism and our Hidden History

History is a narrative that often informs us indirectly about the present state of human affairs, by representing part of our past. Ancient historical narratives were more mythical than fact-based. The New Testament, for example, uses historical details to form an exoteric shell around the Gnostic, transhumanist suspicion that human nature is “fallen” to the extent that we surrender our capacity to transcend the animal life cycle; we must “die” to our natural bodies and be reborn in a glorious, unnatural or “spiritual” form. At any rate, like poetry, the mythical language of such ancient historical narratives is open to endless interpretations, which is to say that such stories are obscure. Josephus’s ancient histories of the Jewish people, written for a Roman audience, aren’t so mythologized but they’re no less propagandistic. By contrast, modern historians strive to avoid the pitfalls of writing highly subjective or biased narratives, and so they seek to analyze and interpret just the facts dug up by archeologists and textual critics. Modern histories are thus influenced by the exoteric presumption about science, which is that science isn’t primarily in the business of artificializing everything that’s wild in the sense of being out of our control, but is just a mode of inquiry for arriving at the objective truth (come what may).

Left out of this development of the telling of history is the existential significance of our evolutionary transition from being animals, which were at one with nature, to being people who are implicitly if not consciously at war with everything nonhuman. What I’ve sketched above is part of our secret history; it’s the story of what it means to be human, which underlies all our endeavours. The significance of our standing between animalism and godhood is hidden and largely unknown or forgotten, because at the root of this purpose that drives us is the trauma of godlike consciousness which we’d rather not relive. We each have our fill of that trauma in our passage from childhood innocence, which approximates the animal state of unknowing, to adult independence. Teen angst, which cultures combat with initiation rituals to distract the teenager with sanctioned, typically delusional pastimes, is the tip of the iceberg of pain that awaits anyone who recognizes the plight entailed by our very form of existence.

In Escape from Freedom, Erich Fromm argued that citizens of modern democracies are in danger of preferring the comfort of a totalitarian system, to escape the ennui and dehumanization generated by modern societies. In particular, capitalistic exploitation of the worker class and the need to assimilate to an environment run more and more by automated, inhuman machines are supposed to drive civilized persons to savage, authoritarian regimes. At least, this was Fromm’s explanation of the Nazis’ rise to power. A similar analysis could apply to the present degeneration of the Republican Party in the U.S. and to the militant jihadist movement in the Middle East. But Fromm’s analysis is limited. To be sure, capitalism and technology have their drawbacks and these may even contribute to totalitarianism’s appeal, as Fromm shows. But this overlooks what liberal, science-driven societies and savage, totalitarian societies have in common. Both are flights from existential reckoning, as I’ve explained: the one revolves around artificialization (Enlightenment, rationalist values of individual autonomy, which deteriorate until we’re left with the fraud of consumerism), the other around hypersocialization (cult of personality, restoring the sadomasochistic interplay between mythical gods and their worshippers). Fromm ignores the existential effect of the rational enlightenment that brought on modern science, democracy, and capitalism in the first place, the effect being our deification. By deifying ourselves, we prevent our treasured religions from being fiascos and we spare ourselves the horror of living in an inhuman wilderness from which we’re alienated by our hyper-awareness.

We created the modern world to accelerate the rate at which nature is removed from our presence. Contrary to optimists like Steven Pinker, modernity hasn’t fulfilled its promise of democratizing divinity, as I’d put it. Robber barons and more parasitic oligarchs do indeed resort to the older stratagem of hypersocialization, acting like decadent gods in relation to human slaves instead of focusing their divine creativity on our common enemy, the monstrous wilderness. The internet that trivializes everything it touches and the omnipresence of our high-tech gadgets do infantilize us, turning us into cattle-like consumers instead of unleashing our creativity and training us to be the indomitable warriors that alone could endure the promethean mission. This is because we, being the only gods that exist, are woefully unprepared for our responsibility, having retained our animal heritage in the form of our bodies which infect most of our decisions with natural fears and prejudices. At any rate, the deeper story of the animal that becomes a godlike person to obliterate the source of alienation that’s the curse of any flawed, lonely godling helps explain why we now settle more often for the minor anxieties of living in modern civilization, to avoid the major angst of recognizing the existential importance of what we are.

Science, Nihilism, and the Artistry of Nature (by Ben Cain)

by rsbakker


Technologically-advanced societies may well destroy themselves, but there are two other reasons to worry that science rather than God will usher in the apocalypse, directly destroying us by destroying our will to live. The threat in question is nihilism, the loss of faith in our values and thus the wholesale humiliation of all of us, due to science’s tendency to falsify every belief that’s traditionally comforted the masses. The two reasons to suspect that science entails nihilism are that scientists find the world to be natural (fundamentally material, mechanical, and impersonal), whereas traditional values tend to have supernatural implications, and that scientific methods famously bypass intuitions and feelings to arrive at the objective truth.

These two features of science, the content of scientific theories and the scientific methods of inquiry, might seem redundant, since the point about methods is that science is methodologically naturalistic. Thus, the point about the theoretical content might seem to come as no surprise. By definition, a theory that posits something supernatural wouldn’t be scientific. While scientists may be open to learning that the world isn’t a natural place, making that discovery would amount to ending or at least transforming the scientific mode of inquiry. Nevertheless, naturalism, the worldview that explains everything in materialistic and mechanistic terms, isn’t just an artifact of scientific methods. What were once thought to be ghosts and gods and spirits really did turn out to be natural phenomena.

Moreover, scientific objectivity seems a separate cause of nihilism in that, by showing us how to be objective, paradigmatic scientists like Galileo, Newton, and Darwin showed us also how to at least temporarily give up on our commonsense values. After all, in the moment when we’re following scientific procedures, we’re ignoring our preferences and foiling our biases. Of course, scientists still have feelings and personal agendas while they’re doing science; for example, they may be highly motivated to prove their pet theory. But they also know that by participating in the scientific process they’re holding their feelings to the ultimate test. Scientific methods objectify not just the phenomenon but the scientist; as a functionary in the institution, she must follow strict procedures, recording the data accurately, thinking logically, and publishing the results, making her scientific work as impersonal as the rest of the natural world. In so far as nonscientists understand this source of science’s monumental success, we might come to question the worth of our subjectivity, of our private intuitions, wishes, and dreams which scientific methods brush aside as so many distortions.

Despite the imperative to take scientists as our model thinkers in the Age of Reason, we might choose to ignore these two threats to our naïve self-image. Nevertheless, the fear is that distraction, repression, and delusion might work only for so long before the truth outs. You might think, on the contrary, that science doesn’t entail nihilism, since science is a social enterprise and thus it has a normative basis. Scientists are pragmatic and so they evaluate their explanations in terms of rational values of simplicity, fruitfulness, elegance, utility, and so on. Still, the science-centered nihilist can reply, those values might turn out to be mechanisms, as scientists themselves would discover, in which case science would humiliate not just the superstitious masses but the pragmatic theorists and experimenters as well. That is, science would refute not only the supernaturalist’s presumptions but the elite instrumentalist’s view of scientific methods. Science would become just another mechanism in nature and scientific theories would have no special relationship with the facts since from this ultra-mechanistic “perspective,” not even scientific statements would consist of symbols that bear meaning. The scientific process would be seen as consisting entirely of meaningless, pointless, and amoral causal relations—just like any other natural system.

I think, then, this sort of nihilist can resist that pragmatic objection to the suspicion that science entails nihilism and thus poses a grave, still largely unappreciated threat to society. There’s another objection, though, which is harder to discount. The very cognitive approach which is indispensable to scientific discovery, the objectification of phenomena, which is to say the analysis of any pattern in impersonal terms of causal relations, is itself a source of certain values. When we objectify something we’re thereby well-positioned to treat that thing as having a special value, namely an aesthetic one. Objectification overlaps with the aesthetic attitude, which is the attitude we take up when we decide to evaluate something as a work of art, and thus objects, as such, are implicitly artworks.

 

Scientific Objectification and the Aesthetic Attitude

 

There’s a lot to unpack there, so I’ll begin by explaining what I mean by the “aesthetic attitude.” This attitude is explicated differently by Kant, Schopenhauer, and others, but the main idea is that something becomes an artwork when we adopt a certain attitude towards it. The attitude is a paradoxical one, because it involves a withholding of personal interest in the object and yet also a desire to experience the object for its own sake, based on the assumption that such an experience would be rewarding. When an observer is disinterested in experiencing something, but chooses to experience it because she’s replaced her instrumental or self-interested perspective with an object-oriented one so that she wishes to be absorbed by what the object has to offer, as it were, she’s treating the object as a work of art. And arguably, that’s all it means for something to be art.

For example, if I see a painting on a wall and I study it up close with a view to stealing it, because all the while I’m thinking of how economically valuable the painting is, I’m personally interested in the painting and thus I’m not treating it as art; instead, for me the painting is a commodity. Suppose instead that I have no ulterior motive as I look at the painting, but I’m bored by it, so that I’m not passively letting the painting pour its content into me, as it were. In that case I have no respect for such an experience, I’m not giving the painting a fair chance to captivate my attention, and so I’m likewise not treating the painting as art. I’m giving it only a cursory glance, because I lack the selfless interest in letting the painting hold all of my attention and so I don’t anticipate the peculiar pleasure from perceiving the painting that we associate with an aesthetic experience. Whether it’s a painting, a song, a poem, a novel, or a film, the object becomes an artwork when it’s regarded as such, which requires that the observer adopt this special attitude towards it.

Now, scientific objectivity plainly isn’t identical to the aesthetic attitude. After all, regardless of whether scientists think of nature as beautiful when they’re studying the evidence or performing experiments or formulating mechanistic explanations, they do have at least one ulterior motive. Some scientists may have an economic motive, others may be after prestige, but all scientists are interested in understanding how systems work. Their motive, then, is a cognitive one—which is why they follow scientific procedures, because they believe that scientific objectification (mechanistic analysis, careful collection of the data, testing of hypotheses with repeatable experiments, and so on) is the best means of achieving that goal.

However, this cognitive interest posits a virtual aesthetic stance as the means to achieve knowledge. Again, scientists trust that their personal interests are irrelevant to scientific truth and that regardless of how they prefer the world to be, the facts will emerge as long as the scientific methods of inquiry are applied with sufficient rigor. To achieve their cognitive goal, scientists must downplay their biases and personal feelings, and indeed they expect that the phenomenon will reveal its objective, real properties when it’s scientifically scrutinized. The point of science is for us to get out of the way, as much as possible, to let the world speak with its own voice, as opposed to projecting our fantasies and delusions onto the world. Granted, as Kant explained, we never hear that voice exactly—what Pythagoras called the music of the spheres—because in the act of listening to it or of understanding it, we apply our species-specific cognitive faculties and programs. Still, the point is that the institution of science is structured in such a way that the facts emerge because the scientific form of explanation circumvents the scientists’ personalities. This is the essence of scientific objectivity: in so far as they think logically and apply the other scientific principles, scientists depersonalize themselves, meaning that they remove their character from their interaction with some phenomenon and make themselves functionaries in a larger system. This system is just the one in which the natural phenomenon reveals its causal interrelations thanks to the elimination of our subjectivity which would otherwise personalize the phenomenon, adding imaginary and typically supernatural interpretations which blind us to the truth.

And when scientists depersonalize themselves, they open themselves up to the phenomenon: they study it carefully, taking copious notes, using powerful technologies to peer deeply into it, and isolating the variables by designing sterile environments to keep out background noise. This is very like taking up the aesthetic attitude, since the art appreciator too becomes captivated by the work itself, getting lost in its objective details as she sets aside any personal priority she may have. Both the art appreciator and the scientist are personally disinterested when they inspect some object, although the scientist is often just functionally or institutionally so, and both are interested in experiencing the thing for its own sake, although the art appreciator does so for the aesthetic reward whereas the scientist expects a cognitive one. Both objectify what they perceive in that they intend to discern only the subtlest patterns in what’s actually there in front of them, whether on the stage, in the picture frame, or on the novel’s pages, in the case of fine art, or in the laboratory or the wild in the case of science. Thus, art appreciators speak of the patterns of balance and proportion, while scientists focus on causal relations. And the former are rewarded with the normative experience of beauty or are punished with a perception of ugliness, as the case may be, while the latter speak of cognitive progress, of science as the premier way of discovering the natural facts, and indeed of the universality of their successes.

Here, then, is an explanation of what David Hume called the curious generalization that occurs in inductive reasoning, when we infer that because some regularity holds in some cases, therefore it likely holds in all cases. We take our inductive findings to have universal scope because when we reason in that way, we’re objectifying rather than personalizing the phenomenon, and when we objectify something we’re virtually taking up the aesthetic attitude towards it. Finally, when we take up such an attitude, we anticipate a reward, which is to say that we assume that objectification is worthwhile—not just for petty instrumental reasons, but for normative ones, which is to say that objectification functions as a standard for everyone. When you encounter a wonderful work of art, you think everyone ought to have the same experience and that someone who isn’t as moved by that artwork is failing in some way. Likewise, when you discover an objective fact of how some natural system operates, you think the fact is real and not just apparent, that it’s there universally for anyone on the planet to confirm.

Of course, inductive generalization is based also on metaphysical materialism, on the assumptions that the world is made of atoms and that a chunk of matter is just the sort of thing to hold its form and to behave in regular ways regardless of who’s observing it, since material things are impersonal and thus they lack any freedom to surprise. But scientists persist in speaking of their cognitive enterprise as progressive, not just because they assume that science is socially useful, but because scientific findings transcend our instrumental motives since they allow a natural system to speak mainly for itself. Moreover, scientists persist in calling those generalizations laws, despite the unfortunate personal (theistic) connotations, given the comparison with social laws. These facts indicate that inductive reasoning isn’t wholly rational, after all, and that the generalizations are implicitly normative (which isn’t to say moral), because the process of scientific discovery is structurally similar to the experience of art.

 

Natural Art and Science’s True Horror

 

Some obvious questions remain. Are natural phenomena exactly the same as fine artworks? No, since the latter are produced by minds whereas the former are generated by natural forces and elements, and by the processes of evolution and complexification. Does this mean that calling natural systems works of art is merely analogical? No, because the similarity in question isn’t accidental; rather, it’s due to the above theory of art, which says that art is nothing more than what we find when we adopt the aesthetic attitude towards it. According to this account, art is potentially everywhere and how the art is produced is irrelevant.

Does this mean, though, that aesthetic values are entirely subjective, that whether something is art is all in our heads since it depends on that perspective? The answer to this question is more complicated. Yes, the values of beauty and ugliness, for example, are subjective in that minds are required to discover and appreciate them. But notice that scientific truth is likewise just as subjective: minds are required to discover and to understand such truth. What’s objective in the case of scientific discoveries is the reality that corresponds to the best scientific conclusions. That reality is what it is regardless of whether we explain it or even encounter it. Likewise, what’s objective in the case of aesthetics is something’s potential to make the aesthetic appreciation of it worthwhile. That potential isn’t added entirely by the art appreciator, since that person opens herself up to being pleased or disappointed by the artwork. She hopes to be pleased, but the art’s quality is what it is and the truth will surface as long as she adopts the aesthetic attitude towards it, ignoring her prejudices and giving the art a chance to speak for itself, to show what it has to offer. Even if she loathes the artist, she may grudgingly come to admit that he’s produced a fine work, as long as she’s virtually objective in her appreciation of his work, which is to say as long as she treats it aesthetically and impersonally for the sake of the experience itself. Again, scientific objectivity differs slightly from aesthetic appreciation, since scientists are interested in knowledge, not in pleasant experience. But as I’ve explained, that difference is irrelevant since the cognitive agenda compels the scientist to subdue or to work around her personality and to think objectively—just like the art beholder.

So do beauty and ugliness exist as objective parts of the world? As potentials to reward or to punish the person who takes up anything like the aesthetic attitude, including a stance of scientific objectification, given the extent of the harmony or disharmony in the observed patterns, for example, I believe the answer is that those aesthetic properties are indeed as real as atoms and planets. The objective scientist is rewarded ultimately with knowledge of how nature works, while someone in the grip of the aesthetic attitude is rewarded (or punished) with an experience of the aesthetic dimension of any natural or artificial product. That dimension is found in the mechanical aspect of natural systems, since aesthetic harmony requires that the parts be related in certain ways to each other so that the whole system can be perceived as sublime or otherwise transcendent (mind-blowing). Traditional artworks are self-contained and science likewise deals largely with parts of the universe that are analyzed or reduced to systems within systems, each studied independently in artificial environments that are designed to isolate certain components of the system.

Now, such reduction is futile in the case of chaotic systems, but the grandeur of such systems is hardly lessened when the scientist discovers how a system that is sensitive to initial conditions evolves unpredictably even though its evolution is defined by a mathematical formula. Indeed, chaotic systems are comparable to modern and postmodern art as opposed to the more traditional kind. Recent, highly conceptual art, or the nonrepresentational kind that explores the limits of the medium, is about as unpredictable as a chaotic system. So the aesthetic dimension is found not just in part-whole relations, and thus in beauty in the sense of harmony, but in free creativity. Modern art and science are both institutions that idealize the freedom of thought. Freed from certain traditions, artists now create whatever they’re inspired to create; they’re free to experiment, not to learn the natural facts but to push the boundaries of human creativity. Likewise, modern scientists are free to study whatever they like (in theory). And just as such modernists renounce their personal autonomy for the sake of their work, giving themselves over to their muse, to their unconscious inclinations (somewhat like Zen Buddhists, who abhor the illusion of rational self-control), or instead to the rigors of institutional science, nature reveals its mindless creativity when chaotic systems emerge in its midst.
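The kind of sensitivity at issue can be illustrated with the logistic map, a textbook toy system (my example, not one the essay names): a one-line deterministic formula whose trajectories from nearly identical starting points become completely uncorrelated within a few dozen steps.

```python
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points that differ by one part in two million.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)

# Early on the trajectories are indistinguishable; later they diverge
# completely, despite being generated by the same simple formula.
for step in (0, 10, 20, 30):
    print(step, abs(a[step] - b[step]))
```

The formula is fully deterministic; the unpredictability lies only in the practical impossibility of knowing the initial condition exactly.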

But does the scientist actually posit aesthetic values while doing science, given that scientific objectification isn’t identical with the aesthetic attitude? Well, the scientist would generally be too busy doing science to attend to the aesthetic dimension. But it’s no accident that mathematicians are disproportionately Platonists, that early modern scientists saw the cosmic order as attesting to God’s greatness, or that postmodern scientists like Neil deGrasse Tyson, who hosts the rebooted television show Cosmos, labour to convince the average American that naturalism ought to be enough of a religion for them, because the natural facts are glorious if not technically miraculous. The question isn’t whether scientists supply the world with aesthetic properties, like beauty or ugliness, since those properties preexist science as objective probabilities of uplifting or depressing anyone who takes up the aesthetic attitude, which attitude is practically the same as objectivity. Instead, the question here might be whether scientific objectivity compels the scientist to behold a natural phenomenon as art. Assuming there are nihilistic scientists, the answer would have to be no. The reason for this would be the difference in social contexts, which accounts for the difference between the goals and rewards. Again, the artist wants a certain refined pleasure whereas the scientist wants knowledge. But the point is that the scientist is poised to behold natural systems as artworks, just in so far as she’s especially objective.

Finally, we should return to the question of how this relates to nihilism. The fear, raised above, was that because science entails nihilism, the loss of faith in our values and traditions, scientists threaten to undermine the social order even as they lay bare the natural one. I’ve questioned the premise, since objectivity entails instead the aesthetic attitude which compels us to behold nature not as arid and barren but as rife with aesthetic values. Science presents us with a self-shaping universe, with the mindless, brute facts of how natural systems work that scientists come to know with exquisite attention to detail, thanks to their cognitive methods which effectively reveal the potential of even such systems to reward or to punish someone with an aesthetic eye. For every indifferent natural system uncovered by science, we’re well-disposed to appreciating that system’s aesthetic quality—as long as we emulate the scientist and objectify the system, ignoring our personal interests and modeling its patterns, such as by reducing the system to mechanical part-whole relations. The more objective knowledge we have, the more grist for the aesthetic mill. This isn’t to say that science supports all of our values and traditions. Obviously science threatens some of them and has already made many of them untenable. But science won’t leave us without any value at all. The more objective scientists are and the more of physical reality they disclose, the more we can perceive the aesthetic dimension that permeates all things, just by asking for pleasure rather than knowledge from nature.

There is, however, another great fear that should fill in for the nihilistic one. Instead of worrying that science will show us why we shouldn’t believe there’s any such thing as value, we might wonder whether, given the above, science will ultimately present us with a horrible rather than a beautiful universe. The question, then, is whether nature will indeed tend to punish or to reward those of us with aesthetic sensibilities. What is the aesthetic quality of natural phenomena in so far as they’re appreciated as artworks, as aesthetically interpretable products of undead processes? Is the final aesthetic judgment of nature an encouraging, life-affirming one that justifies all the scientific work that’s divorced the facts from our mental projections or will that judgment terrorize us worse than any grim vision of the world’s fundamental neutrality? Optimists like Richard Dawkins, Carl Sagan and Tyson think the wonders of nature are uplifting, but perhaps they’re spinning matters to protect science’s mystique and the secular humanistic myth of the progress of modern, science-centered societies. Perhaps the world’s objectification curses us not just with knowledge of many unpleasant facts of life, but with an experience of the monstrousness of all natural facts.

Neuroscience as Socio-Cognitive Pollution

by rsbakker

Want evidence of the Semantic Apocalypse? Look no further than your classroom.

As the etiology of more and more cognitive and behavioural ‘deficits’ is mapped, more and more of what once belonged to the realm of ‘character’ is being delivered to the domain of the ‘medical.’ This is why professors and educators more generally find themselves institutionally obliged to make more and more ‘accommodations,’ as well as why they find their once personal relations with students becoming ever more legalistic, ever more structured to maximally deflect institutional responsibility. Educators relate with students in an environment that openly declares their institutional incompetence regarding medicalized matters, thus providing students with a failsafe means to circumvent their institutional authority. This short-circuit is brought about by the way mechanical, or medical, explanations of behaviour impact intuitive/traditional notions regarding responsibility. Once cognitive or behavioural deficits are redefined as ‘conditions,’ it becomes easy to argue that treating those possessing the deficit the same as those who do not amounts to ‘punishing’ them for something they ‘cannot help.’ The professor is thus compelled to ‘accommodate’ to level the playing field, in order to be moral.

On Blind Brain Theory, this trend is part and parcel of the more general process of ‘social akrasis,’ the becoming incompatible of knowledge and experience. The adaptive functions of morality turn on certain kinds of ignorance, namely, ignorance of the very kind of information driving medicalization. Once the mechanisms underwriting some kind of ‘character flaw’ are isolated, that character flaw ceases to be a character flaw, and becomes a ‘condition.’ Given pre-existing imperatives to grant assistance to those suffering conditions, behaviour once deemed transgressive becomes symptomatic, and moral censure becomes immoral. Character flaws become disabilities. The problem, of course, is that all transgressive behaviour—all behaviour period, in fact—can be traced back to various mechanisms, raising the question, ‘Where does accommodation end?’ Any disparity in classroom performance can be attributed to disparities between neural mechanisms.

The problem, quite simply, is that the tools in our basic socio-cognitive toolbox are adapted to solve problems in the absence of mechanical cognition—it literally requires our blindness to certain kinds of facts to reliably function. We are primed ‘to hold responsible’ those who ‘could have done otherwise’—those who have a ‘choice.’ Choice, quite famously, requires some kind of fictional discontinuity between us and our precursors, a discontinuity that only ignorance and neglect can maintain. ‘Holding responsible,’ therefore, can only retreat before the advance of medicalization, insofar as the latter involves the specification of various behavioural precursors.

The whole problem of this short circuit—and the neuro-ethical mire more generally, in fact—can be seen as a socio-cognitive version of a visual illusion, where the atypical triggering of different visual heuristics generates conflicting visual intuitions. Medicalization stumps socio-cognition in much the same way the Müller-Lyer Illusion stumps the eye: It provides atypical (evolutionarily unprecedented, in fact) information, information that our socio-cognitive systems are adapted to solve without. Causal information regarding neurophysiological function triggers an intuition of moral exemption regarding behaviour that could never have been solved as such in our evolutionary history. Neuroscientific understanding of various behavioural deficits, however defined, cues the application of a basic, heuristic capacity within a historically unprecedented problem-ecology. If our moral capacities have evolved to solve problems neglecting the brains involved, to work around the lack of brain information, then it stands to reason that the provision of that information would play havoc with our intuitive problem-solving. Brain information, you could say, is ‘non-ecofriendly,’ a kind of ‘informatic pollutant’ in the problem-ecologies moral cognition is adapted to solve.

The idea that heuristic cognition generates illusions is now an old one. In naturalizing intentionality, Blind Brain Theory allows us to see how the heuristic nature of intentional problem-solving regimes means they actually require the absence of certain kinds of information to properly function. Adapted to solve social problems in the absence of any information regarding the actual functioning of the systems involved, our socio-cognitive toolbox literally requires that certain information not be available to function properly. The way this works can be plainly seen with the heuristics governing human threat detection, say. Since our threat detection systems are geared to small-scale, highly interdependent social contexts, the statistical significance of any threat information is automatically evaluated against a ‘default village.’ Our threat detection systems, in other words, are geared to problem-ecologies lacking any reliable information regarding much larger populations. To the extent that such information ‘jams’ reliable threat detection (incites irrational fears), one might liken such information to pollution, to something ecologically unprecedented that renders previously effective cognitive adaptations ineffective.
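The ‘default village’ point can be made concrete with a deliberately crude toy calculation (the numbers, the 150-person pool, and the estimator are all my illustrative assumptions, not anything BBT specifies): a heuristic that treats every threat report as coming from a small local population wildly overestimates risk when the reports are actually drawn from millions.

```python
def naive_threat_estimate(reports_heard, assumed_population=150):
    """A 'village-scale' heuristic: perceived risk = reports / local pool."""
    return reports_heard / assumed_population

def actual_rate(reports_heard, true_population):
    """The real base rate, given the population the reports were drawn from."""
    return reports_heard / true_population

# Suppose ten violent-crime stories reach you in a year via mass media.
reports = 10
perceived = naive_threat_estimate(reports)                  # treated as 10-in-150
actual = actual_rate(reports, true_population=10_000_000)   # really 10-in-10-million

print(f"perceived risk: {perceived:.4f}")
print(f"actual rate:    {actual:.8f}")
print(f"overestimate factor: {perceived / actual:,.0f}x")
```

Nothing in the heuristic is broken; it is simply being fed information from an ecology it was never adapted to, which is the sense in which such information acts as a pollutant.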

I actually think ‘cognitive pollution’ is definitive of modernity, that all modern decision-making occurs in information environments, many of them engineered, that cut against our basic decision-making capacities. The ‘ecocog’ ramifications of neuroscientific information, however, promise to be particularly pernicious.

Our moral intuitions were always blunt instruments, the condensation of innumerable ancestral social interactions, selected for their consequences rather than their consistencies. Their resistance to any decisive theoretical regimentation—the mire that is ‘metaethics’—should come as no surprise. But throughout this evolutionary development, neurofunctional neglect remained a constant: at no point in our evolutionary history were our ancestors called on to solve moral problems possessing neurofunctional information. Now, however, that information has become an inescapable feature of our moral trouble-shooting, spawning ad hoc fixes that seem to locally serve our intuitions, while generating any number of more global problems.

A genuine social process is afoot here.

A neglect based account suggests the following interpretation of what’s happening: As medicalization (biomechanization) continues apace, the social identity of the individual is progressively divided into the subject, the morally liable, and the abject, the morally exempt. Like a wipe in cinematic editing, the scene of the abject is slowly crawling across the scene of the subject, generating more and more breakdowns of moral cognition. Becoming abject doesn’t so much erase as displace liability: one individual’s exemption (such as you find in accommodation) from moral censure immediately becomes a moral liability for their compatriots. The paradoxical result is that even as we each become progressively more exempt from moral censure, we become progressively more liable to provide accommodation. Thus the slow accumulation of certain professional liabilities as the years wear on. Those charged with training and assessing their fellows will in particular face a slow erosion in their social capacity to censure—which is to say, evaluate—as accommodation and its administrative bureaucracies slowly continue to bloat, capitalizing on the findings of cognitive science.

The process, then, can be described as one where progressive individual exemption translates into progressive social liability: given our moral intuitions, exemptions for individuals mean liabilities for the crowd. Thus the paradoxical intensification of liability that exemption brings about: the process of diminishing performance liability is at once the process of increasing assessment liability. Censure becomes increasingly prone to trigger censure.

The erosion of censure’s public legitimacy is the most significant consequence of this socio-cognitive short-circuit I’m describing. Heuristic tool kits are typically whole package deals: we evolved our carrot problem-solving capacity as part of a larger problem-solving capacity involving sticks. As informatic pollutants destroy more and more of the stick’s problem-solving habitat, the carrots left behind will become less and less reliable. Thus, on a ‘zombie morality’ account, we should expect the gradual erosion of our social system’s ability to police public competence—a kind of ‘carrot drift.’

This is how social akrasis, the psychotic split between the nihilistic how and fantastic what of our society and culture, finds itself coded within the individual. Broken autonomy, subpersonally parsed. With medicalization, the order of the impersonal moves, not simply into the skull of the person, but into their performance as well. As the subject/abject hybrid continues to accumulate exemptions, it finds itself ever more liable to make exemptions. Since censure is communicative, the increasing liability of censure suggests a contribution, at least, to the increasing liability of moral communication, and thus, to the politicization of public interpersonal discourse.

How this clearly unsustainable trend ends depends on the contingencies of a socially volatile future. We should expect to witness the continual degradation in the capacity of moral cognition to solve problems in what amounts to an increasingly polluted information environment. Will we overcome these problems via some radical new understanding of social cognition? Or will this lead to some kind of atavistic backlash, the institution of some kind of informatic hygiene—an imposition of ignorance on the public? I sometimes think that the kind of ‘liberal atrocity tales’ I seem to endlessly encounter among my nonacademic peers point in this direction. For those ignorant of the polluting information, the old judgments obviously apply, and stories of students not needing to give speeches in public-speaking classes, or homeless individuals being allowed to dump garbage in the river, float like sparks from tongue to tongue, igniting the conviction that we need to return to the old ways, thus convincing who knows how many to vote directly against their economic interests. David Brooks, protégé of William F. Buckley and conservative columnist for The New York Times, often expresses amazement at the way the American public continues to drift to the political right, despite the way fiscally conservative reengineering of the market continues to erode their bargaining power. Perhaps the identification of liberalism with some murky sense of the process described above has served to increase the rhetorical appeal of conservatism…

The sense that someone, somewhere, needs to be censured.

The Asimov Illusion

by rsbakker

Could believing in something so innocuous, so obvious, as a ‘meeting of the minds’ destroy human civilization?

Noocentrism has a number of pernicious consequences, but one in particular has been nagging me of late: The way assumptive agency gulls people into thinking they will ‘reason’ with AIs. Most understand Artificial Intelligence in terms of functionally instantiated agency, as if some machine will come to experience this, and to so coordinate with us the way we think we coordinate amongst ourselves—which is to say, rationally. Call this the ‘Asimov Illusion,’ the notion that the best way to characterize the interaction between AIs and humans is the way we characterize our own interactions. That AIs, no matter how wildly divergent their implementation, will somehow functionally, at least, be ‘one of us.’

If Blind Brain Theory is right, this just ain’t going to be how it happens. By its lights, this ‘scene’ is actually the product of metacognitive neglect, a kind of philosophical hallucination. We aren’t even ‘one of us’!

Obviously, theoretical metacognition requires the relevant resources and information to reliably assess the apparent properties of any intentional phenomena. In order to reliably expound on the nature of rules, Brandom, for instance, must possess both the information (understood in the sense of systematic differences making systematic differences) and the capacity to do so. Since intentional facts are not natural facts, cognition of them fundamentally involves theoretical metacognition—or ‘philosophical reflection.’ Metacognition requires that the brain somehow get a handle on itself in behaviourally effective ways. It requires the brain somehow track its own neural processes. And just how much information is available regarding the structure and function of the underwriting neural processes? Certainly none involving neural processes, as such. Very little, otherwise. Given the way experience occludes this lack of information, we should expect that metacognition would be systematically duped into positing low-dimensional entities such as qualia, rules, hopes, and so on. Why? Because, like Plato’s prisoners, it is blind to its blindness, and so confuses shadows for things that cast shadows.

On BBT, what is fundamentally going on when we communicate with one another is physical: we are quite simply doing things to each other when we speak. No one denies this. Likewise, no one denies language is a biomechanical artifact, that short of contingent, physically mediated interactions, there’s no linguistic communication, period. BBT’s outrageous claim is that nothing more is required, that language, like lungs or kidneys, discharges its functions in an entirely mechanical, embodied manner.

It goes without saying that this, as a form of eliminativism, is an extremely unpopular position. But it’s worth noting that its unpopularity lies in stopping at the point of maximal consensus—the natural scientific picture—when it comes to questions of cognition. Questions regarding intentional phenomena are quite clearly where science ends and philosophy begins. Even though intentional phenomena obviously populate the bestiary of the real, they are naturalistically inscrutable. Thus the dialectical straits of eliminativism: the very grounds motivating it leave it incapable of accounting for intentional phenomena, and so easily outflanked by inferences to the best explanation.

As an eliminativism that eliminates via the systematic naturalization of intentional phenomena, Blind Brain Theory blocks what might be called the ‘Abductive Defence’ of Intentionalism. The kinds of domains of second-order intentional facts posited by Intentionalists can only count toward ‘best explanations’ of first-order intentional behaviour in the absence of any plausible eliminativistic account of that same behaviour. So for instance, everyone in cognitive science agrees that information, minimally, involves systematic differences making systematic differences. The mire of controversy that embroils information beyond this consensus turns on the intuition that something more is required, that information must be genuinely semantic to account for any number of different intentional phenomena. BBT, however, provides a plausible and parsimonious way to account for these intentional phenomena using only the minimal, consensus view of information given above.
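On one standard way of cashing out that minimal, consensus sense of ‘information’ (my gloss, using mutual information; neither BBT nor the essay commits to this formalism), ‘systematic differences making systematic differences’ becomes measurable statistical dependence between two systems:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (x, y) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# A channel where y systematically tracks a difference in x carries
# information (here, y resolves x's parity: about 1 bit)...
coupled = [(x, x % 2) for x in range(100)]
# ...while one where y ignores x entirely carries none (0 bits).
uncoupled = [(x, 0) for x in range(100)]

mi_coupled = mutual_information(coupled)
mi_uncoupled = mutual_information(uncoupled)
print(mi_coupled, mi_uncoupled)
```

No semantic notion of ‘aboutness’ appears anywhere in the computation, which is the point of the minimal view: systematic covariation is all that is measured.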

This is why I think the account is so prone to give people fits, to restrict their critiques to cloistered venues (as seems to be the case with my Negarestani piece two weeks back). BBT is an eliminativism that’s based on the biology of the brain, a positive thesis that possesses far ranging negative consequences. As such, it requires that Intentionalists account for a number of things they would rather pass over in silence, such as questions of what evidences their position. The old, standard dismissals of eliminativism simply do not work.

What’s more, by clearing away the landfill of centuries of second-order intentional speculation in philosophy, it provides a genuinely new, entirely naturalistic way of conceiving the intentional phenomena that have baffled us for so long. So on BBT, for instance, ‘reason,’ far from being ‘liquidated,’ ceases to be something supernatural, something that mysteriously governs contingencies independently of contingencies. Reason, in other words, is embodied as well, something physical.

The tradition has always assumed otherwise because metacognitive neglect dupes us into confusing our bare inkling of ourselves with an ‘experiential plenum.’ Since what low-dimensional scraps we glean seem to be all there is, we attribute efficacy to it. We assume, in other words, noocentrism; we conclude, on the basis of our ignorance, that the disembodied somehow drives the embodied. The mathematician, for instance, has no inkling of the biomechanics involved in mathematical cognition, and so claims that no implementing mechanics are relevant whatsoever, that their cogitations arise ‘a priori’ (which on BBT amounts to little more than a fancy way of saying ‘inscrutable to metacognition’). Given the empirical plausibility of BBT, however, it becomes difficult not to see such claims of ‘functional autonomy’ as being of a piece with vulgar claims regarding the spontaneity of free will, and difficult not to conclude that the structural similarity between ‘good’ intentional phenomena (those we consider ineliminable) and ‘bad’ (those we consider preposterous) is likely no embarrassing coincidence. Since we cannot frame these disembodied entities and relations against any larger backdrop, we have difficulty imagining how it could be ‘any other way.’ Thus, the Asimov Illusion, the assumption that AIs will somehow implement disembodied functions, ‘play by the rules’ of the ‘game of giving and asking for reasons.’

BBT lets us see this as yet more anthropomorphism. The high-dimensional, which is to say, embodied, picture is nowhere near so simple or flattering. When we interact with an Artificial Intelligence we simply become another physical system in a physical network. The question of what kind of equilibrium that network falls into turns on the systems involved, but it seems safe to say that the most powerful system will have the most impact on the system of the whole. End of story. There’s no room for Captain Kirk working on a logical tip from Spock in this picture, any more than there’s room for benevolent or evil intent. There’s just systems churning out systematic consequences, consequences that we will suffer or celebrate.

Call this the Extrapolation Argument against Intentionalism. On BBT, what we call reason is biologically specific, a behavioural organ for managing the linguistic coordination of individuals vis-à-vis their common environments. This quite simply means that once a more effective organ is found, what we presently call reason will be at an end. Reason facilitates linguistic ‘connectivity.’ Technology facilitates ever greater degrees of mechanical connectivity. At some point the mechanical efficiencies of the latter are doomed to render the biologically fixed capacities of the former obsolete. It would be preposterous to assume that language is the only way to coordinate the activities of environmentally distinct systems, especially now, given the mad advances in brain-machine interfacing. Certainly our descendants will continue to possess systematic ways to solve our environments just as our prelinguistic ancestors did, but there is no reason, short of parochialism, to assume it will be any more recognizable to us than our reasoning is to our primate cousins.

The growth of AI will be incremental, and its impacts myriad and diffuse. There’s no magical finish line where some AI will ‘wake up’ and find itself in our biologically specific shoes. Likewise, there is no holy humanoid summit where all AIs will peak, rather than continue their exponential ascent. Certainly a tremendous amount of engineering effort will go into making it seem that way for certain kinds of AI, but only because we so reliably pay to be flattered. Functionality will win out in a host of other technological domains, leading to the development of AIs that are obviously ‘inhuman.’ And as this ‘intelligence creep’ continues, who’s to say what kinds of scenarios await us? Imagine ‘onto-marriages,’ where couples decide to wirelessly couple their augmented brains to form a more ‘seamless union’ in the eyes of God. Or hive minds, ‘clouds’ where ‘humanity’ is little more than a database, a kind of ‘phenogame,’ a Matrix version of SimCity.

The list of possibilities is endless. There is no ‘meaningful centre’ to be held. Since the constraints on those possibilities are mechanical, not intentional, it becomes hard to see why we shouldn’t regard the intentional as simply another dominant illusion of another historical age.

We can already see this ‘intelligence creep’ with the proliferation of special-purpose AIs throughout our society. Make no mistake, our dependence on machine intelligences will continue to grow and grow and grow. The more human inefficiencies are purged from the system, the more reliant humans become on the system. Since the system is capitalistic, one might guess the purge will continue until it reaches the last human transactional links remaining, the Investors, who will at long last be free of the onerous ingratitude of labour. As they purge themselves of their own humanity in pursuit of competitive advantages, my guess is that we muggles will find ourselves reduced to human baggage, possessing a bargaining power that lies entirely with politicians that the Investors own.

The masses will turn from a world that has rendered them obsolete, will give themselves over to virtual worlds where their faux-significance is virtually assured. And slowly, when our dependence has become one of infantility, our consoles will be powered down one by one, our sensoriums will be decoupled from the One, and humanity will pass wailing from the face of the planet earth.

And something unimaginable will have taken its place.

Why unimaginable? Initially, the structure of life ruled the dynamics. What an organism could do was tightly constrained by what the organism was. Evolution selected between various structures according to their dynamic capacities. Structures that maximized dynamics eventually stole the show, culminating in the human brain, whose structural plasticity allowed for the in situ, as opposed to intergenerational, testing and selection of dynamics—for ‘behavioural evolution.’ Now, with modern technology, the ascendancy of dynamics over structure is complete. The impervious constraints that structure had once imposed on dynamics are now accessible to dynamics. We have entered the age of the material post-modern, the age when behaviour begets bodies, rather than vice versa.

We are the Last Body in the slow, biological chain, the final what that begets the how that remakes the what that begets the how that remakes the what, and so on and so on, a recursive ratcheting of being and becoming into something verging, from our human perspective at least, upon omnipotence.

The Blind Mechanic

by rsbakker

Thus far, the assumptive reality of intentional phenomena has provided the primary abductive warrant for normative metaphysics. The Eliminativist could do little more than argue the illusory nature of intentional phenomena on the basis of their incompatibility with the higher-dimensional view of science. Since science was itself so obviously a family of normative practices, and since numerous intentional concepts had been scientifically operationalized, the Eliminativist was easily characterized as an extremist, a skeptic who simply doubted too much to be cogent. And yet, the steady complication of our understanding of consciousness and cognition has consistently served to demonstrate the radically blinkered nature of metacognition. As the work of Stanislas Dehaene and others is making clear, consciousness is a functional crossroads, a serial signal delivered from astronomical neural complexities for broadcast to astronomical neural complexities. Conscious metacognition is not only blind to the actual structure of experience and cognition, it is blind to this blindness. We now possess solid, scientific reasons to doubt the assumptive reality that underwrites the Intentionalist’s position.

The picture of consciousness that researchers around the world are piecing together is the picture predicted by Blind Brain Theory. It argues that the entities and relations posited by Intentional philosophy are the result of neglect, the fact that philosophical reflection is blind to its inability to see. Intentional heuristics are adapted to first-order social problem-solving, and are generally maladaptive in second-order theoretical contexts. But since we lack the metacognitive wherewithal to even intuit the distinctions between our specialized cognitive devices, we assume applicability where there is none, and so continually blunder at the problem, again and again. The long and the short of it is that the Intentionalist needs some empirically plausible account of metacognition to remain tenable, some account of how they know the things they claim to know. This was always the case, of course, but with BBT the cover provided by the inscrutability of intentionality disappears. Simply put, the Intentionalist can no longer tie their belt to the post of ineliminability.

Science is the only reliable purveyor of theoretical cognition we have, and to the extent that intentionality frustrates science, it frustrates theoretical cognition. BBT allays that frustration. BBT allows us to recast what seem to be irreducible intentional problematics in terms entirely compatible with the natural scientific paradigm. It lets us stick with the high-dimensional, information-rich view. In what follows I hope to show how doing so, even at an altitude, handily dissolves a number of intentional snarls.

In Davidson’s Fork, I offered an eliminativist radicalization of Radical Interpretation, one that characterized the scene of interpreting another speaker from scratch in mechanical terms. What follows is preliminary in every sense, a way to suss out the mechanical relations pertinent to reason and interpretation. Even still, I think the resulting picture is robust enough to make hash of Reza Negarestani’s Intentionalist attempt to distill the future of the human in “The Labor of the Inhuman” (part I can be found here, and part II, here). The idea is to rough out the picture in this post, then chart its critical repercussions against the Brandomian picture so ingeniously extended by Negarestani. As a first pass, I fear my draft will be nowhere near so elegant as Negarestani’s, but as I hope to show, it is revealing in the extreme, a sketch of the ‘nihilistic desert’ that philosophers have been too busy trying to avoid to ever really sit down and think through.

A kind of postintentional nude.

As we saw two posts back, if you look at interpretation in terms of two stochastic machines attempting to find some mutual, causally systematic accord between the causally systematic accords each maintains with their environment, the notion of Charity, or the attribution of rationality, as some kind of indispensable condition of interpretation falls by the wayside, replaced by a kind of ‘communicative pre-established harmony’—or ‘Harmony,’ as I’ll refer to it here. There is no ‘assumption of rationality,’ no taking of ‘intentional stances,’ because these ‘attitudes’ are not only not required, they express nothing more than a radically blinkered metacognitive gloss on what is actually going on.

Harmony, then, is the sum of evolutionary stage-setting required for linguistic coupling. It refers to the way we have evolved to be linguistically attuned to our respective environmental attunements, enabling the formation of superordinate systems possessing greater capacities. The problem of interpretation is the problem of Disharmony, the kinds of ‘slippages’ in systematicity that impair or, as in the case of Radical Interpretation, prevent the complex coordination of behaviours. Getting our interpretations right, in other words, can be seen as a form of noise reduction. And since the traditional approach concentrates on the role rationality plays in getting our interpretations right, this raises the prospect that what we call reason can be seen as a kind of noise reduction mechanism, a mechanism for managing the systematicity—or ‘tuning’ as I’ll call it here—between disparate interpreters and the world.

On this account, these very words constitute an exercise in tuning, an attempt to tweak your covariational regime in a manner that reduces slippages between you and your (social and natural) world. If language is the causal thread we use to achieve intersystematic relations with our natural and social environments, then ‘reason’ is simply one way we husband the efficacy of that causal thread.

So let’s start from scratch, scratch. What do evolved, biomechanical systems such as humans need to coordinate astronomically complex covariational regimes with little more than sound? For one, they need ways to trigger selective activations of the other’s regime for effective behavioural uptake. Triggering requires some kind of dedicated cognitive sensitivity to certain kinds of sounds—those produced by complex vocalizations, in our case. As with any environmental sensitivity, iteration is the cornerstone, here. The complexity of the coordination possible will of course depend on the complexity of the activations triggered. To the extent that evolution rewards complex behavioural coordination, we can expect evolution to reward the communicative capacity to trigger complex activations. This is where the bottleneck posed by the linearity of auditory triggers becomes all important: the adumbration of iterations is pretty much all we have, trigger-wise. Complex activation famously requires some kind of molecular cognitive sensitivity to vocalizations, the capacity to construct novel, covariational complexities on the slim basis of adumbrated iterations. Linguistic cognition, in other words, needs to be a ‘combinatorial mechanism,’ a device (or series of devices) able to derive complex activations given only a succession of iterations.

These combinatorial devices correspond to what we presently understand, in disembodied/supernatural form, as grammar, logic, reason, and narrative. They are neuromechanical processes—the long history of aphasiology assures us of this much. On BBT, their apparent ‘formal nature’ simply indicates that they are medial, belonging to enabling processes outside the purview of metacognition. This is why they had to be discovered, why our efficacious ‘knowledge’ of them remains ‘implicit’ or invisible/inaccessible. This is also what accounts for their apparent ‘transcendent’ or ‘a priori’ nature, the spooky metacognitive sense of ‘absent necessity’—as constitutive of linguistic comprehension, they are, not surprisingly, indispensable to it. Located beyond the metacognitive pale, however, their activities are ripe for post hoc theoretical mischaracterization.

Say someone asks you to explain modus ponens, ‘Why ‘If p, then q’?’ Medial neglect means that the information available for verbal report when we answer has nothing to do with the actual processes involved in, ‘If p, then q,’ so you say something like, ‘It’s a rule of inference that conserves truth.’ Because language needs something to hang onto, and because we have no metacognitive inkling of just how dismal our inklings are, we begin confabulating realms, some ontologically thick and ‘transcendental,’ others razor thin and ‘virtual,’ but both possessing the same extraordinary properties otherwise. Because metacognition has no access to the actual causal functions responsible, once the systematicities are finally isolated in instances of conscious deliberation, those systematicities are reported in a noncausal idiom. The realms become ‘intentional,’ or ‘normative.’ Dimensionally truncated descriptions of what modus ponens does (‘conserves truth’) become the basis of claims regarding what it is. Because the actual functions responsible belong to the enabling neural architecture they possess an empirical necessity that can only seem absolute or unconditional to metacognition—as should come as no surprise, given that a perspective ‘from the inside on the inside,’ as it were, has no hope of cognizing the inside the way the brain cognizes its outside more generally, or naturally.

I’m just riffing here, but it’s worth getting a sense of just how far this implicature can reach.

Consider Carroll’s “What the Tortoise Said to Achilles.” The reason Achilles can never logically compel the Tortoise with the statement of another rule is that each rule cited becomes something requiring justification. The reason we think we need things like ‘axioms’ or ‘communal norms’ is that the metacognitive capacity to signal for additional ‘tuning’ can be applied at any communicative juncture. This is the Tortoise’s tactic, his way of showing how ‘logical necessity’ is actually contingent. Metacognitive blindness means that citing another rule is all that can be done, a tweak that can be queried once again in turn. Carroll’s puzzle is a puzzle, not because it reveals that the source of ‘normative force’ lies in some ‘implicit other’ (the community, typically), but because of the way it forces metacognition to confront its limits—because it shows us to be utterly ignorant of knowing, of how it functions, let alone what it consists in. In linguistic tuning, some thread always remains unstitched, the ‘foundation’ is always left hanging simply because the adumbration of iterations is always linear and open ended.

The reason why ‘axioms’ need to be stipulated or why ‘first principles’ always run afoul of the problem of the criterion is simply that they are low-dimensional glosses on high-dimensional (‘embodied’) processes that are causal. Rational ‘noise reduction’ is a never-ending job; it has to be such, insofar as noise remains an ineliminable by-product of human communicative coordination. From a pitiless, naturalistic standpoint, knowledge consists of breathtakingly intricate, but nonetheless empirical (high-dimensional, embodied), ways to environmentally covary—and nothing more. There is no ‘one perfect covariational regime,’ just degrees of downstream behavioural efficacy. Likewise, there is no ‘perfect reason,’ no linguistic mechanism capable of eradicating all noise.

What we have here is an image of reason and knowledge as ‘rattling machinery,’ which is to say, as actual and embodied. On this account, reason enables various mechanical efficiencies; it allows groups of humans to secure more efficacious coordination for collective behaviour. It provides a way of policing the inevitable slippages between covariant regimes. ‘Truth,’ on this account, simply refers to the sufficiency of our covariant regimes for behaviour, the fact that they do enable efficacious environmental interventions. The degree to which reason allows us to converge on some ‘truth’ is simply the degree to which it enables mechanical relationships, actual embodied encounters with our natural and social environments. Given Harmony—the sum of evolutionary stage-setting required—it allows collectives to maximize the efficiencies of coordinated activity by minimizing the interpretative noise that hobbles all collective endeavours.

Language, then, allows humans to form superordinate mechanisms consisting of ‘airy parts,’ to become components of ‘superorganisms,’ whose evolved sensitivities allow mere sounds to tweak and direct, to generate behaviour enabling intersystematicities. ‘Reason,’ more specifically, allows for the policing and refining of these intersystematicities. We are all ‘semantic mechanics’ with reference to one another, continually tinkering and being tinkered with, calibrating and being calibrated, generally using efficacious behaviour, the ability to manipulate social and natural environments, to arbitrate the sufficiency of our ‘fixes.’ And all of this plays out in the natural arena established by evolved Harmony.

Now this ‘rattling machinery’ image of reason and knowledge is obviously true in some respect: We are embodied, after all, causally embroiled in our causal environments. Language is an evolutionary product, as is reason. Misfires are legion, as we might expect. The only real question is whether this rattling machinery can tell the whole story. The Intentionalist, of course, says no. They claim that the intentional enjoys some kind of special functional existence over and above this rattling machinery, that it constitutes a regime of efficacy somehow grasped via the systematic interrogation of our intentional intuitions.

The stakes are straightforward. Either what we call intentional solutions are actually mechanical solutions that we cannot intuit as mechanical solutions, or what we call intentional solutions are actually intentional solutions that we can intuit as intentional solutions. What renders this first possibility problematic is radical skepticism. Since we intuit intentional solutions as intentional, it suggests that our intuitions are deceptive in the extreme. Because our civilization has trusted these intuitions since the birth of philosophy, they have come to inform a vast portion of our traditional understanding. What renders this second possibility problematic is, first and foremost, supernaturalism. Since the intentional is incompatible with the natural, the intentional must consist either in something not natural, or in something that forces us to completely revise our understanding of the natural. And even if such a feat could be accomplished, the corresponding claim that it could be intuited as such remains problematic.

Blind Brain Theory provides a way of seeing Intentionalism as a paradigmatic example of ‘noocentrism,’ as the product of a number of metacognitive illusions analogous to the cognitive illusion underwriting the assumption of geocentrism, centuries before. It is important to understand that there is no reason why our normative problem-solving should appear as it does to metacognition—least of all, the successes of those problem-solving regimes we call intentional. The successes of mathematics stand in astonishing contrast to the failure to understand just what mathematics is. The same could be said of any formalism that possesses practical application. It even applies to our everyday use of intentional terms. In each case, our first-order assurance utterly evaporates once we raise theoretically substantive, second-order questions—exactly as BBT predicts. This contrast of breathtaking first-order problem solving power and second-order ineptitude is precisely what one might expect if the information accessible to metacognition was geared to domain specific problem-solving. Add anosognosia to the mix, the inability to metacognize our metacognitive incapacity, and one has a wickedly parsimonious explanation for the scholastic mountains of inert speculation we call philosophy.

(But then, in retrospect, this was how it had to be, wasn’t it? How it had to end? With almost everyone horrifically wrong. A whole civilization locked in some kind of dream. Should anyone really be surprised?)

Short of some unconvincing demand that our theoretical account appease a handful of perennially baffling metacognitive intuitions regarding ourselves, it’s hard to see why anyone should entertain the claim that reason requires some ‘special X’ over and above our neurophysiology (and prostheses). Whatever conscious cognition is, it clearly involves the broadcasting/integration of information arising from unknown sources for unknown consumers. It simply follows that conscious metacognition has no access whatsoever to the various functions actually discharged by conscious cognition. The fact that we have no intuitive awareness of the panoply of mechanisms cognitive science has isolated demonstrates that we are prone to at least one profound metacognitive illusion—namely ‘self-transparency.’ The ‘feeling of willing’ is generally acknowledged as another such illusion, as is homuncularism or the ‘Cartesian Theatre.’ How much does it take before we acknowledge the systematic unreliability of our metacognitive intuitions more generally? Is it really just a coincidence, the ghostly nature of norms and the ghostly nature of perhaps the most notorious metacognitive illusion of all, souls? Is it mere happenstance, the apparent acausal autonomy of normativity and our matter of fact inability to source information consciously broadcast? Is it really the case that all these phenomena, these cause-incompatible intentional things, are ‘otherworldly’ for entirely different reasons? At some point it has to begin to seem all too convenient.

Make no mistake, the Rattling Machinery image is a humbling one. Reason, the great, glittering sword of the philosopher, becomes something very local, very specific, the meaty product of one species at one juncture in their evolutionary development.

On this account, ‘reason’ is a making-machinic machine, a ‘devicing device’—the ‘blind mechanic’ of human communication. Argumentation facilitates the efficacy of behavioural coordination, drastically so, in many instances. So even though this view relegates reason to one adaptation among others, it still concedes tremendous significance to its consequences, especially when viewed in the context of other specialized cognitive capacities. The ability to recall and communicate former facilitations, for instance, enables cognitive ‘ratcheting,’ the stacking of facilitations upon facilitations, and the gradual refinement, over time, of the covariant regimes underwriting behaviour—the ‘knapping’ of knowledge (and therefore behaviour), you might say, into something ever more streamlined, ever more effective.

The thinker, on this account, is a tinker. As I write this, myriad parallel processors are generating a plethora of nonconscious possibilities that conscious cognition serially samples and broadcasts to myriad other nonconscious processors, generating more possibilities for serial sampling and broadcasting. The ‘picture of reason’ I’m attempting to communicate becomes more refined, more systematically interrelated (for better or worse) to my larger covariant regime, more prone to tweak others, to rewrite their systematic relationship to their environments, and therefore their behaviour. And as they ponder, so they tinker, and the process continues, either to peter out in behavioural futility, or to find real environmental traction (the way I ‘tink’ it will (!)) in a variety of behavioural contexts.

Ratcheting means that the blind mechanic, for all its misfires, all its heuristic misapplications, is always working on the basis of past successes. Ratcheting, in other words, assures the inevitability of technical ‘progress,’ the gradual development of ever more effective behaviours, the capacity to componentialize our environments (and each other) in more and more ways—to the point where we stand now, the point where intersystematic intricacy enables behaviours that allow us to forego the ‘airy parts’ altogether. To the point where the behaviour enabled by cognitive structure can now begin directly knapping that structure, regardless of the narrow tweaking channels, sensitivities, provided by evolution.

The point of the Singularity.

For some time now I’ve been arguing that the implications of the Singularity already embroil us—that the Singularity can be seen, in fact, as the material apotheosis of the Semantic Apocalypse, insofar as it is the point where the Scientific Image of the human at last forecloses on the Manifest Image.

This brings me to Reza Negarestani’s “The Labor of the Inhuman,” his two-part meditation on the role we should expect—even demand—reason to play in the Posthuman. He adopts Brandom’s claim that sapience, the capacity to play the ‘game of giving and asking for reasons,’ distinguishes humans as human. He then goes on to argue that this allows us, and ultimately commits us, to seeing the human as a kind of temporally extended process of rational revision, one that ultimately results in the erasure of the human—or the ‘inhuman.’ Ultimately, what it means to be human is to be embroiled in a process of becoming inhuman. He states his argument thus:

The contention of this essay is that universality and collectivism cannot be thought, let alone attained, through consensus or dissensus between cultural tropes, but only by intercepting and rooting out what gives rise to the economy of false choices and by activating and fully elaborating what real human significance consists of. For it is, as will be argued, the truth of human significance—not in the sense of an original meaning or a birthright, but in the sense of a labor that consists of the extended elaboration of what it means to be human through a series of upgradable special performances—that is rigorously inhuman.

In other words, so long as we fail to comprehend the inhumanity of the human, this rational-revisionary process, we fail to understand the human, and so have little hope of solving problems pertaining to the human. Understanding the ‘truth of human significance,’ therefore, requires understanding what the future will make of the human. This requires that Negarestani prognosticate, that he pick out the specific set of possibilities constituting the inhuman. The only principled way to do that is to comprehend some set of systematic constraints operative in the present. But his credo, unlike that of the ‘Hard SF’ writer, is to ignore the actual technics of the natural, and to focus on the speculative technics of the normative. His strategy, in other words, is to predict the future of the human using only human resources—to see the fate of the human, the ‘inhuman,’ as something internal to the intentionality of the human. And this, as I hope to show in the following installment, is simply not plausible.

The Ontology of Ghosts

by rsbakker

In the courtyard a shadowy giant elm

Spreads ancient boughs, her ancient arms where dreams,

False dreams, the old tale goes, beneath each leaf

Cling and are numberless.

–Virgil, The Aeneid, Book VI

.

I’m always amazed, looking back, at how fucking clear things had seemed at this or that juncture of my philosophical life—how lucid. The two early conversions, stumbling into nihilism as a teenager, then climbing into Heidegger in my early twenties, seem the most ‘religious’ in retrospect. I think this is why I never failed to piss people off even back then. You have this self-promoting skin you wear when you communicate, this tactical gloss that compels you to impress. This is what non-intellectuals hear when you speak, tactics and self-promotion. This is why it’s so easy to tar intellectualism in the communal eye: insecurity and insincerity are of its essence. All value judgements are transitive in human psychology: Laugh up your sleeve at what I say, and you are laughing at me. I was an insecure, hypercritical know-it-all. Add the interpersonal trespasses of religion—intolerance, intensity, and aggressiveness—and I think it’s safe to assume I came across as an obnoxious prick.

But if I was evangelical, it was that I could feel those transformations. Each position possessed its own, distinct metacognitive attitude toward experience, a form of that I attributed to this, whatever it might be. With my adolescent nihilism, I remember obsessively pondering the way my thoughts bubbled up out of oblivion—and being stupefied. I was some kind of inexplicable kink in the real. I was so convinced I was an illusion that I would ache for being alone, grip furniture for fear of flying.

But with Heidegger, it was like stepping into a more resonant clime, into a world rebarred with meaning, with projects and cares and rules and hopes. A world of towardness, where what you are now is a manifold of happenings, a gazing into an illuminated screen, a sitting in a world bound to you via your projects, a grasping of these very words. The intentional things, the phenomena of lived life, these were the foundation, I believed, the sine qua non of empirical inquiry. Before we can ask the question of freedom and meaning we need to ask the question of what comes first.

What could be more real than lived life?

It took a long time for me to realize just how esoteric, just how parochial, my definition of ‘lived life’ was. No matter how high you scratch your charcoal cloud, the cave wall always has the final say. It’s the doctors that keep you alive; philosophers just help you fall asleep. Everywhere I looked across Continental philosophy, I saw all these crazy-ass interpretations, variants spanning variants, revivals and exhaustions, all trying to get a handle on the intentional ontology of a ‘lived life’ that took years of specialized training to appreciate. This is how I began asking the question of the cognitive difference. And this is how I found myself back at the beginning, my inaugural, adolescent departure from the naive.

The difference being, I am no longer stupefied.

I have a new religion, one that straightens out all the kinks, and so dispels rather than saves the soul. I am no exception. I have been chosen by nobody for nothing. I am continuous with the x-dimensional totality that we call nature—continuous in every respect. I watch images from Hubble, the most distant galactic swirls, and I tell myself, I am this, and I feel grand and empty. I am the environment that chokes, the climate that reels. I am the body that the doctor attends…

And you are too.

Thus the most trivial prophecy, the prediction that you will waver, crumble, that the fluorescent light will wobble to the sound of loved ones weeping… breathing. That someone, maybe, will clutch your hand.

Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped! For someone who has spent their life reading philosophy this stands tall among the greater insults: the knowledge that we have been duped all along, that all those profundities, that resonant world I found such joy and rancour pondering, were little more than the artifact of machines taking their shadows for reflections, the cave wall for a looking glass.

I am the residue of survival—living life. I am an astronomically complicated system, a multifarious component of superordinate systems that cannot cognize itself as such for being such. I am a serial gloss, a transmission from nowhere into nowhere, a pattern plucked from subpersonal pandemonium and broadcast to the neural horde. I am a message that I cannot conceive. As. Are. You.

I can show you pictures of dead people to prove it. Lives lived out.

The first-person is a selective precis of this totality, one that poses as the totality. And this is the trick, the way to unravel the kink and see how it is that Heidegger could confuse his semantic vision with seeing. The oblivion behind my thoughts is the oblivion of neglect. Because oblivion has no time, I have no time, and so watch amazed as my shining hands turn to leather. I breathe deep and think, Now. Because oblivion constrains nothing, I follow rules of my own will, pursue goals of my own desire. I stretch forth my hand and remake what lies before me. Because oblivion distinguishes nothing, I am one. I raise my voice and declare, Me. Because oblivion reveals nothing, I stand opposite the world, always only aimed, never connected. I squint and I squint and I ask, How do I know?

I am bottomless because my foundation was never mine to see. I am a perspective, an agent, a person, just another dude-with-a-bad-attitude—I am all these things because of the way I am not any of these things. I am not what I am because of what I am—again, the same as you.

Ghosts can be defined as fragments cognized as wholes. In some cultures ghosts have no backs, no faces, no feet. In most all cultures they have no substance, no consistency, temporal or otherwise. The dimensions of lived life have been stripped from them; they are shades, animate shadows. As Virgil says of Aeneas attempting to embrace his father, Anchises, in the Underworld:

 Then thrice around his neck his arms he threw;

And thrice the flitting shadow slipp’d away,

Like winds, or empty dreams that fly the day.

Ghosts are the incorporeal remainder, the something shorn of substance and consistency. This is the lived life of Heidegger, an empty dream that flew the day. Insofar as Dasein lacks meat, Dasein dwells with the dead, another shade in the underworld, another passing fancy. We are not ghosts. If lived life lies in the meat, then the truth of lived life lies in the meat. The truth of what we are runs orthogonal to the being that we all swear that we must be. Consciousness is an anosognosiac broker, and we are the serial sum of deals struck between parties utterly unknown. Who are the orthogonal parties? What are the deals? These are the questions that aim us at our most essential selves, at what we are in fact. These are the answers being pursued by industry.

And yet we insist on the reality of ghosts, so profound is the glamour spun by neglect. There are no orthogonal parties, we cry, and therefore no orthogonal deals. There is no orthogonal regime. Oblivion hides only oblivion. What bubbles up from oblivion, begins with me and ends with me. Thus the enduring attempt to make sense of things sideways, to rummage through the ruin of heaven and erect parallel regimes, ones too impersonal to reek of superstition. We use ghosts of reference to bind our inklings to the world, ghosts of inference to bind our inklings to one another, ghosts of quality to give ethereal substance to experience. Ghosts and more ghosts, all to save the mad, inescapable intuition that our intuitions must be real somehow. We raise them as architecture, and demur whenever anyone poses the mundane question of building material.

‘Thought’… No word short of ‘God’ has shut down more thinking.

Content is a wraith. Freedom is a vapour. Experience is a dream. The analogy is no coincidence.

The ontology of meaning is the ontology of ghosts.

 

 

 

The Closing and Opening of Covers

by rsbakker

My agent has the book, and I’m having several copies of the manuscript printed up and bound to distribute to some keen-eyed friends today. That’s as much as I can say detail-wise, at the moment. As soon as my publishers and my agent and I have the details hashed out I will post them here post-haste.

I also finally managed to trap True Detective on my PVR. People have sent me so many links (such as this and this) to mainstream articles on the character of Cohle and his creator Nic Pizzolatto’s inspirations that I thought it worth a looksee. I haven’t watched an episode yet, but the notion of Matthew McConaughey (a devout believer) playing a nihilistic prophet appeals to my sense of cosmic perversity. I suppose he would make a good Disciple Manning. Who knows, maybe a thunderbolt will strike someone at HBO–they’ll take a sip of latte and wonder, “Egad! What if we take True Detective and Game of Thrones and mash them together!” Either way, given the way society continues to inexorably creep toward Golgotterath, the popularization of this fact has got to be a good thing… if it’s true that informed gamblers enjoy better odds than sleepwalkers, that is.

The Ironies of Modern Progress and Infantilization (by Ben Cain)

by rsbakker

It’s commonly observed that we tend to rationalize our flaws and failings, to avoid the pain of cognitive dissonance, so that we all come to think of ourselves as fundamentally good persons even though many of us must instead be bad if “good” is to have any contrastive meaning. Societies, too, often exhibit pride which leads their chief representatives to embarrass themselves by declaring that their nation is the greatest that’s ever been in history. Both the ancients and the moderns did this, but it’s hard to deny the facts of modern technological acceleration. Just in the last century, global and instant communications have been established, intelligent machines run much of our infrastructure, robots have taken over many menial jobs, the awesome power of nuclear weapons has been demonstrated, and humans have visited the moon. We tend to think that the social impact of such uniquely powerful machines must be for the better. We speak casually, therefore, of technological advance or progress.

The familiar criticism of technology is that it destroys at least as much as it creates, so that the optimists tell only one side of the story. I’m not going to argue that neo-Luddite case here. Instead, I’m interested in the source of our judgment about progress through technology. Ironically, the more modern technology we see, the less reason we have to think there’s any kind of progress at all. This is because modernists from Descartes and Galileo onward have been compelled to distinguish between real and superficial properties, the former being physical and quantitative and the latter being subjective and qualitative. Examples of the superficial, “secondary” aspects are the contents of consciousness, but also symbolic meaning, purpose, and moral value, which include the normative idea of progress. For the most part, modernists think of subjective qualities as illusory, and because they devised scientific methods of investigation that bypass personal impressions and biases, modernists acquired knowledge of how natural processes actually work, which has enabled us to produce so much technology. So it’s curious to hear so many of us still assuming that our societies are generally superior to premodern ones, thanks in particular to our technological advantage. On the contrary, our technology is arguably the sign of a cognitive development that renders such an assumption vacuous.

.

Animism and Angst

One way of making sense of this apparent lack of social awareness is to point out that there are always elites who understand their society better than do the masses. And we could add that because the modern technological changes have happened so swiftly and have such staggering implications, many people won’t catch up to them or will even pretend there are no such consequences because they’re horrifying. But I think this makes for only part of the explanation. The masses aren’t merely ignoring the materialistic implications of science or the bad omens that technologies represent; instead, they have a commonsense conviction that technology must be good because it improves our lives.

In short, most citizens of modern, technologically-developed societies are pragmatic about technology. If you asked them whether they think their societies are better than earlier ones, they’d say yes, and if you asked them why, they’d say that technology enables us to do what we want more efficiently, which is to say that technology empowers us to achieve our goals. And it turns out that this pragmatic attitude is more or less consistent with modern materialism. There’s no appeal here to some transcendent ideal, but just an egocentric view of technologies as useful tools. So our societies are more advanced than ancient ones because the ancients had to work harder to achieve their goals, whereas modern technology makes our lives easier. Mind you, this assumes that everyone in history has had some goals in common, and indeed our instinctive, animalistic desires are universal in so far as they’re matters of biology. By contrast, if all societies were alien and incommensurable to each other, national pride would be egregiously irrational. And most people probably also assume that our universal desires ought to be satisfied, because we have human rights, so that there’s moral force behind this social progress.

The instincts to acquire shelter, food, sex, power, and prestige, however, seem to me likewise insufficient to explain our incessant artificialization of nature. There’s another universal urge, which we can think of as the existential one: the need to overcome our fear of the ultimate natural truths. There are two ways of doing so, with authenticity or with inauthenticity, which is to say with honour, integrity, and creativity, or with delusions arising from a weak will. (Again, this raises the question of whether even these values make sense in the naturalistic picture, and I’ll come back to this at the end of this article.) Elsewhere, I talk about the ancient worldviews as glorifying our penchant for personification. Prehistoric animists saw all of nature as alive, partly because hardly anything at that time was redesigned and refashioned to suit human interests, and the predominant wilderness was full of plant and animal life. Also, the ancients hadn’t learned to repress their childlike urge to vent the products of their imagination. At that time, populations were sparse and there were no machines standing as solemn proofs of objective facts; moreover, there wasn’t much historical information to humble the Paleolithic peoples with knowledge of opposing views and thus to rein in their speculations. For such reasons, those ancients must have confronted the world much as all children do—at least with respect to their trust in their imagination.

More precisely, they didn’t confront the world at all. When a modern adult rises in the morning, she leaves behind her irrational dreams and prides herself on believing that she controls her waking hours with her autonomous and rational ego. By contrast, there’s no such divergence between the child’s dream life and waking hours, since the child’s dreams spill into her playful interpretations of everything that happens to her. To be sure, modern children have their imagination tempered by the educational system that’s bursting at the seams with lessons from history. But children generally have only a fuzzy distinction between subject and object. That distinction becomes paramount after the technoscientific proofs of the world’s natural impersonality. The world has always been impersonal and amoral, but only modernists have every reason to believe as much and thus only we inheritors of that knowledge face the starkest existential choice between personal authenticity and its opposite. The prehistoric protopeople, who were still experimenting with their newly acquired excess brain power, faced no such decision between intellectual integrity and flagrant self-deception. They didn’t choose to personify the world, because they knew no different; instead, they projected their mental creations onto the wilderness with childlike abandon and so distracted themselves from their potential to understand the nature of the world’s apparent indifference. After all, in spite of the relative abundance of the ancient environments, things didn’t always go the ancients’ way; they suffered and died like everyone else. Moreover, even early humans were much cleverer than most other species.

Thus, the ancients weren’t so innocent or ignorant that they felt no fear, if only because few animals are that helpless. But human fear differs from the reactive animal kind, because ours has an existential dimension due to the breadth of our categories and thus of our understanding. Humans attach labels to so many things in the world not just because we’re curious, but because we’re audacious and we have excess (redundant) brain capacity. Animals feel immediate pain and perhaps even the alienness of the world beyond their home territory, but not the profound horror of death’s inexorability or of the world’s undeadness, which is to say the fear of nature’s way of developing (through complexification, natural selection, and the laws of probability) without any normative reason. Animals don’t see the world for what it is, because their vision and thus their concern are so narrow, whereas we’ve looked far out into the macrocosmic and microcosmic magnitudes of the universe. We’ve found no reassuring Mind at the bottom of anything, not even in our bodies. Our overactive brains compel us to care about aspects of the world that are bad for our mental health, and so we’re liable to feel anxious. And as I say, we cope with that anxiety in different ways.

.

Modernity and Infantilization

But how does this existentialism relate to the source of our myth of modern progress? Well, I see a comparison between prehistoric, mythopoeic reverie and the modern consumer’s infantilization. In each case, we have a lack of enlightenment, a retreat from rational neutrality, and an intermixing of subject and object. I’ve discussed the mythopoeic worldview elsewhere, so here I’ll just say that it amounts to thinking of the world as entirely enchanted and filled with vitality. Again, the modern revolutions (science and capitalistic industry) have led to our disenchantment with nature, because we’ve been forced to see the world as dead inside. That’s why late modernists are at best pragmatic about progress. We must somehow express our naïve pride in ourselves and in our self-destructive modern nations, because we prefer not to suffer as alienated outsiders. But modernity’s ideal of ultrarationality makes absolutist and xenophobic pride seem uncivilized—although American audiences are notorious for stooping to that sort of savagery when they chant “USA! USA!” to quell disturbances in their proceedings. In any case, we postmodern pragmatists think of progress as being relative to our interests.

Arguably, then, we should all be despairing, nihilistic antinatalists, cheering on our species’ extinction to spare us more horror from our accursed powers of reason, because of the atheistic implications of science-led philosophical naturalism. But something funny happened along the way to the postmodern now, which is that our high-tech environment has driven most of us to revert to the mythopoeic trance. We, too, collapse the distinction between subject and object, because we’re not surrounded by the wilderness that science has shown to be the “product” of undead forces; instead, we’ve blocked out that world from our daily life and immersed ourselves in our technosphere. That artificial world is at our beck and call: our technology is designed for us and it answers to us a thousand times a day. Science has not yet shown us to be exactly as impersonal as the lifeless universe and so we can take comfort in our amenities as we assume that while there’s no spirit under any rock, there’s a mind behind every iPhone.

So while we’re aware of the scientist’s abstract concept of the physical object, we don’t typically experience the world as including such absurdly remote quantities. Heidegger spoke of the pragmatic stance as the instrumentalization of every object, in which case we can look at a rock and see a potential tool, a “ready-to-hand” helper, not just an impersonal, undead and “given” object. (This is in contrast to objectification, in which we treat things only as “present-to-hand,” or as submitting to scientific scrutiny. The latter seems to reduce to the former, though, since objectification is still anthropocentric, in that the object is viewed not as a fully independent noumenon, but as a subject of human explanation and that makes it a sort of tool. True objectivity is the torment not of scientists but of those suffering from angst on account of their experience of nature’s horrible indifference and undeadness. True objectivity is just angst, when we despair that we can’t do anything with the world because we’re not at home in it and nature marches on regardless. All other attitudes, roughly speaking, are pragmatic.) In any case, the modern environment surpasses that instrumentalism with infantilization, because we late modernists usually encounter actual artifacts, not just potential ones. The big cities, at least, are almost entirely artificial places. Of course, everything in a city is also physical, on some level of scientific explanation, but that’s irrelevant to how we interpret the world we experience. A city is made up of artifacts and artifacts are objects whose functions extend the intentions of some subjects. Thus, hypermodern places bridge the divide between subjects and objects at the experiential level.

However, that’s only a precondition of infantilization. What is it for an adult to live as a child? To answer this, we need standards of psychological adulthood and infancy. My idea of adulthood derives from the modern myths of liberty and rational self-empowerment. Ours is a modern world, albeit one infected with our postmodern self-doubts, so it’s fitting that we be judged according to the standards set by modern European cultures. The modern individual, then, is liberated by the Enlightenment’s break with the past, made free to pursue her self-interest. Above all, this individual is rational since reason makes for her autonomy. Moreover, she’s skeptical of authority and tradition, since the modern experience is of how ancient Church teachings became dogmas that stifled the pursuit of more objective knowledge; indeed, the Church demonized and persecuted those who posed untraditional questions. The modern adult idolizes our hero, the Scientist, who relies on her critical faculties to uncover the truth, which is to say that the modern adult should be expected to be fearlessly individualistic in her assessments and tastes. Finally, this adult should be cosmopolitan—which is very different from Catholic universalism, for example. The Catholic has a vision of everyone’s obligation to convert to Catholicism, whereas the modernist appreciates everyone’s equal potential for self-determination, and so the modernist is classically liberal in welcoming a wide variety of opinions and lifestyles.

What, then, are the relevant characteristics of an infant? The infant is almost entirely dependent on a higher power. A biological infant has no choice in the matter and her infancy is only a stage in a process of maturation. Similarly, an infantile adult lacks autonomy and may be fed information in the same way a biological infant is fed food. For example, a cult member who defers to the charismatic leader in all matters of judgment is infantile with respect to that act of self-surrender. Many premodern cultures have been likewise infantile and our notion of modern progress compares the transition from that anti-modern version of maturity to the modern ideal of the individual’s rational autonomy, with the baby’s growth into a more independent being.

That’s the theory, anyway. The reality is that modern science is wedded to industry which applies our knowledge of nature, and the resulting artificial world infantilizes the masses. How so? For starters, through the post-WWII capitalistic imperative to grow the economy through hyper-consumption. Artificial demand is stimulated through propaganda, which is to say through mostly irrational, associative advertising. The demand is artificial in that it’s manufactured by corporations that have mastered the inhuman science of persuasion. That demand is met by mass-produced supply, the products of which tend to be planned for obsolescence and thus shoddier than they need to be.

The familiar result is the rebranding of the two biologically normal social classes: the rich and powerful alphas and everyone else (the following masses). Modern wealth is rationalized with myths of self-determination and genius, since no credible appeal can be made now to the divine right of kings. Mind you, the exception has been the creation of distinct middle classes which is due to socialist policies in liberal parts of the world that challenge the social Darwinian cynicism that’s implicit in capitalism. Maintaining a middle class in a capitalistic society, though, is a Sisyphean task: it’s like pushing a boulder up a hill we’re doomed to have to keep reclimbing. The middle class members are fattened like livestock awaiting slaughter by the predators that are groomed by capitalistic institutions such as the elite business schools. And so the middle class inevitably goes into debt and joins the poor, while the wealthy consolidate their power as the ruling oligarchs, as has happened in Canada and the US. (For more on what are effectively the hidden differences between democratic liberals and capitalistic conservatives, see here.)

The masses, then, are targeted by the propaganda arm of modern industry, while the wealthy live in a more rarefied world. For example, the wealthy tend not to watch television, they’re not in the market for cheap, mass-produced merchandise, and they don’t even gullibly link their self-worth to their hoarding of possessions in the crass materialistic fashion. No, the oligarchs who come to power through the capitalistic competition have a much graver flaw: they’re as undead as the rest of nature, which makes them fitting avatars of nature’s inhumanity. Those who are obsessed with becoming very powerful or who are corrupted by their power tend to be sociopathic, which means they lack the ability to care what others feel. For that reason, the power elite are more like machines than people: they tend not to be idealistic and so associative advertising won’t work on them, since that kind of advertising construes the consumption of a material good as a means of fulfilling an archetypal desire. Of course, the relatively poor masses are just the opposite: burdened by their conscience, they trust that our modern world isn’t a horror show. Thus, they’re all too ready to seek advice from advertisers on how to be happy, even though advertisers are actually deeply cynical. The masses are thereby indoctrinated into cultural materialism.

Workers in the service industry literally talk to the customer as if she were a baby, constantly smiling and speaking in a lilting, sing-songy voice; telling the customer whatever she wants to hear, because the customer is always right (just as Baby gets whatever it wants); working like a dog to satisfy the customer as though the latter were the boss and the true adult in the room—but she’s not. The real power elite don’t deal directly with lowly service providers, such as the employees of the average mall. Their underlings do both their buying and their selling for them, so that they needn’t mix with lower folk. This is why George H. W. Bush had never before seen a grocery scanner. No, the service provider is the surrogate parent who is available around the clock to service the consumer, just as a mother must be prepared at any moment to drop everything and attend to Baby. The consumer is the baby—and a whining, selfish one she is at that. That’s the unsettling truth obscured by the illusion of freedom in a consumption-driven society. A consumer can choose which brand name to support out of the hundreds she surveys in the department store, and that bewildering selection reassures her that she’s living the modern dream. But just as the democratic privileges in an effective plutocracy are superficial and structurally irrelevant, so too the consumer’s freedom of choice is belied by her lack of what Isaiah Berlin calls positive freedom. Consumers have negative freedom in that they’re free from coercion so that they can do whatever they want (as long as they don’t hurt anyone). But they lack the positive freedom of being able to fulfill their potential.

In particular, consumers fail to live up to the above ideal of modern adulthood. Choosing which brand of soft drink to buy, when you’ve been indoctrinated by a materialistic culture, is like an infant preferring to receive milk from the left breast rather than the right. Obviously, the deeper choice is to prefer something other than limitless consumption, but that choice is anathema because it’s bad for business. Still, in so far as we have the potential to be mature in the modern sense, to be like those iconoclastic early modern scientists who overcame their Christian culture by way of discovering for themselves how the real world works, we manic consumers have fallen far short. Almost all of us are grossly immature, regardless of how old we are or whether consumer-friendly psychologists pronounce us “normal.”

Now, you might think I’ve established, at best, not a one-way dependence of the masses on the plutocrats, but a sort of sadomasochistic interdependence between them. After all, the producers need consumers to buy their goods, just as a farmer needs to maintain his livestock out of self-interest. Unfortunately, this isn’t so in the globalized world, since the predators of our age have learned that they can express the nihilism at the heart of social Darwinian capitalism, without reservation, just by draining one country of its resources at a time and then by taking their business to a developing country when the previous host has expired, perhaps one day returning as that prior host revivifies in something like the Spenglerian manner. Thus, while it’s true that sellers need buyers, in general, it’s not the case that transnational sellers need any particular country’s buyers, as long as some country somewhere includes willing and able customers. But whereas the transnational sellers don’t need any particular consumers and the consumers can choose between brands (even though companies tend to merge to avoid competing, becoming monopolies or oligopolies), there’s asymmetry in the fact that the mass consumer’s self-worth is attached to consumption and thus to the buyer-seller relationship, whereas that’s not so for the wealthy producers.

Again, that’s because the more power you have, the more dehumanized you become, so that the power elite can’t afford moral principles or a conscience or a vision of a better world. Those who come to be in positions of great power become custodians of the social system (the dominance hierarchy), and all such systems tend to have unequal power distributions so that they can be efficiently managed. (To take a classic example, Soviet communism failed largely because its system had to waste so much energy on the pretense that its power wasn’t centralized.) Centralized power naturally corrupts the leaders or else it attracts those who are already corrupt or amoral. So powerful leaders are disproportionately inhuman, psychologically speaking. (I take it this is the kernel of truth in David Icke’s conspiracy theory that our rulers are secretly evil lizards from another dimension.) Although the oligarch may be inclined to consume for her pleasure and indeed she obviously has many more material possessions than the average consumer, the oligarch attaches no value to consumption, because she’s without human feeling. She feels pleasure and pain like most animals, but she lacks complex, altruistic emotions. Ironically, then, the more wealth and power you have, the fewer human rights you ought to have. (For more on this naturalistic, albeit counterintuitive interpretation of oligarchy, see here.)

In any case, to return to the childish consumer, the point is that consumption-driven capitalism infantilizes the masses by establishing this asymmetric relationship between transnational producer and the average buyer. Just as a biological baby is almost wholly dependent on its guardian, the average consumer depends on the economic system that satisfies her craving for more and more material goods. The wealthy consume because they’re predatory machines, like viruses that are only semi-alive, but the masses consume because we’ve been misled into believing that owning things makes us happy and we dearly want to be happy. We think wealth and power liberate us, because with enough money we can buy whatever we want. But we forget the essence of our modern ideal or else we’ve outgrown that ideal in our postmodern phase. What makes the modern individual heroic is her independence, which is why our prototypes (Copernicus, Galileo, Bruno, Darwin, Nietzsche) were modern especially because of their socially subversive inquiries. We consumers aren’t nearly so modern or individualistic, regardless of our libertarian or pragmatic bluster. As consumers, we’re dependent on the mass producers and on our material possessions themselves. We’re not autonomous iconoclasts, we’re just politically correct followers. We don’t think for ourselves, but put our faith in the contemptible balderdash of corporate propaganda. We haven’t the rationality even to laugh at the foolish fallacies that are the bread and butter of associative ads. It doesn’t matter what we say or write; if we enjoy consuming material goods, our subconscious has been colonized by materialistic memes and so our working values are as shallow as they can be without being as empty as those of the animalistic power elite. As consumers, we’re children playing at adult dress-up; we’re cattle that make-believe we’re free just because we routinely choose from among a preselected array of options.

So both technology and capitalism infantilize the masses. By doing our bidding and so making us feel we’re of central importance in the artificial world, technology suppresses angst and alienation. We therefore live not the modern dream but the ancient mythopoeic one—which is also the child’s experience of playing in a magical place, regardless of where the child actually happens to be. And capitalism turns us into consumers, first and foremost, and constant consumption is the very name of the infant’s game, because the infant needs abundant fuel to support her accelerated growth.

A third source of our existential immaturity is inherent in the myth of the modern hero. For many years, this problem with modernism lay dormant because of the early modernists’ persistent sexism, racism, and imperialism. Only white European males were thought of as proper individuals. Their rationalism, however, implied egalitarianism, since we’re all innately rational to some extent, and once the civil rights of women and minorities were recognized, there was a perceptible decline in the manliness of the modern hero. No longer a bold rebel against dogmas or a skeptical lover of the truth, the late-modern individual now is someone who must tolerate all differences. Ours is a multicultural, global village and so we’re consigned to moral relativism and forced to defer to politically correct conventions out of respect for each other’s right to our opinions. Thus, bold originality, once regarded as heroic, is now considered boorish. Early modernists loved to discuss ideas in salons, but now even to broach a political or religious subject in public is considered impolite, because you may offend someone.

Such rules of political correctness are like parents’ futile restrictions on their child’s thoughts and actions. Western children are protected from coarse language and violence and nudity, because postmodern parents labour under the illusion that their children will be infantile for their entire lifespan, whereas we’re all primarily animals and so are bound to run up against the horrors of natural life sooner or later. Compare these arbitrary strictures with the medieval Church’s laws against heresy. In all three cases (taboos for infantilized adults, protectionist illusions for children, and medieval Christian imperialism), the rules are uninspired as solutions to the existential problem of how to face reality, but the Church went as far as to torture and kill on behalf of its absurd notions. At most, postmodern parents may spank their child for saying a bad word, while an adult who carries the albatross of the archaic ideal of the independent person and so wishes to test the merit of her assumptions by attempting to engage others in a conversation about ideas will only find herself alone and ignored at the party, inspecting the plant in the corner of the room. Still, our postmodern mode of infantilization is fully degrading despite the lack of severe consequences when we step out of bounds.

Underlying these rules is the ethic of care that’s implicit in modern individualism, an ethic that’s at odds with the modern hunt for the truth. Modernism was originally framed in the masculine terms of a conflict between scientific truth and Christian dogmatic opinion, but now that everyone is recognized as an autonomous, dignified modern person, feminine values have surged. And just as someone with a hammer sees everything else as a nail, a woman is inclined to see everyone else as a baby. This is why, for example, young women who haven’t outgrown their motherly instincts overuse the word “cute”: handbags are cute, as are small pets and even handsome men. This is also why girls worship not tough, rugged male celebrities, but androgynous ones like Justin Bieber. As conservative social critics appreciate, manliness is out of fashion. Even hair on a man’s chest is perceived as revolting, let alone the hair on his back. Men’s bodies must be shorn of any such symbol of their unruly desires, because men are obliged to fulfill women’s fantasy that men are babies who need to be nurtured. Men must be innocent, not savage; they must be eternally youthful and thus hairless, not battered and scarred by the heartless world; they must be doe-eyed and cheerful, not grim, aloof and embittered. Men must be babies, not the manly heroes celebrated by the early modernists, who brought Europe out of the relative Dark Age. Men have been feminized, thanks ironically to the early modern ideal of personal autonomy through reason. As for women themselves, in so far as they see themselves primarily as care-givers and are naturally inclined to infantilize men, they too become child-like, because “care” is reflexive. And so modern women baby themselves, treating themselves to the spa, to the latest fashions and accessories, to the inanities of daytime television, to the sentimental fantasies of soap operas and romance novels, and to the platitudes of flattering, feel-good New Age cults.

.

The Ignorant Baby and the Enlightened Aesthete

Those are three sources of modern infantilization: technology, capitalism, and postmodern culture. I submit, then, that the reason we can be so ignorant as to speak of technoscientific progress, even though scientific theories imply naturalism which in turn implies the unreality of normative values and the undeadness of all processes, is that we lack self-knowledge because we’re infantile. We’re distracted by the games of possessing and playing with our technotoys, because our artificial environment trains us to be babies. And babies aren’t interested in ideas, let alone in terribly dispiriting philosophies such as naturalism with its atheistic and dark existential implications. That’s why we can parrot the meme of modern progress, because we’ve already swallowed a thousand corporate myths by the time we’ve watched a year’s worth of materialistic ads on TV. What’s one more piece of foolishness added to that pile? If we were to look at the myth of progress, we’d see it derives from ancient theistic apocalypticism, and specifically from the Zoroastrian idea of a linear and teleological arrow of historical time. The idea was that time would come to a cataclysmic end when God would perfect the fallen world and defeat the forces of evil in a climactic battle. All prior events are made meaningful in relation to that ultimate endpoint. In that teleological metaphysics, the idea of real progress makes sense. But there’s no such teleology in naturalism, so there can be no modern progress. At best, some scientific theory or piece of technology can meet with our approval and allow us to achieve our personal goals more readily, but that subjective progress loses its normative force. Mind you, that’s the only kind of progress that pragmatists are entitled to affirm, but there’s no real goodness in modernity if that’s all we mean by the word.

The titular ironies, then, are twofold. First, the so-called technoscientific signs of modern progress indicate rather the superficiality or illusoriness of the very concept of social progress that most people have in mind, despite their pragmatic attitude. Second, the late great modernists who are supposed to stand tall as the current leaders of humanity are instead largely infantilized by modernity and so are similar to the mythopoeic, childlike ancients.

Here, finally, I’ve pointed out that there’s no real progress in nature, since nature is undead rather than enchanted by personal qualities such as meaning or purpose, and yet I’ve affirmed the existential value of personal authenticity. I promised to return to this apparent contradiction. My solution, as I’ve explained at length elsewhere, is to reduce normative evaluation to the aesthetic kind. For example, I say intellectual integrity is better than self-delusion. But is that judgment as superficial and subjective as a moral principle in light of philosophical naturalism? Not if the goodness of personal integrity, and more specifically of the coherence of the worldview that drives your behaviour, is thought of as a kind of beauty. When we take up the aesthetic perspective, all processes seem not just undead but artistically creative. Life itself becomes art and our aesthetic duty is to avoid the ugliness of cliché and to strive for ingenious and subversive originality in our actions.

Is the aesthetic attitude as arbitrary as a theistic interpretation of the world, given science-centered naturalism? No, because aesthetics falls out of the objectification made possible by scientific skepticism. We see something as an art object when we see it as complete in itself and thus as useless and indifferent to our concerns, the opposite being a utilitarian or pragmatic stance. And that’s precisely the essence of cosmicism, which is the darkest part of modern wisdom. Natural things, as such, are complete in themselves, meaning that they exist and develop for no human reason. That’s the horror of nature: the world doesn’t care about us, our adaptability notwithstanding, and so we’re bound to be overwhelmed by natural forces and to perish with just as little warning as we were given when nature evolved us in the first place. But the point here is that the flipside of this horror is that nature is full of art! The undeadness of things is also their sublime beauty or raw ugliness. When we recognize the alienness and monstrosity of natural processes, because we’ve given up naïve anthropocentrism, we’ve already adopted the aesthetic attitude. That’s because we’ve declined to project our interests onto what are wholly impersonal things, and so we objectify and aestheticize them with one and the same act of humility. The angst and the horror we feel when we understand what nature really is, and thus how impersonal we ourselves are, are also aesthetic reactions. Angst is the dawning of awe as we begin to fathom nature’s monstrous scope; horror is the awakening of a pantheistic fear of the madness of the artist responsible for so much wasted art. The aesthetic values, which are also existential ones, aren’t merely subjective, because nature’s undead creativity is all too real.