Technocracy, Buddhism, and Technoscientific Enlightenment (by Benjamin Cain)

by rsbakker

In “Homelessness and the Transhuman” I used some analogies to imagine what life without the naive and illusory self-image would be like. The problem of imagining that sort of enlightenment divides into two parts. One is the relatively uninteresting issue of which labels we want to use to describe something. Would an impersonal, amoral, meaningless, and purposeless posthuman, with no consciousness or values as we usually conceive of them, “think” at all? Would she be “alive”? Would she have a “mind”? Even if there are objective answers to such questions, the answers don’t really matter, since however far our use of labels can be stretched, we can always create a new label. So if the posthuman doesn’t think, maybe she “shminks,” where shminking is only in some ways similar to thinking. This points to the second, conceptual issue, though. The interesting question is whether we can conceive of the contents of posthuman life. For example, just what would be the similarities and differences between thinking and shminking? What could we mean by “thought” if we put aside the naive, folk psychological notions of intentionality, truth, and value? We can use ideas of information and function to start to answer that sort of question, but the problem is that this taxes our imagination, because we’re typically committed to the naive, exoteric way of understanding ourselves, as R. Scott Bakker explains.

One way to get clearer about what the transformation from confused human to enlightened posthuman would entail is to consider an example that’s relatively easy to understand. So take the Netflix practice described by Andrew Leonard in “How Netflix is Turning Viewers into Puppets.” Apparently, more Americans now watch movies legally streamed over the internet than on DVD or Blu-ray, and this allows the streaming providers to accumulate all sorts of data that indicate our movie preferences. When we pause, fast-forward, or stop watching streamed content, we supply companies like Netflix with enormous quantities of information, which their number crunchers explain with a theory about our viewing choices. For example, according to Leonard, Netflix recently spent $100 million to remake the BBC series House of Cards, based on that detailed knowledge of viewers’ habits. Moreover, Netflix learned that the subscribers who liked that earlier TV show also tend to like Kevin Spacey, and so the company hired him to star in the remake.
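For concreteness, here is a minimal sketch, in Python, of the kind of aggregation Leonard is gesturing at: raw viewing events rolled up into a crude per-title engagement score. The event types, the weights, and the `engagement_scores` helper are my own hypothetical placeholders, not anything Netflix has disclosed about its actual pipeline.

```python
from collections import defaultdict

# Hypothetical weights: finishing a title counts toward liking it, abandoning counts against.
EVENT_WEIGHTS = {"finished": 1.0, "rewatched_scene": 0.5,
                 "paused": -0.1, "abandoned": -1.0}

def engagement_scores(events):
    """Aggregate (subscriber_id, title, event_type) tuples into rough per-title scores."""
    scores = defaultdict(float)
    for subscriber, title, event_type in events:
        scores[(subscriber, title)] += EVENT_WEIGHTS.get(event_type, 0.0)
    return scores

# Toy usage: two subscribers, one show.
log = [("u1", "House of Cards", "finished"),
       ("u1", "House of Cards", "rewatched_scene"),
       ("u2", "House of Cards", "abandoned")]
print(engagement_scores(log))  # u1 scores 1.5 for the show, u2 scores -1.0
```

However the real pipeline works, the point is the same: the signal comes entirely from what subscribers do in front of the screen, never from what they might say about themselves.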

So the point isn’t just that entertainment providers can now amass huge quantities of information about us, but that they can use that information to tailor their products to maximize their profits. In other words, companies can now come much closer to giving us exactly what we objectively want, as indicated by scientific explanations of our behaviour. As Leonard says, “The interesting and potentially troubling question is how a reliance on Big Data [all the data that’s now available about our viewing habits] might funnel craftsmanship in particular directions. What happens when directors approach the editing room armed with the knowledge that a certain subset of subscribers are opposed to jump cuts or get off on gruesome torture scenes or just want to see blow jobs. Is that all we’ll be offered? We’ve seen what happens when news publications specialize in just delivering online content that maximizes page views. It isn’t always the most edifying spectacle.”

So here we have an example not just of how technocrats depersonalize consumers, but of the emerging social effects of that technocratic perspective. There are numerous other fields in which the fig leaf of our crude self-conception is stripped away and people are regarded as machines. In the military, there are units, targets, assets, and so forth, not free, conscious, precious souls. Likewise, in politics and public relations, there are demographics, constituents, and special interests, and such categories are typically defined in highly cynical ways. In business, again, there are consumers and functionaries in bureaucracies, not to mention whatever exotic categories come to the fore in Wall Street’s mathematics of financing. Still, it’s one thing to depersonalize people in your thoughts, and it’s another to apply that sophisticated conception to some professional task of engineering. In other words, we need to distinguish between fantasy- and reality-driven depersonalization. Military, political, and business professionals, for example, may resort to fashionable vocabularies to flatter themselves as insiders or to rationalize the vices they must master to succeed in their jobs. Then again, perhaps those vocabularies aren’t entirely subjective; maybe soldiers can’t psych themselves up to kill their opponents unless they’re trained to depersonalize and even to demonize them. And perhaps public relations, marketing, and advertising are even now becoming more scientific.

.

The Double Standard of Technocracy

Be that as it may, I’d like to begin with just one pretty straightforward example: creating art to appeal to the consumer, based on inferences about patterns in mountains of data acquired from observations of the consumer’s behaviour. As Leonard says, we don’t have to merely speculate on what will likely happen to art once it’s left in the hands of bean counters. For decades, producers of content have researched what people want so that they could fulfill that demand. It turns out that the majority of people in most societies have bad taste owing to their pedestrian level of intelligence. Thus, when an artist is interested in selling to the largest possible audience to make a short-term profit, that is, when the artist thinks purely in such utilitarian terms, she must give those people what they want, which is drivel. And if all artists come to think that way, the standard of art (of movies, music, paintings, novels, sports, and so on) is lowered. Leonard points out that this happens in online news as well. The stories that make it to the front page are stories about sex or violence, because that’s what most people currently want to see.

So the entertainment companies that use this technoscience (the technology that accumulates data about viewing habits, plus the scientific way of drawing inferences to explain patterns in those data) operate on some assumptions I’d like to highlight. First, these content producers are interested in short-term profits. If they were interested in long-term ones and were faced with depressing evidence of the majority’s infantile preferences, the producers could conceivably raise the bar by selling not to the current state of consumers but to what consumers could become if exposed to a more constructive, challenging environment. In other words, the producers could educate or otherwise improve the majority, suffering the consumers’ hostility in the short term but helping to shape viewers’ preferences for the better and betting on that long-term approval. Presumably, this altruistic strategy would tend to fail because free-riders would come along and lower the bar again, tempting consumers with cheap thrills. In any case, this engineering of entertainment is capitalistic, meaning that the producers are motivated to earn short-term profit.

Second, the producers are interested in exploiting consumers’ weaknesses. That is, the producers themselves behave as parasites or predators. Again, we can conclude that this is so because of what the producers choose to observe. Granted, the technology offers only so many windows into the consumer’s preferences; at best, the data show only what consumers currently like to watch, not the potential of what they could learn to prefer if given the chance. Thus, these producers don’t think in a paternalistic way about their relationship with consumers. A good parent offers her child broccoli, pickles, and spinach rather than just cookies and macaroni and cheese, to introduce the child to a variety of foods. A good parent wants the child to grow into an adult with a mature taste. By contrast, an exploitative parent would feed her daughter, say, only what she prefers at the moment, in her current low point of development, ensuring that the youngster will suffer from obesity-related health problems when she grows up. Likewise, content producers are uninterested in polling to discern people’s potential for greatness, by asking about their wishes, dreams, or ideals. No, the technology in question scrutinizes what people do when they vegetate in front of the TV after a long, hard day on the job. The content producers thus learn what we like when we’re effectively infantilized by television, when the TV literally affects our brain waves, making us more relaxed and open to suggestion, and the producers mean to exploit that limited sample of information, as large as it may be. Thus, the producers mean to cash in by exploiting us when we’re at our weakest, to profit by creating an environment that tempts us to remain in a childlike state and that caters to our basest impulses, to our penchant for fallacies and biases, and so on. So not only are the content producers thinking as capitalists, they’re predators/parasites to boot.

Finally, this engineering of content depends on the technoscience in question. Acquiring huge stores of data is useless without a way of interpreting the data. The companies must look for patterns and then infer the consumer’s mindset in a way that’s testable. That is, the inferences must follow logically from a hypothesis that’s eventually explained by a scientific theory. That theory then supports technological applications. If the theory is wrong, the technology won’t work; for example, the streamed movies won’t sell.
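To make the “testable” part concrete, here is a hedged sketch, again in Python, of checking one simple preference hypothesis against what subscribers actually went on to watch. The co-occurrence rule, the toy data, and the `test_hypothesis` helper are invented for illustration; real recommender models are of course far more elaborate, but the logic is the same: if the predictions fail, the theory behind them gets discarded.

```python
def predict_will_watch(subscriber_history, hypothesis):
    """Hypothesis: anyone who watched `if_watched` will also watch `then_watches`."""
    return hypothesis["if_watched"] in subscriber_history

def test_hypothesis(hypothesis, histories, actual_choices):
    """Return the fraction of correct predictions; a low score falsifies the guess."""
    hits = total = 0
    for subscriber, history in histories.items():
        if predict_will_watch(history, hypothesis):
            total += 1
            hits += hypothesis["then_watches"] in actual_choices.get(subscriber, set())
    return hits / total if total else 0.0

# Toy data standing in for "Big Data".
histories = {"u1": {"BBC House of Cards"}, "u2": {"BBC House of Cards"}, "u3": {"Other Show"}}
actual = {"u1": {"House of Cards (2013)"}, "u2": {"Other Show"}}
guess = {"if_watched": "BBC House of Cards", "then_watches": "House of Cards (2013)"}
print(test_hypothesis(guess, histories, actual))  # 0.5: the hypothesis only partly holds up
```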

The upshot is that this scientific engineering of entertainment is based on only a partial depersonalization: the producers depersonalize the consumers while leaving their own personal self-image intact. That is, the content producers ignore how the consumers naively think of themselves, reducing them to robots that can be configured or contained by technology, but the producers don’t similarly give up their image of themselves as people in the naive sense. Implicitly, the consumers lose their moral, if not their legal, rights when they’re reduced to robots, to passive streamers of content that’s been carefully designed to appeal to the weakest part of them, whereas the producers will be the first to trumpet their moral and not just their legal right to private property. The consumers consent to purchase the entertainment, but the producers don’t respect them as dignified beings; otherwise, again, the producers would think more about lifting these consumers up instead of just exploiting their weaknesses for immediate returns. Still, the producers surely think of themselves as normatively superior. Even if the producers style themselves as Nietzschean insiders who reject altruistic morality and prefer a supposedly more naturalistic, Ayn Randian value system, they still likely glorify themselves at the expense of their victims. And even if some of those who profit from the technocracy are literally sociopathic, that means only that they don’t feel the value of those they exploit; nevertheless, a sociopath acts as an egotist, which means she presupposes a double standard, one for herself and one for everyone else.

.

From Capitalistic Predator to Buddhist Monk

What interests me about this inchoate technocracy, this business of using technoscience to design and manage society, is that it functions as a bridge to imagining a possible posthuman state. To cross over in our minds to the truly alien, we need stepping stones. Netflix is analogous to enlightened posthumanity in that it stands partway toward that destination. So when we consider Netflix we stand closer to the precipice, and we can ask ourselves what giving up the rest of the personal self-image would be like. So suppose a content provider depersonalizes everyone, viewing herself, too, as just a manipulable robot. On this supposition, the provider becomes something like a Buddhist who can observe her impulses and preferences without being attached to them. She can see the old self-image still operating in her mind, sustained as it is by certain neural circuits, but she’s trained not to be mesmerized by that image. She’s learned to see the reality behind the illusion, the code that renders the matrix. So she may still be inclined in certain directions, but she won’t reflexively go anywhere. She has the capacity to exploit the weak and to enrich herself, and she may even be inclined to do so, but because she doesn’t identify with the crudely-depicted self, she may not actually proceed down that expected path. In fact, the mystery remains as to why any enlightened person does whatever she does.

This calls for a comparison between the posthuman’s science-centered enlightenment and the Buddhist kind. The sort of posthuman self I’m trying to imagine transcends the traditional categories of the self, on the assumption that these categories rest on ignorance owing to the brain’s native limitations in learning about itself. The folk categories are replaced with scientific ones and we’re left wondering what we’d become were we to see ourselves strictly in those scientific terms. What would we do with ourselves and with each other? The emerging technocratic entertainment industry gives us some indication, but I’ve tried to show that that example provides us with only one stepping stone. We need another, so let’s try that of the Buddhist.

Now, Buddhist enlightenment is supposed to consist of a peaceful state of mind that doesn’t turn into any sort of suffering, because the Buddhist has learned to stop desiring any outcome. You only suffer when you don’t get what you want, and if you stop wanting anything, or more precisely if you stop identifying with your desires, you can’t be made to suffer. The lack of any craving for an outcome entails a discarding of the egoistic pretense of your personal independence, since it’s only when you identify narrowly with some set of goals that you create an illusion that’s bound to make you suffer, because the illusion is out of alignment with reality. In reality, everything is interconnected and so you’re not merely your body or your mind. When you assume you are, the world punishes you in a thousand degrees and dimensions, and so you suffer because your deluded expectations are dashed.

Here are a couple of analogies to clarify how this Buddhist frame of mind works, according to my understanding of it. Once you’ve learned to drive a car, driving becomes second nature to you, meaning that you come to identify with the car as your extended body. Prior to that identification, when you’re just starting to drive, the car feels awkward and new because you experience it as a foreign body. When you’ve familiarized yourself with the car’s functions, with the rules of the road, and with the experience of driving, sitting in the driver’s seat feels like slipping on an old pair of shoes. Every once in a while, though, you may snap out of that familiarity. When you’re in the middle of an intersection, in a left-turn lane, you may find yourself looking at cars anew and being amazed and even a little scared about your current situation on the road: you’re in a powerful vehicle, surrounded by many more such vehicles, following all of these signs to avoid being slammed by those tons of steel. In a similar way, as a native speaker of a language you become very familiar with the shapes of its written symbols, but every now and again, when you’re distracted perhaps, you can slip out of that familiarity and stare in wonder at a word you’ve used a thousand times, like a child who’s never seen it before.

What I’m trying to get at here is the difference between having a mental state and identifying with it, which difference I take to be central to Buddhism. Being in a car is one thing, identifying with it is literally something else, meaning that there’s a real change that happens when driving becomes second nature to you. Likewise, having the desire for fame or fortune is one thing, identifying with either desire is something else. A Buddhist watches her thoughts come and go in her mind, detaching from them so that the world can’t upset her. But this raises a puzzle for me. Once enlightened, why should a Buddhist prefer a peaceful state of mind to one of suffering? The Buddhist may still have the desire to avoid pain and to seek peace, but she’ll no longer identify with either of those or with any other desire. So assuming she acts to lessen suffering in the world, how are those actions caused? If an enlightened Buddhist is just a passive observer, how can she be made to do anything at all? How can she lean in one direction or another, or favour one course of action rather than another? Why peace rather than suffering?

Now, there’s a difference between a bodhisattva and a Buddha: the former harbours a selfless preference to help others achieve enlightenment, whereas the latter gives up on the rest of the world and lives in a state of nirvana, which is passive, metaphysical selflessness. So a bodhisattva still has an interest in social engagement and merely learns not to identify so strongly with that interest, to avoid suffering if the interest doesn’t work out and the world slams the door in her face, whereas a Buddha may extinguish all of her mental states, effectively lobotomizing herself. Either way, though, it’s hard to see how the Buddhist could act intelligently, which is to say exhibit some pattern in her activities that reflects a pattern in her mind, a pattern that acts at least as the last step in the chain of interconnected causes of her actions. A bodhisattva has desires but doesn’t identify with them and so can’t favour any of them. How, then, could this Buddhist put any morality into practice? Indeed, how could she prefer Buddhism to some other religion or worldview? And a Buddha may no longer have any distinguishable mental states in the first place, so she would have no interests to tempt her with the potential for mental attachments. Thus, we might expect full enlightenment in the Buddhist sense to be a form of suicide, in which the Buddhist neglects all aspects of her body because she’s literally lost her mind and thus her ability to care or to choose to control herself or even to manage her vital functions. (In Hinduism, an elderly Brahmin may choose this form of suicide for the sake of moksha, which is supposed to be liberation from nature, and Buddhism may explain how this suicide becomes possible for the enlightened person.)

The best explanation I have of how a Buddhist could act at all is the Taoist one that the world acts through her. The paradox of how the Buddhist’s mind could control her body even when the Buddhist dispenses with that mind is resolved if we accept the monist ontology in which everything is interconnected and so unified. Even if an enlightened Buddha loses personal self-control, this doesn’t mean that nothing happens to her, since the Buddhist’s body is part of the cosmic whole, and so the world flows in through her senses and out through her actions. The Buddhist doesn’t egoistically decide what to do with herself, but the world causes her to act in one way or another. Her behaviour, then, shouldn’t reflect any private mental pattern, such as a personal character or ego, since she’s learned to see through that illusion, but her actions will reflect the whole world’s character, as it were.

.

From Buddhist Monk to Avatar of Nature

Returning to the posthuman, the question raised by the Buddhist stepping stone is whether we can learn what it would be like to experience the death of the manifest image, the absence of the naive, dualistic, and otherwise self-glorifying conception of the self, by imagining what it would be like to be the sun, the moon, the ocean, or just a robot. That’s how a scientifically enlightened posthuman would conceive of “herself”: she’d understand that she has no independent self but is part of some natural process, and if she’d identify with anything it would be with that larger process. Which process? Any selection would betray a preference and thus at least a partial resurrection of the ghostly, illusory self. The Buddhist gets around this with metaphysical monism: if everything is interconnected, the universe is one and there’s no need to choose what you are, since you’re metaphysically everything at once. So if all natural processes feed into each other, nature is a cosmic whole; the posthuman, seeing very far and wide and sampling enough of nature to understand the universe’s character, would presumably understand her actions to flow from that broader character.

And just here we reach a difference between Eastern (if not specifically Buddhist) and technoscientific enlightenment. Strictly speaking, Buddhism is atheistic, I think, but some forms of Buddhism are pantheistic, meaning that some Buddhists personify the interconnected whole. If we suppose that technoscience will remain staunchly atheistic, we must assume only that there are patterns in nature and not any character or ghostly Force or anything like that. Thus, if a posthuman can’t identify with the traditional myth of the self, with the conscious, rational, self-controlling soul, and yet the posthuman is to remain some distinct entity, I’m led to imagine this posthuman entity as an avatar of lifeless nature. What does nature do with its forces? It evolves molecules, galaxies, solar systems, and living species. The posthuman would be a new force of nature that would serve those processes of complexification and evolution, creating new orders of being. The posthuman would have no illusion of personal identity, because she’d understand too well the natural forces at work in her body to identify so narrowly and desperately with any mere subset of their handiwork. Certainly, the posthuman wouldn’t cling to any byproduct of the brain, but would more likely identify with the underlying, microphysical patterns and processes.

So would this kind of posthumanity be a force for good or evil? Surely, the posthuman would be beyond good and evil, like any natural force. Moral rules are conventions to manage deluded robots like us who are hypnotized by our brain’s daydream of our identity. Values derive from preferences for some things over others, which in turn depend on some understanding of The Good. In the technoscientific picture of nature, though, goodness and badness are illusions, but this doesn’t imply anything like the Satanist’s exhortation to do whatever you want. The posthuman would have as many wants as the rain has when it falls from the sky. She’d have no ego to flatter, no will to express. Nevertheless, the posthuman would be caused to act, to further what the universe has already been doing for billions of years. I have only a worm’s understanding of that cosmic pattern. I speak of evolution and complexification, but those are just placeholders, like an empty five-line staff in modern musical notation. If we’re imagining a super-intelligent species that succeeds us, I take it we’re thinking of a species that can read the music of the spheres and that’s compelled to sing along.