Technocracy, Buddhism, and Technoscientific Enlightenment (by Benjamin Cain)
by rsbakker
In “Homelessness and the Transhuman” I used some analogies to imagine what life without the naive and illusory self-image would be like. The problem of imagining that enlightenment should be divided into two parts. One is the relatively uninteresting issue of which labels we want to use to describe something. Would an impersonal, amoral, meaningless, and purposeless posthuman, with no consciousness or values as we usually conceive of them, “think” at all? Would she be “alive”? Would she have a “mind”? Even if there are objective answers to such questions, the answers don’t really matter, since however far our use of labels can be stretched, we can always create a new label. So if the posthuman doesn’t think, maybe she “shminks,” where shminking is only in some ways similar to thinking. This gets at the second, conceptual issue here, though. The interesting question is whether we can conceive of the contents of posthuman life. For example, just what would be the similarities and differences between thinking and shminking? What could we mean by “thought” if we put aside the naive, folk psychological notions of intentionality, truth, and value? We can use ideas of information and function to start to answer that sort of question, but the problem is that this taxes our imagination, because we’re typically committed to the naive, exoteric way of understanding ourselves, as R. Scott Bakker explains.
One way to get clearer about what the transformation from confused human to enlightened posthuman would entail is to consider an example that’s relatively easy to understand. So take the Netflix practice described by Andrew Leonard in “How Netflix is Turning Viewers into Puppets.” Apparently, more Americans now watch movies legally streamed over the internet than they do on DVD or Blu-Ray, and this allows the stream providers to accumulate all sorts of data that indicate our movie preferences. When we pause, fast forward or stop watching streamed content, we supply companies like Netflix with enormous quantities of information which their number crunchers explain with a theory about our viewing choices. For example, according to Leonard, Netflix recently spent $100 million to remake the BBC series House of Cards, based on that detailed knowledge of viewers’ habits. Moreover, Netflix learned that the same subscribers who liked that earlier TV show also tend to like Kevin Spacey, and so the company hired Kevin Spacey to star in the remake.
So the point isn’t just that entertainment providers can now amass huge quantities of information about us, but that they can use that information to tailor their products to maximize their profits. In other words, companies can now come much closer to giving us exactly what we objectively want, as indicated by scientific explanations of our behaviour. As Leonard says, “The interesting and potentially troubling question is how a reliance on Big Data [all the data that’s now available about our viewing habits] might funnel craftsmanship in particular directions. What happens when directors approach the editing room armed with the knowledge that a certain subset of subscribers are opposed to jump cuts or get off on gruesome torture scenes or just want to see blow jobs. Is that all we’ll be offered? We’ve seen what happens when news publications specialize in just delivering online content that maximizes page views. It isn’t always the most edifying spectacle.”
So here we have an example not just of how technocrats depersonalize consumers, but of the emerging social effects of that technocratic perspective. There are numerous other fields in which the fig leaf of our crude self-conception is stripped away and people are regarded as machines. In the military, there are units, targets, assets, and so forth, not free, conscious, precious souls. Likewise, in politics and public relations, there are demographics, constituents, and special interests, and such categories are typically defined in highly cynical ways. Again, in business there are consumers and functionaries in bureaucracies, not to mention whatever exotic categories come to the fore in Wall Street’s mathematics of financing. Again, though, it’s one thing to depersonalize people in your thoughts, but it’s another to apply that sophisticated conception to some professional task of engineering. In other words, we need to distinguish between fantasy- and reality-driven depersonalization. Military, political, and business professionals, for example, may resort to fashionable vocabularies to flatter themselves as insiders or to rationalize the vices they must master to succeed in their jobs. Then again, perhaps those vocabularies aren’t entirely subjective; maybe soldiers can’t psych themselves up to kill their opponents unless they’re trained to depersonalize and even to demonize them. And perhaps public relations, marketing, and advertising are even now becoming more scientific.
The Double Standard of Technocracy
Be that as it may, I’d like to begin with just the one, pretty straightforward example of creating art to appeal to the consumer, based on inferences about patterns in mountains of data acquired from observations of the consumer’s behaviour. As Leonard says, we don’t have to merely speculate on what will likely happen to art once it’s left in the hands of bean counters. For decades, producers of content have researched what people want so that they could fulfill that demand. It turns out that the majority of people in most societies have bad taste owing to their pedestrian level of intelligence. Thus, when an artist is interested in selling to the largest possible audience to make a short-term profit, that is, when the artist thinks purely in such utilitarian terms, she must give those people what they want, which is drivel. And if all artists come to think that way, the standard of art (of movies, music, paintings, novels, sports, and so on) is lowered. Leonard points out that this happens in online news as well. The stories that make it to the front page are stories about sex or violence, because that’s what most people currently want to see.
So entertainment companies that will use this technoscience (the technology that accumulates data about viewing habits plus the scientific way of drawing inferences to explain patterns in those data) have some assumptions I’d like to highlight. First, these content producers are interested in short-term profits. If they were interested in long-term ones and were faced with depressing evidence of the majority’s infantile preferences, the producers could conceivably raise the bar by selling not to the current state of consumers but to what consumers could become if exposed to a more constructive, challenging environment. In other words, the producers could educate or otherwise improve the majority, suffering the consumers’ hostility in the short-term but helping to shape viewers’ preferences for the better and betting on that long-term approval. Presumably, this altruistic strategy would tend to fail because free-riders would come along and lower the bar again, tempting consumers with cheap thrills. In any case, this engineering of entertainment is capitalistic, meaning that the producers are motivated to earn short-term profit.
Second, the producers are interested in exploiting consumers’ weaknesses. That is, the producers themselves behave as parasites or predators. Again, we can conclude that this is so because of what the producers choose to observe. Granted, the technology offers only so many windows into the consumer’s preferences; at best, the data show only what consumers currently like to watch, not the potential of what they could learn to prefer if given the chance. Thus, these producers don’t think in a paternalistic way about their relationship with consumers. A good parent offers her child broccoli, pickles, and spinach rather than just cookies and macaroni and cheese, to introduce the child to a variety of foods. A good parent wants the child to grow into an adult with a mature taste. By contrast, an exploitative parent would feed her daughter, say, only what she prefers at the moment, in her current low point of development, ensuring that the youngster will suffer from obesity-related health problems when she grows up. Likewise, content producers are uninterested in polling to discern people’s potential for greatness, by asking about their wishes, dreams, or ideals. No, the technology in question scrutinizes what people do when they vegetate in front of the TV after a long, hard day on the job. The content producers thus learn what we like when we’re effectively infantilized by television, when the TV literally affects our brain waves, making us more relaxed and open to suggestion, and the producers mean to exploit that limited sample of information, as large as it may be. Thus, the producers mean to cash in by exploiting us when we’re at our weakest, to profit by creating an environment that tempts us to remain in a childlike state and that caters to our basest impulses, to our penchant for fallacies and biases, and so on. So not only are the content producers thinking as capitalists, they’re predators/parasites to boot.
Finally, this engineering of content depends on the technoscience in question. Acquiring huge stores of data is useless without a way of interpreting the data. The companies must look for patterns and then infer the consumer’s mindset in a way that’s testable. That is, the inferences must follow logically from a hypothesis that’s eventually explained by a scientific theory. That theory then supports technological applications. If the theory is wrong, the technology won’t work; for example, the streamed movies won’t sell.
The upshot is that this scientific engineering of entertainment is based on only a partial depersonalization: the producers depersonalize the consumers while leaving their own personal self-image intact. That is, the content producers ignore how the consumers naively think of themselves, reducing them to robots that can be configured or contained by technology, but the producers don’t similarly give up their image of themselves as people in the naive sense. Implicitly, the consumers lose their moral, if not their legal, rights when they’re reduced to robots, to passive streamers of content that’s been carefully designed to appeal to the weakest part of them, whereas the producers will be the first to trumpet their moral and not just their legal right to private property. The consumers consent to purchase the entertainment, but the producers don’t respect them as dignified beings; otherwise, again, the producers would think more about lifting these consumers up instead of just exploiting their weaknesses for immediate returns. Still, the producers think of themselves, surely, as normatively superior. Even if the producers style themselves as Nietzschean insiders who reject altruistic morality and prefer a supposedly more naturalistic, Ayn Randian value system, they still likely glorify themselves at the expense of their victims. And even if some of those who profit from the technocracy are literally sociopathic, that means only that they don’t feel the value of those they exploit; nevertheless, a sociopath acts as an egotist, which means she presupposes a double standard, one for herself and one for everyone else.
From Capitalistic Predator to Buddhist Monk
What interests me about this inchoate technocracy, this business of using technoscience to design and manage society, is that it functions as a bridge to imagining a possible posthuman state. To cross over in our minds to the truly alien, we need stepping stones. Netflix is analogous to enlightened posthumanity in that Netflix is part of the way toward that destination. So when we consider Netflix we stand closer to the precipice and we can ask ourselves what giving up the rest of the personal self-image would be like. So suppose a content provider depersonalizes everyone, viewing herself as well as just a manipulable robot. On this supposition, the provider becomes something like a Buddhist who can observe her impulses and preferences without being attached to them. She can see the old self-image still operating in her mind, sustained as it is by certain neural circuits, but she’s trained not to be mesmerized by that image. She’s learned to see the reality behind the illusion, the code that renders the matrix. So she may still be inclined in certain directions, but she won’t reflexively go anywhere. She has the capacity to exploit the weak and to enrich herself, and she may even be inclined to do so, but because she doesn’t identify with the crudely-depicted self, she may not actually proceed down that expected path. In fact, the mystery remains as to why any enlightened person does whatever she does.
This calls for a comparison between the posthuman’s science-centered enlightenment and the Buddhist kind. The sort of posthuman self I’m trying to imagine transcends the traditional categories of the self, on the assumption that these categories rest on ignorance owing to the brain’s native limitations in learning about itself. The folk categories are replaced with scientific ones and we’re left wondering what we’d become were we to see ourselves strictly in those scientific terms. What would we do with ourselves and with each other? The emerging technocratic entertainment industry gives us some indication, but I’ve tried to show that that example provides us with only one stepping stone. We need another, so let’s try that of the Buddhist.
Now, Buddhist enlightenment is supposed to consist of a peaceful state of mind that doesn’t turn into any sort of suffering, because the Buddhist has learned to stop desiring any outcome. You only suffer when you don’t get what you want, and if you stop wanting anything, or more precisely if you stop identifying with your desires, you can’t be made to suffer. The lack of any craving for an outcome entails a discarding of the egoistic pretense of your personal independence, since it’s only when you identify narrowly with some set of goals that you create an illusion that’s bound to make you suffer, because the illusion is out of alignment with reality. In reality, everything is interconnected and so you’re not merely your body or your mind. When you assume you are, the world punishes you in a thousand degrees and dimensions, and so you suffer because your deluded expectations are dashed.
Here are a couple of analogies to clarify how this Buddhist frame of mind works, according to my understanding of it. Once you’ve learned to drive a car, driving becomes second nature to you, meaning that you come to identify with the car as your extended body. Prior to that identification, when you’re just starting to drive, the car feels awkward and new because you experience it as a foreign body. When you’ve familiarized yourself with the car’s functions, with the rules of the road, and with the experience of driving, sitting in the driver’s seat feels like slipping on an old pair of shoes. Every once in a while, though, you may snap out of that familiarity. When you’re in the middle of an intersection, in a left turn lane, you may find yourself looking at cars anew and being amazed and even a little scared about your current situation on the road: you’re in a powerful vehicle, surrounded by many more such vehicles, following all of these signs to avoid being slammed by those tons of steel. In a similar way, a native speaker of a language becomes very familiar with the shapes of the symbols in that language, but every now and again, when you’re distracted perhaps, you can slip out of that familiarity and stare in wonder at a word you’ve used a thousand times, like a child who’s never seen it before.
What I’m trying to get at here is the difference between having a mental state and identifying with it, which difference I take to be central to Buddhism. Being in a car is one thing, identifying with it is literally something else, meaning that there’s a real change that happens when driving becomes second nature to you. Likewise, having the desire for fame or fortune is one thing, identifying with either desire is something else. A Buddhist watches her thoughts come and go in her mind, detaching from them so that the world can’t upset her. But this raises a puzzle for me. Once enlightened, why should a Buddhist prefer a peaceful state of mind to one of suffering? The Buddhist may still have the desire to avoid pain and to seek peace, but she’ll no longer identify with either of those or with any other desire. So assuming she acts to lessen suffering in the world, how are those actions caused? If an enlightened Buddhist is just a passive observer, how can she be made to do anything at all? How can she lean in one direction or another, or favour one course of action rather than another? Why peace rather than suffering?
Now, there’s a difference between a bodhisattva and a Buddha: the former harbours a selfless preference to help others achieve enlightenment, whereas the latter gives up on the rest of the world and lives in a state of nirvana, which is passive, metaphysical selflessness. So a bodhisattva still has an interest in social engagement and merely learns not to identify so strongly with that interest, to avoid suffering if the interest doesn’t work out and the world slams the door in her face, whereas a Buddha may extinguish all of her mental states, effectively lobotomizing herself. Either way, though, it’s hard to see how the Buddhist could act intelligently, which is to say exhibit some pattern in her activities that reflects a pattern in her mind and acts at least as the last step in the chain of interconnected causes of her actions. A bodhisattva has desires but doesn’t identify with them and so can’t favor any of them. How, then, could this Buddhist put any morality into practice? Indeed, how could she prefer Buddhism to some other religion or worldview? And a Buddha may no longer have any distinguishable mental states in the first place, so she would have no interests to tempt her with the potential for mental attachments. Thus, we might expect full enlightenment in the Buddhist sense to be a form of suicide, in which the Buddhist neglects all aspects of her body because she’s literally lost her mind and thus her ability to care or to choose to control herself or even to manage her vital functions. (In Hinduism, an elderly Brahmin may choose this form of suicide for the sake of moksha, which is supposed to be liberation from nature, and Buddhism may explain how this suicide becomes possible for the enlightened person.)
The best explanation I have of how a Buddhist could act at all is the Taoist one that the world acts through her. The paradox of how the Buddhist’s mind could control her body even when the Buddhist dispenses with that mind is resolved if we accept the monist ontology in which everything is interconnected and so unified. Even if an enlightened Buddha loses personal self-control, this doesn’t mean that nothing happens to her, since the Buddhist’s body is part of the cosmic whole, and so the world flows in through her senses and out through her actions. The Buddhist doesn’t egoistically decide what to do with herself, but the world causes her to act in one way or another. Her behaviour, then, shouldn’t reflect any private mental pattern, such as a personal character or ego, since she’s learned to see through that illusion, but her actions will reflect the whole world’s character, as it were.
From Buddhist Monk to Avatar of Nature
Returning to the posthuman, the question raised by the Buddhist stepping stone is whether we can learn what it would be like to experience the death of the manifest image, the absence of the naive, dualistic and otherwise self-glorifying conception of the self, by imagining what it would be like to be the sun, the moon, the ocean, or just a robot. That’s how a scientifically enlightened posthuman would conceive of “herself”: she’d understand that she has no independent self but is part of some natural process, and if she’d identify with anything it would be with that larger process. Which process? Any selection would betray a preference and thus at least a partial resurrection of the ghostly, illusory self. The Buddhist gets around this with metaphysical monism: if everything is interconnected, the universe is one and there’s no need to choose what you are, since you’re metaphysically everything at once. So if all natural processes feed into each other, nature is a cosmic whole, and the posthuman sees very far and wide, sampling enough of nature to understand the universe’s character so that she’d presumably understand her actions to flow from that broader character.
And just here we reach a difference between Eastern (if not specifically Buddhist) and technoscientific enlightenment. Strictly speaking, Buddhism is atheistic, I think, but some forms of Buddhism are pantheistic, meaning that some Buddhists personify the interconnected whole. If we suppose that technoscience will remain staunchly atheistic, we must assume only that there are patterns in nature and not any character or ghostly Force or anything like that. Thus, if a posthuman can’t identify with the traditional myth of the self, with the conscious, rational, self-controlling soul, and yet the posthuman is to remain some distinct entity, I’m led to imagine this posthuman entity as an avatar of lifeless nature. What does nature do with its forces? It evolves molecules, galaxies, solar systems, and living species. The posthuman would be a new force of nature that would serve those processes of complexification and evolution, creating new orders of being. The posthuman would have no illusion of personal identity, because she’d understand too well the natural forces at work in her body to identify so narrowly and desperately with any mere subset of their handiwork. Certainly, the posthuman wouldn’t cling to any byproduct of the brain, but would more likely identify with the underlying, microphysical patterns and processes.
So would this kind of posthumanity be a force for good or evil? Surely, the posthuman would be beyond good or evil, like any natural force. Moral rules are conventions to manage deluded robots like us who are hypnotized by our brain’s daydream of our identity. Values derive from preferences of some things as better than others, which in turn depend on some understanding of The Good. In the technoscientific picture of nature, though, goodness and badness are illusions, but this doesn’t imply anything like the Satanist’s exhortation to do whatever you want. The posthuman would have as many wants as the rain when the rain falls from the sky. She’d have no ego to flatter, no will to express. Nevertheless, the posthuman would be caused to act, to further what the universe has already been doing for billions of years. I have only a worm’s understanding of that cosmic pattern. I speak of evolution and complexification, but those are just placeholders, like an empty five-line staff in modern musical notation. If we’re imagining a super-intelligent species that succeeds us, I take it we’re thinking of a species that can read the music of the spheres and that’s compelled to sing along.
Very interesting post!
“The Buddhist doesn’t egoistically decide what to do with herself, but the world causes her to act in one way or another. Her behaviour, then, shouldn’t reflect any private mental pattern, such as a personal character or ego, since she’s learned to see through that illusion, but her actions will reflect the whole world’s character, as it were.”
When it comes to this or a similar state of mind (I’m thinking of living adoxastos as Roger described in the last posts, for instance) I’m always curious how much self-control you still have/ought to have from your point of view. Would someone in this position kill other people if “the whole world’s character” is reflected in this way? Or is such a thing impossible? Does she/he/it “see through the illusion” in such a way as to just keep this happening, or is there a reason to stop?
I assume that its being impossible is the most common answer, that the world’s character could never want murder to happen, but I don’t know why this should be the case.
Another thing that comes to mind is that the step to the posthuman is always seen as a step towards a “higher” Dasein, but if this nihilistic picture without values and morals is the next step, isn’t it more like a step down toward a Dasein that is closer to that of an animal? Isn’t it like flying on a magic carpet and choosing to let it take you wherever the wind blows?
I think many people will refuse to become posthuman in this way and flee into the motherly arms of their known values and morals. There might even be an uprising against posthumanism based on religious views.
These are great questions, Dietl. The first one I struggled with too when I wrote the article. It’s easy enough to imagine an enlightened person as a saint who never harms anyone. But this really strikes me as nonsense, especially if we assume the enlightened person replaces her former ignorance with a mystical view of everything’s ontological unity. What enlightens a person, in the Eastern sense, is her lack of ego. She detaches from her selfish desires, because she no longer identifies with her personal self. So what causes the actions she nevertheless performs? It has to be that she’s at one with the world, and so it’s the world as a whole that acts through her. And what’s the character of the world? Is nature saintly or amoral? Nature kills creatures all the time, so if enlightenment is a matter of letting the forces of nature flow through you directly, because you’ve destroyed or seen through the illusion of the sand castle of your egoistic barrier, you’re going to act like an amoral force of nature. You’re surely going to be beyond good and evil.
Is this enlightenment a progression or a regression? Well, I don’t think it’s entirely nihilistic. A posthuman would have no personal values, but there’s a mystery about the cosmic pattern we don’t yet fully grasp. In my writings, I try to sketch it by talking about the decay of God’s undying corpse, or about cosmic complexification and evolution. But the point is that even if nature has no personal values, because there’s no person at the root of nature, the natural universe does have a finite pattern and it does flow in a specific direction. Even if we bring in the multiverse, this isn’t the realization of all logically possible worlds, including the world in which there’s no multiverse. In other words, there may be *implicit* values in natural processes, in the sense of patterns that would correspond better with some values than with others. So Nietzsche’s posthuman may have no mainstream values, since those are based on ignorance, but she does have peculiar values that match up with how nature really works. For Nietzsche, the mystical truth is the will to power. The question, then, is whether the posthuman’s technoscience would reveal the ultimate truth about what’s going on in nature and whether this would steer her in one direction rather than another.
“In other words, there may be *implicit* values in natural processes, in the sense of patterns that would correspond better with some values than with others.”
But it hinges on the word ‘better’, doesn’t it? What would make one correspondence of a pattern better than another? I really can’t think of a good answer.
I haven’t read much Nietzsche so maybe I’m misinterpreting this, but I don’t see much difference between the “peculiar values that match up with how nature really works” and “mainstream values”. Aren’t mainstream values in the end also based on natural tendencies? Aren’t peculiar values also based on ignorance regarding the moral indifference of the universe? For Power and Truth, just like good/evil etc., are all human concepts.
My argument against values is always evolutionary. At which point of the cosmic evolution did values start to “exist” in the universe? The only answer that makes sense to me is that they must have been there from the beginning, because otherwise there must have been an external force that brought them into the universe. But values, I think, only make sense from the standpoint of intelligent beings. A stone can’t have values on its own. Only when someone attributes them to it does the stone become relevant. So in our universe values must have existed for billions of years without having any relevance. This is the point where I ask Occam to give us a good, clean shave.
Don’t you think astronomy better corresponds with the facts than does astrology? Ultimately, you might have a pragmatic standard of truth. Scientific theories are best because they’re the most useful; they work by empowering us. Likewise, we might have some ultimate standard for values. The best values might ennoble rather than degrade us, for example.
I think you’re right that for Nietzsche everything is natural, and so he had a difficult time condemning slave morality. He often did so with the analogy of sickness vs health. This is the problem, though, of making sense of existential inauthenticity vs authenticity.
I think you might want to distinguish between implicit and explicit values. Values are recognized and acted on only by conscious beings, but the lifeless world may be guaranteed to strike certain conscious beings one way rather than another. Of course, we have different reactions to the world, but there are also patterns in our reactions. As Nietzsche said, rosy optimism is typically based on dualism, on looking past nature at an alleged hidden supernatural world.
So facts are values?
I think that ennobling and degrading depend a lot on perspective. Like, do you think Nietzsche is right with his will to power? A Buddha would surely disagree. Might there be values that stand in contradiction to each other and still both ennoble us if we choose to pursue one? For instance, rationality and emotionality. They seem to be in conflict with each other (maybe only in a narrow perspective), but still both are ways that some might interpret as ennobling. One is the way of the scientist (in a broader sense, as someone who seeks knowledge, not only that gained by the “scientific method”) and the other of the artist. Maybe we are degrading ourselves by following only one of them and neglecting the other. By becoming too rational we might lose our ability to feel empathy, or by completely following our emotions we might lose our grip on reality.
The thing is, I can’t really believe the last thing I just wrote. It was an attempt to rationalize values, but I’m not content with this. It all feels like implicitly pointing the finger at something and saying “this is wrong, it must be done like this…”. Couldn’t rosy optimism be the better way? Human beings want to be happy, don’t they? Science says that optimists live longer and are happier. Why is knowing statistical things like that better than being ignorant about such things? Where is the value in scientific advance? Does it come down to being able to help other people live healthier lives or being able to foresee natural disasters? Life and death? Life is good – death is bad? Where is the value in life? And in questioning this I’m not thinking in a suicidal way, because likewise I must be questioning the value of death. This thinking leaves me in a vacuum, which can only be filled by ignoring those questions and living life regardless and playing guitar no matter if I achieve my goals someday or not. For what else can I do but go on…
Can you elaborate on implicit and explicit values? I’m not sure that what I have in mind is what you mean.
No, you’re asking some deep questions, Dietl. I’ve tried to tackle these sorts of questions in a few places on my blog. See the first half of this article:
http://rantswithintheundeadgod.blogspot.ca/2012/11/the-philosophy-of-existential-cosmicism.html
And see also:
http://rantswithintheundeadgod.blogspot.ca/2013/04/technoscience-existentialism-and-fact.html
The topic is kind of too big to get into much in a comments section, I think. An explicit value, though, is a feeling that a creature has about the rightness of some goal. An implicit value would be a fact’s potential to make a creature feel a certain way about a goal.
Oh, and this article too might be helpful:
http://rantswithintheundeadgod.blogspot.ca/2013/03/the-virtue-of-speculation-scientism-and.html
Thanks for the links! I’ll check this out as soon as I have time.
Your blog among others is quite an inspiration for me to start a philosophical blog too some day 🙂
Real cool, Cain.
A couple of unconnected thoughts/questions.
– Your Netflix consumers / social, bio-cultural, or noospheric entity beyond us is a fantastic analogy, in my understanding, of the Blind Brain / Greater Brain relationship, as it stands. There’s the real possibility that the self has always been living as you describe.
– This social, bio-cultural or noospheric manifest entity might already exist (though, there are plenty less contextual but equally possible ideas concerning these concepts (Quantum AI, for instance) we could discuss – Neuropath Spoiler ):
Neil mentions that the human species is just one brain, its nervous system spread across all the individual self-nodes, constantly rewiring itself through our linguistic interaction.
– What happens when the first bodhisattvas arrive? I think the immediate challenge posed by cognitive augmentation – a human controlled a live rat’s tail just the other day, and months ago a commentator here linked the first neurally integrated prosthetic – is a fear for us Normies, those who cannot afford the augmentations (not that I want them, as I’d be happy to live a life of practice in this form – for the sake of discussion). And yet these new selves or self-amalgamations will still think themselves sufficient (that is, total and complete in their expanded conscious states and requisite form changes), still think themselves Avatars to our deluded robot.
Cheers for the read, regardless.
Thanks very much. Your comments suggest to me this question: what’s the likelihood that posthumanity would come about as science fiction currently predicts? Will rich folks really just wire themselves up to supercomputers and dominate the rest of us?
That is, there’s a prior question here about the relation between science fiction and the reality of technoscience’s social impact. I think science fiction (and fantasy, horror, and any other genre of fiction that speaks to how we might develop as a species) tends not so much to predict as to reflect current affairs by means of distracting (i.e. entertaining) metaphors. It’s fun to go with the metaphors and to imagine that we now have a blueprint of our next several centuries. Transhumanists like Ray Kurzweil say we have such a blueprint. I just think it’s highly unlikely that any species that can’t predict the weather five days out can predict the nature of our transition to a posthuman state.
In fact, for me, the point of the posthuman themes that currently dominate science fiction is that technology already makes us posthuman and has always made us so. Humanity here is our animal nature and technology elevates us above the other animals, making us alien and strange even to ourselves. We already have the potential for a terrifyingly objective understanding of the world. Granted, we don’t have the bells and whistles of eternal life and FTL spaceships, but the ideas of such things are just entertainments that distract us from the philosophical upshot of posthuman SF.
Cain, apologies, I have a bad habit of not revisiting blog posts for new comments.
I agree with what you’ve written as far as I understand it, but my issue, I think, is that so far, the aforementioned fiction aside, our speculative fictions, as we might call them, are doing a poor job of any sort of intentional predicting rather than reflecting.
SFF leveraging biological, arguably cognitive – however, bodhisattvas, Buddhas, and Avatars experience cognition – transcendence themes need more contemporary cultural context? Even Kurzweil updates his theories.
“Now, Buddhist enlightenment is supposed to consist of a peaceful state of mind that doesn’t turn into any sort of suffering, because the Buddhist has learned to stop desiring any outcome. You only suffer when you don’t get what you want, and if you stop wanting anything, or more precisely if you stop identifying with your desires, you can’t be made to suffer.”
This characterisation of Buddhism doesn’t quite ring true for me. From what I understand, the idea isn’t to get rid of all desire, but rather to get rid of inappropriate desires (which are generally referred to as craving). Behaviour is then motivated by tuning in to the appropriate ways to act in a given situation (so the Buddha lived to a ripe old age after attaining enlightenment and continued to want to eat, give talks to his followers, etc., which implies that he had some form of motivation and hence desire). Though I suppose you do acknowledge that when you say that there are remnants of pantheism in Buddhism.
You raise a good point, but I think it winds up being a semantic one. You’re right that the Buddhist gives up desire only in the sense of egoistic craving, because she sees through the illusion of ego, of the independent self (since everything is supposed to be metaphysically interdependent). If we think of the Buddha, then, does it still make sense to say that he *desired* to be altruistic, to live rather than to die, to eat rather than to starve? The English word “desire” has egoistic connotations, so I think the radical point of Buddhism is better made if we say the Buddhist wants to give up desires as such, given that desires are egoistic.
More precisely, though, you’re right: the Buddhist still has some motivations to act, and I wanted to frame this mystery in the article as a sort of paradox. If the enlightened Buddhist has no ego, if she’s therefore personally detached from the outcome of her actions, what causes her to act as she does? Why is altruism the only choice left when we dispense with the illusion of the independent self? Just because everything is metaphysically one, doesn’t mean we ought to help everything. This is a *non sequitur* that I associate with an exoteric reading of Buddhism. Nietzsche gets closer to the esoteric meaning of posthumanity, in my view. Perhaps the oneness of everything is a horror and the noble, enlightened course is to seek everything’s destruction. Who knows how we’d feel if we truly saw the universe as it ultimately is? The Buddha says this transcendent perspective is unknowable unless it’s thrust upon you.
My point, though, is that we shouldn’t assume the dichotomy of selfish egoism vs saintly, altruistic enlightenment. The Buddhist is supposed to give up the former, but that doesn’t mean she’s left only with the latter. Her enlightenment should transcend the dichotomies that seem so obvious to the ignorant masses.
Once the habit of craving for the outcome of actions is gone, and the notion of an unchanging self is also gone, what is left is the possibility to react to the world based on something other than habit, desire, or aversion. Enlightenment opens up a landscape of possibilities, but it doesn’t tell us where to go. Liberation is the beginning, not the end of the journey, at least in moral terms.
So, in a sense, altruism isn’t the only outcome of Enlightenment.
Altruism comes from wisdom. It is true that without craving or identification the difference between good and bad outcomes isn’t felt personally, but the difference is still there (you can still guess reliably how an unenlightened person would feel in a given circumstance).
So an enlightened person can still understand the difference between happiness and suffering and, because of wisdom, can still see the consequences of suffering in the minds of unenlightened fellow creatures.
Because you understand that not all beings are enlightened, and because you grasp the difference between well-being and its opposite, the moral imperative to act altruistically is clear.
It is also clear, through wisdom again, that suffering cannot be completely conquered without enlightenment, thus the moral imperative to guide your fellow creatures toward it.
But there is also a more biologically contingent answer. The universality of suffering and of the quest for happiness is quite obvious. Once the habitual self-absorption is lifted, it is difficult for a human not to see this. And having seen it, it is difficult not to feel compassion. Enlightenment doesn’t change this. Seeing this implicit motivation in oneself, an enlightened being would see no reason to discard it.
Yet another way of seeing my point: Enlightenment doesn’t directly prescribe what to do; it is simply a level of freedom from the habitual patterns of clinging and identification. However, the moral order of the universe, and the moral imperative it implies, are in fact quite obvious once you come at it objectively: “the worst possible misery for everyone is bad. You should avoid it.” The details are not always clear, but the general idea that well-being is better than suffering is pretty obvious.
hey bakker, i don’t mean to be a bother, but an update on The Unholy Consult would be great! I’m so pumped for its release!
His usual reply is that he’s working on it five days a week (or maybe seven! I forget). He’s basically out there each day brewing up more of that moonshine – I’m surprised he works on it so consistently. I’m not sure if he works on it when he goes on holidays (but he probably still thinks about it!), but he’s back now IIRC, so every day just creeps the ordeal a little closer to the ‘roth. I think he’s done a lot of the book so far, IIRC – it’s the refinement of the text that is the real time-taker.
Benjamin, you wrote:
“Which process?”
Given your presuppositions, natural selection. Which is really just a very special case of thermodynamics. However, that assumes that all this meta-meta-metacognitive processing is RATIONAL, which we have absolutely no reason to assume. Indeed, as the metacognitive processes begin to reference themselves more and more pathologically, you’d get a “thought” process likely more and more influenced by stochastic environmental noise as it detaches itself from its original evolutionarily-imposed value system.
So, the post-human could literally be anything, and indeed this might be the end game of any evolutionary process.
Honestly, this whole thing sounds like fun to me, not so much depressing. If there’s one thing about the posthuman is that they’ll undeniably be much better at science, so they’ll probably be able to answer questions like why the universe was in such a low entropy state to begin with. Not that we’ll like the answer, but hey.
” If there’s one thing about the posthuman is that they’ll undeniably be much better at science, so they’ll probably be able to answer questions like why the universe was in such a low entropy state to begin with.”
Sounds to me like “the posthuman” is the scientist’s Santa Claus or Jesus… or something like that. Is this the new religion? ‘Oh, how much better it will be when the great Redeemers come and take us out of our paradoxical, unsophisticated hell. Praise the posthumans!’
No, not at all. I’m not implying that they will make a “better” world or whathaveyou (see the first part of my post), but rather that they will be able to answer questions we currently cannot. That is all.
My reply was a bit tongue-in-cheek, but I think the way you said it kind of implies that. Even ‘end game’ assumes a goal or something, and for an evolutionary process that seems like a misinterpretation.
But a question I find quite interesting is in which way posthumans could be better at science. They may be less vulnerable to biases; anything else?
Jorge,
Actually I didn’t have only natural selection in mind. I was thinking of more general patterns of complexification and cosmic evolution/transition.
You’re right that a posthuman who doesn’t just understand natural processes but fully identifies with them, having seen through the illusion and the limitation of the ego, might not seem rational to ignorant critters like us. I wonder whether there’s something paradoxical, though, in supposing that a posthuman could be irrational (compared to our low standards of reasoning) and yet much superior at technoscience.
The question is what becomes of technoscience for posthumans? We think of technoscience in instrumental, and thus implicitly egoistic, dualistic terms. My best shot at this is to think of posthuman technoscience as a new natural process that perhaps intensifies what the cosmos has been doing all along, which is to create newer and newer layers and stages before all creative possibilities are exhausted and the natural universe ends in heat death or in some other unutterably horrible fashion. But these are just empty words. I don’t think we have the foggiest idea of how this view of the cosmos would motivate a supreme “intelligence” and power to act. To understand that would be to understand what the universe is ultimately doing, because the two would be one, whereas now we’re separated from the universe by the illusions of our ego (dualistic, folk psychological intuitions, etc).
Benjamin, you wrote:
” I wonder whether there’s something paradoxical, though, in supposing that a posthuman could be irrational (compared to our low standards of reasoning) and yet much superior at technoscience. ”
Yes! It seems you caught me in a kind of contradiction. I think it’s because currently something like Bayesian reasoning “seems like” the best way to do science, so posthumans should be hyper-rational. Of course we might be conflating rationality and reasoning here, but I don’t feel like doing any actual philosophy today and untangling this. I’ll leave it to those whose profession it is.
“To understand that would be to understand what the universe is ultimately doing”
How teleological are you feeling today? Because I’ve been feeling positively Boltzmannian recently.
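For what it’s worth, the Bayesian updating gestured at above can be sketched in a few lines. This is only a toy illustration of the rule itself, with made-up numbers, not anyone’s actual model of how science works:

```python
# A minimal sketch of Bayesian updating: P(H|E) = P(E|H) * P(H) / P(E).
# All probabilities below are invented purely for illustration.

def bayes_update(prior, likelihood, marginal):
    """Return the posterior credence in a hypothesis given evidence."""
    return likelihood * prior / marginal

# Start at 50% credence in some hypothesis H. Suppose the observed
# evidence E is 80% likely if H is true and 20% likely if H is false.
prior = 0.5
p_e_given_h = 0.8
p_e_given_not_h = 0.2

# Total probability of the evidence, P(E), by the law of total probability.
marginal = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = bayes_update(prior, p_e_given_h, marginal)
print(posterior)  # credence rises from 0.5 to 0.8 after the evidence
```

The point of the toy is just that the update is mechanical: a “hyper-rational” reasoner, in this picture, is one that keeps revising credences this way as evidence accumulates.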
Perhaps think of the way you think the cells in your arms are yours to command, as if they were robots to do your bidding.
To me it seems a recursive cycle – like the corporate dictator sees himself above the robotic ‘streamers’, the Buddhist pattern (where it doesn’t just neglect its life support and die) is simply a recursion. Instead of a corporate dictator above people, it’s part of the mind above the rest of the mind – the above part seeing itself as a dictator.
But maybe it glimpses itself and retracts from even that? Oh, sweet enlightenment! Or maybe that’s just another recursion – the same deal, with this other thing called enlightenment just a mind above a mind above the rest of the mind. No theory of mind – blind to one’s own state, making robots of everything else seen. Never putting oneself in their shoes, when there’s yet another enlightenment to have – i.e., a blind recursion.
Like a totem pole. A demon sat upon by an angel sat upon by a man, sat upon by something else, something I can’t quite see, something with tendrils like an electrical spider…
I really appreciate at least one reference in this piece to life support and the potential non-maintenance of it. I presume it’s possibly an aspect of a brain blind to itself that it nearly always imagines being free of identity and yet, for some bizarro reason, still feeding itself. I do appreciate a break in that pattern and a reference to a potential non-life-support-maintaining result.
Back down to earth: yeah, entertainment is saturated by high-grade studios who, at the very least, will polish and sparkle what is just the same old confirming stuff (Avatar, for example – I ought to find a video I saw that looked into the features of the blue dudes in it, noting how their eyes are larger so as to obtain sympathy, and a lot of other design-by-committee elements – elements that work because it’s design by scientific committee). Before TV came around, people were starved for entertainment (travelling entertainers were a widespread thing). Heck, they probably went to church in part for the entertainment (or at least the socialising). I think the post posterity article here gives more on this.
For those of us who can only brew a minor drug, we’re competing with drug dealers who cook up the most refined powders and beam them into everyone’s homes. For, ostensibly, free.
Though I GM roleplaying sessions – a micro audience (unpaying!), but at least it’s an audience, one that fits me in amongst the AAA studio games they play. But it feels like having been beaten back into a tiny corner.
Callan,
I hope my article helped inspire some of your poetic rhapsodies. 😉 But actually, I wonder whether a technoscientific posthuman would communicate more in such poetic songs or rants. We know that enlightened folks in the Buddhist tradition speak in oddly paradoxical ways, since they eschew dichotomies. Zen Buddhists are explicitly opposed to rationality, since they think the truth transcends what reason can grasp. Of course, nature in general doesn’t communicate, since it just impersonally acts. Likewise, an enlightened posthuman would lack personality and would act, but she’d still be a special force of nature that might communicate in a bizarre way. She might use technoscience to produce art on a cosmic scale. The thing is, what’s left to say when you’ve stared into the heart of the universe and you fully grasp what’s going on there, as well as where the universe came from and where it’s going? Would even poetry or song be apt?
Hi Benjamin,
In line with my recursion thoughts, I think you’d just get increasingly extreme human traits, unfettered by any other human traits. The chords would be pronounced and long. All subtlety would be left to the universe to convey – the sense of ‘right’ will mean only one hard chord is played, no nuance, because this. is. right! And why would you complicate that? And along with it, with increasing technological power, the universe (and anyone made out of universe) will have decreasing capacity to play any complex or nuanced counterpoint. Right will hard-wire. You’ve heard of grey goo? Kind of like that, but from another angle.
Reminds me of a nightmare I had when I was young, where something would convert everything, and although everything would look the same, it’d all be utterly wronged – the conversion was false, but there’s nothing to go back to. That bit in Resident Evil where a blue laser grid sweeps forward at a guy and then you’re looking at him, ostensibly standing there, supposedly fine (as if the grid was stopped in time), but you know he’s utterly wrong at that point if you’ve seen it. Yet everything looks the same – exactly that!
Enjoy your writing Ben, have been checking out some of your other rants on your site. I came across this essay recently that you might enjoy and fits in with some of the topics you’ve been exploring,
http://www.academia.edu/2780085/Thinking_the_Charnel_Ground_the_Charnel_Ground_Thinking_Auto-Commentary_and_Death_in_Esoteric_Buddhism
Thanks very much. I’ll check out that article.
Relevant article:
http://www.cbc.ca/news/technology/story/2013/04/12/science-music-brain.html
Perhaps we have different interpretations of the terms you put forth, but it is possible to do all that is within your power to accomplish a given result and still have no clinging to or identification with the outcome.
When you say that a Bodhisattva would have desires but for lack of craving could not favor any of them, you seem to forget this. A more accurate description would be that a Bodhisattva would not favor an action merely because of a desire. You might have the desire to eat chocolate and the desire to raise compassionate children. You might then reflect on the consequences of both associated actions and see that the first is not useful and the second is. You might then choose to override the first desire and let the second desire guide you.
I think perhaps your paradox comes from the idea that morality can only ever truly be the projection of inner preferences onto a reality inherently value-free. I would submit to you that this is not the case, and desires are not a necessary component of being a good person.
Also, the description of the Buddha as having deconstructed her mind and abiding in a state of nirvana without caring for anything seems obviously nonsensical based on my definition of the terms.
There is a world of difference between equanimity and indifference. Being nonreactive to the suffering of others is very different from being uncaring. The first comes from understanding that outcomes aren’t conditioned by wishes alone, and that failure is part of reality. The latter comes from detachment, and ignorance.
What you seem to describe isn’t nirvana, but absorption or cessation.
Nirvana simply means freedom from conditionality or reactivity. It’s the freedom to react to a situation on some basis other than habit, desire, or aversion.
Absorption is a state of extreme concentration where most of ordinary experience is replaced by a more basic, “pure” state.
Cessation is the phenomenon of dissociation between consciousness and the sensorium.
Neither Absorption nor cessation are necessary to Enlightenment, and there are obvious dangers with these states (it is possible to cling to them), against which the historical Buddha warned.
The phenomenology of nirvana is quite mundane. There is in fact no change in the contents of experience, though the state is sometimes accompanied with the experience of no-self, where the individual components of experience with which we usually identify (e.g. sensations in the face, motion of the eyes, thoughts, voluntary actions, etc.) remain separated from each other and so can produce a kind of vertigo.