Encircled by Armageddon
by rsbakker
Aphorism of the Day: Holding a fool accountable is like blaming your last cigarette for giving you cancer. Behind every idiot stands a village.
This is a horse I’ve been flogging for several years now: the way the picture(s) offered by the technological optimists seem to entail our doom as much as the picture(s) offered by the pessimists. To crib a metaphor from Marx, we will be hanged by the very rope that is supposed to save us.
It seems to me that the two best ways to attack the argument from the two previous posts are to argue that the biological revolution I describe simply won’t happen, or that if it does happen, it doesn’t entail the ‘end of humanity.’
My argument for the first is simply: in the absence of any obvious and immediate deleterious effect, any technology that renders competitive advantages will be exploited. My argument for the second is simply: identity is not conserved across drastic changes in neurophysiology.
The inevitability of the former entails the ‘drastic changes’ of the latter. Even though ‘loss of identity’ counts as an ‘obvious deleterious effect,’ it does not count as an immediate one. Creeping normalcy will be our undoing, the slow accumulation of modifications as our neurophysiology falls ever deeper into our manipulative purview.
The question of whether we should fear this likelihood is the same as asking whether we should fear that other profound loss of identity, death. Either way, whatever the paradise or pandemonium that awaits us on the other side, it ain’t human.
POST-SCRIPT: Here’s an interesting little tidbit from The Atlantic that a buddy just sent me. We’re standing at the wee beginning of Enlightenment 2.0, and we’re already talking about overturning the entire foundation of our legal system.
POST-POST-SCRIPT: Here’s another interesting tidbit I came across correlating personality types and the propensity to believe in free will…
The one reason I don’t think we should fear death is that it is inevitable, and so for me the question always returns to ‘is it preventable?’. According to you, “minus some sort of catastrophic reset”, prevention is unlikely. Thank god for Global Warming, eh?
I do agree that the pictures proposed by both technological optimists and pessimists are similarly bleak.
It seems like we won’t have the tools to offer any meaningful critique until it’s too late. The same could be said about the invention of language.
I will advocate for the devil. If we are effect, then “human” means “result”, and only one among innumerable others. If we exist as we do because we must, then it is inevitable that something else will exist when old causes cease and new causes arise – whatever those causes, whenever they come. Worrying about this is exactly like worrying about death.
Why should I fear losing our “humanity”? I don’t even know what “humanity” is. Is it an ancient concatenation, a fractal bloom of the big bang? Can we ever be anything other than the effects of each particular cause? Are we not the ship of Theseus, replaced plank by plank?
So you have no fear of lobotomies or severe strokes?
That does not seem to be a fair comparison, which I think you know. If someone said to me, “I will cause you to have a stroke,” then I would be afraid (if this were a reasonable threat under whatever circumstances). If someone said to me, “You may have a stroke some day,” then I might be worried for a time but I would not be “shitting my pants.” On the other hand, if someone said “I will replace a small part of your brain with a machine that will make you better at calculus,” then I might be worried that things would go wrong, but I do not know that I would have some profound existential dread about it.
I can say that as I sit here I am not worrying about lobotomies or having a stroke.
Maybe I am not following you. After all, I am not that smart.
And I’m not sitting here with knocking knees either! Psychologists call it ‘future discounting’: the way we lend less emotional weight to future punishments and rewards. It’s probably the reason capital punishment isn’t an effective deterrent. But then, when we find ourselves in the electric chair…
The question isn’t, IS a murderer afraid of capital punishment? (no), it’s IS IT RATIONAL for a murderer to be afraid of capital punishment? (of course).
“I will cause you to have a stroke,” then I would be afraid (if this were a reasonable threat under whatever circumstances). If someone said to me, “You may have a stroke some day,”
This shifts the source as well as the time frame.
What if someone said “One day I will cause you to have a stroke”?
Currently your second rephrasing attributes the stroke to just being something that happens by unfortunate natural circumstance. With that I can understand not being afraid all the time, because that’s what nature just inflicts on us. But we are talking man-made strokes here.
I understand your point, and it is valid.
Bakker is arguing that “Enlightenment 2.0” is an unavoidable doom. The ongoing theme here in the discussion, as I understand it, is how one responds to the inevitable. So, my examples were perhaps not well constructed.
In fact, I think I wandered from my original thought quite a bit. We are the product of the inevitable, and we move toward the inevitable. Generally speaking, we are not smart enough to see through the tangle of cause and effect in advance, but when (if) we do, should we fear what (we think) we know simply because we know it? I suppose ignorance is bliss.
It is easier for me to understand feeling dread at understanding human beings as non-exceptions to the law of cause and effect (i.e. the overarching condition) than it is for me to understand feeling dread because we might be able to guess one particular effect.
Maybe I just feel differently about this than Bakker. Maybe I have more fatalistic equanimity, or maybe I am just too dumb to be afraid. Either way, it is tough to argue with emotion.
As a member of the village behind the idiot (and almost certainly some variety of idiot myself), I (like Shawn) don’t see the ‘doom’ in the (possibly) coming revolution. I don’t think systematic changes (as described in the excellent Atlantic article) are equivalent to the rushing wings of the angel of death. Things are going to get strange; I’m not sure that’s the same as things getting terrible. I also perhaps do not understand the magnitude of the problem of the loss of identity. Yes, we’ll be able to manipulate our brains (and therefore, our personalities). But we do that now, we just do it much more crudely, and much less consciously, than we’ll be able to soon. I see it as an issue of scale and precision, not an entirely new thing.
The Paradox of the Heap, huh?
Things always get strange in times of change, then we settle into the new normal. This is your optimistic induction, one shared by pretty much all techno-enthusiasts. My rebuttal is the same as before – that’s the argument you need to attack. When the change becomes so fundamental that ‘we’ no longer exist on the far side of it, your argument collapses. But I’m not sure I even need it, given that there are so many exceptions to your generalization: often, the change is catastrophic, as pretty much every society before us has discovered.
Loss of identity means ceasing to exist means… some form of death, no?
I’m wondering why it’s necessary for techno-enthusiasts to attack the intuitions that provide the basis for our whole idea. Your argument, that humanity will end, is too conceptually vague to me to be resonant. It will end because… why? We’ll be able to manipulate our brains with more precision than we ever have before? That strikes me as a positive thing. The Atlantic article spoke to that. Growth in terms of the application and development of neuroimaging technology will lead to better sentencing, or at least provide us with a model for sentencing that is better. It will also force us to re-evaluate the term ‘criminal,’ because the word loses a lot of force once you start thinking about it in terms of neuroscience. This example is simple and probably not super-relevant to the broader discussion, but it certainly doesn’t support the idea that humanity will be destroyed, merely that its future will lead to a society that would be largely unintelligible to those who came before us (and even to us, although I think we can probably see the edges of it).
Given Enlightenment 1.0, I think that’s probably a good thing.
I literally can’t understand your first sentence. Who said it was necessary for you to attack your intuitions?
Conceptually vague? How so?
We are talking about futures where the brains of our descendants will ultimately have less in common with human brains than human brains presently have in common with canine brains. I’m not sure how many other ways I can make this same point. Otherwise, I never disagreed that the individual steps leading into the neuroplastic future will be ‘obviously good’ from a local perspective: in fact, my argument depends on this.
Maybe what you meant at the beginning is that you don’t know why you have to attack my arguments? Well, if you don’t attack my justifications, you’re simply stomping your foot and saying my conclusion is wrong because you happen to believe in a contrary one. Not very convincing…
“Who said it was necessary for you to attack your intuitions?”
You did, unless I’m bad at reading:
“This is your optimistic induction, one shared by pretty much all techno-enthusiasts. My rebuttal is the same as before – that’s the argument you need to attack.”
It’s conceptually vague because I’m not sure what you mean will be ending. Humans will still exist in 150 years, barring some extinction event, if by human you mean “basically the same DNA as us now.” Obviously, if biotech gets there, we’ll be able to have a lot more granular control over which genes are activated, but that’s just speeding up the evolutionary process.
“We are talking about futures where the brains of our descendants will ultimately have less in common with human brains than human brains presently have in common with canine brains.”
I’m not sure what you mean by ‘ultimately’ here. If you mean in terms of what they can be made to do, sure, that’s possible. The jump from “they will not be like us” to “them not being like us is a *spooky thing*” is what I’m not getting, and I concede I just might not have the processing power to get there (although obviously I doubt it).
I meant that you need to attack my rebuttal.
My use of the concept ‘human’ is vague to you? What I mean is our mind, personality, and experiential template as fixed by our neurophysiology. But this is what I’ve been talking about all along. Charity, please, Zach. It’s entirely possible for our descendants to share the same genotype as us, but to be so alien in terms of brain function and experience as to be unrecognizable.
All human beings share the same general neurophysiology, the same ‘collective personality,’ one which will be wiped away and replaced by something we will probably be unable to comprehend. If that doesn’t count as existential uncertainty, then what does? And if existential uncertainty isn’t grounds for apprehension, Zach, then what is?
I’ve already told you why I find your optimistic induction unconvincing (because it relies on conserving human identity across E 2.0). Do you have any other arguments to support your enthusiasm for the future of humanity?
“Charity, please, Zach. It’s entirely possible for our descendants to share the same genotype as us, but to be so alien in terms of brain function and experience as to be unrecognizable.”
You mean, exactly like we would be to people living 25,000 years ago? This is where I think we diverge: what you’re talking about, if I’m understanding it correctly, has already happened at least once across the species. Is your concern stemming from the acceleration of the rate of change, or do you reject the idea that we would be as unrecognizable to our ancestors as ‘post-humans’ (I really don’t like the term, but I suppose that’s what we’re talking about) will be to us?
“All human beings share the same general neurophysiology, the same ‘collective personality,’ one which will be wiped away and replaced by something we will probably be unable to comprehend.”
I think there are two problems with the conclusions you’re deriving from this.
The first is, due to the effects environment and conditioning have on our development, this generalized similarity produces people who are fundamentally dissimilar in the way they process information. It’s possible I’m understanding ‘general neurophysiology’ in too shallow a way, but in terms of literal thought process there are, as far as I’m aware, gulfs between various subsets of our species even now (although the science surrounding this is still so new and controversial that if you pressed me on this I wouldn’t have a huge amount to fall back on). I’m not saying there aren’t problems now, but it’s nothing apocalyptic; it’s just the way things are.
The second problem with this argument (in terms of the ‘the future will be bad’ line of reasoning) is that it appears as though you’re assuming we’re going to tinker with our neurophysiology *all across the species* before we have a chance to see the macro effects of doing so. What’s more likely is a moderate tinkering in some individuals, extreme tinkering in very few individuals, and mild tinkering in some significant percentage of the general population. The inevitable resistance to endo-technological innovation, as you call it, which is already fevered, will grow in intensity as the full potential of what we’ll be able to do to ourselves becomes more obvious to the general population, meaning there will still be people as we would recognize them in 100-200 years, unless once post-humans do exist they just kill all of us. Which I don’t think is what you’re predicting.
I’ll address your other questions sort of disregarding what I just wrote, (sorry about the post length):
“If that doesn’t count as existential uncertainty, then what does?”
One thing we seem to disagree on is the importance of the continued existence of humanity as a thing. I think we’re a stepping stone, as we are now. Bigger intelligences, whether artificial or biologically engineered, are coming, barring an extinction event before we get there. Once we get them up and running, I’m not particularly concerned about whether or not modern humans keep on going; I don’t think of us as singularly important.
“And if existential uncertainty isn’t grounds for apprehension, Zach, then what is?”
I’m more apprehensive about a full nuclear exchange, and subsequent global regression (which seems likely, but obviously isn’t guaranteed) than a future where humans as we exist now don’t exist, but some other form of intelligence pioneered by us does.
“Do you have any other arguments to support your enthusiasm for the future of humanity?”
We’re not dead yet? It depends on what you mean by the future of humanity. I think you’re probably right on purely semantic grounds, given how you’re defining human, but it doesn’t look like that matters to the argument I’m making.
People 25,000 years ago shared our neurophysiology, so I don’t see the force of your first point. Ditto with your second: the brain is plastic, and in some respects amazingly so, but there is only so much the environment can do. Pinker’s Blank Slate (which I don’t particularly like) has a long list of things that all humans in all cultures share – largely if not entirely due, no doubt, to the fact that we share the same neurophysiology. The vast rug that you seem to be hoping will be pulled away.
Your third point has prickle, however. It could very well be the case that differences in adoption of neuro-technologies would count as ‘obvious and immediate deleterious effects’ for those left behind. So while the early adopters will likely fall prey to creeping normalcy, a consensus could develop among resisters and late adopters powerful enough to enforce some kind of moratorium, or at the very least, some kind of scheduled descent into the unknown.
Now that I realize you’re a full-blown misanthropist your position is much more clear to me. If you genuinely think humanity should go, then we’re doomed to be at cross-purposes. But this strikes me as an extreme, and possibly incoherent, view. The logical problem is simply that your attributions of value belong to the same neurophysiological basket you want to throw overboard. Given that attributions of ‘better’ and ‘worse’ are products of our neurophysiology, it becomes tricky arguing that we would be ‘better’ for abandoning our neurophysiology.
Otherwise, why should anyone value ‘bigger intelligences’ when there’s nothing inherently valuable about them? If they have nothing to offer us apart from the threat of extinction, what possible argument could justify your position to us? Why should humans be a stepping stone?
I’ll concede the plasticity argument, I don’t know enough about it to continue that line seriously.
I’d quibble with the term ‘prey’ in terms of creeping normalcy. Neuro-manipulation is likely to happen, but in what areas? I think in terms of values (meaning, the wiring surrounding values, meaning, ethical intuitions and reasoning), some things are pretty likely to remain constant, outside of an extremely limited subset. Otherwise the first-adopters will ‘get out ahead,’ as it were, and be suppressed. I think your fears of total conversion to some other thing are pretty overblown. Some people might convert to something other than human; most people are going to have, say, elements of fight-or-flight response rewired so it’s not triggered every time someone cuts in front of them on the highway.
I wouldn’t call myself a misanthrope, but I’m not interested in our species, as it exists now, continuing long term. We ought to be a stepping stone because our own processing is so bad. In order to unlock our full potential intelligence and rationality, it appears as though we need to escape our biology. The way our brains are wired is not conducive to our own long-term survival. I agree with what you say about technological development outpacing sociological development, but that happens largely as a result of bad sociological wiring on the part of humanity. We’ve managed to develop some institutions that can control some of the worse aspects of our neurophysiology, but clearly, they’re not operating at maximal efficiency.
In order to increase our chances of long-term survival, in some form, it appears as though we’ll need to develop some greater type of intelligence, which can then regulate us. Now, once that’s developed, I’m not sure it’s important for us to still exist, but I think even if you value our existence, you still ought to be counting on the future to save us.
Everyone has to count on the future to save them!
But I think I see the argument you’re driving at: a kind of If Not Technology, Then What? argument.
Our institutions and our brains are going to find themselves more and more behind the curve: this is something I’ve been arguing for a long, long time, what might be called a Neuro-Institutional Lag Argument. Short of some kind of game changer, things are looking pretty bleak. What you’re suggesting is that technology is the only game-changer we seem to have.
This strikes me as a pretty strong argument.
Possible argument for why it won’t happen:
Advantages will always be exploited, but they can be offset by costs. In evolution, we see that trade-offs are a very real thing. When neurosplicing becomes available, societies may suddenly be put in a very perilous position… take the good (increased socio-economic competitive edges) with the bad (having a bunch of Neil Cassidys running around) or simply avoid the whole thing. Just as certain developed nations are suddenly turning against nuclear technology despite the competitive edges it offers (less reliance on oil, source of fuel for nuclear weapons, etc.). It’s possible that there are real costs in terms of sanity and health to those who muck around with their neural architecture, enough that a global prohibition on those technologies emerges. It only took a few dead Russians back in ’78 for them to realize that maybe they shouldn’t fuck around with smallpox and anthrax.
Possible argument against loss of identity:
I don’t really have a good one. All I can say is our brains change over time, so I don’t really fear the transformation of my identity. As long as the coin trick of consciousness is preserved I don’t care.
Just playing devil’s advocate. Shit is going down folks.
I can see this. As a pessimistic induction, the first argument has to be ceteris paribus – the problem being that all things are rarely equal. It could be that the kinds of neural tinkering I’m worried about or some other technological revolution will render markets moot, in which case everything changes. This qualifies the strength of my argument that the neural apocalypse will happen, but doesn’t touch the argument that it will mean the end of humanity if it does. The first argument is only as robust as markets are…
Carousing through the comic book store, figured I’d stop by and throw a few cents in the opinion bucket. So during my morning training session, me and the bloke who owns the crossfit box I train at were listening to a little Rush. The track “Witch Hunt” came on, one I’ve heard but never really listened to, and the lyrics caught me, as I think they pertain to the problem you keep writing about:
Features distorted in the flickering light
The faces are twisted and grotesque
Silent and stern in the sweltering night
The mob moves like demons possessed
Quiet in conscience, calm in their right
Confident their ways are best
The righteous rise
With burning eyes
Of hatred and ill-will
Madmen fed on fear and lies
To beat and burn and kill
They say there are strangers who threaten us
In our immigrants and infidels
They say there is strangeness too dangerous
In our theaters and bookstore shelves
That those who know what’s best for us
Must rise and save us from ourselves
Quick to judge
Quick to anger
Slow to understand
Ignorance and prejudice
And fear walk hand in hand…
I’m starting to believe that the problem is the human ego and its need to push the views of the generations that molded said ego. From child to young adult we are constantly being molded by the environments we live in, the people we are raised with, and so on and so forth. I remember asking as a child: why do we waste so many materials? Why would you say one thing, then turn around and do the exact opposite of what you said you were going to do? I think you get the picture; the list goes on. People, since E 1.0, have fed on emotion, not efficiency; it’s about how they feel and how those feelings project themselves onto their environments. We have progressed as a race, but not by efficient means. I believe there is an answer to said problem, and when that answer is found I don’t think it will be an easy pill for large-scale society to accept, because it will be more than just being happy or content.
Separate thought. In the long run I think this blog will only stand to reinforce your artistry. You say things that need to be said; you put forward problems that need to be thought about. And that is the greatest gift that you can give the world. I find this blog is challenging me in ways I never thought possible, from my thinking to putting my thoughts down for others to read in a communicable manner. The last bit, as you can tell, I’m still working on. So, in summation, go and have a beer or two and stop shitting your pants; the problem is put forth, now let’s try and find a solution.
That’s a great tune. But, contrarian bastard that I am, I’ve pretty much disagreed with Neil Peart on pretty much everything since I was fourteen years old, including how to pronounce his name!
Being that I’m a budding musician, I try to learn from as many artists in as many different genres as I can… I find I’d rather be a complete artist than half of one. Personally I DJ electro & trance and produce the same, in my little mock studio, with my dog to do my critiquing. I don’t know his personal philosophy but I will make a note to educate myself tonight. It’s funny: after my fiasco with university my uncle sent me to London for a year to study music before I came home. Great education, that place. So now I sit in good ole West Plains, MO, pop. 10,000 and as close-minded as they get, and practice. And believe it or not, West Plains has been just as good to me as London was, considering I’m in BFE. As far as the lyrics, I thought they might help to illustrate the point of what I wrote: that this is a problem of the ego getting in the way of humans being completely efficient.
Well, I tend to agree, but all need not be so bleak. Technical innovations never happen in a vacuum. The shape and form of innovation is determined by its social/cultural context. I fear our culture more than possible future evolution, technical or biological. We do suffer from Homo Chauvinism. People during the 19th century were freaked out by the idea of non-human ancestors. We could suffer from the same 19th-century aversion in reverse: we are freaked out by the idea of non-human descendants. Evolution happens, we originated from non-human ancestors, we will evolve into non-human descendants. The real question is what kind of creature we will become, and I think this is what Scott is getting at. I have to agree, if the forces shaping our future evolution, such as market forces and consumerism, are the context guiding future change then it all looks bleak indeed. We will evolve into horrible creatures, not that we will care anyway. However, there is no certainty that our modern world will be relevant in the future. The same forces that would lead us down a terrible path may themselves become irrelevant by future developments.
I’ve never looked at it in this context before – interesting. I think this is where the problem of nihilism really nips at our cognitive heels. All our moral intuitions are inextricably linked to our existing circuitry, including those that militate for and against ‘chauvinism.’ I’m not sure what this means, other than we are bumping up against the cognitive abyss. At least we could recognize our prehuman ancestors well enough to be insulted!
Ah, but the 19th-century people couldn’t change the past. Our aversion to our non-human descendants leads me to think we will not have non-human descendants. Thank goodness for markets. Catering to our desires and preventing the apocalypse.
How could markets go wrong, Gareth? Or at the very least, what sort of change could occur where you’d say it’s not a market anymore and it’s gone wrong? Or are they invulnerable to such change?
Evolution happens, we originated from non-human ancestors, we will evolve into non-human descendants.
As the universe would have it, yes – but as the universe has done it, it was slow.
So, what happens with tweaks? Are we going to pretend to ourselves it’s the will of the universe that such evolving occurs?
The thing with Darwinism is that pretty much everyone can understand it (even if they don’t agree it occurs). Which means we’ve outsmarted the universe, to a certain degree. We’re ahead of the big game, to an extent.
But the forces that would drive tweaked evolution, which wouldn’t happen over millennia – it will fucking happen over years (or fucking days!) – who the hell will know what drives those forces? The universe will, of course, because it is causality, clicking along as always. We, or whatever tool-using creatures lie ahead, will go back to being under the universe’s thumb, no longer having outwitted the big plan. And no doubt pretending themselves liberated and more free whilst doing so.
“The thing with Darwinism is that pretty much everyone can understand it (even if they don’t agree it occurs). Which means we’ve outsmarted the universe, to a certain degree. We’re ahead of the big game, to an extent.” – Callan
I agree with this, and something about it worries me. If our brains work in a non-representational manner (RSB) and are effectively self-models with no necessary direct correspondence with anything “Real,” what does it mean that we can perceive the processes by which they evolved and are changing? Are we deluding ourselves that this is the case (i.e. evolution fits what we know now but may be refined in future), or does it imply our understanding of this process gives us some kind of external measure of a portion (maybe a very small portion impinging on us) of what must be “Real”? Why can we see and understand the process of evolution? It seems strange to me that this need be the case. I apologise if this is not making sense, as I am still trying to clarify what I mean myself; I am probably missing something and being quite naïve here… I am trying to get at an answer to the question: is evolution a window outside phenomenal perception, or just a juggling of the contents of our subjective view? I think the first, possibly… but I don’t understand this well enough to clarify yet.
Sort of smacks of Hegel’s discovery/belief that he was a crucial stage of self-aware thought because he saw the process – then evolution sort of corrected his interpretation somewhat. Maybe something drawing on but extending evolutionary theory will be forthcoming…
I am no sort of expert on this so if anyone thinks I don’t know what I am talking about – they are probably right – correction appreciated!
Adding to my other problems, the difficulty I am having completing Gödel, Escher, Bach is leading me to suspect I may be a skin-spy 😉
As much as I think you’re lending far too much credit to neuroscience, I won’t keep flogging that dead horse.
Now I’m all wrapped up in imagining your future. In a world where neuromanipulation exists, what fate can we expect for humanity? As our respective cognitive frameworks begin to diverge at unprecedented rates, should we expect them to then contract again, into neurologically similar subspecies? Evolution suggests this, that certain neuromods will prove so advantageous that most individuals will select them over others. But how extensive will the differences be? Enough so that tribal, or racial, identities centered around the particular alterations in different groups’ neuroarchitecture become predominant?
And now I’m envisioning a true semantic apocalypse, where the fundamental way each individual engages with the world is so divergent from his or her peers that communication is largely impossible. Lacking a common form of life, everyone would be like Wittgenstein’s lion.
I’m not sure why. Either you think there’s some principled limit on what neuroscience can discover, or you think I have some definite timescale in mind. I hope the timescale is drawn out for as long as possible, but like I mentioned in my earlier reply to you, we are already mucking around with a number of interventions (like transcranial magnetic stimulation or deep brain stimulation) even though we only have the crudest understanding of the circuitry we’re manipulating. We’re not going to wait; we’re going to barge in like we always do in science, if not out of therapeutic necessity then because the default is always to think we know more than we do (and then learn a shitload from the inevitable errors). Once again, think of the difference between the phonograph and the Blu-ray, then remember that the pace of technological change is accelerating. What will the Blu-ray version of deep brain stimulation look like in a century’s time?
For some reason I just got an image of a Michael Jackson brain!
They have a TMS at Vanderbilt. Did they ever strap you into the damn thing? I’ve been to Nashville twice but never wanted to bug anyone about it. My guess is they don’t just let people zap their brains for shits and giggles. Unfortunately they don’t have one where I work.
Yes, we cannot alter our past evolution, but our future evolution is something we have a say in, for better or worse. That is the troubling question. I am really a pessimist, but buried inside there is always that optimist that dreamed of a utopian science fiction future. My frustration is that we could create something better, but will choose not to. History has shown a tendency of ever broadening our concept of kin. Women are equals, slavery is wrong, children have rights, and we care for the sick. Slowly we have broadened our idea of who is deserving of compassion and life. We no longer just accept that certain people are to be left out to suffer and die just because we have no immediate interest in them. This is good, I think. But will that trend continue in the future? That is what really worries me. I do not care what shape and form our future takes, as long as the trend of broadening inclusion and compassion continues. But the forces that will drive our future evolution might turn us into ugly creatures indeed. Incapable of connection unless there is an immediate interest, economic most likely. A form of economic free-market tribalism, driving some to the deepest despair, loneliness, and poverty. A form of underclass made of those who could not compete or stay relevant in an uncaring system. That is the worst nightmare, I think.
The article on extraverts was interesting. Having worked with mentally handicapped criminals for a while a few years back, it brings up a whole lot of familiar issues. You can tell it was written by an introvert because he’s basically saying “Yeah, you can’t help but believe in free will, it’s how you were programmed, sorry.”
I’ve never been one for free will, and the concept of accountability has recently begun to frustrate me. Mostly because I’ve begun to notice that people tend to use ‘you need to be accountable’ when they want something from you.
But while I’m not a cheerleader for free will, I do believe that choice now weighs heavily upon evolution. I do realize that this is a major contradiction. A big part of me thinks that the reason extraverts are predisposed to believing in free will is that in order to be an extravert you need to put certain things to sleep. In order to focus on the social (and to be successful, or what the article calls ‘warm’) you need to either answer with certainty or ignore completely the question of nihilism and the question posed by the existence of the self. I imagine that metacognitive types are almost exclusively introverts for a similar reason (may be wrong about that).
But if we barely understand the human brain yet, how much less do we understand the potential of education? This is taking education to be a sort of ‘indoctrination to a shared reality’, and anything that clarifies that reality and gives us greater understanding and appreciation of its intricacy and the potential of our own roles within it.
Or maybe this is just me struggling against the meaninglessness inherent in a completely predetermined reality.
Which meaninglessness? Sometimes I think we’ve entered so much into a work hard/play hard culture, we’ve lost contact with gentle play. Like enjoying the sunlight or walking in the park, the taste of fresh fruit, etc. Yeah, I know none of these make up for the grinding work, but that’s exactly my point about ‘work hard’. It’s like we strive for some staggering amount of meaning simply because of the pain and suffering in our lives, which at one point was inflicted by nature, but now it’s inflicted by men in business suits so as to keep the peasants motivated. Once the threat of nature has already been removed, hard work is not a virtue, it’s a vice.
“Once the threat of nature has already been removed, hard work is not a virtue, it’s a vice.”
Agreed.
There is very little genuine gratitude in the lives of most people I meet.
Both of those articles were great. I can’t see the changes with the legal system that the first article proposes sitting well with the general populace. A lot of people (myself included, often enough) look upon the legal system as more of a punishment-dispensing device rather than a means to remove dangerous people from society. Like it’s some sort of reward for the well-behaved.
As for the second article, yet another reason I’m glad I’m not an extravert! I’m so relieved! It’s like I’ve won some sort of magical belief lottery!
This is likely the first great challenge of E 2.0: squaring our knowledge with what some consciousness researchers call the ‘manifest image,’ our experience as fixed by our neurophysiology. (I wrote a short article on this for Tor.com some time ago… I should find the link.) It’s part of the reason why I’m so pessimistic: since even the most basic things we’re discovering seem to be well nigh indigestible, our culture of denial seems doomed to drift further and further into self-aggrandizing fantasy. This means the consensus required to mitigate or direct things like ‘creeping manipulation’ or ‘creeping auto-modification’ will likely never materialize.
litg, why do you want to mete out punishment? Some urge inside?
Like say we have two set ups – in one, the offender is punished real hard, yeah! In the second, he isn’t punished as you would call it, but as it turns out the way he is treated sets him up not to repeat the offence ever again in his lifespan.
I mean, in this made-up scenario, if the urge to punish is actually more important than the offence never happening again, doesn’t it turn out that the urge to punish is simply a desire to practice sadism on others? Or, as you say with your reward for the well-behaved, simply a self-validation device, bought at the misery of some.
Is this the short article you talk about – A Fact More Indigestible Than Evolution (heh)?
Here are the links:
part 1 – http://www.tor.com/blogs/2009/11/a-fact-more-indigestible-than-evolution
part 2 – http://www.tor.com/blogs/2009/11/a-fact-more-indigestible-than-evolution-part-ii
Just kicking around. Decided to leave some thoughts over coffee as I spent some moments reading over the last few blog posts and the discussion. Though, honestly, just thoughts; I feel completely out of my depth when I peruse Three Pound Brain.
“Even though we cannot agree on the answer, we all agree on the importance of the question, and the shapes that answers should take – even apparently radical departures from traditional thought, such as Buddhism. No matter how diverse the answers seem to be, they all remain anchored in the facts of our shared neurophysiology.”
“Pinker’s Blank Slate (which I don’t particularly like) has a long list of things that all humans in all cultures share – largely if not entirely due, no doubt, to the fact that we share the same neurophysiology.”
Perhaps, these ideas are reasons to suggest that incrementally, the beginnings of the biological semantic apocalypse are already past.
I have to suggest that brains are all unique in their electrical architecture. The same function in different individuals usually has a subtly distinct cortical representation; the cortical map for the function of moving your index finger is going to be in more or less the same area, with more or less the same structure – however, unique. So instead of a universal natural neurophysiological frame, I might propose a universal natural average of neurophysiological frames as the foundation for our philosophic Humanity, just based on the idea that our brains seem to organize around some undefined neuroscientific law of averages.
Of course, this might simply further suggest that, as our level of technological manipulation evolves, these laws of cortical organization, emergent from brain structures not already plastically affected by the nurture element of our perceptive experience, will inevitably find themselves reconfigured as well.
One of the first revelations I had in my purview of Neuroscience – specifically supplemented by Bakker’s books – was that, due to the plastic nature of our brains and the idea that experience is entirely defined by electrical structures in our brain shaped conceptually by culture, society, and language, all humans live in literally, physically individual realities, based on our sensory perceptions and our awareness and interpretations of them.
To apply this to the discussion, the idea is that we already define our physical experience based on our attention and our actions, as this has the neuroplastic ability to reshape and sharpen our existing perceptions, to change the very electrical structures apparently responsible for our existing functions, our selves.
Not to mention the effects of social and cultural prosthetics, which alongside the use of language or dominant appendages, are probably largely responsible for the shape of cognitive function as it is.
So we currently exist as a species of humans who probably already couldn’t find neurological recognition past a certain distinction. It remains to be seen if neuroscience will discover the reasons for our averages of cortical representation for human functions before we can decide to manipulate those mechanisms.
“This is likely the first great challenge of E 2.0: squaring our knowledge with what some consciousness researchers call the ‘manifest image,’ our experience as fixed by our neurophysiology.”
This all seems to lead into two ideas.
Firstly, the more we invent tools based in the neuroscientific disciplines, the more they will be used to physically ingrain our existing philosophic ideals. I then might cast Neil in the light of a simple Neuropsychologist, due to his tweaks, within a future Neurosociety. If everyone simply wishes to alter their physical experience to resonate with their (then immediately archaic) idealized conceptions – the nature of our experience will be indefinably altered, the shape and taste of experience changed subtly, in ways we cannot imagine, with every neurotweak – then the changes in our neurophysiology will reflect our abstract desires of the now, differing from our philosophic desires post-tweak. We’ll seek constructed change in our experience with our current wants and needs in mind, and associate based on that changing internal recognition in the future.
So without even delving into the deeper plastic issues Bakker seems to be poking at, the future will already, inevitably it seems, use humans to physically embody the dialectic of ideas through history in a novel fashion. Now instead of parchment and rhetoric, we’ll shift to brain matter and example – anyone, individually or collectively, in the game in the future might simply become a Bene Gesserit searching for their Kwisatz Haderach. It serves to illustrate, I think, that even without altering the deep structures possibly responsible for certain averages of cortical representation for human function in the brain, the paths of our neurological futures are already as diverse, as deep, as our existing conceptual capacity for human philosophy, religion, and myth. Not to mention how these conceptual disciplines will evolve after Human Brain 1.1 through Homo Infinitus apply their different abstract understandings to theory.
Second, efficient neurophysiological architectures (IANFs?) resulting in long-lasting change will, initially, turn on the shared fundamental principles of neuroscience that allow for and are responsible for our current universal natural neurophysiological frame – or at least the universal natural averages of neurophysiological frames – because fitting into these substrates may allow for less naturally rejected rewiring.
This brings up another big point.
There’s a neurological proposition, an easy one even, that suggests what we experience as “our bodies” is actually simply a cortical structure that fires based on our sensory representation – a neurological body-image. V.S. Ramachandran has used the idea of this cortical representation to explain a plethora of disorders, as well as to stimulate the experience of tactile sensation in someone touched on a prosthetic arm, or to alleviate phantom pains by means of visually tricking the brain.
This, too, lies far closer to hand as a mechanism for change than what seems to be the inevitable technological and semantic culmination in Bakker’s argument, as we proceed to affecting our deeper imperatives.
Manipulating this body-image efficiently, and with the fewest negative experiential side-effects, will be one of the first avenues of research, because so many of our aspirations seem to turn on physical augmentation. It also seems it will be one of the most drastic indicators of change.
If our sensory experience and our awareness of it provide for most of our conceptual understandings, then a change in how we perceive the world physically will result in some of the most drastic changes in the ways we conceptually, semantically, define our reality.
Lol, I’m done. My computer is pretty much a zombie, the living dead, after a table collapsed under my laptop, soaking it with mixed alcohols a couple days ago, and I’ve almost lost this twice now.
This all prompts me to suggest that Bakker write a full-on space opera after the Second Apocalypse. I mean, bring on the standalones and Disciple tales, but I’m really interested in Bakker’s full-on science fiction perspective.
I’m not sure if these were the articles you were talking about, Bakker, but:
http://www.tor.com/blogs/2009/11/a-fact-more-indigestible-than-evolution
http://www.tor.com/blogs/2009/11/a-fact-more-indigestible-than-evolution-part-ii
Just rereading some stuff. Zach, I believe there is a whole lot humans could be doing now to enhance our understanding and quality of life without resorting to anything so drastic as ditching our biology. We don’t even fully understand it, or what constitutes our natural, innate capabilities, and yet you’d counsel that we just do away with it all, simply because you think we know better?
By the way, Bakker, I like “No bells, just whistling in the dark” but I did like “Where humans are the problem.”
Peace all.
I agree that technical innovations are determined by the social/cultural context. Still, to be more precise, we would have to say that society and technology co-constitute each other, that they are each other’s ongoing condition or possibility for being.
Heh – I posted this Atlantic article in a thread over at westeros.org. Glad to see I wasn’t the only one who thought “Bakker is going to love this” when I read it. Perhaps “love” isn’t the right term, but you catch my drift.