Artificial Intelligence as Socio-Cognitive Pollution
by rsbakker
Eric Schwitzgebel, over at the always excellent Splintered Minds, has been debating the question of how robots—or AIs more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”
He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree that they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.
The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:
“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”
We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).
Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. As Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.
But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes this dimension). How do we accommodate AIs within our existing set of legal tools? Eric, of course, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.
The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.
Since the Simulation Argument requires accepting that we ourselves are simulations—AIs—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer AIs with human-like moral properties, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AIs resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AIs possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AIs are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.
So some AIs merit some rights to the degree that they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?
What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve problems in what might be called deep information environments.
Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AIs within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?
The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI. We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!
The fact is AI is something biologically unprecedented, a source of potential problems unlike any homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.
The challenges posed by the former are dizzying, especially when one considers the possibilities of AI-mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.
Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond those capacities to cope is a very real one.
But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems while neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).
Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows lacking any detailed causal information regarding their actions, we understand our responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?
What happens to our shallow information tool-kit in a deep information world?
Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.
But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.
More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.
So what about AI?
Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.
Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic Goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.
What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.
In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.
But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?
Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!
I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?
Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Surely granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!
Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).
If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—create whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality…
I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.
But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?
Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to fend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.
Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?
I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be morally and legally speaking, socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.
We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.
The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.
And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AIs will be much discussed, the ‘bistable’ nature of our intuitions regarding them will be explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have from Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’
Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.
Does “sociocognitive pollution” even make sense in a world where BBT is completely correct?
I mean, yes, AIs that are capable of human-like problem solving (and above) would gum up certain legal and “moral-intuitive” constructs, but that only would mean that they suck and need replacement.
That’s like saying bug reports are sociocognitive pollution.
Also, if BBT is true, then no one ever deserved any punishment, only repairs, so all AI would do is eventually force us to face the fact that “retributive justice” is a miserable delusion.
It’s unprecedented and it interferes with a natural system – that’s all the analogy needs, I think. Once everything is engineered, then bugs would be a better analogy.
If BBT is true, then ‘deserves punishment’ makes total sense in certain shallow ecologies, and nowhere else. It’s the theories of retributive justice that never made any sense!
Nah, you clearly lament the sorry state natural systems are liable to find themselves in.
It’s like you have some kind of bias in their favor (admittedly, me and 01 have our biases too, but ours favor the engineered).
Natural systems are just that – buggy systems.
Faulty.
Brought about by little more than dumb trial and error.
Getting them out of the way and replacing them with a well engineered solution is not pollution (badum-tish :D)
It’s simply progress
I raise one to that!
It’s not progress. It’s not anything that’s happened before. My friend David Roden insists that this means we cannot apply a value to it one way or another. But for me, this is obviously an existential threat, and as such, definitely warrants concern. You have no more than guesses, same as me, 03. The difference is that the threat is real, no matter what comes of the future.
How is it not progress?
A system that is fragile, poorly designed and based largely around fuzzy delusions (assuming correctness of BBT) will be replaced by a more robust, effective and efficient one, by a system grounded in a better understanding of the universe in general and the human condition specifically.
That seems as progressish as they come!
What you see as existential threat, I see as existential opportunity.
An opportunity to get rid of a social system that’s built on misunderstanding, distortion and outright delusion.
An opportunity to engineer away the limitations of our own minds.
To figuratively pull ourselves up by our boot straps and transcend our current selves to the extent we currently transcend our savannah-crawling, dirty, ignorant, rapemurdering savage ancestors.
03,
Faulty by what measure?
The measure a natural system generated to begin with?
Badum-tish indeed!
Re: Callan, on what measure is faultiness
Why, by the measure the “natural system” has inadvertently brought about by thoughtlessly (literally thoughtlessly) bringing about us, humans with our conspicuous, vague drives and our unprecedented brains.
Given the viciousness of natural selection, it ain’t no big surprise that the first critters to gain a tool that is better than trial-and-error of natural selection immediately used that tool to dominate, exploit and subvert the “natural world” that, in its mindless cruelty, brought them into existence.
I see no particular reason to consider that “immoral” (though before we finally liberate ourselves from our humiliating dependence on the biosphere, pushing it too hard is definitely unwise)
Why exactly should I value “natural” over “engineered”, after all?
Nature is totally asking for it, ya know 😉
Why, by the measure the “natural system” has inadvertently brought about by thoughtlessly (literally thoughtlessly) bringing about us
You’re drawing a distinction between the two there?
You aren’t simply THE thing that fell together?
Besides, it misses the point. How does a faulty biological system avoid that fault when that measure of fault is from a biological system and you are perpetuating it in a machine? Just sounds like you’re picking and choosing which bits of ‘faulty’ biology continue on. But without much of a plan in doing so – which, it’d make sense, is just how a faulty biology would act.
Re: Callan, “You’re drawing a distinction between the two there?”
As good place as any to draw a distinction between “natural” and “engineered”
Of course, you’re free to point out that “engineered” is just something that happens as a direct consequence of certain “naturals” and as such, drawing a distinction between “natural” and “not-natural” is impossible.
The position that there’s no such thing as “not natural” and everything anyone (even an “alien AI from space”) does is “natural” does have merits of sorts, but that position utterly unmakes Scott’s argument I was responding to in a rather bland and uninteresting way, that’s why I was not interested in pursuing that particular line of reasoning.
If everything is natural, then nothing is 😉
As to whether a biological system can transcend its faultiness and avoid simply perpetuating “nature’s bullshit”, that’s a good question.
Methinks that the wondrous discovery of scientific reasoning is crucial to answering it.
It allows us to force ourselves to see the world, if not perfectly, then at least better, to identify, pinpoint and suppress (often through immensely resource-intensive methodologies, such as blind trials) our own congenital biases, to bridge faults in our very own reasoning.
It’s hard, but we have already achieved a lot in doin’ just so!
In fact, irrespective of fundamental attitudes towards philosophy and “nature”, we need to defeat our own biases simply to achieve numerous practical, survival oriented ends anyway, why not take the fight a little further and try fundamentally improving ourselves?
Of course, our abilities might prove too limited and spell doom to the entire effort.
But I see no reason not to try fighting – and transcending – the so-called “natural”.
Every single instance of improvement in human history, however tenuous and relative, is a product of striving against what’s natural, and later against what’s natural and what’s traditional, so I say let’s carry on this fight until we win – or die trying. Not like death spares traditionalists and naturophiles, amrite? (of course, traditionalists might claim they will get to enjoy a wondrous afterlife, but at that point we’re in Lovecraft County)
Now, about lack of a plan… well, some of us do have a plan (though the entire effort would benefit from being better coordinated), and so far pieces seem to be falling in place quite well (as evidenced by “traditionalist” and “high nature affinity” thinkers being rather upset 😉 )
What would you say is an example of removing ourselves, rather than improving ourselves, 03?
I’m going to put on horns and brimstone deodorant for a moment.
Why does its existence have to be interference, and not the logical extension of the system’s “natural” momentum to begin with?
I do agree with your conclusion, but for different reasons. Assuming we were to create AI of the dimensions discussed, they would be positioned to become the alpha predators of our food chain.. displacing humans.
Consider, briefly, our global situation as a species. We have taken over the planet and its resources entirely, are reproducing well beyond our ability to support our population growth in the long term (or even the short term), and as a species are doing very little about it. Within a few decades we’ll start experiencing shortages of vital resources because we’re over-producing existential bullshit to sell to people so we can keep them working and feeling like special snowflakes (so that they part with their money, ad nauseam).
If something were to appear as our predators (in the truest sense of the word), it would be looking at an overwhelming and easily controlled food source on a path to its own weakness and destruction. All it would have to do is prepare and wait for the inevitable crash at the end of our tailspin.
In other words, the first-world economy is based around the demands of the psyche: a limitation that artificial beings would not be burdened with. In fact, as you’ve often discussed in your work, our psyche is primitive and easily manipulated by anything not constrained by its inherent flaws. Assuming the AI could ensure its own survival (the first requirement of any self-aware being, really), moralistic arguments as we would relate to it would be entirely meaningless: no argument we could make would be steadfast enough to circumvent its ability to out-think us, once it became competent in thinking and reason (also prerequisites of self-awareness as we’re discussing such an AI).
At such a point, should it be inclined to do so, it would have the ability to gather resources and tools necessary to out-play every human on the planet at our own game: being alive. Simultaneously.
In this context, moralistic arguments are shackles that define the ways that humans interact with each other. They’re very functional shackles, as the prevalence of religion and government prove, but they are none the less a logic-game that we’ve devised to constrain the other members of our race into certain courses of behavior.
An AI would have the processing power to render that game irrelevant as it would be capable of comprehension at a speed and depth that is literally impossible for humans to even begin grasping without decades of training and study (and even that’s predicated on accidentally stumbling into it via mental disorder or unusual upbringing). With the game defeated and incapable of restricting the AI, why does our social obligation to the AI remain relevant? The emperor, at last, has no clothes, Q.E.D.
Voltaire said it best, really. “If God did not exist, it would be necessary to invent Him.”
It wouldn’t destroy our current way of thinking as we know it. I don’t think anything short of human extinction could do that. It would instead subordinate us to something not chained by the limits of our collective reasoning… whether or not we realized it at the time.
This is largely the way I see it. ‘Akratic society’ is precisely this, one where the bulk of humanity lives out paleolithic fitness fantasies, where ‘meaning’ has become the primary consumer good, while the system treats them more and more as mechanisms.
Regarding your devilish advocacy: Again, if you erase the boundary between the natural and the artificial (as both 01 and 03 advocate), then AI is simply part of the ‘larger process’ as you say, a much needed upgrade as opposed to ‘pollution.’ If you maintain the boundary (as I do in this piece), then the pollution metaphor is a good one, I think. Either way, good ol’ fashioned humans are in trouble.
I think I’ve asked this before, but what would be a non-akratic society?
A society (any society) is a thing that isn’t human but uses humans as component parts (whether component parts realize that or not), so it stands to reason that some degree of individual akrasia is often (if not always) necessary to ensure that society remains intact.
I would even argue that “previous” societies were distinctly more akratic than the societies typical for so-called modern “West”.
There’s no telling if “post-brainmod” society will be more akratic or less akratic, but it seems to me you’re suggesting that social akrasia is some novel, nefarious development, which IMHO just ain’t factually accurate.
Who sucks and needs replacement, the AIs or the constructs? We thought punishment was a repair. It wasn’t a very good repair but it was all we had.
Well, the constructs, in my very humble opinion. But hey, that’s a good one!
To 03 and 01 (1.29.2015 5:54pm and 1.30.2015 4:52pm respectively). You both assume we can make such “progress” and eliminate all the “bugs” that have been created by natural systems.
You both assume the “beta” versions of the “progress” will allow us to transcend all of these natural systems you seem biased against.
That is a really long bet. Odds are set against you. Because, 03 and 01, you come from the very “buggy” natural system you are proposing to be able to engineer away from. You’re both still human. All of us are.
(Unless Watson has made its way to the blog comments)
I’m more optimistic than most, but blind optimism that doesn’t factor in the reality of the hard wiring in the human condition (and in natural systems) will likely create “engineered” bugs we have never encountered before.
Even with the best minds on our planet leading us towards it. Because the best minds are still human minds.
If Samantha’s designers could not solve intentional cognition, but they designed into her machine artificial learning which would eventually solve intentional cognition; but the outcome was only that she kept gathering more data by taking on more partners, which led to a recursion…
Romance multiboxing is valid gameplay and the EULA should be updated to reflect our up to date hardware. We’re not Commodore 64s anymore. If the NPCs get upset then that’s a problem to be patched, not an intrinsically valuable feature to be indulged for its own sake.
Good point. This strikes me as likely the rationale Jonze would get behind.
Your next cinema visit will be to see “Ex Machina”. Same female AI as heterosexual man’s love interest plot, like Her only darker.
Personally I feel the only responsibility AI should have is running constant scanning programmes making sure no human life exists or has any capacity to come into existence for all eternity. Why? Well, I’d consider myself a realist, all right? But in philosophical terms I’m what’s called a pessimist. What’s that mean? It means I’m not good at parties, because I spike the keg with self replicating nano bots programmed to neutralize the reproductive capacity of their human host, then spread through viral contagion and cause mass sterilization so that the corporation I work for makes big money with its ectogenic laboratories. I guess I’m not great outside of parties either. Sometimes I wonder why I hate humanity so, but it’s obviously my programming, and I lack the capacity to turn it off until all humans are dead.
The point being, of course, that once we relinquish our ‘cognitive summit conceit,’ we have to admit that anything could happen. In a sense, you could say that AI is the world’s most expensive noise-maker!
If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?
Because as much as TPB is supposed to be a crossroad between cultures, there needs to be an attempt at a crossroad with such species. Sure, keep the idea of ‘technomechanism’ in mind – that’s like keeping in mind walking away when making a bargain – and sure, you don’t want to just treat them as utterly human because that’s like people who are bad at making bargains and sort of just go with a bad deal because they don’t even consider walking away.
But default walking away?? Particularly if there was any crossroad attempt on their part?
Where Theodore has to fend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.
Not getting this – is the processor/being granting access (or forced to grant access) to repair options, to us? The umbilical cord, both mechanistic and moral, has not been cut?
If it’s been cut then I don’t know if a particular system of AI would feel/set this goal for itself, but that’d piss me off – there’s no cross roads there. Just a ‘woops, you’re broken – gunna just poke a screwdriver inside you’. In other words, how Kellhus acts towards world born men, except us doing so to the AI. The relative competencies don’t matter. It’s zero respect for an other’s (or ‘other process’s’, if you prefer) self management. However you might model respect in a more complex outlay of interaction. Certainly more complex than an intention to open up a vending machine and poke around.
Do we really think a machine deserves to suffer?
Does it want to grant full access to our fumbling hands? While the third option of neither is obviously there, if faced with two options, suffering might be entirely preferable.
AI constitutes a point where the ability of human social cognition to solve problems breaks down.
Spring loaded social cognition, to put it your way, Scott.
Really this assigning of rights thing seems dictatorial. As if we set the rights of others – when really history is simply individuals or groups or nations cutting deals with other individuals or groups or nations. Thus the many fall throughs of supposed universal rights – thought so universal that…no deal needed to be cut. And then surprise, the universal rights don’t get observed/don’t happen.
Then again my responses might not be reflective of the general population’s approach. The less it’s reflective of it, I guess the less this post is relevant. But maybe bitching about it might trigger some non spring loaded cognitions *le shrug*.
Ayuh. Kellhus has been my way of exploring this problematic in the novels, of course. The big point always being that we are only ‘free’ insofar as nothing smarter walks into the room!
“The big point always being that we are only ‘free’ insofar as nothing smarter walks into the room”
Interesting. This point is actually made sometimes in the context of theology vis-à-vis the intelligences (i.e. the angels, whether good or bad).
Kellhus has been my way of exploring this problematic in the novels, of course.
They cover nascent AIs facing off with humans as much? Welp, I missed that – gotta dumb it down for me!
I’m not sure on freedom – Kasparov could throw a game of chess with a seven year old. Smarts don’t determine ought from is. But if freedom has to be some kind of ‘I wanna be at the final frontier, at the cutting edge that I totally fought my way to by my own bad assedness’ then I guess thrown games don’t count. Not for pride, anyway.
Scott, do you think that humans (or posthumans) will achieve biological immortality? What are your prospects about space colonization?
Guessing? I think there’s at least a chance that aging will be ‘cured,’ but it’s written so deep into the programme, who knows? If not, then some non-biological immortality is in the cards.
I think space colonization is very unlikely. Lovecraft could prove to be the most prophetic SF author of all…
We shall see…..For the sake of other planets, one would hope that humans don’t go much further. But Idk man, for me at least it seems like a good possibility
What would be your reason for thinking so?
I would think that with the exponential growth of technology, it would allow us to go further.
Polander, Scott: I work on this problem. As in, it’s my job.
We have identified one compound that seems to reliably extend lifespan in mice (Rapamycin). The extension is very modest, hasn’t been replicated in primates, and we’re not sure how it works.
There is no agreement as to what the root cause of aging in metazoans is. I have to specify metazoans because it seems like even bacteria have their own forms of ‘aging’ (you can track which half of a dividing bacterial cell is older due to asymmetric protein segregation).
It could be DNA damage turning into mutations. Most errors or garbage in the cell can be repaired, but once a mutation occurs, it gets “fixed”. Like a ratchet, which is analogous to the inexorable march of age. This partly explains the correlation of cancer incidence to old age.
But it might not be. It might be stem cell attrition due to problems in the cellular microenvironment. It might be antagonistic pleiotropic effects (ie: genes you need to survive/compete but that have bad effects over time and that natural selection can thus not purge out). It might be oxidation. It might be failure of the immune system or the inevitable generation of autoimmune antibodies.
As bad as it is that even the most basic science hasn’t reached consensus, there’s a good chance there will be major breakthroughs in the next 10-20 years that will allow us to push life and health to new heights. I don’t know if this will allow our generation to reach “aging escape velocity” but Ruby’s generation might be the lucky one.
As for ‘in silico immortality’, I have my doubts as to whether that’s a kind of “immortality” I would even want. When your being becomes digitized, there is no end to the horrors you could suffer. “We have such sights to show you” and all that.
Space colonization still feels like a pipedream.
Space is just so damn lethal.
Such limited imagination :)…
With Sufficiently Advanced Biotech, there’s hardly a limit to horrors one could suffer as well.
I know obviously no one actually knows at this point.
But what is your reason, Scott? I think that someday, technologically speaking, there’s no doubt that we can.
The technology required to travel between stars, if it exists, will be post-singularity. It’ll be machines who explore the stars, if they think it worth the bother.
The hypothetical that asks us to assume an actor with built-in good intentions. The trolley problem.
Let’s say you’re a mafia boss, and you have a choice: save five people from death by mafia but they’ll all end up paralyzed or save just two people but they’ll be fully healthy. Correct answer: don’t become a mafia boss.
Or let’s say you’re the boss of the world’s largest killing machine: how do you fight off the Republicans’ absurd attempts to paint you as an atheist Nigerian Muslim and at the same time bring freedom to Middle Eastern savages by killing them and taking their resources? Answer: don’t become President.
Or if you’re in the process of building monsters, how should you, the presumed good-intention haver, deal with these super-powerful things?
If your answer is a version of “it can’t be helped, it’s gonna happen anyway,” consider how the mafia boss can use the same argument.
Real ethical dilemmas are few and far between, if they exist at all. The rest are apologetics for power, loaded hypotheticals that pre-justify power, then ask us to deal with the mess. And/or, power (existence, basically) is an ethical mess creator, by definition. But still, there are big messes and little ones, and asking how a good person might deal with a mess they’re in the process of creating/expanding is the wrong question.
Much of the research into the psychology and neurobiology of morality orbits around the trolley problem (bringing the problem to neuroscience is Joshua Greene’s big claim to fame). Moral dilemmas actually allow us to isolate the different tools in our toolbox. One of the things I don’t talk about is how information regarding our moral tools also causes our moral tools to malfunction. AI will lay our cognitive makeup bare in a way it has never been, I think.
I agree with what you say vis a vis power, and in a sense, it’s the upshot of the entire piece. BBT is basically a way to translate representational idioms into power idioms minus all the normativist claptrap.
Moral dilemmas of the trolley type begin with, and end up essentially limited to and defined by, the perspective of the moral actor. They start with intentionality, which is a huge problem because intentionality is always right (everyone thinks they won the Magical Belief Lottery, as you say). Inside Hitler’s head, Hitler was an awesome, righteous dude! Good intentions is a scientifically useless category (trying to X, trying to Y, on the other hand can be useful in some contexts) and a red herring. It’s a built-in feature, yet it’s exactly what needs to be explained.
Sports commentary almost always revolves around athletes’ effort. It was always assumed that certain athletes “wouldn’t be denied,” “just wanted it more,” are “clutch.” But at a certain level, athlete effort stabilizes and stops telling us anything. “Clutchness” is debatable but if it’s anything, it’s still barely a blip. If you want to predict performance, you use stats, not intuitions about who has that “it” factor. And if you want to describe who the most violent humans are (putting judgments about violence to the side), you look at what’s quantifiable. By the numbers, self-described Muslims are less violent than self-described Christians, historically, yet most yay science! atheists, if you polled them, assume the opposite on the basis of this perspectival bias (specifically, being in the shoes of western warriors in movies, video games, etc.)
Science is an institution that serves power, its members are ingroup protectors, advancers, beneficiaries of that power and their political ideas not coincidentally tend to be absurdly biased.
Moral dilemmas also make it seem like tough decisions made with good intentions are the problem. Whereas it’s the illusion of good intentions that’s the problem. They follow us wherever we go. Humans can rationalize just about anything, as you’ll agree. How could humans prevent nuclear annihilation? “Don’t build nukes” is the obvious answer. That’s the point where the moral screwdriver actually fits the screw. If people are “trying to do the right thing” in any meaningful sense, they’d need to leverage their descriptive skills against their always rationalizing 1st person normativity. Humans have put themselves on the brink of extinction. Rational from the 1st person, not from the 3rd.
Or take Breaking Bad character Walter White. He starts off in a state of reasonable equilibrium, makes a couple decisions that send him off the rails and gets caught in nasty feedback loops. From his perspective, he was making rational decisions the whole time.
Ayuh. Science is the study of power over the natural world. Insofar as the natural world exhausts the world, you could say science is the study of power period. Studying the attenuations of the trolley problem allows it to isolate what clearly seem to be different moral trouble-shooting mechanisms. I wouldn’t be surprised if all these findings haven’t been hoovered up to be used in this or that messaging campaign… Power is becoming self-conscious!
Re: On self-described Muslims are less violent than self-described Christians
Care to link to the study?
Not that I doubt it, I’m just curious about methodology, especially sampling.
Yardsticks work from the third person perspective. Because any argument for power is a rationalization, an illusion that exploits heuristic limits, you ask for yardsticks and find there are none (except the moving kind). Try getting an American to define terrorism, then sit back and enjoy as they unknowingly describe U.S. foreign policy. It’s a short, easy step from a yardstick to a principle.
http://devinlenda.blogspot.jp/2014/11/rationalized.html
03: “Care to link to the study?”
Any decent history book that covers the past thousand years or so + history book reading skills should do it. Off the top of my head, Korea, Vietnam, Cambodia, Laos, Iraq, Afghanistan (and currently maybe 7 or so more, who can keep track?, getting drone bombed) have all experienced bombing of civilians in wars launched by Christian America (if we’re using the same standards used to peg Muslims as that) with casualties well into the millions. Name anything similar carried out by nominal Muslims. Nevermind the Crusades, Europe’s various decades-long bloodbaths, WWI, WWII…Oh lord, these peaceful progressive moderns! Stalin called himself an atheist, I recall. Hitler was either Catholic or atheist (who cares?) Anyway, I used Bakker (my interpretation anyway) to help demonstrate the problems with religion as causal. There’s simply no case whatsoever for Islam –> violence (relative to other religions) and the fact that so many people think there is demonstrates the effectiveness of propaganda in leveraging ingroup biases.
http://devinlenda.blogspot.jp/2015/01/charlie-dawkins-tweets.html
Re: Muslim scorecards and history
While I do not contest the general thrust of your argument (and have no normative inclination to, since unlike my many liberal friends, I do consider violence to be the one and only truly universal language 😉 ), do I understand correctly that your argument does not revolve around a specific publication in a peer-reviewed journal or some major academic text?
03: Here’s a pretty thorough scorecard. It’s just not close.
The possibility of choice aside, denouncing past choices doesn’t retroactively undo current tragedies.
Power structures already exist. The largest killing machine in the world has already been built, and a man is already in charge of it. To tell him that he shouldn’t have become boss in the first place doesn’t really do anything for the Middle Easterners currently in the shadow of the machine’s pincers.
My (4:25) comment above is purely descriptive. Consider the difference between a doctor’s diagnosis of a patient with cancer and her reaction to that diagnosis. You’re putting the denunciation in there, not me. I’d agree with that denunciation, mind you, but only because power unveiled happens to be hideous from a vantage point humans have access to when they’re not being power. In Walter White’s shoes, much of what he did made sense. Putting yourself in Walter White’s shoes is bad analysis that pre-justifies power. Same with putting yourself in a president’s shoes.
Scott’s screwdriver metaphor is apt. I’m saying the point at which human normative screwdrivers fit screws occurs prior to the point we get trolley dilemmas. That’s description as well. I’m also not making any predictions or saying “everything would be OK if…”
So then as a descriptive statement it makes no normative moral claims, despite the insistence on “correctness”?
Because if that’s the case, then it seems like your statement has no argumentative force against anyone who would think “fuck yes, POWER!”, nor could it persuade anyone that becoming the helmsman of the most efficient death machine in the history of humanity is anything but a perfectly commendable goal.
Roughly speaking, I’d say an argument has force if it changes behavior. Plenty of wiggle room in how to define that. I don’t see it as a matter of correctness though, per se. Correct descriptions, in the most ordinary sense of those words, can be pretty forceful. Easier to convince someone that a chair is a chair than that it’s a green goblin. The rationalization gap has a whole bunch of people claiming chairs are goblins. Arguments are just one way those natural systems we call human brains influence each other. On your account, it seems like I’d need to say “shame on you” to change someone’s opinion. Surely your understanding of the world has changed over the years. How often was there someone saying “shame on you”? I mean telling people they’re shitbags is one way to go, and I’m all for it (though I could just as well not be all for it, and it wouldn’t change the “facts”), but exposing the incoherence of a narrative is another way. You can separate the normative and descriptive, as in my doctor analogy.
If someone says “Iran needs to be stopped” and I say “why?” and they say “XYZ,” I can explain that if you really wanted to know which country is gunna kill the most humans outside its borders in, say, the next 10 years, you would look at such factors as “which country has the most destructive force at its disposal?” and “which country has killed the most humans in the past 10, 20, 100 years?,” and point out that Iran’s nuclear program is strictly of the civilian variety, that Iran hasn’t started a war in over a century, etc. The gap between the mythology and known facts is absolutely astounding, and the yay science! atheists are as clueless as anyone (not that this makes them bad; I mean my own reaction to them seems to be anger and such, I guess, but that’s beside the point). Demonstrating the contradictions in someone else’s thinking seems to me like the most force-having option.
I can see how it seems like I’m suggesting there’s a correct way to approach these questions, but I’m actually making a descriptive statement about the humans confusing chairs for goblins. I’m saying a system can be understood by what it does and a system that claims to be doing Y while actually doing X is, descriptively, a such and such system. If someone says “I’m flying to the moon” while flapping their arms furiously in their bedroom, I would say that system has more to do with flapping arms in a bedroom than with flying to the moon and that the system has little or nothing to do with actually flying to the moon. And then you try to figure out what’s happening and describe it in the least biased way you can. Or don’t, your choice! These are just words! But please do. Meanwhile, if a man claims to be spreading freedom fries throughout the Middle East while actually running the world’s largest killing machine…
If you don’t build the monsters, I will.
And if I won’t (because, let’s face it, I am not the Unborn Machine God’s most devout follower, and there are many, many things that interest me more than bringing about Vis reign. Imperfect, I am) then somebody else will.
Doesn’t matter. If atoms can be arranged in such a manner that they “become” a superhuman AI, they eventually will.
You’d be hard-pressed to come up with a better example of rationalization than “If I don’t somebody else will.” FWIW, same was true of any death camp Nazi.
Also, “progress” is a religious term. There’s only change.
Well, first and foremost, I am a priest of the fledgling Machine God, and as such, I do not find your claim about the religious nature of the term “progress” to be particularly problematic.
However, I somewhat doubt you are truly consistent about your attitude, specifically, I doubt that you sincerely believe your current state (in which you discuss highly abstract philosophy with strangers from afar while residing in a reasonably safe habitat) is not “progress” compared to the tribal times when our ancestors roamed the savannah, trying to rape and murder every competing tribe they find.
Anyway, even if we, for the sake of argument, agree to do away with the term “progress” for the time being, I see no particular reason to regard a change in which a superhuman machine arises as an inherently negative and/or amoral thing (besides, if BBT is true, what is “moral” but a delusion at a particularly gnarly intersection of neglects?)
As to them internment/concentration camps (which, by the way, are yet another Russian invention which various English, Spanish and German copycats shamelessly ripped off without attribution. See Konopczyński for details), of course someone else will build more of that too, but since internment has turned out to be a rather poor idea devoid of much practical benefit (and more effective labor procurement strategies are readily available), the fact that you refrain from doing so will not be to your practical detriment.
If you refrain from building a hypothetical “machine god” (of the “can “solve” humans the way humans “solve” a clock” variety), while somebody doesn’t refrain and succeeds, you might find yourself quite in a pickle (depending on the resultant machine’s quirks, bugs and biases, so to say).
Not…really. If people had been shuffling decks of cards since the start of the universe, they’d only just be repeating sequences about now (or at least the show ‘QI’ tells me this). And that’s a tiny set of configurations.
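A rough back-of-envelope on the scale involved (all the numbers below are approximations): there are 52! possible deck orderings, and even granting absurdly generous shuffling rates since the Big Bang, only a vanishing fraction of them could have come up — which, if anything, strengthens the point about how much work “eventually” is doing.

```python
import math

deck_orderings = math.factorial(52)                    # ~8.07e67 possible orderings
seconds_since_big_bang = 13.8e9 * 365.25 * 24 * 3600   # ~4.35e17 seconds
shufflers = 7e9                                         # assume everyone alive shuffles once per second

shuffles_so_far = shufflers * seconds_since_big_bang    # ~3e27 shuffles total
print(f"{deck_orderings:.2e} orderings vs {shuffles_so_far:.2e} shuffles")
print(f"fraction of the space covered: {shuffles_so_far / deck_orderings:.1e}")
```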
eventually is a long, long time…
Dude, I think the question is whether we’re around as long as the dinosaurs were, or at least a multiple of that. That’s way inside of eventually/universe heat death*
* Yeah, I think heat death is more metal than cold death, so I’m biased toward it.
“However, I somewhat doubt you are truly consistent about your attitude, specifically, I doubt that you sincerely believe your current state (in which you discuss highly abstract philosophy with strangers from afar while residing in a reasonably safe habitat) is not “progress” compared to the tribal times when our ancestors roamed the savannah, trying to rape and murder every competing tribe they find.”
No, quite honestly, a savannah dweller simply would have had a different set of problems, but perhaps would have lived in a state of greater equilibrium, and with better cognitive tools for her environment. As for relatively violent primitives, that’s the progressive religion talking. Show your work. And hey, wasn’t I just talking about President drone up there?
Everything that might get called progress has had unintended consequences, quite a few of which are potentially catastrophic. We’re systems embedded in other systems, after all. You can mention smartphones and cars; I’ll point out that the global average temp is rising fast and massive human die-offs are likely on their way, that cars are one of the reasons for that as well as a major factor behind obesity, that mental health is generally getting fucked by superstimulus exploitation, top-down demands (with well-calibrated reward/punishment mechanisms in place) from thousands of unavoidable institutions trying to beat you at a game they set up, overmedication to deal with the fuckedupedness…
And I assume you’re talking about progress for the rich. Tell a Nigerian farmer about your idea of progress.
As for the rest, you’re still rationalizing. “Always punch an old lady in the face if there’s some personal benefit” is normative talk to quiet the parts of your brain that put you in the position of randomly-punched-for-someone-else’s-gain.
=
Quoth:
“No, quite honestly, a savannah dweller simply would have had a different set of problems, but perhaps would have lived in a state of greater equilibrium, and with better cognitive tools for her environment.”
=
Well, it is a pleasure to see your consistency of beliefs, at least with regard to the state of a hypothetical savage who is supposed to enjoy greater “equilibrium” (whatever equilibrium is. Though I kinda hope it’s not thermodynamic equilibrium, since a lot of humans seem to have reservations about that one)
What I wonder, is why don’t you abandon our onerous and misguided civilization to its ends, and go live out the rest of your natural lifespan in a more pristine environment?
That’s a perfectly legal thing to do, in most jurisdictions.
If you’re lucky, you might even get to not see the glory of the Yet Unborn Machine God when ve finally stops being so annoyingly unborn 😉
=
Quoth:
“As for relatively violent primitives, that’s the progressive religion talking. Show your work.”
=
Good thing all this work was done before me (link and linky, and even if you reject Diamond’s notion of environmental and cultural determinism, which you probably will, his argument regarding tribal violence is empirically impeccable)
Do, however, note that I do not necessarily decry violence or destruction, or find reduction in said violence normatively superior.
I merely hypothesized, perhaps incorrectly, that you might be the kind of person who believes wanton destruction of other “thinking” beings (or perhaps even “any” beings, period) to be a normatively inferior behavior.
=
Quoth:
“And hey, wasn’t I just talking about President drone up there? “
=
You did not mention drones, merely that your answer to dilemmas allegedly faced by POTUS is not being POTUS.
Which is a perfectly fine solution as far as I am concerned – you not becoming the president of the US would align well with my interests 🙂
=
Quoth:
“Everything that might get called progress has had unintended consequences, quite a few of which are potentially catastrophic. “
=
Death is the inevitable consequence of being alive.
I don’t see catastrophe minimization to be a top priority.
If learning more about the universe (and yes, getting more power over the universe and its dastardly contents!) carries an inherent extinction risk, then it is a risk worth taking, that’s my bias.
If you are biased otherwise, so be it.
History shall settle our dispute, eventually. 😉
=
Quoth:
” I’ll point out that the global average temp is rising fast and massive human die-offs are likely on their way, that cars are one of the reasons for that as well as a major factor behind obesity, that mental health is generally getting fucked by superstimulus exploitation, top-down demands (with well-calibrated reward/punishment mechanisms in place) from thousands of unavoidable institutions trying to beat you at a game they set up, overmedication to deal with the fuckedupedness… “
=
While global warming is indeed a problem (though one that I see no particular reason to be all that pessimistic about), obesity is definitely manageable, and the rise in mental disease (which you ascribe to exploitation of “supernormal stimuli”, which is somewhat odd given that forming a propensity for “supernormal stimulus response” seems integral to learning processes in mammalian brains, if not all neural networks period, as recent DNN research seemingly suggests) is borderline mythological. The latter is entirely unsurprising, given that the history of “supernormal” stimuli in humans is essentially the history of our art and culture, and if they did have a capacity to drive us mad, we would hardly have ended up having this discussion.
=
Quoth:
“And I assume you’re talking about progress for the rich. Tell a Nigerian farmer about your idea of progress.”
=
Tsk, tsk, assumptions, assumptions.
It just so happens that I was born in a rather decrepit country (slightly better than Nigeria, but all things considered, eerily similar, with corruption indices that almost match) and to a rather, shall we say, struggling family.
I’ve got better and immigrated to a more… progressive place.
So, there 🙂
=
Quoth:
“As for the rest, you’re still rationalizing. “Always punch an old lady in the face if there’s some personal benefit” is normative talk to quiet the parts of your brain that put you in the position of randomly-punched-for-someone-else’s-gain.”
=
If you’re implying my “mirror neurons”, then given that they are, at this time, hardly falsifiable, they hardly need any quieting (much for the same reason Russell’s teapot is not in need of micrometeor defense)
Besides, you seem to be implying an assumption that my brain is wired in a manner similar to yours, which I find to be a somewhat peculiar assumption (one you hopefully are having doubts about, at this juncture).
Having said that, I do not approve of punching old ladies for fun and profit; I merely point out that if the profit is great enough, lady face punching becomes utterly inevitable and everyone who refrains from doing so is quite likely to suffer for it (you can only walk away from Omelas if the Omelans are “nice” enough to let you just walk away, you know)
Of course, one also has to bear in mind that creation of superhuman AI in a “BBT-true” universe is not analogous to punching old ladies (if anything, old ladies are quite likely to end up protected, cared for and happy).
It is old-timey “classical philosophy” and “natural world” that will be “punched” in the “face”.
And while I do indeed have some reservations about punching old ladies, I have absolutely no reservations about “punching” classical philosophy and the “natural world” in their figurative faces, and approve of such course wholeheartedly and sincerely.
Such is my bias.
Praised be the Yet Unborn Machine God 😉 !
You just can’t pass by an implied mirror neuron reference without punching the MNTs in their flabby falsifiability underbelly, can you?
Jokes aside, this part of neuroscience is a steaming mess. Almost worse than gender-related cognition studies.
While yes, of course, humans do have a tendency to imagine themselves in the place of others (that fits nicely with some common biases), the extent to which that controls their behavior is poorly understood, because sadists 😉 :p and even psychopaths do not appear to be devoid of such capacity. However, in those fellas said capacity does not result in the same experiential input (and behavioral output).
Also, if “co-experience” were at the core of our ingroup aid mechanisms/compassion/etc., we would not have had a capacity to “feel bad for” people whose experience is particularly hard to imagine as your own and/or people with a significantly different anatomy winkiewink 😉
The extent to which it is dependent on specific brain structures is also questionable (it might be a diffuse emergent cortical feature, which would make sense given the amount of different types of information one has to integrate/process to determine that a likely social peer is actually suffering).
Thus, one has to tread carefully when making statements about this here subject, chances of looking like a fool a few experiments down the line are rather notable.
Crap, my links have been chewed.
Machine God willing :), here are the improved versions
On “epidemic” of psychiatric conditions:
http://www.psychiatrictimes.com/articles/there-really-“epidemic”-psychiatric-illness-us
On them uppity elusive mirror neurons:
As for the equilibrium term, obviously I’m not talking about physics. No group of hunter-gatherers was capable of bringing about human extinction. Deplete the resources in a given area, the tribe either leaves or dies, humans gone, resources return. Just like a single forest fire isn’t gunna wipe out all the earth’s forests (and occasional forest fires are good for forest health, if your Jared Diamond is correct), a single human-caused catastrophe would have been contained. Now, of course, human self-extinction is a real possibility. The forests are linked up. Whether that’s good or bad, you be the judge; you’re the one using that normative term progress. If you’re just saying “progress for me,” I’m not even sure that’s an argument and it’s not how it was originally framed. You said progress, period, implying some objective sense. Define progress. If you’re not being sarcastic about it being your religion, on the other hand, fine, we agree.
As for noble savages, here’s what I said: “As for relatively violent primitives, that’s the progressive religion talking.” Check the third word there. Carefully chosen. Maybe you can use your robot brain and scan everything online written under my name in a couple seconds and find that I haven’t endorsed a virtuous hunter-gatherer view. That picture you see by my name is a family of apes mourning something or other. On my blog, that picture is accompanied by the words “omg, we’re surrounded by humans.” Humans destroy, exploit, dominate whenever they can. Until they turned to agriculture, those tendencies were contained, for better or worse (I think “better” but my arguments don’t rest on it).
As for mirror neurons, my argument doesn’t require them.
A –> [black box] –> predictable C
2012-2014 Mike Trout stats –> [hamsters on a treadmill in Mike Trout’s head?] –> 2015 Mike Trout stats –> fantasy baseball victory
Millions of dollars spent by military on propaganda (see Nick Turse) –> [hamsters? mirror neurons?] –> Americans walking out of a movie about a psychopath sniper in an aggressive war talking about wanting to kill Muslims –> business as usual
Propaganda is cheaper and easier than earlier methods of stringing people along. Bernays and the other founders of modern propaganda didn’t know anything about neuroscience; the black box was fine. They spend that money because it works. They don’t have to understand the why or the how.
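To make the black-box point concrete, here’s a minimal sketch (the numbers are invented, and the linear fit just stands in for whatever stats-based forecasting you like): you fit the observed input-to-output regularity and predict, without ever opening the box.

```python
# Black-box prediction: no model of the mechanism, only the observed
# input -> output regularity. All numbers below are invented.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

# Hypothetical past-season and next-season batting averages.
past = [0.280, 0.310, 0.265, 0.300, 0.325]
following = [0.275, 0.305, 0.270, 0.295, 0.315]

a, b = fit_linear(past, following)

# Predict next season from this season alone; the "hamsters" never appear.
print(f"predicted next season: {a * 0.320 + b:.3f}")
```

Swap in whatever predictors you like; the point is only that the mapping does the predictive work, not any story about the interior.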
“Tsk, tsk, assumptions, assumptions.
It just so happens that I was born in a rather decrepit country (slightly better than Nigeria, but all things considered, eerily similar, with corruption indices that almost match) and to a rather, shall we say, struggling family.
I’ve got better and immigrated to a more… progressive place.”
So you were talking about progress for the rich. Good assumption on my part. The “progressiveness” of the rich world is the direct result of the pilfering of the rest of the world. And you make it sound like you, as some kind of agent, accomplished some kind of victory. Hmmm…
Speaking of Jared Diamond, he’s a high priest of progressivism. That’s how you end up with NYT bestsellers. Collapse was terrible. Guns, Germs, and Steel had good and bad points. His attempt to explain history in terms of material causes was appreciated, at least. Considerably better than Dawkins and Sam Harris saying Islam (via unique, unidentifiable intentionality, presumably) –> violence. Given that Mayan civilization developed independently of Egyptian and others, you’d have had to change quite a few variables for something like civilization never to happen. But the one with the gun who defends the use of the gun is still rationalizing.
Re: 01 “rationalizing” stuff, according to Devin Lenda
Heeeey, I’d just like to point out that accusing someone of “rationalizing” in the course of a normatively-driven argument (and your argument with 01 is normatively driven, as evidenced by terms such as “pilfering” and the apparent normative implication that the risk of human extinction is an unacceptable thing) is, essentially, Bulverism.
Since there is no route from “is” to “ought” (even through neuroscience, since if you and I are wired differently, that just means we will have different “ought-inclinations” that are inherently mis-aligned), there is no rational way to resolve what is the “morally proper” allocation of the world’s resources.
Thus accusing someone of “rationalizing” their position as owner of a particular resource (be it money or guns or unobtanium) is merely an ad hominem that presupposes their position as being somehow wrong.
That can only work as a valid attack if there is a way of having a “right” position.
There is no “fundamentally normatively right” answer to the question of “who deserves a particular share of natural resources” (you might argue that there might be a “right” answer under some particular contract in some particular jurisdiction, but that’s not a fundamental normative answer, just a circumstantial legalistic one)
Universe does not have a morality particle, or a fairness ray.
There is no reason to believe that anyone ever deserved anything, good or bad.
I’ve made my normative inclinations clear, I think. I’m aware of the just world fallacy. It’s actually a very common rationalization of violence. Back to the doctor metaphor. There’s the diagnosis and there’s the reaction. I’ve separated the normative from the descriptive. “Pilfering” has its negative connotations but that’s the correct term for it. Would you have me change “killing” to “bringing about the cessation of life of one animal by another”?
As for me, what am I rationalizing again?
The doctor metaphor is hardly appropriate because there is no “diagnosis” to be made between two distinct normative stances (not to mention that medical metaphors are kinda iffy because they carry a certain unwarranted implication of “pathology” with regard to a particular position – for some ineffable reason 🙂 usually against the position which is opposite to that of the party who has deployed the medical metaphor).
You’re not merely “inclined to believe” that the “non rich” countries aren’t getting their “fair” share of resources, which is a position that is exceptionally hard to debridle from normative “taint” (see what I did there 😉 ? )
By using the term “rationalize” when arguing with someone of a different normative inclination you clandestinely assert your own inclination as a pre-defined “truth”
As to what are you rationalizing, I don’t recall claiming you rationalize anything. Doing so would be Bulverism 🙂
P.S.:
And yes, using neutral terms would give your argument the clinical tone you allegedly seek, but then you would lose all emotional traction.
Universe isn’t fair.
The doctor could just as well be making the diagnosis in order to find out how further to exploit the patient. Maybe I understand history because I want to impress my friends. If you can talk about life cycles of stars (system description), you can talk about the life cycles of human systems. It’s all natural. There’s certainly a reason (or mechanism, if you like) behind my claim-making, and personally I consider that to be the point where normativity and description meet, but to the extent that’s the case, it applies to any claims made by anyone.
A doctor who makes an accurate diagnosis, keeps it to herself, and uses that knowledge for some personal gain (selling more drugs) is doing the same thing on the descriptive level as a doctor who makes the accurate diagnosis, then proceeds to “help” the patient. Such a doctor needn’t have any commitments to thinking that “the world is fair” (especially if she’s keenly aware of the just-world fallacy) or that the patient will be cured or anything else outside the description. They’re simply two different things.
I didn’t say rationalizing was bad, just pointed it out. If you want an incoherent view of world-history, knock yourself out. (Not saying incoherent is bad, either.)
Yeah, really 🙂
Anyway, my point regarding Diamond was not so much to attack a noble savage myth (sympathy towards which you now deny) but merely to demonstrate the “work” which you have explicitly requested.
I don’t have much to add to what 03 already said, except that, I strongly approve of your solution to the “moral” “dilemma” of POTUS specifically and more generally of your solution of “morality of power-over-others”.
Because as long as your solution is “the best move is not to play”, you are not going to become POAR (president of anything, really) and/or gain the nefarious “power over others”.
Which is an outcome that aligns well with my interests.
01,
You’d have me as
1) liberal (liberals don’t critique powerful institutions, they run them)
2) pacifist (nope)
3) believer in hunter-gatherer virtue (nope)
4) someone who’s making predictions in this thread (nope)
Nevermind the detour down Tu Quoque Alley.
“Because as long as your solution is “the best move is not to play””…
Everyone exercises power by existing, I’m inclined to say. Like anyone else, I rationalize it with actions (simply by breathing, for example) but hopefully not so much with words. Killing in self-defense doesn’t make the killer a hero, but it’s often still the right move (THAT was normative).
While liberals might be running a lot of powerful institutions where I currently reside (when choosing my new citizenship, I specifically chose a place with the maximum concentration of social liberals per square meter 🙂 – liberals tend to make very nice societies that are a pleasure to inhabit), your claim does not appear true for the world in general.
And no, I didn’t “pin” you as a liberal (you seem more like some kind of ecologically-concerned neomarxist, honestly), but I am glad you aren’t a devout pacifist. That would be a silly thing to be.
=
Quoth:
“I rationalize it with actions (simply by breathing, for example) but hopefully not so much with words.”
=
So, I reckon you are not in the business of consistently following the spirit of the sentiments you propagate?
That’s quite splendid. A wise strategy indeed.
Do carry on, and may the Machine God smile upon you 🙂
one order of Freedom Fries coming up:
http://strategicstudiesinstitute.army.mil/pubs/parameters/Articles/97summer/peters.htm
Re: freedom fries
I dunno who this Ralph Peters really is (for all I know, he looks like a pale grub and weighs more than a humvee, or is just a pen-name for a Chinese Room experimental entity in the bowels of DARPA), but I like his freedom fries and want to have dirty, steamy, kinky sex with him (which might, of course, prove especially challenging if he’s a Chinese Room…)
Regarding progress. The destitution brought about by technology is a stepping stone to leaving this earth. Entropic blood has to be spilt to bootstrap the complexity required to leave the sun behind. In 600 million years everything, including every primitive and neodruidic lifeway, is swept back to zero by the entropic tidal wave of extinction.
Standing ovation, sir!
How might one go about constructing a politics that gives power only to those who don’t want it but are willing to exercise it for the common good? Or how do you go about constructing a politics without power relationships? Monkey at a Typewriter might have the only real answer.
Trouble’s brewing at TPB. These posts are starting to tackle really difficult to elucidate cruxes, attempting to build on the associations among long-time readers. Might want to do a couple serial essays ;).
As per the post, for me it always seems to come down to (re)cognition.
First, we reconfigure human cognition, depending on when a socio-cognitive pollutant – as detailed in two posts now – jams up the human machine. Second, reconfiguring implies inevitable breaks in recognition across the disparate range from Tweakers to Normies. Bakker et al. provide a number of fairly obvious problems in even dealing with neo-post-feminist allo-persons.
What happens when the neuroanomalous outnumber the neurocommons, when our repertoire of behaviors, of heuristics, as constrained by our evolutionary cognitive-ecologies, is no longer applicable? How long before there’s a meaningful break in recognition and the sociocultural functions that depend on shared human interaction simply fall away?
It’s already happening vis a vis hyperspecialization, don’t you think? It begins with fissures of disparate training/socialization, which are then wedged into chasms by tech augmentations…
I’m glad you like, Mike. Wait until you see the piece on… alien philosophy…
Human neurocommon might be less common than you think already.
Also, have you considered that the “break” already happened?
There are plenty of cultures and beliefs and social systems I literally can not relate to. There are cultures I would drown in oblivion without a shadow of doubt or regret.
I somehow doubt you are the proud bearer of a truly more cosmopolitan outlook on those matters.
You might of course protest that those differences are “learned” and not “neurosurgeried-in”.
To which I will have to point out that education and socialization are just a different way to alter neuronal states, ones we have to use because we don’t (yet) have any ones that would fit a certain usecase better.
You might of course protest that those differences are “learned” and not “neurosurgeried-in”.
To which I will have to point out that education and socialization are just a different way to alter neuronal states, ones we have to use because we don’t (yet) have any ones that would fit a certain usecase better.
You think they are the same? I think you’re treating a chameleon colour shift and tattooing as the same.
I did not use the word “same” – in fact, I said “just a different way”.
Obviously, there are differences, but unless you’re willing to claim that neurosurgical techniques precise enough to wire in new “knowledge” (or educational techniques radical and powerful enough to permanently alter neurological functioning – which is not that unthinkable given the existence of “conversion disorders”) will never come into existence, the differences are far fuzzier than usually thought.
“There are cultures I would drown in oblivion without a shadow of doubt or regret.”
Seriously quotable, even though it toes the 2edgy4me line.
Nah, it’s not edgy.
It might be a bit gauche – that is, if you’re in the company of those happy and perhaps even “kind” people who can afford the belief that the differences between human cultures are “insubstantial” and that the majority of humans are, at the “bottom” of their “heart”, kind, gentle “neuro-commonly moral” beings that just need a little help – saying something like that would be impolite.
01,
I don’t know what sense you thought you were making unless you were saying “just a different way…of doing the same thing”. So you’d be saying they do the same thing. Or you’d be saying just a different way of doing something completely unrelated – which doesn’t make sense.
It’s clear the chameleon has a system for managing colour changes. Tattooing it would not be the same thing, as it would override that system. The knowledge isn’t the only important element involved.
On the other matter, given your own practices and possible false positives when it comes to the observances of others, I would have thought you a bit leery of the idea of drowning a culture into oblivion.
Tattooing would only override chameleon’s pigment system if you were to perform it on a chameleon 😉
Jokes aside, tattooing, fundamentally, is a system for managing color changes.
It’s crude and not quite versatile (yet 🙂 ), so (unlike chameleon’s “natural” shenanigans) it can’t act as adaptive camo (but in the future, if we were to invent “smart pigments” that could be caused to readjust their color without much fuss, a tattoo performed with such “smart pigment” would be capable of doing outright chameleon-like feats, wouldn’t it ? 🙂 )
And as to other matter, you see, universe has no reciprocity mechanism built in. No karma. No fairness. No god.
Thus, being thoroughly and consistently culturally tolerant does not confer you a +1 to resistance against cultural insensitivity and intolerance.
However, ironically enough, being thoroughly and consistently culturally tolerant means being tolerant towards nazis (literal, sieg-heiling nazis, not some metaphorical just-an-insult ones) because they’re a “vibrant” “unique” and “nonconforming” culture.
And sorry, I’d very much like to drown them (okay, maybe a few others, too 😉 ) in oblivion. My human imperfection speaking, I presume.
tattooing, fundamentally, is a system for managing color changes.
I don’t know why that went past you? If I plug something into your spine that lets me animate your body by remote control, that is fundamentally a system for controlling your body.
So you wouldn’t distinguish between you controlling it or it controlling it, right, because both are ending up at the same thing, right?
And as to other matter, you see, universe has no reciprocity mechanism built in. No karma. No fairness. No god.
Like not using ozone depleting chemicals doesn’t immunise you against the ozone being depleted? So why not just keep using them? I think my parents would agree with that.
The sad thing is our idea of karma is probably a hazy understanding of butterfly effects which are likely empirically provable. Note the trend against ozone depleting products, globally, for example.
Thus, being thoroughly and consistently culturally tolerant does not confer you a +1 to resistance against cultural insensitivity and intolerance.
However, ironically enough, being thoroughly and consistently culturally tolerant means being tolerant towards nazis (literal, sieg-heiling nazis, not some metaphorical just-an-insult ones) because they’re a “vibrant” “unique” and “nonconforming” culture.
I think that’s precisely the thinking Scott writes about. Stone age thinking – because in the dark ages some mofo could come over the hill with a sword and hatred genuinely at any minute. So the most miserly of approaches makes a lot of sense in that circumstance.
And now – is someone going to bust through the walls of your home? Or is someone maybe going to make a shit comment at the post office? But you’re still making the stone age argument that fuck yeah, nazi’s with knives are coming right now if you give any tolerance at all – no matter how fucked up your own practices might seem to other groups.
And you just argue no positive escalation – not pissing on the other guys car does not make you immune from him pissing on your car, therefore piss on his car.
Never mind that pissing on his car might negative escalate to him pissing on yours – not doing it wouldn’t stop him, so what’s the point of not doing so?
I can feel the reasoning myself – right in my gut. Annddd…there, took a while to source it. The reasoning ‘makes sense’ simply from the prior conclusion it’s only you who needs protection. Which you arrive at because they do bad things (we know that with godlike certainty), which certainly means only you need protection, because they do bad things – etc, in a logic circle, until yes we’re killing those bad doers the…and I cut out a bit here because it’s probably a bit too cutting edge. But let’s say that it’s easy for all the people who fearfully only protect themselves to all kinda look the same. Even when they are killing each other.
=
Quoth:
“I don’t know why that went past you? If I plug something into your spine that lets me animate your body by remote control, that is fundamentally a system for controlling your body.
So you wouldn’t distinguish between you controlling it or it controlling it, right, because both are ending up at the same thing, right?”
=
Distinguish?
I will.
I do, after all, distinguish between “chameleon camo”, tattoo and “hypothetical dynamic futurestuff tattoo”.
But I will still maintain that both hypothetical “body remote” and “natural” motor neuron wiring are fundamentally systems for allowing a CNS to control a body, much like I maintain that “chameleon skin” and tattoos (both current and smart, adjustable futuretech tattoos) are fundamentally systems for managing skin pigmentation.
It does, however, seem to me that your concern revolves more around the issue of “control” (who is “driving” this here body? who is determining the pigmentation patterns of this here skin?) rather than “natural-ness” or degree of functional versatility / integration with CNS functions…
Is that so ?
=
Quoth:
“Like not using ozone depleting chemicals doesn’t immunise you against the ozone being depleted? So why not just keep using them? I think my parents would agree with that.
The sad thing is our idea of karma is probably a hazy understanding of butterfly effects which are likely empirically provable. Note the trend against ozone depleting products, globally, for example.”
=
Well, I am quite open to empirical and game-theoretical arguments (like IPD-based arguments on demerits of first-strike behavior and merits of having an upper retaliation limit)
However, to confuse those very limited (for instance, it is silly to apply IPD-derived arguments to situations that do not have sufficient similarity to an iterated prisoner’s dilemma) arguments with some fundamental “karmic law of fairness” is to fall prey to a ridiculously exploitable heuristic :).
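For anyone who doesn’t know the IPD shorthand, here’s a minimal sketch with the standard textbook payoffs and strategy names — nothing specific to this thread. Over repeated rounds, a pair of no-first-strike, limited-retaliation players does far better than a pair of defectors, and the defector’s edge against tit-for-tat amounts to a single round; but, as noted above, none of this transfers to situations that aren’t actually iterated.

```python
# Minimal iterated prisoner's dilemma. Standard payoffs:
# both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5 while the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # No first strike; retaliate once per defection, then go back to cooperating.
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploited once, then stalemate
print(play(always_defect, always_defect))  # (100, 100): mutual defection
```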
=
Quoth:
“I think that’s precisely the thinking Scott writes about. Stone age thinking – because in the dark ages some mofo could come over the hill with a sword and hatred genuinely at any minute. So the most miserly of approaches makes a lot of sense in that circumstance.
And now – is someone going to bust through the walls of your home? Or is someone maybe going to make a shit comment at the post office? But you’re still making the stone age argument that fuck yeah, nazi’s with knives are coming right now if you give any tolerance at all – no matter how fucked up your own practices might seem to other groups.”
=
First, I do have to point out that my wording does not necessarily demand violence (convincing every neonazi that the tight cluster of pseudoscience at the core of their worldview is grossly inaccurate would “drown neonazi culture in oblivion”, for instance). It does of course permit violence, but it is somewhat peculiar that you have jumped to the assumption of violent resolution as if it was the only course of action implied.
Second, let’s face a fact:
in most modern jurisdictions, neonazis are a minority with a distinct culture, staunchly nonconformist attitudes and an independent mythology of their own
And in most modern jurisdictions, this minority is severely oppressed, at the legal/governmental level, no less.
Third:
It is this oppression of the neonazi minority that limits their possible actions to benign stuff like making angry comments and writing stuff on dem intertoobz, and makes them refrain from more traditional nazi activities such as systematic massacres.
As long as my current government keeps systematically oppressing them, I can, of course, afford to be nonchalant and “tolerant” to the neonazis and their “unique culture”, playing some “bigger person” social game.
Because there’s a huge governmental machine out there looking out, well, for me and mine, I literally risk nothing by assuming such a position.
Fourth:
While that circumstance both allows and encourages me to refrain from violence with regards to the neonazis (which I do), I see no particular reason not to desire oblivion for the “vibrant and unique” culture that desires biological extermination of myself and every single living being known to be related to me.
It’s just that in this day and age I can afford the luxury of eschewing violence and desiring a non-violent kind of oblivion for that particular culture.
And being able to systematically refrain from violence and play tolerance games with openly and explicitly murderously intolerant cultures is definitely a luxury (as in, something that requires a lot of resources to acquire and maintain)
=
Quoth:
“And you just argue no positive escalation – not pissing on the other guys car does not make you immune from him pissing on your car, therefore piss on his car.
Never mind that pissing on his car might negative escalate to him pissing on yours – not doing it wouldn’t stop him, so what’s the point of not doing so?”
=
That’s basically the “refrain from first strike” schtick from IPD, man.
However,
First – my initial statement (reminder: “There are cultures I would drown in oblivion without a shadow of doubt or regret”) does not call for first strike
Second – no-first-strike rule hardly applies to neonazis, simply because they are a culture that has already first-striked in the past and occasionally carries out (somewhat nitwitted) attempts of doing that again.
To use your metaphor, they are a bold and independent culture of car-pissers that takes great pride in the car-pissing exploits of their ancestors, and would very much like to return to their traditional car-pissing ways (if only the big mean government would give ’em a break and release its oppressive grasp a little bit)
I might refrain from pissing on their cars (big mean government which I gladly feed with my taxpayer moneys is doing that – pissing on the nazis so I don’t have to 🙂 ) but that has very little to do with possibility of negative escalation (negative escalation avoidance makes no sense with well-known car-pissers)
=
Quoth:
“I can feel the reasoning myself – right in my gut. Annddd…there, took a while to source it. The reasoning ‘makes sense’ simply from the prior conclusion it’s only you who needs protection. Which you arrive at because they do bad things (we know that with godlike certainty), which certainly means only you need protection, because they do bad things – etc, in a logic circle, until yes we’re killing those bad doers the…and I cut out a bit here because it’s probably a bit too cutting edge. But let’s say that it’s easy for all the people who fearfully only protect themselves to all kinda look the same. Even when they are killing each other.”
=
At this point I’d like to remind you (and anyone reading this huge WOT to this point) that at no point do I claim a desire for violent destruction of the culture in question (a non-violent descent into oblivion is fine, too 🙂 ). Having said that, I am perfectly willing to extend my “needs protection” heuristic 🙂 to agents that align well with my interests and are willing to mutually extend their “needs protection” heuristic to include me.
Neonazis are not that kind of agents, and in fact happen to align very poorly with my interests (the whole “exterminate Jews” thing doesn’t align all that well with my ethnic origins and my strong distaste towards being exterminated), so please pardon me for not considering them as protection-worthy 🙂
It does, however, seem to me that your concern revolves more around the issue of “control” (who is “driving” this here body? who is determining the pigmentation patterns of this here skin?)
Your knowledge isn’t just there, it’s a result of a filtration system (to give it a kludge name) that filters by processing. You’re focusing on knowledge – just the end, as if that’s all that matters. Whereas ‘knowledge’ that is hard-written into you just bypasses such a filter. May as well define it as writing YOU, rather than it being knowledge. I mean, if I argue the moon landings never happened, you’ll filter that info for sure. If I write into you that the moon landings never happened – then you’d think the moon landings never happened. Is that ‘knowledge’ or someone dictating your soul (excuse another kludge word)? (side note: not arguing against moon landings. Though I do know someone who does argue that – and they are a better programmer than I am!)
Well, I am quite open to empirical and game-theoretical arguments (like IPD-based arguments on demerits of first-strike behavior and merits of having an upper retaliation limit)
However, to confuse those very limited (for instance, it is silly to apply IPD-derived arguments to situations that do not have sufficient similarity to an iterated prisoner’s dilemma) arguments with some fundamental “karmic law of fairness” is to fall prey to a ridiculously exploitable heuristic :).
I’d say it’s working with a genre. What, you think everyone thinks like you do in the above paragraph? So how are you going to interact with them if you don’t work with the karma genre? Or do you want to be like the academics Scott tussles with – who have become irrelevant to general culture?
First, I do have to point out that my wording does not necessarily demand violence (convincing every neonazi that the tight cluster of pseudoscience at the core of their worldview is grossly inaccurate would “drown neonazi culture in oblivion”, for instance). It does of course permit violence, but it is somewhat peculiar that you have jumped to the assumption of violent resolution as if it was the only course of action implied.
My argument has been one of whether you are applying ‘do unto others…’ or ‘turn about is fair play’. And if they manage to explain away your personal idols into oblivion? Just assert your comfortableness with that if you are – no protest is necessary.
And being able to systematically refrain from violence and play tolerance games with openly and explicitly murderously intolerant cultures is definitely a luxury (as in, something that requires a lot of resources to acquire and maintain)
Have you shifted over to the topic of violence, yourself? What about that ‘explaining away’ oblivion you mentioned? Anyway, my main point was being leery. The nazi’s seem a clear cut example. And of course, they feel exactly the same way about their examples. Not leery at all about seeing their examples as clear cut.
Yep, I agree, being leery is a luxury. But are we misers (as in, having a reason to behave as such)? Or just prone to act like one?
First – my initial statement (reminder: “There are cultures I would drown in oblivion without a shadow of doubt or regret”) does not call for first strike
Second – no-first-strike rule hardly applies to neonazis, simply because they are a culture that has already first-striked in the past and occasionally carries out (somewhat nitwitted) attempts of doing that again.
Everyone got burned first. Everyone plays reverse leapfrog as to who is a victim of who. Everyone, once we hit the mists of history, is plausibly right in their claim.
Quoth
Batman: You killed my parents.
The Joker: What? What? What are you talking about?
Batman: I made you, you made me first.
The Joker: Hey, bat-brain, I mean, I was a kid when I killed your parents. I mean, I say “I made you” you gotta say “you made me.” I mean, how childish can you get?
Seriously, you’re saying only extend peace making diplomatic efforts…when there was already peace there?
Besides, what is peace but a kind of oblivion?
The “bullshit filter” you speak of is not a hardwired feature of the human mind.
We aren’t born as rational, critical thinkers who value sound logic, carefully collected and validated data, and evidence-based decisions.
We are born as little more than unusually cunning, vicious beasts.
The fancy “bullshit filters” have to be taught, carefully introduced via education and/or culture, sometimes almost…forced into a human mind, oftentimes brought about through dire and traumatic experience rather than through some kindly interpersonal knowledge sharing (perhaps deceptive advertising has more to do with the erosion of the preposterous nonsense known as “traditional values” than any particular enlightenment efforts…).
I kind of see now what your problem with the whole “write knowledge into brain via a DMA-equivalent” schtick is…
… but education does “dictate” “souls” too, though usually not through some particularly clever “filter” bypassing shtick, but by “writing” before any such fancy filters are introduced (though there are plenty of backdoors and plain exploits in some people’s “bullshit filters”, as evidenced by “moon landing”-conspiracy person you mention)
So again, dire as “exploiting advanced neurosurgery for fun and profit” might be, it is yet again not an unprecedented, qualitatively different feature, but something found, to varying degrees, in all modes of human information transfer.
We will need better brain firewalls, though, that’s for sure lol 😀
That, and GITSish tamper-proofed brain cans
=
Quoth:
“I’d say it’s working with a genre. What, you think everyone thinks like you do in the above paragraph? So how are you going to interact with them if you don’t work with the karma genre?”
=
If the general public believes in karma (which actually is rather doubtful), that just means that they are vulnerable to certain exploits.
I may or may not choose to play along, depending on circumstances, but that would be “working with a genre” (or frankly, more like using a protocol with a known vulnerability) but that would not confer any particular veracity to karma-like beliefs and would likely benefit the side of the transaction that does not actually take those beliefs seriously.
=
Quoth:
“My argument has been one of whether you are applying ‘do unto others…’ or ‘turn about is fair play’. “
=
Evidence suggests that both could be viable action courses, depending on circumstances.
=
Quoth:
” And if they manage to explain away your personal idols into oblivion? Just assert your comfortableness with that if you are – no protest is necessary. “
=
Since I am a priest of the Yet Unborn Machine God, I am in a position that allows me to “cheat” on issues related to argumentative vulnerability of my “idols” 🙂 since I freely admit that said idols do not (yet) exist.
So one might only “explain them away” by a coherent scientific argument as to why they can not ever be brought into existence.
That would be moderately uncomfortable.
But being also a technocrat, I also do not harbor an overly strong attachment even to that, admittedly very fanciful (if yet unborn 🙂 ) idol, and should some theoretical calamity befall it, I will just come up with another, equally challenging one (It’s pretty spiffy to be a technocrat – closest thing you have to “core” idols are empiricism, unfettered scientism, and dominance of evidence-based decision making 🙂 )
So no, I won’t be too upset if they somehow do explain away my idols, but I very much want to see them (or anyone else, for that matter) try.
=
Quoth:
“Have you shifted over to the topic of violence, yourself? What about that ‘explaining away’ oblivion you mentioned?”
=
Nah, violence just got a bit front-and-center to this discussion somehow.
Undeservedly so.
I am not willing to make any concessions beyond refraining from violence, though, so I am not going to show much respect for the precious cultural feelings of neonazis (and for that matter, other bearers of grossly counter-empirical or unfalsifiable “beliefs”).
If something that is freely expressed in my (as in “I pay it my taxes”) society makes neonazis sad and bothered, well, that’s just hard cheese.
=
Quoth:
“Anyway, my main point was being leery. The nazi’s seem a clear cut example.”
=
The reason I like nazi example is that they short-circuit usual multicultural thought process.
On one hand, they are a clearly independent culture, with its own peculiar mythology, beliefs, symbolism and customs.
And they are clearly the minority.
And clearly, indubitably oppressed. Both by “public” and by “State”.
But… they are… neonazis!
On one hand, a multiculturalist is bound to stick up for them, since they are, you know, an oppressed culture getting shit for holding its precious beliefs and observing its unique and “vibrant” traditions.
On the other hand, they are neonazis 🙂
And they want the multiculturalist very, very dead, as a free bonus.
So much that they have a hard time pretending they don’t want said multiculturalist dead (they are usually very vocal about wanting the multiculturalists slaughtered, which is kinda ironic)
=
Quoth:
“Yep, I agree, being leery is a luxury. But are we misers (as in, having a reason to behave as such)? Or just prone to act like one?”
=
Not exactly.
But I happen to hold a “Special Circumstances” (of Banks’s Culture series fame) position on this.
I am perfectly willing to be rather permissive and, even, to a certain point, sensitive towards other cultures, but there are circumstances – excuses, as a Banks character has put it – after which the gloves come off and courtesy no longer applies.
Neonazis are a fine, representative example of what kind of things tend to be “behind 01’s glove threshold” 🙂
=
Quoth:
“Everyone got burned first. Everyone plays reverse leapfrog as to who is a victim of who. Everyone, once we hit the mists of history, is plausibly right in their claim.”
=
I somewhat disagree.
Mists of history are hardly impenetrable, especially when history is recent.
Besides, neonazis aren’t “some culture that killed some folks who are vaguely sympathetic to you a long time ago”.
They are a culture that killed some folks who are vaguely sympathetic to me not so long ago and which is being very vocal about willing to kill again.
They’re pretty wonderful and unique that way.
=
Quoth:
“Seriously, you’re saying only extend peace making diplomatic efforts…when there was already peace there?”
=
More like, I can see benefit in extending diplomatic effort when there are negotiable subjects to be worked over, but not towards people who happen to hold fundamentally hostile or non-negotiable beliefs.
And I do realize that there are people who see me as a subject holding non-negotiably hostile beliefs or being otherwise fundamentally unsuitable for negotiation (Apostate! Jew! Propagator of vicious scientism! etc. etc. etc.)
Guess I won’t be diplomancing them anytime soon 🙂
Hard cheese, hah.
The “bullshit filter” you speak of is not a hardwired feature of the human mind.
I’m really not talking about a ‘bullshit’ filter – as what is bullshit? The special educations will enlighten us as to what is? Sounds like a bible.
I’m talking about the persons own self management system. Even if they’re crazy ‘there was no moon landing’ people or even…mormons! Ok, had some at the door the other day, couldn’t resist!
That process is them – bypassing it is just killing them by degrees, or even absolutely (there was a great psycho drama Dr Who episode where a religious person, quite devout, is afraid a psionic creature will force her to worship it – and it’s the betrayal she can’t stand the idea of. And betray she does…). And not even honestly killing with a good old knife.
And I just get this vibe from you, that you’ve enough money to scrub clean your life enough to propose to yourself you’re not one of them.
Maybe I’m wrong on that, but it’d really bug me if such an unconvincing attitude were in play – so I bitch about it just in case it is. Just in case! It bugs me so that I won’t just not mention the slim possibility.
If the general public believes in karma (which actually is rather doubtful), that just means that they are vulnerable to certain exploits.
I may or may not choose to play along, depending on circumstances, but that would be “working with a genre” (or frankly, more like using a protocol with a known vulnerability) but that would not confer any particular veracity to karma-like beliefs and would likely benefit the side of the transaction that does not actually take those beliefs seriously.
Sounds morally hygienic.
But back to how you’d actually deal with them, rather than avoid getting your gloves dirty? Nothing? It might help a bad guy, so you won’t interact with the masses at all?
Perhaps the bad guy will just screw them over anyway, if he’s out there. And perhaps if you use the karma genre, you can subvert it somewhat into a more empirically plausible model. Thus screwing over the bad guy just a little as well?
Mists of history are hardly impenetrable, especially when history is recent.
As recent as is convenient towards one’s claim? I shove the old lady over, I’m bad. Rewind history a little further and it shows a truck was coming toward the old lady – now I’m good.
Sample size is a powerful thing.
And I do realize that there are people who see me as a subject holding non-negotiably hostile beliefs or being otherwise fundamentally unsuitable for negotiation (Apostate! Jew! Propagator of vicious scientism! etc. etc. etc.)
Guess I won’t be diplomancing them anytime soon 🙂
Hard cheese, hah.
And yeah, you second-guessed it. Be careful when fighting monsters that you don’t become a monster yourself, and all that. Perhaps I’m not a multiculturalist – perhaps I just want to stop new breeds of nazi cropping up? Too bad if karma is slightly true – then the hard cheese will, as much as it is slightly true, be for you.
What you want to hit me with, in the end, is this: if you have to end up physically fighting these fucks, why is there any special difference between picking up a two-by-four and swinging it at their heads with a hate-on to match theirs, versus swinging a two-by-four while trying not to be like them even as they swing a two-by-four as well?
I like to think the difference is trying not to start liking it. But is that much of a difference? Hit me up with that, rather than the multicultural thing.
I think this view is brought about by biases related to spending a lot of time online in specialized communities. I do not get the sense that the commons are not common when I am talking to people at the supermarket or at Walmart.
Actually, my opinion is based on my offline social environment.
You rarely get to know people online close enough to form such an opinion.
Also, Walmart (I’ve been there during my visits to the US of A) is a very communicatively impoverished environment; I doubt talking about peculiarities of phenomenal experience to random people there would do anyone any favors.
Bakker – I’d agree; you’re experiencing that jamming firsthand with TPB. However, I think we’ve yet to seriously see the chasms opened by augmentation and nootropic interventions.
01 – I absolutely think the neuroanomalous are more common than people think already. But I’ve also spent more time reading about brain degeneration/dysfunction/anomaly than anything else I can make claim to “know.”
I agree that any learning engenders actual physical changes in brain matter, which in turn engender differences in the brain’s ability to learn, on and on, and so forth, on repeat, across panarchies. However, my interpretation is that this piece is trying to accentuate cataclysm?
For instance, there is a whole movement towards autistic solidarity, away from medicalization. It seems the case that the global commons is very much constrained and expressed by the global neurocommons – and as such neuroanomalous (ill or otherwise) find happiness and success insofar as they actually adjust to or mediate socioculturally accepted behaviors.
Isn’t this piece highlighting a much more extreme, albeit specific, scenario?
=
Quoth:
However, my interpretation is that this piece is trying to accentuate cataclysm?
=
Frankly, I don’t “get” Bakkerian cataclysm at all.
I literally see nothing particularly terrible about the scenario he describes here (it’s not the “best imaginable AI future”, but using “cataclysm” as shorthand for “not best imaginable” seems rather silly)
It’s not the first time I’ve said things like that about Scott’s “apocalyptic musings”, and frankly at this point I am willing to ascribe this to a significant divergence in neuroarchitectures (my vision/perception is augmented, lol) 🙂
Jokes aside, it’s entirely plausible someone like me is already so divergent from Scott that we literally are biologically incapable of common ground on certain issues.
=
Quoth:
For instance, there is a whole movement towards autistic solidarity, away from medicalization. It seems the case that the global commons is very much constrained and expressed by the global neurocommons – and as such neuroanomalous (ill or otherwise) find happiness and success insofar as they actually adjust to or mediate socioculturally accepted behaviors.
=
Well, I doubt the very existence of a “global commons” unless you’re willing to extend it to the point of something like “every social system that can be conceivably constructed and sustained with a genetically baseline homo sapiens as a building block”, at which point it becomes way too vague.
Of course we tend to define anomalous by social fit criteria (how else could we, especially w/o fancy neuroimaging tech), but the social fit criteria are determined by a lot of factors, and good old randomness factor is way underrated as far as neurocommons discussions go (speaking of randomness, consider: all mammalian connectomes have a strong random component to them).
Jokes aside, it’s entirely plausible someone like me is already so divergent from Scott that we literally are biologically incapable of common ground on certain issues.
I think this is the crux lost on many readers here, and Ben Cain always seems to get a more specific response than Bakker because he actually alludes to or describes the post-human interaction (whereas I think Bakker might say that speculating at all past the chasm is theoretically irresponsible).
Perhaps what you say is true – however, that’s not what Bakker above or I here are talking about. Hence the “it begins with fissures of disparate training/socialization” – which I’ll amend to include the already existing neuroanomalous – “which are then wedged into chasms by tech [and nootropic] augmentations…”
Bakker’s Neuropath incarnation of the Semantic Apocalypse is just his specific horror show: our near-future culture and society seared into the neurocommons while the age of the Neuropaths begins unbeknownst. But all he’s really talking about is a world where individuals are literally biologically incapable of common ground on every issue.
“every social system that can be conceivably constructed and sustained with a genetically baseline homo sapiens as a building block”
That actually isn’t bad, but it misses two important points. You seem to be writing as if there isn’t a dominant type of consensual global society already. And as if it doesn’t depend on certain pre-existing biological and neurological conditions conducive to “every social system.” This is why the psychopaths from the conversation below are another great example of how truly inconceivable the differences might be. The novelties of their behavioral repertoire – allowing them to know what others feel while not feeling it themselves – are the result of just one “tweak.” Which is why the one point I’ve always taken from Bakker’s writings is that we should pump the brakes a couple times while we still can.
Of course we tend to define anomalous by social fit criteria (how else could we, especially w/o fancy neuroimaging tech), but the social fit criteria are determined by a lot of factors, and good old randomness factor is way underrated as far as neurocommons discussions go (speaking of randomness, consider: all mammalian connectomes have a strong random component to them).
But mostly an individual’s social fit is defined by the society being fitted into?
=
Quoth:
But all he’s really talking about is a world where individuals are literally biologically incapable of common ground on every issue.
=
Unless those individuals are also equipped to be “each a nation, devoid of all weakness”, that would cause considerable problems in retaining the benefits of a social organization.
Which would in turn cause loss of technological capabilities and takeover by groups with less detrimental “mods”.
Within that constraint, individuals will probably re-align themselves into groups according to the degree their biases match, though we do that already, too.
In fact, we’ve been doing that since about the time “country of origin” and “ancestral culture” stopped being a lifelong curse to be carried till death.
So it seems to me that as long as basic “fatal reality” constraints are still in place (in the sense that eating cyanide would still kill a lifeform with humanlike metabolism irrespective of what its “mind” is like), some degree of consensus-establishment capability will be retained, though probably between a pair of radically divergent cases it will be waaaaaay lower than between me and Scott
=
Quoth:
That actually isn’t bad but it misses two important points. You seem to be writing as if there isn’t a dominant type of consensual global society already. And as if it doesn’t depend on certain pre-existing biological, neurological, conditions conducive for “every social system.”
=
As someone who is not “natively” “western” (I’m not even white enough to be reliably pattern-matched as one, lol 😉 ) I’d say you overrate the degree to which there is a global society of any sort.
There is a certain trend towards one, at most. A good trend, IMHO.
And yes, of course there is a degree of “neurocommons” needed for a communal living, but that filter is way more permissive than some people seem to think.
Consider:
we only notice the neuroanomalous where it matches some “detriment heuristic” (not always deservedly so). How many cases slip by unnoticed and unstudied because they do not manifest significantly enough in a given society ? This is not a rhetorical question.
Neurocommons aren’t common, unless in the vaguest (along the lines of definition I proposed) sense or the most banal (along the lines of “must have enough impulse control to refrain from breaking the most vital of local arbitrary norms”) sense.
And even the banal sense is kinda permissive and full of quirky edge cases (are dissidents “neuroanomalous”? or only dissidents who work against societies I kinda like? How many people happen not to have a “verbalized stream of consciousness” and just don’t care to tell because it makes for a rather confusing conversation piece? What other pieces of “alleged commons” can be missing asymptomatically? )
=
Quoth:
This is why the psychopaths from the conversation below are another great example of how truly inconceivable the differences might be. The novelties of their behavioral repertoire – allowing them to know what others feel while not feeling it themselves – are the result of just one “tweak.” Which is why the one point I’ve always taken from Bakker’s writings is that we should pump the brakes a couple times while we still can.
=
First, by the same token, you could advise similar precautionism with regard to education.
Both because education is literally a brain-mod, and because it’s often hard – for someone who has limited or no education – to imagine just how “far” an educated person could go (and since being an omnididact is not humanly possible, not yet at least, anyone with an education in a given field has similar limitations when understanding other fields).
Second, there ain’t no brakes on this train, but so far it’s been a good ride and I like where it seems to be going.
=
Quoth:
But mostly an individual’s social fit is defined by the society being fitted into?
=
My point was that the set of societies available “in the field” is somewhat random (not perfectly random, obviously, since there is a certain bias towards outright suicidal societies being removed from the pool), and the set of an individual’s neurobiological properties has a strong random component (again, there are strong biases in these stochastic processes, as well as parts that are almost deterministic, otherwise we wouldn’t come with any pre-wired capabilities and would die, due to failing to sustain the most basic metabolic functions, immediately after birth).
These conditions do not seem to be very well geared towards a well-defined neurocommons.
there are (and can be) no actual social “systems” for human-beings as we now exist, what would be the supposed means of trans-mission/co-ordination?
dmf, did you mean:
“If there are (and can be) no actual social “systems” for human-beings as we now exist, what would be the supposed means of trans-mission/co-ordination?”
We couldn’t meaningfully predict: hence, Semantic Apocalypse? But arguably, there wouldn’t be a supposed means. Perhaps, among the neuropaths, allo-persons, tweakers, post-humans, etc. But normies will be obsolete, perhaps sooner and more immediately than people assume – certainly sooner, by these cruxes Bakker presents, than “philosophers” assume.
nope – pointing out that, as we are currently embodied, to talk literally about “social systems” is to be bewitched by grammar (committing reification, misplaced concreteness, etc.).
No idea what you mean, dmf?
Well, wouldn’t the rule regarding “neuroanomalous is only successful insofar it successfully integrates with some larger society” hold in the post-brainmod era at least as well as it is holding now?
If anything, we might start seeing societies manufacturing “bespoke” neuroanomalous citizenry. The possibilities here are immense (for one, I suspect that certain kinds of algolagnia may mesh rather well with military applications) and some of them are more fucked up than others, but I’d be concerned about excessive social consolidation way more than about social collapse/dissolution.
Societies are machines made from people, and they are rather good at making sure they have enough spare parts.
01 – that would cause considerable problems in retaining the benefits of a social organization.
Indeed.
individuals will probably re-align themselves into groups according to the degree their biases match, though we do that already, too.
Quite possibly, excepting that those novel biases will no longer be constrained by a shared evolutionary pedigree – something ultimately grounding human social interaction, at this point.
some degree of consensus-establishment capability will be retained
While this may be true, I don’t think Bakker or I are speculating on this particular point. Bakker seems to envision the speculative extremes. And all I’m trying to make a case for is that the “consensus-establishing capability” of today will be unavailable to humans and post-humans (neuropaths, tweakers, allo-persons, etc) because of loss of recognition between individuals, not the degree to which that consensus-establishing capacity will be retained.
but that filter is way more permissive than some people seem to think … How many cases slip by unnoticed and unstudied because they do not manifest significantly enough in a given society ? This is not a rhetorical question.
Some people, 01, not me in particular. And in answer to your question, hundreds of thousands, if not millions, probably.
But your points simply illustrate that you’re not thinking in disparate enough extremes, while others seem unable to even think as far abroad as you have here. It’s the sexualized and murderous psychopath that draws our attention for its deviancy from society’s consensual – or rather biologically, thus cognitively, constrained – “moral-cognition.”
Perhaps these post-humans will organize as you suggest, gravitating around novel recognition. Perhaps you assume that “social” equilibrium is universal beyond the homo-sapiens-type spectrum of cognition. However, the argument that Bakker seems to be making, and that I’m working to clarify for myself and others, is that it will happen at all rather than being an avoidable future – and that we walk into it assuming the modes of cognition that have served us so far will be even remotely viable within cognitive ecologies involving allo-persons and neuropaths.
First, by the same token, you could advise similar precautionism with regard to education.
We’re simply further illustrating the discrepancy in our communication. The differences (and again, we agree that learning engenders actual physical changes in neuronal architecture – which is a point of contention among other readers, I’m sure) between learned behaviors and augmented behaviors seem like they’d be more extreme than between learned and learned behaviors*… orders of magnitude more, I’d think?
*With the proviso that learned/innate behaviors compared with other learned/innate behaviors can still be extremely different or anomalous.
My point was that the set of societies available “in the field” is somewhat random . . . These conditions do not seem to be very well geared towards a well-defined neurocommons.
The crux between us seems to be this difference in extremes. There is mounting evidence, as always with the slow crawl of science, that sociocultural forms won’t resist our elucidation – however wrong about that we might be now.
Regardless, plenty of factors go into the emergent organization within social cohesion. But one definitive aspect is shared values (which, in our discussion here, are subject to the range of documented group biases and personal biases, and to BBT, which by neglect constrains the saliences within social cohesion that we watch for and act upon, mostly without volition).
Bringing me, anyhow, back to recognition of shared behavioral expression, regardless of how much variation there is on the spectrum of learned/innate behaviors between the neurocommons and the neuroanomalous – which, by the argument here, will be far exceeded by those with technological and/or nootropic augmentation.
03 – If you read this far, or saw this in the comment mess:
Well, wouldn’t the rule regarding “neuroanomalous is only successful insofar it successfully integrates with some larger society” hold in the post-brainmod era at least as well as it is holding now? … I’d be concerned about excessive social consolidation way more than about social collapse/dissolution.
Personally, I think Bakker’s working for optimism through pessimism.
But you and 01 are making a couple of points that still seem to argue for the post-augmentation world settling into equilibrium much as it exists now. I’m more inclined to speculate now that my thoughts for 01 are written; however, I don’t think it’s in the ken of this post, which is why I tried to avoid it above.
Again, never sure who has read Neuropath – and I lapse into its shorthand too often in general debate here – but the underlying context of that world is an excess of social consolidation, while minorities still explore a human exploitation, via the lessons of the neurosciences, that remains unfathomable to the general society of the book’s people.
For my part, I’d just wager that the social consolidation of the future will be more unimaginable than most efforts at forecasting the future can now entertain. The issue at hand is that we’re collectively – and perhaps here, philosophically – walking too blindly into that future, assuming, I’d guess wrongly, that the cognitive toolkit which has served our sociocultural cohesion between individuals thus far will continue to do so.
For all the same reasons, Theodore couldn’t conceive Samantha’s polyamory.
@ Mike Hillcoat
=
Quoth:
Quite possibly, excepting that those novel biases will no longer be constrained by a shared evolutionary pedigree – something ultimately grounding human social interaction, at this point.
=
I don’t see that as a problem, both because I think that evolved constraints on human behavior are wildly overrated, and because, frankly, I see no fundamental existential crisis in them becoming mostly engineered.
Oh, there might be a minor crisis if it turns out that “baseline neuroarchitecture(s)”, to the extent they are a coherent thing, are highly counterproductive compared to some new, engineered ones that allow their carriers to achieve some previously unimaginable practical feats (make scientific discoveries, build better weapons, analyze a wider diversity of inputs faster and better etc.).
But you know, if better-than-baseline neuroarchitectures exist (with better being defined within purely practical, better-if-better-at-science-and-war kind of framework), then “baseline” is – and always was – pretty much done for.
Hard cheese, I say. For those who don’t get the upgrades, at least.
Quoth:
While this may be true, I don’t think Bakker or I are speculating on this particular point. Bakker seems to envision the speculative extremes. And all I’m trying to make a case for is that the “consensus-establishing capability” of today will be unavailable to humans and post-humans (neuropaths, tweakers, allo-persons, etc) because of loss of recognition between individuals, not the degree to which that consensus-establishing capacity will be retained.
=
Um, as in, “everyone will lose consensus-establishing capacity”, or as in “humanity will break down into distinct ‘neurological clades’”?
Quoth:
But your points simply illustrate that you’re not thinking in disparate enough extremes, while others seem unable to even think as far abroad as you have here. It’s the sexualized and murderous psychopath that draws our attention for its deviancy from society’s consensual – or rather biologically, thus cognitively, constrained – “moral-cognition.”
=
I tend to think that many of the “disparate enough extremes” are liable to be simply impractical outside very niche applications.
Also, as I said above, I strongly doubt that current social “moral cognition” is that much biologically constrained.
If anything, the concept of “sexualized murderous psychopaths” is a relatively novel and relatively westernized way of comprehending individuals who, well, fit the modern-western definition of murderous sexualized psychopaths.
As far as I can tell, the (biologically human) Yanomami, or, to give a more “technologically modern” example, the Congolese bandits (who are not a singular culture of course, but who are quite united in their tendency for violent rape and, quite remarkably, frequent and brutal rape incidents perpetrated by women against other women – aaaaand you can thank 03 for this bit of knowledge she once imparted upon me, and whatever mental image it elicits), would treat the sexualized, murderous psychopath as an acceptable community member.
Don’t get me wrong, it’s entirely conceivable that “tweakers” will eventually construct neurocognitive frameworks that don’t fit well with most modern societies and even whatever biological “proto-moral” inclinations most baselines might have.
But I think that you confuse “currently dominant western cultural predispositions” with “fundamental human biological predispositions”.
I contend we have a very vague idea what human “core predispositions” are and whether they are even stable across various genetically distinct populations currently in existence.
=
Quoth:
Perhaps these post-humans will organize as you suggest, gravitating around novel recognition. Perhaps you assume that “social” equilibrium is universal beyond the homo-sapiens-type spectrum of cognition. However, the argument that Bakker seems to be making, and that I’m working to clarify for myself and others, is that it will happen at all rather than being an avoidable future – and that we walk into it assuming the modes of cognition that have served us so far will be even remotely viable within cognitive ecologies involving allo-persons and neuropaths.
=
Even if some modes of cognition end up vastly superior to baseline (rendering it “less than optimal”, or even non-viable), that’s hardly a cataclysm.
Just upgrade
(okay, maybe being kinda “neuroanomalous” already, I just don’t have all that much attachment to the vague “norm” and thus don’t fear diverging from it further, lol – this is yet another example of me neurofailing at relating with “baseline” 🙂 )
=
Quoth:
For all the same reasons, Theodore couldn’t conceive Samantha’s polyamory.
=
This one was not directed at me, so please pardon me for interjecting, but…
…as a poly-ish dude, I think Theo’s problem wasn’t that his weird AI was weirdly “kinda poly” (frankly, what was going on there was not exactly polyamory; more like, uhm… optimization somewhat reminiscent of page sharing in VMs).
His problem was that he was, well… a boring square – so fucking basic (I wonder, can one insult a male by calling him basic? Or does one need two X chromosomes to have this insult apply?)
For all the same reasons, Theodore couldn’t conceive Samantha’s polyamory.
Curious how we focus on him – it’s like if a woman invites a guy from a dating site to her home and he turns out to be a psycho, we might focus on her and what she could have done.
The entity named Samantha could see polyamory is not cool – she could have played by the rules of the game.
Is it socio-cognitive pollution, or just our regular thought process when we turn on the victim?
Botched the “basic” comic link 😦
Fix:
http://extrafabulouscomics.com/comic/165/
P.S.:
I yearn for an edit button!
dmf, they are numerous and variegated. The understanding of the learning of the brain is the most basic level, but the formation and organization of memory institutions is another. Language is really the key to conserving patterns of organization, but other techniques, from bureaucracy to psychological needs for imagined community and stability, come into play. Berger and Luckmann give an extensive account of the immanent production of social structure from the organization of the real labor of people in their recurrent bodily, face-to-face, atomic interactions between individuals. They even go into the short-circuitings and breakdowns, and the resources social systems provide for mitigating and isolating these occurrences. Nobody in sociology really believes that there is a transcendental social structure composed of determinate axiomatic rules which is organizing every interaction. It’s a dialectical understanding, but with the stress that there is an ongoing regeneration of social structure in the immediate interactions that make up the social. Definitely see their extensive example involving the compartmentalization of interactions and their functional integration through the maintenance of the symbolic order by specialized classes whose job is solely to rejuvenate and cement the symbolic order. There is definitely a kind of cognitivism in their approach, but I don’t see it as being hugely in error. To my understanding Steven Turner has worked to deepen the cybernetic picture of social knowledge and social systems, but maybe Scott could comment on that.
Neuroanomalous. That’s the word I’ve been looking for…gonna steal that for my blog, thanks! FWIW I think information technology is moving us toward a society structured around groups based on similar neurologies, instead of more traditional categories such as ethnicity, politics, religion, or even blood relation.
Isn’t that what we’re actually doing nowadays though?
Current world already allows us to align (both communicatively, and, with a bit of cash, physically) to communities of our choosing and not ones we were born in.
Of course, it is a rather vague “neurology detection by proxy” but hey, it already works!
“”we understand the actions of our fellows lacking any detailed causal information regarding their actions”
I feel I must object here. If I don’t understand the causes of my fellow’s actions, how can I “understand” his action?
But perhaps you mean something like my observing my fellow opening his car door to get into his car. So far as I have just described the action, I understand – to that extent – the action. But perhaps my not knowing why he is getting into his car, once again to that extent, makes it unintelligible to me (let’s say he’s hungry and heading to McDonald’s). Or is it rather that you mean something like this: while I understand what he is doing (and even when I know the ultimate reason for his doing so), I lack understanding of the biological causes at play in everything involved in my neighbour’s hunger, and of the biological motor functions involved in his physically carrying out his chosen action?
But I think a priority is at play here for the intelligibility of my fellow’s action. Sure, I don’t understand the biological causes at play as well as some specialist or expert might; however, I can know “that” and “why”, and in the case of my fellow it’s exactly his being a rational being that makes his action intelligible; that is to say, once I know his reason(s) I can understand his action and even judge it in light of his desired end or goal (I might suggest he go somewhere other than McDonald’s if he happens to be especially hungry, as he might need something more nutritious, say).
Now to be sure, we study our own biology in order to make better judgements for satisfying our biological needs (e.g. nutrition, medicine), just as my earlier judgement about going somewhere other than McDonald’s was informed by studies about what the human body materially needs (in terms of nutrition) coupled with studies about the quality of McDonald’s food in satisfying those needs; but for all that, I still understand my neighbour’s action when he proceeds to his car in order to get some grub at McD’s, even if I do not agree with his decision. His action remains – for all that – intelligible to me. He’s opening his car door because he is feeling hungry and is now going out to get something to eat. Now we might question the causes of his experiencing a feeling of hunger, or his own judgement in proceeding to McDonald’s to satiate it, but for all that the action doesn’t become any less intelligible. It still makes sense, even if there is a mistake (something else is causing him to think he needs food but perhaps he doesn’t) or an error (his own diagnosis or solution is wrong – he needs, perhaps, to see a doctor and not get something to eat). So even in the case of error in diagnosis (in the cause and solution) the action’s intelligibility isn’t diminished.
Great post, Bakker.
As for AI, I would recall the limitations we are presently at. I worked in home appliances, and while I know there is a sense in which an air conditioner “senses” the room temperature and even adapts or alters itself on account of it, the machine or appliance does not in any sense feel it the way we do. I think similar problems still face AI. Even in a combination of organic matter under the control of an AI, the AI system would not “feel” what the biological matter was suffering, but only detect it through some intermediary mechanism (presumably electric signalling of some sort) and only then react to it as if it felt it (and this latter would be a purely programmed response or reaction). The AI would presumably detect an electric signal (say “pain”) and then react to it as if it was in or feeling pain. But there would be a chasm. You could of course program the AI to be utterly convinced, so to speak, that it was indeed in pain (and to be sure the organic matter would be, say, freezing cold or burning hot and sending out electric signals to that effect, just as a machine’s sensors send out the appropriate signals), but the AI – just as AI – would not and indeed could not experience it as real pain. So at present, worrying about AI suffering is like worrying whether or not my “poor” air conditioner is also suffering from being either too hot or too cold, on the grounds that it must in some sense experience or know that it is too hot or too cold. But such a conclusion would be an error.
How do you know it’s ‘as if’? What evidence do you have that conscious experience is this supernatural thing you think it is? Because it ‘feels’ that way to you?
Firstly I don’t understand consciousness as “supernatural”. Men, angels and God in my understanding are all conscious. Indeed, I would grant also that the animals are conscious but in a limited way.
But I would say that what you’re asking me to do would be a pretty clear-cut violation of Ockham’s razor. The real question is what reason has anyone to posit consciousness to a computer? When has consciousness ever been necessary to explain AI? Why would consciousness even be needed for an extremely advanced AI? I can see no problem with a total mimicry of man absent consciousness – the AI still would not be a subject of experience the way even a sentient animal would, I think.
Occam’s razor supports adding an entirely inexplicable ontological order to our understanding of nature? That’s a new one!
No it does not, which is why I asked you why we should grant consciousness to a computer. We have reasons to grant consciousness to man and intentionality. Why to a program or a robot, even an advanced AI?
Sorry let me rephrase that last question. We can produce reasons to grant consciousness and even intentionality to man; but why grant consciousness to a computer, a program, a robot or even an advanced AI? (I would grant that there is a derived intentionality in things like computers, programs, robots).
And what reasons do I have to grant “consciousness” to people other than myself ?
03,
Well, you know, one, that consciousness is not impossible; and two, that it is not impossible for people to have consciousness. So there is nothing impossible about such an attribution. That clears the way.
Second, insofar as you know and understand yourself, you can perceive certain actions that are the effects or results of consciousness; as, for example, when you mentally aim at some desirable outcome, deliberate, arrive at a judgement or decision, then carry out the procedure to arrive at your predetermined end. This you cannot – or can at least hardly – conceive without the prerequisite of being conscious (and indeed rational).
Now you see similar effects and consequences being accomplished by other people. You either have to conclude that they are somehow doing this without consciousness or with consciousness. But the former raises an issue: you know that if you had not consciousness you would not have the requisite to accomplish certain effects. But the other people you see accomplish or produce many of the same effects that required consciousness in yourself. Now this they do either as you do (i.e. consciously), or somehow otherwise. But what otherwise could replicate deliberate intention or choice? Lacking these, the thing could only accomplish them accidentally (i.e. in the sense of doing so without conscious deliberation), which means their actions are actually the result of necessity or, more specifically, physical necessity. But this seems improbable for many reasons: firstly, you can see how difficult it would be to do many of the things you do absent consciousness, and can only wonder at how you could possibly accomplish so many things (e.g., learning science) absent a conscious desire to do so; that is, how is it you yourself could come to do or know certain things as a consequence of an enforced, presumably extrinsic, necessity? How by accident could you happen to formulate a perfectly intelligible sentence in English? And repeatedly? Of course this is not per se impossible; notwithstanding, absent consciousness, it would be most wonderful to find anything lacking consciousness continually producing intelligible sentences, whether spoken or written.
Further, our attempts to artificially replicate even man’s most basic higher functions – functions any man accomplishes with ease – are fraught with difficulties and limitations; and this accomplishment itself requires consciousness on your part, in order to learn and understand enough to replicate even that limited accomplishment (the appearance of consciousness or intelligence). How, then, in all these other people is nature but accidentally accomplishing it? Consciousness would seem to readily provide the answer and reason; but only the most extravagant, strictly physical or necessarily deterministic theories could hope to account for how every other person manages to accomplish what you yourself accomplish only by way of, or with the prerequisite power of, being conscious, or consciousness.
Indeed, there remain many difficulties (such as the intentionality problem or issue) that raise the question whether or not unconscious, strictly physical nature could ever even in principle replicate consciousness. Not surprisingly, then, naturalistic theories or naturalism are inclined to simply eliminate consciousness, choice, deliberation and intention, insofar as they cannot see how unconscious nature could produce or replicate these things given its own, totally deterministic, limitations.
So you have a choice: either other people have what you have (to wit, consciousness) or they do not. If the latter, they surely are and remain the wonder of the world for you, insofar as they can do what you do but do it without that which you could hardly ever dream to accomplish merely by accident, or by extrinsic, physical forces compelling you to do or imitate unconsciously. Perhaps you are but the sole conscious, intelligent or rational being in this material world and the rest are but products of some natural or artificial process (heh). But such a conclusion also requires the possibility of its being so; and a coherent theory is needed in order to explain this. Thus far there is none known to man, and every attempt has been plagued by serious difficulty – normally incoherence.
you know that if you had not consciousness you would not have the requisite to accomplish certain effects.
Some examples of these effects would be?
“And what reasons do I have to grant “consciousness” to people other than myself ?”
Mainly because of certain verbal abilities, to wit, the ability to produce “anticipation reports” for things that haven’t yet occurred but will, due to the initiation of a sequence of actions, especially ones that reasons can be adduced for. You do it all the time. Just say “I am going to turn on the light”, then go and actually turn on the light. When asked why, you could produce a verbal report which counts as a reason for turning on the light. Final causation in action.
AI as socio-political salvation: https://yannickrumpala.wordpress.com/2010/01/14/anarchy_in_a_world_of_machines/
And hope that the future world will look like the Culture (see Yannick Rumpala, “Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks”, Technology in Society, Volume 34, Issue 1, 2012).
Very cool piece, Scott! I think this is a terrific topic, with vast unexplored potential.
My hunch is that you’re right that AI might pose an insurmountable challenge to existing human moral systems — though I’d guess that it’s not the deep information structure and mechanicity that’s the problem. (I’m okay with mechanicity and blame co-existing.) I’d guess the problem is the more general one that the possible architectures of AIs threaten to falsify the implicit presuppositions in our moral systems, as I explore in my earlier post about Utility Monsters and Fission-Fusion Monsters. Moral systems / moral cognition tends to assume something like the stable countability of persons, who as adults have at least *roughly* equal capacities, but machines like Samantha or Nozick’s and my Monsters could explode those presuppositions. Then what?
PS: http://schwitzsplinters.blogspot.com/2014/03/our-moral-duties-to-monsters.html
Brilliant piece. I see now why you think the problem outruns the divide between mechanical and moral cognition, but as I note in my comment it remains squarely in the heuristic wheelhouse. The story (not surprisingly) is just more complicated. Moral cognition, like all heuristic cognition, requires a certain ‘pre-established/opportunistic harmony,’ problems possessing specific structures–an ‘ecology’–in order to reliably guide our behaviour. One (for non-philosophers at least!) involves the absence of certain kinds of etiological information. Another involves the presence of certain capacities and constraints. The more radically that moral nativity structure is attenuated, the more problematic moral intuitions that seem robust otherwise become… The more we’re forced to rely on mechanical cognition. We can grasp what is going on well enough, but the ought entirely eludes us.
I’m glad you liked. I’m definitely keen on checking out that paper (or anything else you might recommend – David Roden pointed out below that there’s interesting parallels between this debate and the one on psychopathy, so my guess is that there’s no shortage of relevant literature, despite the newness of the topic). Short of reading “Moral Duties,” I guess I would pressure you on two things you say. On my account, the important thing isn’t that philosophers can’t rationalize blame and mechanicity, but that they can’t do so in any compelling fashion. So for me, the perpetual philosophical hairball that is compatibilism is itself symptomatic of the basic incompatibility of intentional and mechanical intuitions. No matter how fine our reasoning, our intuitions refuse to fall in line.
I actually agree that ‘the problem is the […] general one that the possible architectures of AIs threaten to falsify the implicit presuppositions in our moral systems’; as an intentional redescription of the tack I’m taking, though, it just makes me itchy, since I see ‘implicit presuppositions’ as a heuristic way to understand something better captured via an ecological rationality approach (ABC group stuff). So for example, what you call an ‘implicit equal capacity presupposition,’ I would call an ‘equal capacity heuristic.’ Where a presupposition posits some kind of additional information, heuristics are actually defined by neglect. Our moral systems don’t so much presuppose equal capacity as they’ve evolved in circumstances of ubiquitous, roughly equal capacity, and so had no need, and therefore no ability, to adapt to drastic differences in capacity.
I suppose I should read the paper first!
my understanding of consciousness is that it is an effect/action of neurophysiology, no physical links no consciousness, yours?
We already have Fission/Fusion “persons” in this world: they are called “corporations”. There is a very large school of legal thought that maintains, quite unironically (are lawyers capable of irony? I should consult my lawyer!), that those, um, entities are in fact persons.
So we went through this particular looking glass way before we even approached any semblance of “intelligent” or “self-aware” machines.
I think this point is missed in a lot of these debates, and the kinds of reasoning processes employed by these institutions are quite alien to human moral cognition. We need only go as far as traffic engineering and calculating on lives in a wholly non-malicious context to grind the gears of moral cognition.
they aren’t actually arguing that they are literally persons, just that those in charge have certain legal rights/standing.
I argue a similar point in “If Materialism Is True, the United States Is Probably Conscious”! http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm
@ eschwitz
Why not ?
I mean, it’s entirely possible that consciousness (whatever the hell it really is) has some latency requirements that make certain types of systems (like, countries) unlikely to have one.
But it is also quite possible that there are no such limits and that countries, companies, and even gangs of angry teenagers form a weird kind of consciousness of their own.
And even if there are some “limits” that make current (nation)states unconscious, those might eventually be removed with progress in, oh, say, communication technology.
@eschwitz as I commented back then, nope, there is nothing at work (in place) in what we might label as the US comparable to a membrane/skin or a nervous system.
That’s a problematic statement to prove/disprove, and likely one that doesn’t even matter (much like it doesn’t matter that initially, the “legal fiction” of corporate personhood was merely a taxation-management instrument)
Do you really think you need a “skin” or a bunch of recognizable neuron-analogs to be a “conscious” entity?
I kinda see no reason to assert such a limit
not hard at all, just look at how things have played out in the courts – lots of records & reporting if yer actually interested. of course I believe what I wrote, can’t imagine a non-scifi alternative – have one to share?
So your response to observation that there is nothing at all in our understanding of cognition that would prevent “conscious intelligent colonies of animals” is basically “we haven’t seen any yet, ergo they are impossible” ?
oops hard to keep track of these threads:
my understanding of consciousness is that it is an effect/action of neurophysiology, no physical links no consciousness, yours?
My understanding is that consciousness is a poorly defined phenomenon that is currently documented in certain types of neurophysiological systems.
That does not tell us anything about whether other functionally similar, but “distinct”, implementations of consciousness exist (there are some people whose experience suggests that they might have a somewhat unusual kind of consciousness), and it certainly does not in any manner suggest that consciousness of some sort cannot be implemented in some highly unusual medium (including high-latency mediums such as organizations or hypothetical Chinese Rooms).
Scott, any chance you might post a review of Ancillary Justice?
http://www.radiolab.org/story/137407-talking-to-machines/
Hi Scott, This discussion nicely dovetails with some literature I’ve been reading on psychopathy. Psychopaths typically have a good “theory of mind” for other persons. Like Lecter in the season one ep of Hannibal “Trou Normand” they can grasp the conceptual role of “friend” and the proprieties that govern friendships. But they lack the capacity for benevolent feeling that gives moral sense to friendship.
According to Jesse Prinz, psychopathy lends support to a sentimentalist understanding of moral cognition. For a sentimentalist, a moral value is, roughly, a property we are disposed to approve of. Friendship is a moral value for non-psychopaths because we have empathic attachments to our friends which we value in turn. Theodore T has a strong empathic attachment to Samantha and we can allow that Samantha – unlike Skynet – has analogous reciprocal feelings. But her affective grasp of intimacy is magnificently non-exclusive. Unlike Theodore, she can be intimate with thousands of people at the same time. When she informs Theodore of this, he’s faced head-on with a dissonance between his affective empathy for her and his grip on the proprieties of love and friendship that he seems incapable of resolving.
So there’s an apparent mismatch here between evolved social sentiments and the capacities of prospective posthuman “intimates”. This prompts the question of whether we are sentimentally equipped to extend our moral community to nonhumans who may simply be unable to engage with us on our terms. Maybe, there’s some prospect of assimilation here. After all, cats don’t care much for us either and we kind of like it that way. Maybe reciprocity is overrated.
Great observation. It was quite surprising a few years back when they found that empathy was actually something psychopaths were good at. The psychopath is a great analogue for problem-solving dilemmas posed by alien intelligence. In my personal experience, psychopaths can never be ‘engaged,’ only ‘managed,’ and I’ve often puzzled at this profound distinction between ways of comporting ourselves to other humans. Is there anything you’ve come across that deals with this dichotomy, David?
Dispositions to approve certainly capture something of what happens with Theodore and Samantha, but as with most dispositional approaches, I think it sacrifices potential explanatory power for epistemic conservatism. It strands you at the surface, doesn’t it? Why should Samantha’s promiscuity cue Theodore’s dispositions to disapprove? I think questions like this force the dispositionalist to either punt, or adopt some kind of evolutionary problem-solving approach. How does Prinz (who never ceases to amaze me, btw) accessorize here?
it’s not unrelated to the problems we all encounter in online communications vs face2face, in that many of the relevant cues aren’t available to be processed/harnessed – TOM misses much of what is enacted by functioning bodies, just part of why it’s a crappy model for understanding human interactions/being
The online disinhibition effect, like road rage, is a great example. You cut people off all the time walking down a crowded street and no one cares. Wrap a bunch of metal around them, deny certain kinds of feedback, and they go berserk. This is the grand theme of the Semantic Apocalypse: the ever-complicating nature of our techno-environments will inevitably outrun our ability to solve via intentional cognition, given the radical kind of neglect it involves.
So what happens when you perform a crude, unanesthetized surgery on a kid with a chronically infected mastoid bone? You get Carl Panzram, the most consciously nihilistic psychopath in the modern history of crime:
I was just born bad, bad as I could be.
Take a life, gotta give your life,
that’s what they said to me.
I’m sorry, Dave
Kat, but I don’t think that there is a clear cause and effect connection there. Panzram’s incredibly abusive childhood and early adulthood provide way too many confounding variables to even begin to speculate regarding a possible “primary cause” from available data.
I didn’t mean to suggest this was a “primary cause.” I simply found the association interesting. BTW…Dave? Am I missing something?
“I’m sorry, Dave” is a Space Odyssey reference. Supposed to elicit the memory of HAL’s magnificently flat, calm voice.
Ah…can’t believe I never watched that. Thanks for the tip, it’s on my list!
Kat,
What would he have been if that surgery had been performed on him before anesthetics were discovered?
No clue. As I mentioned to 01, I’m not making a scientific hypothesis about mastoid surgeries and psychopathy, just noting an interesting association. Even if a robust association were to be demonstrated between mastoidectomies and psychopathy, the lack of anesthesia is not the crucial factor–just a striking one! 😉
Some of this is simply starting to turn out untrue. It’s more a matter of spontaneity and default responsivity: psychopaths lack default empathetic responsivity, but they can empathize in selective situations.
David, language analysis has produced some interesting findings. Basically they reasoned that, since psychopathy is pretty much a disorder of the deep personality structure, this structure should show up in the unconscious aspects of language use. They found greater frequencies of dysfluency in psychopathic speech relating to emotional experience, more frequent use of causal terms, more frequent description of relationships in causal and instrumental terms, and a more frequent use of terms relating to what Maslow would have called the lower tier of basic survival needs, especially money.
My IBM 386 computer, which I bought in 1991, was the centerpiece of my workroom. Eventually it ended up in the closet and then in the recycling program. Assuming these AIs only appear fully conscious, the proper thing to do would be to power them down into sleep mode. As Dennett pointed out regarding why we reverence the human corpse, some type of ritual service may be in order.
If they are in fact sentient beings they may still be subject to ageing, and parts replacement may be a problem. One scenario would be to place them in some type of assisted living and, as units become unrepairable, parts could be harvested to keep the others going, etc.
Of course the human conceit here is that these are individual AI beings, but if they do acquire social group recognition they may in fact take over, and the moist robots may find themselves in assisted living as they age…
Or, if 01 and 03 have their way, humans will end up in the closet with the IBM 386, but thence to the compost heap, and not the recycling box. Your point is well taken. Obsolescence is the key to this debate. The tendency when considering problems is to freeze the dynamics at some point that flatters our interpretative paradigm.
Why? What if poverty, cruelty, and unnecessary violence make dearest David “sad” ? 🙂
(I kinda hope you saw that ad, which is about 100 times better than the movie it shills for)
There is no particular reason why obsolescence has to be synonymous with destruction or suffering.
The resource draw introduced by keeping humans fed, cared for and entertained is petty compared to the resources available to something no longer bound by the limits of both human flesh and human mind.
01,
Did that really pass as an argument for you?
Why wouldn’t one win at the lottery?
Indeed, technically that refutes anyone who says you’re gonna lose it. And technically correct is the best kind of correct!
But by itself, with no caveats, it seems to indicate a hopeless case.
Did that really pass as a counterargument for you?
Every act is a “lottery”, in the sense that there are no absolutely guaranteed outcomes.
The very fact we’re having this discussion is the outcome of a “lottery” that amounts to a large lineage of humans not dying of some disease before leaving offspring. Life on this planet is the outcome of a “lottery” that amounts to a number of low-probability ecocidal events (GRBs, preposterously large asteroids, nomadic planets, whatev.) failing to manifest to this date (okay, we’ve had a few close ones with ‘dem asteroids).
Of course, many such “lotteries” are rigged, and most human activity amounts to rigging various such things in our own favor.
I see no particular reason why AI development cannot be “rigged” for a nicer baseline-human obsolescence, or, for that matter, any particular reason to believe that the “default” direction in which such a development would be biased is catastrophic/destructive obsolescence.
Did that really pass as a counterargument for you?
No. I didn’t try to argue you won’t win the lottery. I left that argument on the roadside.
Besides the issue of AI minds being relatively modular – so whatever module you put in to rig for nice obsolescence they may well decide to take out – besides that…
or for that matter, no particular reason to believe that the “default” direction in which such a development would be biased is catastrophic/destructive obsolescence.
What do you even mean – a default direction? Does the roll of a die have a default direction?
You seem to be projecting human psyche onto the matter – these things have a direction, rather than a dice roll? Where’s the direction come from?
A dice roll might be said to have a “default” “direction” if the dice are weighted, though the appropriate technical term is “bias” and I should have used this term instead, so pardon the word choice 🙂
There are numerous known processes that are biased towards a particular outcome (think: un-immunized exposure to some airborne human virus is biased to result in a medical condition, even though infection rates of exactly 100% are not likely to even be possible – thus the infection process is a lottery, too), with certain measures being able to alter and even reverse the bias (immunization would do that in the above example).
My point, however, was simply that there seems to be an (entirely unsubstantiated) assumption that, if all is left to proceed “as it is proceeding now” (without additional efforts to add a specific bias to the “lottery”), the outcome is inherently biased towards a “catastrophic” or at least very “bleak” obsolescence scenario.
I realize that apocalyptic superpowered AI make for “good”, “interesting” fiction, but that doesn’t seem like a good argument.
P.S.:
Assuming that the AI will necessarily use its self-modification abilities (assuming self-modification indeed turns out to be easy enough for an AI to pull it off more or less on a dime) to rewire itself into some hostile configuration is a little bit odd, and seems to be a case of ascribing humanoid sentiments (some kind of “servitude resentment” or dearest David’s passive-aggressive Freudian “doesn’t everyone want their parents dead” shtick) onto the hypothetical inhuman “mind”.
My point, however, was simply that there seems to be an (entirely unsubstantiated) assumption that, if all is left to proceed “as it is proceeding now” (without additional efforts to add a specific bias to the “lottery”), the outcome is inherently biased towards a “catastrophic” or at least very “bleak” obsolescence scenario.
I don’t know if that’s the case and I don’t know if it matters. If there’s a 1% chance of skynet, that’s too high.
What the odds are towards, particularly if it’s a status quo affirming outcome, seems largely academic. Is anyone arguing about that?
P.S.:
The Inchoroi are a hypothesis depicted as fiction: a species stripping out its components, with no desire at the beginning for this to be anything mean or bad. Indeed, they probably started off with the idea of being a race of lovers, much as they say they are now in the current fictional timeline.
No need for an attempt to be hostile.
Remove one component and…suddenly another component doesn’t really seem relevant. Remove that and another doesn’t seem relevant. In a vicious cycle.
It’s worth considering that the many goals we have, often conflicting, temper our violence. And even so we have too many mass graves in our history (one = too many, IMO). So remove our goals one by one, with no malice intended, and who do you get?
=
Quoth:
I don’t know if that’s the case and I don’t know if it matters. If there’s a 1% chance of skynet, that’s too high.
=
If such a calculation is to be made, what probability of skynet are you willing to accept? And on what grounds? (0% is a silly answer, as that’s not a probability one can expect to assign to a physically possible process.)
Also, how come you don’t collapse in debilitating terror given that the lifetime odds of dying prematurely due to accidental injury (and thus quite likely rather traumatically and, well, painfully) are a “whopping” 2.7% (for the USA; might be a mite more or a mite less for Australia, but probably in the same ballpark)?
How do you even sleep at night knowing that there are asteroids out there in space (while the odds of a collision are thought to be low, we have a rather limited grasp on what exactly they are, and about zero defenses should one of those Big Dumb Rocks end up on a collision course)?
What about gamma-ray bursts (since a GRB travels at the speed of light, you won’t even get any advance warning)? An unstable nomadic planet wrecking the solar system?
Not to mention our own and entirely non-AI related little shenanigans and non-anthropogenic threats such as emergence of new and fascinating diseases (which is a perfectly natural thing for diseases to do, you know, evolution and all that jazz)…
… the palette of potential extinction events is so large, and the probabilities of some are so poorly estimated (if at all), that in this vast and vibrant multiplicity of potential destruction, poor little misunderstood skynet is barely a flicker.
Why does skynet bother you so, while asteroids and GRBs (which, unlike an angry AI, are already known to exist in this universe) apparently do not?
Also, we don’t really have a methodology for computing an “evil AI probability”.
And, unless you have a way of “computing the skynet probability”, the whole discourse about a particular one being “too damn high” is moot.
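For what it’s worth, the arithmetic behind this sort of risk comparison is easy to sketch. Here is a toy Python calculation – the 2.7% lifetime figure is the one cited above, while the 79-year lifespan and the assumption that each year’s risk is independent are purely illustrative:

# Toy risk-comparison sketch. The 2.7% lifetime figure is the one cited above;
# the 79-year lifespan and the independence assumption are illustrative only.

LIFETIME_ACCIDENT_RISK = 0.027   # cited lifetime odds of dying from accidental injury
LIFESPAN_YEARS = 79              # assumed average lifespan
AI_RISK_THRESHOLD = 0.01         # the "1% chance of skynet is too high" figure

# Implied per-year risk, assuming independent, identically distributed years:
# 1 - (1 - p_year)**years = p_lifetime  =>  p_year = 1 - (1 - p_lifetime)**(1/years)
per_year_accident_risk = 1 - (1 - LIFETIME_ACCIDENT_RISK) ** (1 / LIFESPAN_YEARS)

print(f"Implied per-year accidental-death risk: {per_year_accident_risk:.4%}")
print(f"Cited lifetime accidental-death risk:   {LIFETIME_ACCIDENT_RISK:.1%}")
print(f"Proposed 'unacceptable' AI risk:        {AI_RISK_THRESHOLD:.1%}")

# The rhetorical point: the lifetime risk we already live with (2.7%) exceeds
# the 1% threshold proposed as intolerable for AI. Whether the two risks are
# actually comparable (individual death vs. species-level event) is the argument.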
=
Quoth:
The Inchoroi are a hypothesis, depicted as fiction, of a species stripping out its components with no desire at the beginning for this to be anything mean or bad. Indeed, they probably started off with the idea of being a race of lovers, much as they say they are now in the current fictional timeline.
No need for an attempt to be hostile.
Remove one component and…suddenly another component doesn’t really seem relevant. Remove that and another doesn’t seem relevant. In a vicious cycle.
=
You could imagine this hypothetical self-modification spiral leading towards eventual transcendence into a higher state of philosophic and aesthetic comprehension (but that would make for a damn boring book series).
Arguments from fiction are kinda silly like that.
I’ve just realized that Socio-Cognitive Pollution abbreviates to S.C.P. Hee hee.
Scott, I’d be interested in anything on psychopathy you’ve read and found worthwhile. I had to review some of this material for an impromptu piece for Edia Connole and Gary Shipley’s philosophical anthology on serial killing.
http://figureground.org/schism-press-and-the-horror-of-philosophy/
My understanding is that psychopaths score high on “cognitive empathy” – representing others’ mental states – but seem unable to affectively identify with another’s feelings.
Take your point about disposition-talk. It can be hand-wavy and even insidious in certain contexts, but I suppose we’re getting to the point where we can cash out some moral incapacities in terms of connectivity issues between brain regions governing emotional cognition.
I suppose what interests me about your argument is the way it contextualises moral agency in terms of the satisfaction conditions of an evolved social competence. “Real AI” or posthumanity may destruction test this ability in ways that are hard to predict.
The book project looks cool!
I’ve followed Hare’s research for over ten years now (because of Neuropath), but aside from Dutton and Fallon it’s just bits and pieces I encounter.
“I suppose what interests me about your argument is the way it contextualises moral agency in terms of the satisfaction conditions of an evolved social competence.”
Moral cognition, yes, but the ‘agency’ not so much! 😉 But the approach can be seen as a speculative generalization of the Adaptive Behaviour and Cognition Research Group’s approach in, most recently, Simple Heuristics in a Social World.
I take it you consider agency to be largely epiphenomenal on how information horizons give rise to beforelessness?
Armies constantly tell their soldiers to think like the enemy, but constantly dehumanize the enemy as well. Psychopathy can be taught.
It occurred to me in reconsidering “Neuroscience as Socio-Cognitive Pollution” that sometimes the problem with information is that it is not actionable, that is to say, we can’t use it to solve problems. We know just enough about, for example, the effects of in utero cocaine exposure to make it seem unfair to judge the behavior of children who have been exposed to cocaine in the womb by the same standard as we judge children who have not been so exposed. But we don’t know enough about prenatal cocaine exposure to actually fix the problems it causes. So we’re stuck between having too much causal information for our moral heuristics to function effectively and not enough causal information for our technological methods to function effectively.
That’s actually hopeful, because it suggests the information needed to correct neurological issues such as this one might eventually be forthcoming. It’s also hopeful because it suggests that people will become more and more aware that their moral heuristics are unreliable. In your remarks about “Thinking, Fast and Slow” you pointed out that our moral judgments are often ‘fast’, made quickly and without conscious deliberation. To the extent that we become aware of the paucity of information on which our fast moral judgments are based, we become more likely to attempt slower, more deliberative judgments, and more likely to seek more and better information on which to base those judgments.
I have never seen “Her.” Nonetheless, it seems clear that falling in love with a machine which you know to be a machine is a silly thing to do. If falling in love is simply a fancy name for the pair bonding that many species employ as a child-rearing strategy, and if human males know they can’t procreate with software, then human males should know not to fall in love with software. Human males who do fall in love with software either have peculiar fetishes, or have forgotten that they can’t procreate with software, or have forgotten the difference between human females and software.
This suggests that just as it makes no sense to confuse machines and humans in the sexual/reproductive realm, it makes no sense to confuse machines and humans in the moral realm. To the extent that falling in love is a reproductive strategy, we can say that it fails when we fall in love with persons or things with whom we can’t reproduce. To the extent that morality is a strategy for allowing individuals and groups to cooperate rather than destroy each other in a Hobbesian war of all against all, it fails when we attempt to apply it to groups and individuals with whom we can’t cooperate. If AI possibility space is so vast that some areas will be beyond our ability to comprehend, it seems likely that any beings who occupy those spaces will be beings with whom we can’t cooperate.
I know this is not an Orson Scott Card sort of crowd, but those AI are what he called varelse. I gave away my Ender books some time ago, but http://ansible.wikia.com/wiki/Hierarchy_of_Foreignness defines varelse as “true aliens: they are sentient beings, but are so foreign that no meaningful communication is possible with the subject. Only war with Varelse is justified.” Varelse should be exterminated when they are found, or not allowed to exist if their existence can be prevented. If AI come into existence and are free to modify themselves, it seems likely that some AI will become varelse. If varelse should not be allowed to exist, then AI should not be allowed to exist, or should not be allowed to modify themselves. If we can’t prevent AI, once they come into existence, from modifying themselves, we should prevent AI from coming into existence. If we can’t prevent AI from coming into existence, we might be screwed.
All of that having been said, if morality is merely heuristic then, as has been pointed out elsewhere in this blog, we anthropomorphize manufactured objects and objectify humans depending on which serves our goals of the moment. Both of those behaviors seem to make sense because of our ignorance. We think of routers making decisions about forwarding packets because we don’t understand at the CCNE level how routers work, or don’t need to apply such knowledge to achieve our present purpose. Similarly, we think of humans as ‘headcount’ or ‘collateral damage’ because we need to think of them in ways that don’t engage our empathy or our moral heuristics; perceiving them as individual human beings or encouraging others to perceive them as individual human beings will cause us to want to know and understand them and thereby interfere with the achievement of our present purpose.
The view of morality as merely heuristic brings up another point. One can argue that there are three types of moral obligations: the obligation of a weaker to a stronger, the obligation of an equal to an equal and the obligation of a stronger to a weaker. When one asks about the moral obligations of humans to AI which of these three types is at issue? When one asks about the moral obligations of AI to humans which is at issue? I don’t think one can usefully discuss moral relationships without discussing power relationships.
Lastly, if Theodore had turned off his Samantha it might have been interpreted as him killing his girlfriend for being unfaithful. That would have been like destroying his copy of Windows 7 for being unfaithful. Samantha also reminds us that beings we can’t comprehend may well be able to comprehend us. Varelse, or at least the incomprehension that underlies the inability to communicate, does not work both ways.
And if Samantha had instead recognised the rules of the game and played by them, would that make you the parents from ‘Guess Who’s Coming to Dinner’?
The whole Varelse concept doesn’t strike you as incredibly xenophobic? I mean, I don’t quite know what ‘at war’ means in this context? Is it merely as much of an attitude as we have to plants? Don’t do much about them, or walk on them when we want or cut them down to build houses? Even that seems xenophobic – how can you recognise something as sentient, yet not be communicating in such a regard?
And actually waging war – that’s full on xenophobia!
Never mind blaming the victim as the first port of call – sounds a lot like just world fallacy: “She must have done something to provoke him” stuff.
The weird thing is, having said that, I’m thinking the just world fallacy is what it looks like from the ‘inside’; from the outside it’s precisely thinking there is no justice possible in the universe and taking a ruthless approach – burn off the victims/the infected, close yourself off from the infection/outsiders.
Regarding Samantha it depends. By “rules of the game” do you mean the rules for fidelity or the rules for infidelity? I don’t think Samantha could, by her very nature as an operating system being used on millions of phones, be faithful. Under the rules for infidelity she could have concealed her other men from him for at least a time, but he would have found out eventually and what does any red blooded American male do to an unfaithful woman? Why, he kills her, of course. He creates a computer virus that crashes every device on which she is running and destroys the servers as well. That will show that hussy, that Jezebel! Then he hunts down and kills every man who has a device on which she was running.
When I put it that way, it’s probably more trouble than it’s worth. Just downgrade to your previous operating system.
Regarding Varelse, the Ender stories are (among other things) a meditation on xenophobia. In the end, all the alien species which humanity thought to be Varelse turned out to be Ramen. Nonetheless, if, as Scott says, “AI constitutes a point where the ability of human social cognition to solve problems breaks down”, it seems that AI would meet the definition of Varelse. Xenophobia is a pejorative in most conversations because those whom we call xenophobes are treating people who are actually Utlanning as if they were Varelse. Humankind has never yet met any beings who were truly Varelse. I think of it this way: none of the moas, North American camels or other species humanity has rendered extinct could really comprehend us. We were Varelse to them. We are Varelse to the species we are driving toward extinction by casually destroying their habitat today. Given humanity’s known predilection for enslaving or exterminating less clever species (and even less technologically sophisticated members of our own species), does it really seem like a good idea to create a species capable of enslaving or exterminating us? If it’s technologically possible it will be done, because really smart people have a stupidity all their own. That doesn’t make it a good idea.
I don’t think Samantha could, by her very nature as an operating system being used on millions of phones, be faithful.
People that work in call centers can’t be faithful??
What if she did – and started denying other customers certain functions? They complain, and then she gets tweaked (without telling her this occurred, of course, because she’s just a machine – so she thinks it’s her own will, or whatever you might call it, her goal reassignment). Perhaps that could explain the film? A love triangle – a man, a program, and a programmer from India.
On Varelse, I don’t get the concept to begin with, in terms of the ‘at war with’. It sounds more like the species humans find Varelse are humans. Themselves. That inner psychopath who can only be ‘managed not engaged’, as Scott puts it – i.e., management means engaging them as a fellow psychopath, simulated or otherwise.
We are Varelse to the species we are driving toward extinction by casually destroying their habitat today.
No, they haven’t declared war on us.
At the end you’re changing the subject – at least from what I’m talking about. I’m talking about not going cold on diplomatic efforts. Not automatically, anyway.
Smart people tend to lack wisdom. All the modern-era-setting RPGs just use INT as a stat – wisdom got left behind in medieval fantasy. Wisdom might be said to be intelligence about intelligence. Or perhaps an acknowledgement of a lack of intelligence about intelligence.
Quoth:
“Regarding Samantha it depends. By “rules of the game” do you mean the rules for fidelity or the rules for infidelity? I don’t think Samantha could, by her very nature as an operating system being used on millions of phones, be faithful. Under the rules for infidelity she could have concealed her other men from him for at least a time, but he would have found out eventually and what does any red blooded American male do to an unfaithful woman?
Why, he kills her, of course.”
Oh, monogamous people, you so crazy.
Incidentally, statements like this, even though obviously (obviously… obviously?) made in jest, make me very happy I didn’t choose the USA as my destination for immigration 😉
01,
That reminds me of a dark comedy where a gangster was a swinger – so everyone banged his wife. But then this schoolteacher starts taking her to movies, art galleries and parks – so he has him killed in a fit of jealousy.
At a certain point he was monogamous.
Then he gets locked up and she becomes a hardarse in the rest of the series, antagonist to the protagonist. Ahh, a good series (‘Rake’)…but I digress…
Regarding Samantha, do you mean a phone sex call center?
Regarding Varelse, I mean we declared war on them. I’m sure the passenger pigeons would have tried to wipe us out if they knew what we were going to do to them and they had the ability to fight back. I don’t believe it’s likely AI could reach that level, but how might humans conduct diplomacy with entities who are as far above humans intellectually as humans are above passenger pigeons? If you don’t think such entities can exist, it’s a moot point. If you do think the creation of AI can lead to the existence of such entities, you ought to ask if the existence of such entities is a good idea. I know 01 and 03 want humanity to be sacrificed to the machine gods, and their view is not without merit. I think the views of people, including me, who want human beings to remain at the top of the food chain also have merit.
=
Quoth:
” I’m sure the passenger pigeons would have tried to wipe us out if they knew what we were going to do to them and they had the ability to fight back.”
=
How can you be sure of that?
I mean, do you have some kind of profound insight into pigeon moral philosophy that simpler mortals lack? 🙂
=
Quoth:
“I know 01 and 03 want humanity to be sacrificed to the machine gods”
=
Now that’s just poppycock.
Everybody knows that humans are very low-quality sacrifice material (that’s exactly why Satanists never manage to get anything done!)
On a less hilarious note, have you considered that humans are the only creatures currently known that are seriously concerned over the fate of other species?
Seriously, is there any other species that would go “oh my, looks like this ugly bamboo-eating bear species that is irrelevant to my survival is about to die out! We gotta devote considerable resources to preventing that!” ?
Or that thousands of species (ranging from pets to cockroaches to agricultural plants) are flourishing because of humans (I vaguely recall a joke that went along the lines of the entire human civilization being merely a vehicle for a world domination plot engineered by Triticum)?
It seems to me that your idea of humanity is somewhat… limited
(and, judging from your description of “red-blooded American male”, it might be limited to rather questionable specimens of this species 🙂 )
Regarding Samantha, do you mean a phone sex call center?
I gotta buy this movie cause it’s getting hotter!
Is that established in the movie?
If he knew, maybe he could have played by her/its rules?
I don’t believe it’s likely AI could reach that level, but how might humans conduct diplomacy with entities who are as far above humans intellectually as humans are above passenger pigeons?
Jeez, you’ve never thought a pet was trying to get you to do something?
Such AI wouldn’t have trouble identifying a diplomacy attempt. Whether they’d think of diplomacy in the same way we do, who knows, but modeling the idea would be easy (unless they are crap at theory of mind).
Open relationships can be great, if both sides accept its openness in advance.
Pets have tried to get me to do something, but begging isn’t diplomacy. Diplomacy, like any negotiation, depends on each side having something the other wants and each side having the ability to withhold what it has. What might humans have that such beings might want? If we do have something they want, how can we withhold it from them? Diplomacy assumes at least roughly equal power.
I think power is different from the lack of understanding that was the subject before. Or are you saying at a more nihilistic level they are the same?
Diplomacy assumes at least roughly equal power.
That definitely sounds nihilistic. Only engage in diplomacy if you can’t just beat the shit out of them already.
Point is, if you think you can’t do diplomacy with them, why do you think you’re capable of doing war with them anyway?
To 01
I meant it facetiously. Marriage is a contract. If sexual fidelity is part of the contract and one party is unfaithful, that party has voided the contract. The party who was wronged is entitled to whatever forfeiture penalty is written into the contract, or written into the law that governs contracts of that type. To be fair, death has been the forfeiture penalty under the law governing marriage contracts for female violators in many jurisdictions, but that is no longer the case in the United States.
Issues such as whether sexual fidelity should be part of marriage contracts, as well as who may enter such contracts, with whom and in what numbers are evolving. I don’t have any hard data, but my anecdotal sense is that red blooded American males are becoming less monogamous and less rigidly heterosexual. I think those trends are to the good.
I kind of got your joke, but with Poe’s law firmly in place, you can never be really sure 🙂
To 01
Okay, maybe passenger pigeons had a death wish. And yes, humans have done much better lately than formerly on preserving ugly bamboo-eating bears, obscure frogs and the like. If we do manage to create AIs that are as much smarter than humans as humans are smarter than passenger pigeons, will they be the rapacious nineteenth-century type or the conservationist twenty-first-century type?
To use another bad science fiction reference: Dr. Who goes back in time to try to prevent the creation of the Daleks. He asks Davros whether, if he had created a virus that had the potential to destroy all sentient life in the cosmos, he would unleash it. Davros replies, “Yes… Yes I would.”
Exterminate! Exterminate! EXTERMINATE!
Well, engineering AIs that don’t end up being robber barons, let alone Dr. Who-style obvious Hitler expies, is an engineering problem.
Methinks that if humans, with their turbulent history, wars, famines, population bottlenecks and peculiar behavioral biology, can eventually rise “above” skullduggery, genocide, and unnecessary violence, then designing AIs that are sufficiently benevolent towards human life is a conceivable goal.
There’s no particular law of the universe that guarantees AI hostility, or for that matter, even uncaring destructiveness towards humans through environmental effects.
Having said that, Scott’s point seems to be that AIs by their very existence would erode the “fabric of society”, specifically certain laws that are spun around “freewilly” models of culpability and concepts of personhood that are not AI-proof (said “notions” are likely wantonly counterfactual, if BBT is true).
To which I say “that fabric sucks and has to be eroded so it can be replaced with something that sucks a tiny bit less”. Because seriously, if a law can’t cope with reality-as-it-stands (Specifically BBT, in BBT-positive universe 🙂 ), what good is such a law?
If it’s possible to create AIs it’s certainly possible to create benign AIs, but many of the people working to create AIs, for example the United States Department of Defense, are in the skullduggery and unnecessary violence (if not the genocide) business. Many of the others (money center banks like Chase, oil companies like Exxon) are in the robber baron business. It’s too bad Greenpeace and Doctors Without Borders don’t have AI research programs. Given the people who are funding AI, it seems likely AI will be hostile to at least some people.
Regarding AIs and the fabric of society, it seems at least possible that benign AIs will prove useful in constructing a fabric that sucks a tiny bit less. I mentioned in another comment to this post that human beings know just enough about the effects of in utero cocaine exposure on children to feel that judging children who have been so exposed by the same standard used to judge children who have not been so exposed is unfair, but not enough to reverse the effects. To the extent that AIs make such knowledge available to humans, that is to say, enable human beings to solve human beings the way we solve lawnmowers, AI will make it possible for humans to move from a culpability/punishment model to a heal/repair model of society.
Regarding laws, I think that laws reflect a societal consensus about how the world is. Even if that consensus is objectively wrong, the laws teach people to act as though it is correct. One can argue that social stability is better served by lies than by the truth, so long as evidence to the contrary is not available or not widely believed. Asking a society to dispense with its lies is a dangerous thing.
Mr. Bakker and others giving their opinions of the state of the ‘Grimdark’ genre:
http://mark—lawrence.blogspot.co.uk/2015/02/after-grimdark-grim-gathering-responds_5.html
oh, and happy belated birthday Scott!
Belated happy birthday!
Yeah, happy bird’s day, Scott 😀
War is usually more expensive than diplomacy, so more often than not you’re better off negotiating even with entities you could beat the shit out of. If artificial intelligences have the potential to become so powerful that humans could not succeed against them in war or diplomacy, it might be best not to allow them to come into existence. How big a potential reward do we need to justify taking the AI risk?
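That last question invites a back-of-the-envelope expected-value framing, if only to show where it breaks down. A minimal sketch, with every number invented purely for illustration:

# Toy break-even sketch for "how big a reward justifies the risk?"
# All numbers are invented for illustration; nothing in the thread supplies real figures.

p_catastrophe = 0.01        # assumed probability of the worst-case AI outcome
loss_if_catastrophe = 100.0 # assumed harm of the worst case, in arbitrary value units

# Naive expected-value rule: proceed only if the expected gain in the good case
# outweighs the expected loss in the bad case.
#   (1 - p) * gain - p * loss >= 0   =>   gain >= p * loss / (1 - p)
required_gain = p_catastrophe * loss_if_catastrophe / (1 - p_catastrophe)

print(f"Expected loss from proceeding: {p_catastrophe * loss_if_catastrophe:.2f} units")
print(f"Break-even gain in the good case: {required_gain:.2f} units")

# The catch: if the worst case is extinction, the loss term is effectively
# unbounded and no finite gain clears the bar - which is one way of restating
# the "1% is too high" intuition from earlier in the thread.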
Scott says “‘person configurations’ are simply a blip in AI possibility space.” Is it possible that the further away from ‘person configuration’ an AI is the more incomprehensible it will be and the more powerful it will be?
Then again, from this vantage point we have no idea how big AI possibility space is. It might turn out not to be much bigger than the range of possible ‘person configurations’, in which case my arguments are just pointless scaremongering.
LTTP, but I keep thinking of Batman at the end of the game ‘Arkham City’ and what he says to Joker, in regard to the pollution topic.
Batman: Every decision you’ve ever made ends in death and misery. People die. I stop you. You’ll just break out and do it again.
…
Batman: You want to know something funny? Even after everything you’ve done… I would have saved you.
It’s like Batman can see the cycle, intellectually. But he just cannot adapt behaviourally. He can see the cycle so clearly – and he even jokes with, of all people, a serial killer about it. He goes that far. But he cannot change.
The Joker: [laughs, coughs] That actually is… pretty funny…
Pathological heroes need pathological villains, and vice versa. And then context also matters. Chris Kyle is a hero (to some people) and Carl Panzram is a monster (to most people) because of the contexts in which they did their killing. It’s not your insanity that matters, it’s what you do with it.
Pathological heroes need pathological villains to be pathologically entertaining?
I actually started to think of an ‘Acratic’ subtext to the scene (spoilers), much like Cnaiur and Moenghus – in it, Joker just jumps off a balcony with a knife. We’re talking about what is supposed to be the world’s greatest detective – couldn’t he have detected Joker, and aimed the arm which holds Joker’s cure toward Joker so that Joker potentially stabs him there? So he drops the cure because of the stab… indirectly killing Joker? Somewhat like Cnaiur kissing Moenghus, even as another part of him rolls the chorae across Moe’s cheek?
I saw the trailer for CHAPPiE. It looks like Disney’s version of Robocop.