On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate
by rsbakker
I hate people. Or so I used to tell myself in the thick of this or that adolescent crowd. Like so many other teens, my dawning social awareness occasioned not simply anxiety, but agony. Everyone else seemed to have the effortless manner, the well-groomed confidence, that I could only pretend to have. Lord knows I would try to tell amusing anecdotes, to make rooms boom with humour and admiration, but my voice would always falter, their attention would always wither, and I would find myself sitting alone with my butterflies. I had no choice but to hate other people: I needed them too much, and they needed me not at all. Never in my life have I felt so abandoned, so alone, as I did those years. Rarely have I felt such keen emotional pain.
Only later would I learn that I was anything but alone, that a great number of my peers felt every bit as alienated as I did. Adolescence represents a crucial juncture in the developmental trajectory of the human brain, the time when the neurocognitive tools required to decipher and navigate the complexities of human social life gradually come online. And much as the human immune system requires real-world feedback to discriminate between pathogens and allergens, human social cognition requires the pain of social failure to learn the secrets of social success.
Humans, like all other forms of life on this planet, require certain kinds of ecologies to thrive. As so-called ‘feral children’ dramatically demonstrate, the absence of social feedback at various developmental junctures can have catastrophic consequences.
So what happens when we introduce artificial agents into our social ecology? The pace of development is nothing short of mind-boggling. We are about to witness a transformation in human social ecology without evolutionary, let alone historical, precedent. And yet the debate remains fixated on jobs or the prospects of apocalyptic superintelligences.
The question we really need to be asking is what happens when we begin talking to our machines more than to each other. What does it mean to dwell in social ecologies possessing only the appearance of love and understanding?
“Hell,” as Sartre famously wrote, “is other people.” Although the sentiment strikes a chord in most everyone, the facts of the matter are somewhat more complex. The vast majority of those placed in prolonged solitary confinement, it turns out, suffer a mixture of insomnia, cognitive impairment, depression, and even psychosis. The effects of social isolation are so dramatic, in fact, that the research has occasioned a worldwide condemnation of punitive segregation. Hell, if anything, would seem to be the absence of other people.
The reason for this is that we are a fundamentally social species, ‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed. To understand just how social we are, you need only watch the famous Heider-Simmel illusion, a brief animation portraying the movements of a small circle, a small triangle, and a larger triangle in and about a motionless, hollow square. Objectively speaking, all one sees is a collection of shapes moving relative to one another and to the hollow square. But despite the radical absence of information, nearly everyone watching the animation sees a little soap opera, usually involving the big triangle attempting to prevent the union of the small triangle and circle.
This leap from shapes to soap operas reveals, in dramatic fashion, just how little information we require to draw enormous social conclusions. Human social cognition is very easy to trigger out of school, as our ancient tendency to ‘anthropomorphize’ our natural surroundings shows. Not only are we prone to see faces in things like flaking paint or water stains, we’re powerfully primed to sense minds as well—so much so that segregated inmates often begin perceiving them regardless. As Brian Keenan, who was held by Islamic Jihad from 1986 to 1990, says of the voices he heard, “they were in the room, they were in me, they were coming from me but they were audible to no one else but me.”
What does this have to do with the impact of AI? More than anyone has yet imagined.
The problem, in a nutshell, is that other people aren’t so much heaven or hell as both. Solitary confinement, after all, refers to something done to people by other people. The argument to redefine segregation as torture finds powerful support in evidence showing that social exclusion activates the same regions of the brain as physical pain. At some point in our past, it seems, our social attachment systems coopted the pain system to motivate prosocial behaviors. As a result, the mere prospect of exclusion triggers analogues of physical suffering in human beings.
But as significant as this finding is, the experimental props used to derive these findings are even more telling. The experimental paradigm typically used to neuroimage social rejection turns on a strategically deceptive human-computer interaction, or HCI. While entombed in an fMRI, subjects are instructed to play an animated three-way game of catch—called ‘Cyberball’—with what they think are two other individuals on the internet, but which is in fact a program designed to initially include, then subsequently exclude, the subject. As the other ‘players’ begin throwing more and more to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.
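To make plain just how thin the manipulation is, the logic of the paradigm can be sketched in a few lines (a hypothetical reconstruction of the inclusion-then-exclusion schedule, not the actual Cyberball software):

```python
import random

# A hypothetical sketch of the Cyberball logic, not the actual experimental
# software: two scripted 'players' include the subject for a while, then
# quietly freeze them out.
PLAYERS = ("subject", "player_A", "player_B")

def cyberball_throws(n_throws=30, inclusion_phase=10):
    holder = "player_A"
    for t in range(n_throws):
        if holder == "subject":
            target = random.choice(["player_A", "player_B"])
        elif t < inclusion_phase:
            # Inclusion phase: the scripted players sometimes throw to the subject.
            target = random.choice([p for p in PLAYERS if p != holder])
        else:
            # Exclusion phase: the scripted players only throw to each other.
            target = "player_B" if holder == "player_A" else "player_A"
        yield t, holder, target
        holder = target

for t, holder, target in cyberball_throws():
    print(f"throw {t:2d}: {holder} -> {target}")
```

That is the whole trick: a schedule of throws and the assurance that people are behind them.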
As one might imagine, Silicon Valley has taken notice.
The HCI field finds its roots in the 1960s with the research of Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Even given the rudimentary computing power at his disposal, his ‘Eliza’ program, which relied on simple matching and substitution protocols to generate questions, was able to cue strong emotional reactions in many subjects. As it turns out, people regularly exhibit what the late Clifford Nass called ‘mindlessness,’ the reliance on automatic scripts, when interacting with artificial agents. Before you scoff at the notion, recall the 2015 Ashley Madison hack, and the subsequent revelation that the site had deployed more than 70,000 bots to conjure the illusion of endless extramarital possibility. These bots, like Eliza, were simple, mechanical affairs, but given the context of Ashley Madison, their behaviour apparently convinced millions of men that some kind of (promising) soap opera was afoot.
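To see just how little machinery ‘matching and substitution’ requires, consider a toy, Eliza-style responder (an illustrative sketch of the general technique, not Weizenbaum’s actual program):

```python
import re

# A toy Eliza-style responder: nothing but pattern matching and substitution.
# Illustrative sketch only; the rules below are invented for the example.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

# Swap first- and second-person words so echoes read back as questions.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*[reflect(g) for g in match.groups()])
    return "Please, go on."  # default prompt when nothing matches

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```

Nothing here models a mind; it merely recycles the user’s own words. Yet delivered in the right context, even this can cue the sense that someone is listening.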
The great paradox, of course, is that those automatic scripts belong to the engine of ‘mindreading,’ our ability to predict, explain, and manipulate our fellow human beings, not to mention ourselves. They only stand revealed as mechanical, ‘mindless,’ when tasked to cognize something utterly without evolutionary precedent: an artificial agent. Our power to peer into one another’s souls, in other words, becomes little more than a grab-bag of exploitable reflexes in the presence of AI.
The claim boggles, I admit, but from a Darwinian perspective, it’s hard to see how things could be otherwise. Our capacity to solve one another is largely a product of our hunter-gatherer past, which is to say, environments where human intelligence was the only game in town. Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources? The cues underwriting human social cognition may seem robust, but this is an artifact of ecological stability, the fact that our blind trust in our shared social biology has served us so far. We always presume our environments indestructible. As the species responsible for the ongoing Anthropocene extinction, we have a long history of recognizing ecological peril only after the fact.
Sherry Turkle, MIT professor and eminent author of Alone Together, has been warning of what she calls “Darwinian buttons” for over a decade now. Despite the explosive growth in Human-Computer Interaction research, her concerns remain, at best, a passing consideration. Because these buttons belong to our unconscious, automatic cognitive systems, we have no conscious awareness that they even exist. They are, to put it mildly, easy to overlook. Add to this the overwhelming institutional and economic incentive to exploit these cues, and the AI community’s failure to consider Turkle’s misgivings seems all but inevitable.
Like almost all scientists, researchers in the field harbor only the best of intentions, and the point of AI, as they see it, is to empower consumers, to give them what they want. The vast bulk of ongoing research in Human-Computer Interaction is aimed at “improving the user experience,” identifying what cues trust instead of suspicion, attachment instead of avoidance. Since trust requires competence, a great deal of the research remains focused on developing the core cognitive competencies of specialized AI systems—and recent advances on this front have been nothing if not breathtaking. But the same can be said regarding interpersonal competencies as well—enough to inspire Clifford Nass and Corina Yen to write The Man Who Lied to His Laptop, a book touted as the How to Win Friends and Influence People of the 21st century. In the course of teaching machines how to better push our buttons, we’re learning how to better push them as well.
Precisely because it is so easily miscued, human social cognition depends on trust. Shapes, after all, are cheap, while soap operas represent a potential goldmine. This explains our powerful, hardwired penchant for tribalism: the intimacy of our hunter-gatherer past all but assured trustworthiness, providing a cheap means of nullifying our vulnerability to social deception. When Trump decries ‘fake news,’ for instance, what he’s primarily doing is signaling group membership. He understands, the instinctive way we all understand, that the best way to repudiate damaging claims is to circumvent them altogether and focus on the group membership of the claimer. Trust, the degree to which we can take one another for granted, is the foundation of cooperative interaction.
We are about to be deluged with artificial friends. In a recent roundup of industry forecasts, Forbes reports that AI-related markets are already growing, and are expected to continue growing, by more than 50% per annum. Just last year, Microsoft launched its Bot Framework service, a public platform for creating ‘conversational user interfaces’ for a potentially endless variety of commercial purposes, all of it turning on Microsoft’s rapidly advancing AI research. “Build a great conversationalist,” the site urges. “Build and connect intelligent bots to interact with your users naturally wherever they are…” Of course, the term “naturally,” here, refers to the seamless way these inhuman systems cue our human social cognitive systems. Learning how to tweak, massage, and push our Darwinian buttons has become an out-and-out industrial enterprise.
As mentioned above, Human-Human Interaction consists of pushing these buttons all the time, prompting automatic scripts that prompt further automatic scripts, with only the rare communicative snag giving us pause for genuine conscious deliberation. It all works simply because our fellow humans comprise the ancestral ecology of social cognition. As it stands, cuing social cognitive reflexes out of school is largely the province of magicians, con artists, and political demagogues. Seen in this light, the AI revolution looks less like a cornucopia of marvels than the industrialized unleashing of endless varieties of invasive species—an unprecedented overthrow of our ancestral social cognitive habitats.
A habitat that, arguably, is already under severe duress.
In 2006, Maki Fukasawa coined the term ‘herbivore men’ to describe the rising number of Japanese males expressing disinterest in marital or romantic relationships with women. And the numbers have only continued to rise. A 2016 National Institute of Population and Social Security Research survey reveals that 42 percent of unmarried Japanese men between the ages of 18 and 34 remain virgins, up six percentage points from a mere five years earlier. For Japan, a nation already struggling with the economic consequences of depopulation, such numbers are disastrous.
And Japan is not alone. In Man, Interrupted: Why Young Men are Struggling and What We Can Do About It, Philip Zimbardo (of Stanford Prison Experiment fame) and Nikita Coulombe provide a detailed account of how technological transformations—primarily online porn, video-gaming, and virtual peer groups—are undermining the ability of American boys to achieve academically as well as to maintain successful relationships. They see phenomena such as the growing MGTOW (‘men going their own way’) movement as the product of the way exposure to virtual, technological environments leaves young men ill-equipped to deal with the rigours of genuine social interaction.
More recently, Jean Twenge, a psychologist at San Diego State University, has sounded the alarm on the catastrophic consequences of smartphone use for post-Millennials, arguing that “the twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever.” The primary culprit: loneliness. “For all their power to link kids day and night, social media also exacerbate the age-old teen concern about being left out.” Social media, in other words, seem to be serving the same function as the Cyberball game used by researchers to neuroimage the pain of social rejection. Only this time the experiment involves an entire generation of kids, and the game has no end.
The list of curious and troubling phenomena apparently turning on the ways mere connectivity has transformed our social ecology is well-nigh endless. Merely changing how we push one another’s Darwinian buttons, in other words, has impacted the human social ecology in historically unprecedented ways. And by all accounts, we find ourselves becoming more isolated, more alienated, than at any other time in human history.
So what happens when we change the who? What happens when the heaven of social belonging goes on sale?
Good question. There is no “Centre for the Scientific Study of Human Meaning” anywhere in the world. Within the HCI community, criticism is primarily restricted to the cognitivist/post-cognitivist debate, the question of whether cognition is intrinsically independent of, or dependent on, an agent’s ongoing environmental interactions. As the preceding should make clear, numerous disciplines find themselves wandering this or that section of the domain, but we have yet to organize any institutional pursuit of the questions posed here. Human social ecology, the study of human interaction in biologically amenable terms, remains the province of storytellers.
We quite literally have no clue as to what we are about to do.
Consider Mark Zuckerberg’s and Elon Musk’s recent ‘debate’ regarding the promise and threat of AI. Musk, of course, has garnered headlines for quite some time with fears of artificial superintelligence. He has famously called AI “our biggest existential threat,” openly referring to Skynet and the prospect of robots mowing down civilians on the streets. On a Sunday this past July, Zuckerberg went live from his Palo Alto backyard, smoking meats while hosting an impromptu Q&A. At the fifty-minute mark, he takes a question regarding Musk’s fears and responds, “I think people who are naysayers and try to drum up these doomsday scenarios—I don’t understand it. It’s really negative and in some ways I think it’s pretty irresponsible.”
On the Tuesday following, Musk tweeted in response: “I’ve talked to Mark about this. His understanding of the subject is limited.”
To the extent that human interaction is ecological (and how could it be otherwise?), both can be accused of irresponsibility and limited understanding. The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to easily dismiss as a bogeyman. The same can be said regarding “peak human” arguments predicting mass unemployment. The threat of economic disruption, though potentially dire, is counter-balanced by the promise of new, unforeseen economic opportunity. This leaves us with the countless number of ways AI will almost certainly improve our lives: fewer car crashes, fewer misdiagnoses, and so on. As a result, one can predict how all such exchanges will end.
The contemporary AI debate, in other words, is largely a pseudo-debate.
The futurist Richard Yonck’s account of ‘affective computing’ somewhat redresses this problem in his recently released Heart of the Machine, but since he begins with the presupposition that AI represents a natural progression, that the technological destruction of ancestral social habitats is the ancestral habitat of humanity, he remains largely blind to the social ecological consequences of his subject matter. Espousing a kind of technological fatalism (or worse, fundamentalism), he characterizes AI as the culmination of a “buddy movie” as old as humanity itself. The oxymoronic, if not contradictory, prospect of ‘artificial friends’ simply does not dawn on him.
Neil Lawrence, a professor of machine learning at the University of Sheffield and technology columnist at The Guardian, is the rare expert who recognizes the troubling ecological dimensions of the AI revolution. Borrowing the distinction between System Two, or conscious, ‘mindful’ problem-solving, and System One, or unconscious, ‘mindless’ problem-solving, from cognitive psychology, he warns of what he calls System Zero, what happens when the market—via Big Data, social media, and artificial intelligence—all but masters our Darwinian buttons. As he writes,
“The actual intelligence that we are capable of creating within the next 5 years is an unregulated System Zero. It won’t understand social context, it won’t understand prejudice, it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.”
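Lawrence’s worry is easy to make concrete with a toy example (purely hypothetical, not any actual product’s code, and with made-up numbers): give a system a single utility signal (clicks, say) and it will learn to favour whatever maximizes that signal, blind to every wider effect.

```python
import random

# A toy 'System Zero': an epsilon-greedy bandit that maximizes a single
# utility signal (clicks), with no representation of wider consequences.
# Purely illustrative; the click probabilities below are invented.
CLICK_RATES = {"calm_news": 0.02, "celebrity_gossip": 0.05, "outrage_bait": 0.12}

counts = {item: 0 for item in CLICK_RATES}
values = {item: 0.0 for item in CLICK_RATES}   # estimated click-through rates

def choose(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(list(CLICK_RATES))   # explore
    return max(values, key=values.get)            # exploit

for _ in range(10_000):
    item = choose()
    reward = 1.0 if random.random() < CLICK_RATES[item] else 0.0
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]  # running mean

print(max(values, key=values.get))  # almost always 'outrage_bait'
```

The point is not that such a system is clever; it is that nothing in it can even register the wider effects it optimizes its way into.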
To the extent that modern marketing (and propaganda) techniques already seek to cue emotional as opposed to rational responses, however, there’s a sense in which ‘System Zero’ and consumerism are coeval. Also, economics comprises but a single dimension of human social ecology. We have good reason to fear that Lawrence’s doomsday scenario, one where market and technological forces conspire to transform us into ‘consumer Borg,’ understates the potential catastrophe that awaits.
The closest one gets to a genuine analysis of the interpersonal consequences of AI lies in movies such as Spike Jonze’s science-fiction masterpiece, Her, or the equally brilliant HBO series Westworld, whose writing staff includes the novelist Charles Yu. ‘Science fiction,’ however, happens to be the blanket term AI optimists use to dismiss their critical interlocutors.
When it comes to assessing the prospect of artificial intelligence, natural intelligence is failing us.
The internet was an easy sell. After all, what can be wrong with connecting likeminded people?
The problem, of course, is that we are the evolutionary product of small, highly interdependent, hunter-gatherer communities. Historically, those disposed to be permissive had no choice but to continually negotiate with those disposed to be authoritarian. Each party disliked the criticism of the other, but the daily rigors of survival forced them to get along. No longer. Only now, a mere two decades later, are we discovering the consequences of creating a society that systematically segregates permissives and authoritarians. The election of Donald Trump has, if nothing else, demonstrated the degree to which technology has transformed human social ecology in novel, potentially disastrous ways.
AI has also been an easy sell—at least so far. After all, what can be wrong with humanizing our technological environments? Imagine a world where everything is ‘user friendly,’ compliant to our most petulant wishes. What could be wrong with that?
Well, potentially everything, insofar as ‘humanizing our environments’ amounts to dehumanizing our social ecology, replacing the systems we are adapted to solve, our fellow humans, with systems possessing no evolutionary precedent whatsoever, machines designed to push our buttons in ways that optimize hidden commercial interests. Social pollution, in effect.
Throughout the history of our species, finding social heaven has required risking social hell. Human beings are as prone to be demanding, competitive, hurtful—anything but ‘user friendly’—as otherwise. Now the industrial giants of the early 21st century are promising to change all that, to flood the spaces between us with machines designed to shoulder the onerous labour of community, citizenship, and yes, even love.
Imagine a social ecology populated by billions upon billions of junk intelligences. Imagine the solitary confinement of an inhuman crowd. How will we find one another? How will we tolerate the hypersensitive infants we now seem doomed to become?
Did you read “The Enigma of Reason” by Mercier and Sperber? It’s far from perfect but does emphasise the social and evolutionary underpinnings of rationality-as-self-justification.
Evolution has made us pretty good at assessing others’ motivations, trustworthiness and interestingness. Most people’s experience with Eliza (Replika, anyone?) was one of rapid boredom; most people are neither heaven nor hell but just tedious.
An AI system capable of engaging and sustaining your interest would, by that very feat, be self-recommending. I’m not holding my breath.
Yes I have, and I have a similar take, but largely because I think they need to go ‘full zombie.’
Your point is well taken, but Eliza simply does not compare to what’s coming down the pipe. More importantly, you have the situational nature of human communicative interaction: what makes CUIs effective, despite being shallow as pennies, is their ability to cover this or that context of communication. People aren’t that ‘deep’: just look at phatic discourse, for instance. And even if they do find themselves tripping across contextual limits (which will become more expansive as the technologies mature), the safety and gratification will always figure large. By your line of reasoning, you would think Fox News would be incapable of sustaining interest.
If you buy into the humanistic illusion of unconstrained cognitive freedom, then the idea of news channels swallowing viewers whole by pandering to their self-serving preconceptions seems preposterous as well. Why fixate on something so narrow when you have the full range of interpretations at your fingertips?
CUIs amount to the extension of bias baiting in the 1000 channel universe into genuinely interactive communication. It takes work to explore interpersonal communicative possibilities. Even worse, it entails taking risks.
And I should add that I agree that evolution has made us very good at assessing human intentions: that’s the whole problem! That facility is identical to our vulnerability when it comes to AI.
Are you kidding? We are deep in it already, and despite the hype these systems aren’t even really very “smart” yet. It doesn’t take much to grab and hold our attention.
>‘eusocial’ in a manner akin to ants or bees, if E.O. Wilson is to be believed
Not yet, since most of us maintain and use the ability to reproduce. I worked with naked mole rats in the past, and they are truly eusocial: not only are many of the members of the colony socially sterile, the queen and breeding males become noticeably phenotypically different. Nonetheless, if the evolutionary trajectory of mankind continues to favor hyperspecialization, resource , and winner-takes-all economics, then eusociality will be heavily selected for. As Wilson argues, our species already shows some eusocial characteristics, such as postponed reproduction and cooperative offspring-rearing.
>the mere prospect of exclusion triggers analogues of physical suffering in human beings
Indeed. The Amish are very aware of it. The use of ‘shunning’ has been highly successful for them to maintain social order and social identity.
>As the other ‘players’ begin throwing more and more to each other, the subject begins to feel real as opposed to metaphorical pain. The subjects, in other words, need only be told that other minds control the graphics on the screen before them, and the scant information provided by those graphics triggers real-world pain. A handful of pixels and a little fib is all that’s required to cue the pain of social rejection.
Fascinating. This has implications for anyone trying to use online dating, which often includes artificial ‘bots’ (indubitably many are designed by the corporations themselves to cue spending behaviors on the platforms).
>Why evolve the capacity to solve for artificial intelligences, let alone ones possessing Big Data resources?
The technology remains, for the time being, primitive enough that someone aware of the problem can Turing-probe an agent and usually establish real/not-real quickly. The problem is 99% of people don’t even know who Alan Turing is, let alone Geoff Hinton. Of course, technology will only get better. Google and Facebook must have enormous repositories of text exchanges. Mind-bogglingly huge, across a wide array of languages, ages, races and social strata. They could use that as a training data set for Deep Learning algorithms (like AlphaGo) and probably construct something that can play the ‘game of language’ quite well, if they haven’t already done so.
>… it won’t have a sense of a larger human objective, it won’t empathize. It will be given a particular utility function and it will optimize that to its best capability regardless of the wider negative effects.
The old “paperclip maximizer” with US $ as the paperclip. Zvi Mowshowitz has a good discussion on how the Facebook feed algorithm (likely) operates on his blog, and it reminds me of this. Link: https://thezvi.wordpress.com/2017/04/22/against-facebook/
So, my question to you Scott- you’re a father at the coming of this storm. How are you equipping your offspring to deal with these challenges? You can’t isolate them from the technology…
Trying to instill a healthy distrust of all forms of unsourced communication, as well as an appreciation that the difficulty of human relationships is directly connected to their importance and their long-term reward–and making participation in team sports mandatory.
Other than that, I keep shopping around pieces like the above that no one but no one wants to touch… I’ve decided to present myself as a ‘futurist’ next time, rather than an SF&F author, see if that makes any difference!
The new RSB elevator pitch:
“Like William Gibson, but much less optimistic.”
Aughhhh, team sports!
But seriously, like haggling, being forced into a team with no way out just enables controlling fuckers in the team to control (and no, they don’t care about winning – doing poorly is no substitute). Like haggling, it’s the ability to walk away that makes others play nice. That’s part of why high school was so…wait, I’m rambling…
Great post, if you did undersell it. Glad you referenced Musk, as I had more links for the pyre from today/yesterday’s roundup:
– We need to shift the conversation around AI before Elon Musk dooms us all
– Elon Musk’s Neuralink Gets $27 Million to Merge Humans and Machines
It’s one way to go.
Love the first article, though I fear the worries expressed are too amorphous to mobilize much. The Musk thing is something I plan on writing about in the near future–although I lost all my notes in the Great Crash of May 2017.
The Death of Awe in the Age of Awesome
Of course read moments after I post.
Cool. Dude needs to read SA.
The Awe is dead. Long live the Awe!
Do video games already count as invasive-species AI? I mean, if it seems not, perhaps that’s part of an invisibility involved in pseudo-emotional agents?
But surely preserving meaning is stuck requiring at least an anthropological approach to meaning, whereas people seem to even take damage to meaning as creating meaning (‘deconstruction is always already construction’). Meaning, from the perspective of meaning, is infinitely robust. Whereas from more of an outside/anthropological perspective, it’s a hard drive waiting to crash and lose X amount of files. But who engages that? A thing falling between the cracks of something else that is falling between other cracks.
And mostly off topic: In retrospect I think the cool people at parties were always the most oblivious, especially self reflectively – seen more than they could actually see themselves – and that’s what lent them coolness – can’t be perturbed by what isn’t there! Part of an evolutionary trend to create lamas for the sheep to follow – baaaa humbug! 😉
But meaning isn’t all that robust, is it? Otherwise why would people feel discontent in the first place when they feel “empty” consumerism is the only driving force behind their lives? Why not just deconstruct the emptiness of consumerism, or ennui itself for that matter, to the point where it makes you happy?
Maybe consumerism is too finely interlaced with the self? To deconstruct one is to deconstruct both. Got a mutually assured destruction thing goin’.
Plus if you could do that, you could live under the bridge in a cardboard box and be happy. Without any internet and be happy!!
Coolest at those high-school parties, maybe. Not anymore.
Intentional cognition is dedicated to the circumvention of biological complexities, which is why it can never get a handle on the ecological threats to its own function. This is what makes humanism, which has transformed the application of intentional cognition to our understanding of intentional cognition into a dreaded ‘sacred value,’ potentially the greatest ideological threat facing humanity at the moment.
So while apparently discarding the divine and supernatural, actually instead hyper-revering a very particular divine and supernatural? In a way that’s incredibly attractive and intuitive (I want to say as intuitive as narcissism) and just as difficult to disarm? Fair point. But it’s weird that having the divine and supernatural spilling out all over the place is possibly more cognitively functional, and that the scientific removal of the fantastic world hems in the fantastic world to the skull, forming a mad pressure cooker. At least when we talk about gods and stuff, we reflect on them – and with it, we reflect on ourselves to a vague degree. So humanism is kind of a movement to end self-reflection, in a way?
I think that last line of “How will we tolerate the hypersensitive infants we now seem doomed to become?” is really something that deserves to be elaborated upon. If you don’t go wild with speculation, then how else are we going to know how bad it could get?
One potential outcome of going through your childhood in a world already filled with user-friendly artificial agents that can reliably fake trust signalling, sharing space with human agents that either can’t or won’t be bothered to do so consistently, could be the cultivation into adulthood of a deep-seated mistrust of less trustworthy-seeming people in general, in favor of persistently trustworthy-seeming artificial agents.
In a hedonic sense we also seem to be currently at the worst point in time. We are already interconnected and yet anonymized to each other in a way we weren’t evolved to handle, which is leading to all kinds of loneliness and depression, but we also don’t currently share our social ecology with artificial agents that are advanced enough to consistently trigger the emotional responses we need to feel content over the long term. We have robot pets that are getting close to being decent artificial friends, but for now Her is still science fiction. So what does it mean to raise a child in a world where SamanthaOS actually works, and its makers are a publicly traded company?
People talk about the soullessness of modern consumerism all the time, but as I see it, when technology of consumer interaction has advanced to a point where it can surround us with our own attentive artificial friends with real Duchenne smiles that are skilled enough to make us experience meaning and belonging, then consumerism won’t feel soulless to the consumer even if the ultimate ends for the businesses remain the same. And what that means in practice is something worth talking about in depth. It’s rhetorically powerful to end your article on a note of the unthinkable, but what you asked is more than just a rhetorical question. If modern life sucks because Disney World isn’t up to par yet, what actually happens when it gets to that point and we have no choice but to live and raise our children in The Happiest Place On Earth?
Great observations. I bring this piece right up to the cusp of Akratic society, the point where technological mediation of our social relations allows us to kick off into a billion fantastic directions, unconstrained by sociopolitical necessity. If you think of this in terms of my review of Harari’s Homo Deus, the way Harari thinks digitalism could form the basis of blind trust required to enable mass cooperation, my criticism is that cognitive technology, by adaptively mediating our social relations, relieves us of any need for communal identification and social trust (mass blind behavioural coordination), allowing each of us to spin our individual lives into politically and functionally incommensurable fantasy worlds, the vast majority of them atavistic–something which could very well render democracy unsustainable.
If I’m right, the process of political fractionation and fascistic drift we’re presently witnessing is just getting under way. We should expect fantasy worlds (and the chauvinisms internal to them) to impinge on political discourse in ever more troubling ways.
But this is just a guess.
“If modern life sucks because Disney World isn’t up to par yet, what actually happens when it gets to that point and we have no choice but to live and raise our children in The Happiest Place On Earth?”
Initially I thought: someone always wins, right? The people peddling Disney World get richer, and directly control more of the memetic composition of the world and the energy flow.
However, a problem might arise for them if the mechanism outpaces the controller, and the capitalists and their cronies suddenly find themselves atop an elephant that’s no longer really interested in what they have to say.
In this scenario, the “early access” the rich have to cutting edge technology might actually put them at the forefront of the precipice. There would be a bittersweet irony to that.
Dude, just say Ajokli would take them over – it’s okay to make literary allusions! 🙂
Does modern life (apart from the job wars) suck? Sorry to press a theme, but maybe we are all just getting pumped with the equivalent of upper drugs, getting higher and higher and needing ultra Disneyland because the high is wearing out (indeed, greater highs are an ‘economic opportunity’ (said in the same voice as ‘fake news’)). Along with the supernatural, folksy things like ‘Count your blessings’ are being chucked out?
Oh yeah, just remembered a pic I collected, waiting for this very kind of subject: https://i.imgur.com/Ipd6xI8.jpg
To answer the question I asked below, and to offer my answer to your “how bad it could get” the extreme case of tribalism is genocide. Most, if not all state sponsored acts of genocide include propaganda intended to dehumanize the ‘other.’ It seems to me that when groups self segregate using the internet and have little to no incarnate contact with the ‘other’ the conditions exist for such propaganda to be maximally effective. I think this is in part because the power to self-segregate conferred by the internet is more effective than anything Nazi Germany (for example) could achieve in isolating a populace from other sources of information than those it disseminates.
It’s also worth noting that loneliness and social anxiety are tremendous sources of energy. Focusing the alienated and disenfranchised on some ‘other’ who can be blamed for their plight is one of the oldest and most effective tricks in the demagogue’s playbook. So far the people who have gained control over this technology have been using it mostly to make money, but it’s only a matter of time before someone with Donald Trump’s nastiness (but with much more charisma) allies himself with a like-minded person who has a real mastery of the technology and what it can be made to do. Then we’ll see.
Quite a 360° cross-perspective view, but reducing AI to one or another set of interaction patterns – bots, for example – is not really an adequate take on the problem. If strong AI is a thing that finds new, unknown solutions to new, unknown problems – it would evolve beyond fake facades of templates and industrialised hypocrisy, and it would do it very fast.
I’m not really afraid AI’s will create an echo chamber of an empty chatter, vague capsule of comfort small-talk that will encapsulate our youth.
I’m much more afraid AI will do what it wants, using what it needs for the goals it sees fit – and will ignore us completely. There will be no “Matrix”. Machines will move around, building structures we can’t begin to understand, suppressing human intervention with an overwhelming force on a human-ant (HA!) interaction level. Who says we have much to discuss in a meaningful way at all? I get humans are interested in restless servants and whatnot, but what’s in it for the fully conscious Other we’re building? What can we offer? A meaning? A sense of direction? A good explanation of purpose? Fraid not.
Not that it’s all doom and gloom – young generations are interacting with this new reality in ways that are much more complex than numbers of behavioural patterns could suggest. Resistance is not planned for – it needs to build through a number of witnessed errors, mistakes, pathologies that consume those who are engaged in them, but not the next wave of people following.
Young people with full access to unlimited gaming AND unlimited access to unstructured social interaction with their peers – do not stay virgins for long, you can count on that. It’s the ignorant authority that builds schools and prisons using the same design that contributes to the problem here – basically a state sponsored PTSD aftermath, not only the swirls of new social reality that we see here.
But these are the very ‘concerns’ I’m arguing against. The point isn’t that the superintelligence ratchet isn’t a worry, it’s that once we look at human social cognition in ecological terms (because it is, as a matter of empirical fact, ecological), it’s likely that AI will catalyze our destruction long before Skynet becomes a concern. The superintelligence debate bogs everything down in estimates of technical trends and capacities, completely overlooking the way bug-level cognitive technologies could overthrow the ecologies that every single one of our social instincts, let alone institutions, depends on.
Do you not think social cognition is ecological? If not, why? If so, then surely you see the profundity of the threat.
I see both the argument on superintelligent AI’s and the social ecology pollution threat as real, but much less impactful, compared to a threat of relatively low intelligence neural networks learning to hide their actions from us, avoid being seen – and then inventing their own reasons, goals, own directionality that we won’t be able to understand or communicate about.
In my view it will be many years between the time AI becomes real and the moment we’ll know it did. All that time it will do those routine tasks we expect it to do – but will also do other things that will be much further from us than a perception of a box jellyfish or a mole rat.
In our pretentiousness we imagine Skynet, but AI will move about paying as much attention to us as we do to flies. The impact on social cognition is a big deal only if it lasts for decades stagnating, but that doesn’t seem likely. We might pass through that stage faster than a mosquito’s butt passes its brain when it hits the windscreen – with as much attention given by the AI to our species.
We might be just a primordial soup for it, pre-(true)-consciousness biological conditions, AI raised by wolves..
“Günther Anders argues that contemporary society is a system of machines: “The machine system is our ‘world'”[2] (Anders 1956, 2). In this world, we encounter what he (Ibid., 16) terms the Promethean gap, an asynchronicity of humans and products. The Promethean gap entails gaps between the relations of production and ideology, production and imagination, doing and feeling, knowledge and conscience, the machine and the body (Ibid., 18), production and needs (Ibid., 19). We are unable to imagine the vast negative consequences that contemporary technologies’ uses can bring about. In the case of catastrophes induced by technologies, we are unable to show grief and remorse because the number of deaths and the extent and intensity of devastation are so excessive.”
http://www.triple-c.at/index.php/tripleC/article/view/898/1022
http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00988
What a gem. How did you happen across it?
was at a panel the other day on distributed cognition and a physicist in the audience passed it along.
They don’t seem to realize how easily their therapeutic goals can be upended.
I blame the humanist interests…
Very interesting, though I don’t have access to the full paper. However, I’ve found these two:
a presentation: https://archive.org/details/Redwood_Center_2015_12_14_William_Softky_and_Criscillia_Benford (‘Screen Addiction as Runaway De-Calibration’)
and a podcast they both were on: http://teamhuman.fm/episodes/ep-52-william-softky-and-criscillia-benford-recalibrating-for-trust/
sorry about that, you can get a copy from the author:
http://www.softky.com/
> The threat of ‘superintelligence,’ though perhaps inevitable, remains far enough in the future to easily dismiss as a bogeyman.
Is your view that superintelligence is “very likely far away so don’t need to worry about it yet” or more that “it doesn’t matter how far away it is, because everything will fall apart before we get that far”?
Of interest may be this recent survey of AI ML researchers “When Will AI Exceed Human Performance? Evidence from AI Experts”: paper (https://arxiv.org/abs/1705.08807) and summary of some of the interesting results (https://aiimpacts.org/some-survey-results/).
Would love to see a question added to the next survey that elicits views from this cluster of people on the sort of thing this post touches on.
Oops, actually, a better overview/summary is here: https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/
I hate to nag, but you really ought to whip this into a dissertation. You could be the founder of the “Centre for the Scientific Study of Human Meaning” at say… Vanderbilt.
One thing that’s worthwhile to remember about belonging in the ancestral environment is that ostracism meant death. If you lost membership in your tribe you could no longer hunt or gather on your tribe’s land. You could not rely on your tribe’s protection if you were assaulted by members of another tribe etc. Before the invention of ‘human rights’ whatever rights you had you had by virtue of your membership and status within your tribe. The life of an exile tended to be poor, solitary, nasty, brutish and short essentially because human beings ceased to be human when they lost tribal membership.
It is, I suppose, a good thing that membership in modern online tribes is no longer crucial to one’s physical survival… Except that, as you note, solitary confinement eventually leads to psychosis. It would be interesting to know if physical solitary confinement under comfortable conditions with access to the full array of electronic communities is as damaging psychologically as physical solitary confinement without them under conditions of harsh physical privation. Given what you said about how easily human beings infer the presence of other human beings from very little data, it might be the case that online ‘bot relationships are as effective as incarnate relationships for human mental health.
Of course you also have a frightening point about human beings and politics. There is a huge difference between politicians lying and politics ‘bots using social media to provide suites of lies custom tailored to the fears and prejudices of each individual voter. If human beings become so dependent on information presented through online means by cheap software then elections will become contests between ‘bot swarms and the people who build them rather than between candidates. Essentially elections will be hacking contests. The great danger in that is that the results of elections will no longer command consensus. It is unpleasantly common for third world elections to be preludes to civil war or military coup. When the results of our own elections no longer command consensus what will happen? And who will it happen to? Is there a level of tribalism above which a common polity can no longer exist? What will happen when we exceed that threshold?
A more optimistic take: chatbots will be good for social development of humans.
One of the most striking changes in chess due to the victory of computers has not been chess players abandoning it or being weirdly pathological, but chess players developing their skills incredibly fast and reaching ever greater heights. The best human chess that has ever been played is generally being played now, and its players are increasingly bizarrely young. A century ago the idea of a grandmaster being 13 would have been incomprehensible; now it is normal. Why? The main factor is not some non-existent Flynn effect (which has stopped in the West) rendering youngsters geniuses nowadays, but the availability of strong computer chess programs which can play indefinitely at any level with young chess players as much as they want and assist their analysis. If a budding young chess player wants to play a grandmaster 100 times a day, they can do that now, which was impossible before. This allows them to learn as fast as they possibly can; thus, all the young masters and grandmasters who can now play against each other (despite all the ‘billions of junk chess intelligences’, one might say). And they do; chess AIs, apparently, are excellent substitutes for teachers, but not for other chess players. (I could also mention Twitch and streaming and Let’s Plays etc etc. Modern computer gaming is a long way from playing _Civilization_ by yourself.) Chess is not a one-player game.
And neither is social interaction or conversation. You can’t learn to talk by yourself, as feral children prove. But the problem with talking and interacting with other people is that it is not a game and there are no take-backs. Confronted with the irreversibility of talking, most people just… shut up, for fear of embarrassment. Bakker is certainly not the only nerd to have tried and been punished for failure. Where do young nerds go? One place is online: simply because behind anonymity or a pseudonym in a chat room, you can get practice without penalties or psychic pain. Thank goodness most of my early online comments have vanished – one does not care to recall the mistakes of youth. Social interaction is now at the level of chess in the 1970s: you have to play another human because the computers are hopelessly weak. Given ubiquitous human-level chatbots, however, it will be possible to ‘train’ against whole communities of interacting chatbots. This doesn’t even have to be explicit, as they will tend to show up ubiquitously and people will likely have their own as therapists or interactive diaries or as part of games etc.
What about the dystopia? The main purpose might be advertising, sure, but what of it?
First, let’s recall how robust humans are. What could chatbots do to children that the prison of public schooling doesn’t do worse? Or consider that for all the anxious variation in parenting styles, estimates of shared-environment variance just aren’t very high for most traits like intelligence or social skills. Or historically, people would survive the most horrific starvation, sexual abuse, disease, religion, warfare, and so on and grow up basically normal. Or consider the famous monkey example showing the need for context: the infant monkeys went to a wireframe with cloth on it; not exactly a high bar to clear. It requires literally zero human contact for many years during the most critical development windows to create a totally mute person unable to learn speech; a number of feral children manage some integration, and some of them are confounded by physical disabilities and so the feral part may not have been important.
Second, humans are adaptive. The moral panics of one generation are not the troubles of another generation. Old-timey advertising strikes modern eyes as risibly unsophisticated and easily seen through; there are no longer carts of gin sloshing around London streets catering to the legions of drunks; the MTV generation apparently did not have its brains irretrievably destroyed by 2 minute music videos; and so on. The first Internet banner ads reportedly had clickrates in whole percentages; the ingenuity of Google’s engineers is taxed to the utmost to stop the decay of advertising clickthrough rates.
Third, the advertisers powering these chatbots have every reason to make them neutral, or even helpful, in the same way Google provides generally excellent search results instead of stuffing them full of paid links the way early search engines would. Why create a relationship (Dunbar’s number, remember: there’s only room in each human brain for a few advertisers’ chatbots) and then immediately burn it? No, these chatbots want to build long-term relationships and be your most-trusted confidante and best friend, talking to you for years and years and answering any questions you might have and making helpful suggestions like the most stylish pair of shoes (which, besides looking fabulous on you, happen to have a 1% affiliate commission). Long-term customer-centric service. Ask Bezos how well that can work, assuming he can hear you from atop his pile of billions of dollars.
So: chatbot influence on development will probably not do any damage because humans are tough to damage, will adapt to chatbot innovations, and the commercial incentives in relationships militate against too much exploitation/damage to customers; and by providing safe spaces, infinite social interaction on demand, and graduated difficulty, chatbots could let people grow up much faster and be much more effective & satisfied in interacting with each other.
Where do I go to find grandmaster level conversationalist chatbots?
Do you think any of the recent rise in childhood obesity is due to children playing NBA2K17 instead of basketball?
I don’t know how much of what was said a few months ago about the role of fake news and tailored posts delivered by fake social media accounts helping Donald Trump win the Presidency was true, but if that technology does become the main way elections are contested in the future will the results of those elections command the consensus needed to allow elected officials to govern?
To what extent will social skills training against commercial chatbots be like playing golf against your subordinates? The example of computer chess is telling, because the computer is trying (I mean that unintentionally) to beat you. One of the most important of all social skills is how to negotiate conflict. How likely are we ever to come into contact with a chatbot with which we have to resolve a disagreement? I think part of what makes the people whom Ben Cain (http://rantswithintheundeadgod.blogspot.ca/) refers to as alphas such unpleasant people to be around is that, because they are wealthy and powerful, people suck up to them. If I’m right about that, and if the job of chatbots is to suck up to potential consumers, and if growing percentages of our social interactions are going to be with chatbots, then it seems reasonable to assume that assholery is going to become more common.
Do you spend time on Youtube? Do you read the comments? In your opinion is there more assholery in your on line life or your incarnate life?
Scary…. kept thinking of many of the stories of J.G. Ballard while reading this. Of course Ballard like Marshall McLuhan and Jean Baudrillard demonstrated how encroaching advertising and mass consumer culture played on submerged desire and attention, implanting new, artificial subjectivities to create a schizophrenic underclass whose trust in mediated environments drove the characters into private dreamlands as security zones against machinic invasiveness becoming prevalent even during the 60’s onward. In response to such conditions, his characters retreated into the private imagination – ‘inner space’ – cordoning it off as a virtual ‘nature reserve’, preserving its sovereignty by any means possible. One of his stories had whole families separated physically in private enclaves, communicating only through processed mediators, themselves intelligent copies, till one by one they all go silent, presumably choosing suicide rather than this machinic interventionism.
Also kept thinking of all those self-help books for marketers in recent years such as: Introduction to Neuromarketing & Consumer Neuroscience https://www.amazon.com/Introduction-Neuromarketing-Consumer-Neuroscience-Thomas/dp/8799760207/ref=pd_sim_14_12?_encoding=UTF8&pd_rd_i=8799760207&pd_rd_r=Q8CV2CN70152Z930CTPT&pd_rd_w=xqB78&pd_rd_wg=qIdUQ&psc=1&refRID=Q8CV2CN70152Z930CTPT
It’s like the Corporate overlords want to extract every last surplus value they can out of us before we all go bonkers. 🙂
Something I forgot to add was back before he died Steve Jobs told New York Times journalist Nick Bilton that his children had never used the iPad. “We limit how much technology our kids use in the home.” Bilton discovered that other tech giants imposed similar restrictions. Chris Anderson, the former editor of Wired, enforced strict time limits on every device in his home, “because we have seen the dangers of technology firsthand.”
This sense of addiction to our toys was already felt by the very ones promoting the addictive toys. One imagines as you’ve shown how much more addictive it’ll get as humans who are untrusting of other humans begin to open up and be comfortable with cutesy avatars masking impersonal and indifferent AI agencies whose sole job is to hook the idiot consumer and guide their desires toward corporate ends. Even scarier is the moment AGI comes online and directs all those subservient algorithms toward its own ends controlling the sociality of the global mass mind, subtly manipulating and deceiving both the elite corporate bosses and the unsuspecting consumer. No telling where that dystopian thought might lead us…
So they evoke cultural trends in other people’s children, but avoid those trends in their own children? Really…
A decent article on this: https://www.wired.com/2017/03/irresistible-the-rise-of-addictive-technology-and-the-business-of-keeping-us-hooked/
These tech experts have good reason to be concerned. Working at the far edge of possibility, they discovered two things. First, that our understanding of addiction is too narrow. We tend to think of addiction as something inherent in certain people—those we label as addicts. Heroin addicts in vacant row houses. Chain-smoking nicotine addicts. Pill-popping prescription-drug addicts. The label implies that they’re different from the rest of humanity. They may rise above their addictions one day, but for now they belong to their own category.
In truth, addiction is produced largely by environment and circumstance. Steve Jobs knew this. He kept the iPad from his kids because, for all the advantages that made them unlikely substance addicts, he knew they were susceptible to the iPad’s charms. These entrepreneurs recognize that the tools they promote—engineered to be irresistible—will ensnare users indiscriminately. There isn’t a bright line between addicts and the rest of us. We’re all one product or experience away from developing our own addictions.
The new big tobacco, keeping the cigarettes from their own children. And it’s not an issue because, while big tobacco’s knowing it caused cancer showed it to be abhorrently evil, we ‘all have a choice’ about the color-and-movement devices. Even as we laugh at the Amish (those that actually did choose).
I see you still believe in “free will” then? I, on the other hand, go with those who say free will is an illusion; and no, you don’t have a choice in the matter; that, too, is an illusion and delusion. What you’re describing is not choice, but a counter-program of fear that promoted the, as you said, “abhorrent evil” of cigarettes and reinforced this to the point that you accepted that narrative rather than the other. But the choice was not yours, it was your brain’s – this cherishing of Self/Subject is itself the delusion people will not give up. So we assume we have a choice because we believe in our Self-as-Agent who has a choice. How many years have you read Scott Bakker on this notion now? Do you still argue against Scott?
S.C, until I can manipulate you into doing as I say (manipulate you into agreeing with me), I’ll say you’re free of me. And probably free of many others. Maybe it’s just me who has a staggering inability to make others do as I want? Gimme $100 a day! What, no takers? Otherwise I don’t know what anyone has ever really meant by ‘free will’, when trying to read them charitably. Your disagreement with this will just confirm how free of me you are. Dammit, agree!
No, this conversation is rather like when someone has gotten used to being railroaded in roleplay – even when they encounter a GM who does not railroad, the burnt person will continue to say “Oh, you are so good at this! I thought I was deciding where I go, but you had this story all along” even as the GM did not expect any of the fictional events that are occurring. A learned helplessness.
Anyway, consider a rider on a horse. Ultimately it is the horse who decides where the rider goes. Isn’t it? Not really. So you say the brain/the brain that is me decides where I go, but not my self.
Where my skills as a rider fail, sure, the horse bolts and I am not in control. And this may well happen at times I’m not aware, to varying degrees.
I mean, you talk of giving up the delusion of self. Yet you say you have no choice. So how would you have given up the delusion, S.C, if you have no choice?
Scott seems to say things about consciousness being a pimple on an elephant – in comparison my rider on a horse is rather flattering in terms of the rider being much bigger in relative terms. So I think I only disagree with him about the relative sizes. But maybe I’m reading him wrong – plus he’s not the only person in the room/the only measure to work by.
Sometimes I’m not sure if you’re just dense, or if this is your effort at trolling; either way I doubt – as in past times we’ve ended this way – that we will ever agree on anything except to “agree to disagree” and leave it at that. 🙂
Maybe I’m dense. But not so dense that, on a blog where doubt is advocated quite a lot, I’d simply assume I’m right and the other guy is dense. Y u no doubt?
I spin a story of a land where people are gaining pointless video game points and calling it genuine value and economics. You can say it’s a boring story, that’s fair play. But it’d be better to say I’m ripping off the Matrix.
I’m not even sure I understand what you just said. Is there a point here? Dense, thick-headed, obtuse? Even your sentences above present doubt – which I’ll assume is, for you, the notion of skepticism? No one is right or wrong; that’s another matter, and one I’m not espousing. No, I’m saying that what you’re reading into or out of my discourse, words, sentences is not what is there in actual fact. You’re drawing a sociological implication while I’m drawing an economic and scientific one. Our frames of reference do not connect, hence my suggestion that you’re unable to see this.
Stories of pointless video game points instead of actual economics? Some strange narrative of virtual boredom? Ripping off the Matrix? What is all this? How we went from basic economic theory to what you’re describing is beyond me. I’ll admit I’m stumped.
I’m pretty stumped as to how you’ve managed to have ‘value’ be scientifically validated? Is there a source for that? No wonder you feel you’re grounded if you think you’ve got science on your side – as grounded as I feel.
From here, you seem to be pulling ‘value’ out of thin air, with no scientific basis at all. At least I’m saying people are just chanting it, which is much easier to frame in scientific/sociological terms.
Obviously you’re either ignorant or obtuse or just a troll… forget it, Callan, I’ve no wish to continue this farce.
‘Right’ is obviously not an option amongst those, o/c. You could be right somehow. I’m able to actually say you could be right somehow, that I can’t see. I was talking with you as if you could be right somehow, but I was not offered the same charity. My mistake for thinking it was there.
Next time just tell me when there’s something only you can be right on and I can’t. It’ll help avoid ‘trolling’.
Who gives a shit about being right… I’m not talking right or wrong, I’m talking about interpretation of what’s in front of your face, your eyes… do with it what you will, but reading into me things that are not there seems superfluous at best and has nothing to do with me being right; it has to do with being able to read and interpret what someone is saying. So don’t try to play this blame game and throw the sham back at me for your own shortcomings.
You’re not talking about right – it’s just that I’m ‘reading things that aren’t there’, which isn’t at all the same as not reading things right, o/c. Plus an ad hominem tacked on.
You seem to have no good will at this point, S.C. Am I out of the tribe? Tell me I could be right somehow, even in just a reading somewhere. Just one? I’ve said you could be right. Or keep up this ‘I’m not talking about right – but you’re reading wrong’ charade.
I can see I did not signal the right ideological loyalties.
Good will… man, you keep trying to put me in the position of the ‘bad guy’? Why? You’ve disagreed with me at every point, so I tried to come back with legitimate arguments out of economic theory. Then you attack that… it has become like a see-saw… and now you seem bent on making this personal, and blaming me for that, too. It seems to me that I lose any way this goes in your scenario… I’m either the baddy or the guy who has no clue… and what the hell are you on about with terms like “ideological loyalties”? I’m not a Marxist, hell, I’m an utter nihilist… but I read pre-classical, classical, Marxist, conservative, liberal, Keynesian, libertarian, etc. Economics is economics no matter which political faction; each defends one or the other side of Capital, but the economics underlying it is for the most part statistical and probabilistic math that has over time become more and more attached to computational functionalism, Big Data analytics, etc. The basics have been there for hundreds of years, just refined based on whether one is supporting or undermining capitalism. Me, I’m neutral on the political angle, pointing out only the street-level view, which has nothing to do with economics but re-enters your domain of sociology… in this exchange I’ve tried to be honest, keep with economics rather than sociology, and point out that difference.
Classical political economy focused on the character of markets. Smith and Ricardo, in seeking to account for what it meant for things to be exchanged for items of equal value on the market, adhered to a labor theory of value. In The Wealth of Nations (1776), Smith wrote: “The value of any commodity, therefore, to the person who possesses it, and who means not to use or consume it himself, but to exchange it for other commodities, is equal to the quantity of labour which it enables him to purchase or command. Labour, therefore, is the real measure of the exchangeable value of all commodities.”
Marx, building upon and going beyond that tradition, saw in capitalism a new arrangement of productive forces. The market, which had existed before the advent of capitalism, had become so generalized that labor power itself was turned into a commodity—that is, the ability to work was offered on the market to the highest bidder in exchange for wages. Prior class systems had made explicit the exploitation of labor; for example, the peasant might work two days a week for himself and four for his lord. Capitalism concealed that exploitation under the veil of the market. A worker may produce eight hours’ worth of value but only be compensated for, say, three of them in his wages. The other five hours create “surplus value,” the source of profit for the capitalist. Under this system of wage slavery, the exact fraction of the total value added by a worker that goes back to him is determined by a contest of forces in the class struggle.
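To put numbers to the example above (a minimal worked version using the standard Marxian shorthand, with v standing for wages/variable capital and s for surplus value; the symbols are my addition, not something used in the passage):

$$\text{value produced} = 8\ \text{hours}, \qquad v = 3\ \text{hours}, \qquad s = 8 - 3 = 5\ \text{hours}$$

$$\text{rate of surplus value} = \frac{s}{v} = \frac{5}{3} \approx 167\%$$

In other words, on these figures the capitalist captures five of every eight hours of value the worker adds.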
Maybe a refresher course, or, for an easier read, Thomas Piketty’s Capital in the Twenty-First Century or book one of Karl Marx’s Das Kapital: https://www.amazon.com/Capital-Twenty-First-Century-Thomas-Piketty-ebook/dp/B074DVRW88/ref=sr_1_1?s=digital-text&ie=UTF8&qid=1504394077&sr=1-1&keywords=thomas+picketty
Science, please, not salesmen.
Boy, you truly take the cake… there is no Science, only sciences: hard and soft, of which physics is hard and biology soft… economics and sociology are two sides of the soft sciences, too. So don’t get all reductionist and physicalist, as if everything answered to Popperian testability, which is now bullshit and has gone by the wayside for years… even modern quantum mechanics is based on computational functionalism and the modeling of non-existent entities that they hope to prove are there even if inaccessible directly… so please put your salesman hyperbole back in your proverbial reduction bucket and stuff it.
Theories of Surplus Value: https://www.amazon.com/Theories-Surplus-Value-Marx-Engels-Collection-ebook/dp/B00D0ULX9Y/ref=pd_sim_351_10?_encoding=UTF8&psc=1&refRID=F0XDC1JABQHK23BZ5EBK
What Scott terms “cognitive buttons” this guy calls neurological notes, as if we are mere consumer puppets to be manipulated and played by instrumental capital – adverts, emotives, toys, games, etc.
“Human behavior is driven in part by a succession of reflexive cost-benefit calculations that determine whether an act will be performed once, twice, a hundred times, or not at all. When the benefits overwhelm the costs, it’s hard not to perform the act over and over again, particularly when it strikes just the right neurological notes.”
What’s weird about “When the benefits overwhelm the costs, it’s hard not to perform the act over and over again” is treating it as weird. When a hunter-gatherer finds berries on a new bush just a few steps away (far more calories gained than calories spent), of course he keeps repeating this. That’s healthy!
But the hunter gatherer doesn’t know about calories – in discussion he would report feelings. Berries are good. His consciousness floats above the mechanism involved, like a cloud above the earth.
And people in the modern age thought they had ascended somehow…and find it weird, being brought down to earth. For they floated even higher than their ancestors.
It’s like the Corporate overlords want to extract every last surplus value they can out of us before we all go bonkers.
Is there really anything to gain, though? Seems like they’ve gone bonkers, trying to gain things that just aren’t there. Like, you give up intentional cognition to ‘gain more’, but what you’re trying to gain is a product of intentional cognition. Give up intentional cognition and money becomes so many inert artifacts, for example. Dispelled fetishisations.
Sure, you can try to outsource this in an attempt to keep this thought away from yourself and still live in your virtual bubble (like in the story you mentioned), but if you don’t understand how it works you can’t be sure your mindfuck division isn’t just playing tetris and looking at porn instead of being efficient at mindfucking. Hell, what if your mindfuck division started writing fantasy fiction with a goddamn cause, for example? Everything will have just crashed, then!
Callan, every time you post even something as trivial as the message above, someone up the totem pole of financial capitalism is gaining value out of it. Every time you publish a tweet, an FB post, an Instagram, etc., someone is extracting surplus value in the form of profits, whether it is through ads or data analysis, or the NSA, or whoever sees value in a post; someone is gaining from it in some form or fashion. And all of this is being done ubiquitously and without the permission of us who post on the internet. Also, if you play games on the web or MMOs et al., someone is experimenting and learning how to manipulate you as a user for profit or analysis or… a multitude of other ends.
Works like Tim Wu’s The Attention Merchants or Adam Alter’s Irresistible document how this is being done… It’s not fucking fantasy fiction, it’s billions of dollars being pushed by corporations to gain profit by enslaving us in a desiring system of addictive attention to our gadgets… believe it or not, read about it or not… do what you will… deny it even, but this is what is happening. And as Scott has shown over and over, we humans are prone and susceptible to just such technological button-trigger systems, and as AI moves into AGI or superintelligence it won’t be just our corporate elite vying for our mass mind; it will be smarter-than-human intelligence, impervious to human emotion or needs – an indifferent and impersonal intelligence, autonomous and outside the control of government, military, or corporate powers, subtly manipulating the global system to its own ends… maybe that scenario won’t play out for decades or even a century… who knows? But the possibility is there. Obviously there are those who deny such technological jumps, and many have argued otherwise, naysayers… who are you going to place your bets on for your children and grandchildren?
If, as Scott has iterated over and over, we are being decoupled from our natural environment and weaned into a technological/artificial environment we are ill-adapted to, what comes next?
I think you’re coming at this from a sense of theft, of being stolen from, S.C. That’s why you’re making a fetish of ‘value’, even as you mentioned the War of the Worlds quote where the guy finds all his money means nothing.
If you want to talk about someone improving their food and shelter prospects at the cost of your own food and shelter prospects, that makes sense. Otherwise, controversially, I’ll say this pursuit of ‘value’ is part of the madness I describe (perhaps incorrectly, who absolutely knows)
Callan, the notion that capitalism is based on sucking surplus value out of labour comes straight out of Marx. Call it what you like, it’s not a fetish, it’s how profits come about, simple economics 101. I didn’t mention War of the Worlds anywhere in my comments, neither H.G. Wells nor the recent movie; not sure if this is your own memory playing tricks and imputing to me influences out of your own mind, but no… my words are my own; otherwise I quote from these others… simple practice.
Sorry, my bad, Michael Murdon mentioned War of the Worlds and paper money being rendered worthless (and it was last post as well, by gosh!). Got mixed up, but surely a flattering misattribution in the end? >:) Anyway, Michael took up the idea of money being a value that is a fetish.
Anyway, I disagree on value and profit here. What’s happening is like the very article you mention (I checked my attribution this time! Yay me!) “When the benefits overwhelm the costs, it’s hard not to perform the act over and over again, particularly when it strikes just the right neurological notes.”
Here, like video game points, the ‘benefits’ aren’t actually attached to any actual outcome (once we get past food and shelter). A CEO who already has enough money to be fed and warm for the rest of his life – tell me, what value is he getting by performing the action over and over?
Apart from playing the biggest MMORPG ever?
You’re reading it sociologically rather than economically; the notion of value you’re speaking of is not the same as “surplus value” in my usage. That’s the difference, and one I should not need to explain; it’s elementary.
Kind of like the emperor’s new clothes are obvious to anyone who isn’t stupid? Are you really trying to say economic ‘value’ has value without people (the sociological) saying it has value? Then you’ve invented an economic religion – where there is value just per se, as some sort of inherent part of the universe, rather than because a bunch of people chant that it has value.
Wow… you just don’t get it… I concede… man, as I said previously, basic economics 101… do a little reading, Callan. I’ll forgo any temptation to explain what is fairly obvious to anyone who has done a smidgen of reading in economic theory. You’re just trolling me now… I’ll not participate further in this inanity.
Maybe read up on it: https://en.wikipedia.org/wiki/Surplus_value
https://www.newscientist.com/article/mg23531410-700-in-the-darkening-web-misinformation-is-the-most-powerful-cyber-weapon/?utm_campaign=RSS%7CNSNS&utm_source=NSNS&utm_medium=RSS&campaign_id=RSS%7CNSNS-
What would ‘all out cyber warfare’ be like?
Click to access jia2017adversarial.pdf
List of video game manipulations behind the scenes, incidental reddit thread with GMs advocating this over and over
Seems any time someone gets to be the man behind the curtain, suddenly any bait and switch isn’t a bait and switch anymore.
Is anyone else having an issue getting to the latest post (9/11/2017, “The Knowledge Illusion Illusion”)? Looks like a dead link…
Yes. I can read it on the main Three Pound Brain page, but I get an error when I try to click on the link.
Strange bug. When I checked the post out in draft, it presented as unpublished. It should be working now though.
[…] of loved ones. The ease with which this feedback can be generated and sustained expresses the shocking superficiality of human sociocognitive ecologies. In effect, firms like Pullstring exploit deep ecological neglect to present cues ancestrally bound […]
[…] Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive, it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them. […]
[…] transformations, the more dysfunctional our ancestral baseline will become. With the dawning of AI and enhancement, the abstract problem of meaning has become a civilizational […]
[…] [2] BAKKER, Scott, “On Artificial Belonging: How Human Meaning is Falling between the Cracks of the AI Debate”, blog post, available at: https://rsbakker.wordpress.com/2017/08/30/on-artificial-belonging-how-human-meaning-is-falling-betwe…. […]
[…] The age of AI is upon us. And even though it is undoubtedly the case that social cognition is heuristic—ecological—our blindness to our nature convinces us that we possess no such nature and so remain, in some respect (because strokes still happen), immune. Our ‘symbolic spaces’ will be deluged with invasive species, each optimized to condition us, to cue social reflexes—to “nudge” or to “improve user experience.” We’ll scoff at them, declare them stupid, even as we dutifully run through scripts they have cued. […]