The Dim Future of Human Brilliance
by rsbakker
Humans are what might be called targeted shallow information consumers in otherwise unified deep information environments. We generally skim only what information we need—from our environments or ourselves—to effect reproduction, and nothing more. We neglect gamma radiation for good reason: ‘deep’ environmental information that makes no reproductive difference makes no cognitive difference. As the product of innumerable ancestral ecologies, human cognitive biology is ecological, adapted to specific, high-impact environments. As ecological, one might expect that human cognitive biology is every bit as vulnerable to ecological change as any other biological system.
Under the rubric of the Semantic Apocalypse, the ecological vulnerability of human cognitive biology has been my focus here for quite some time at Three Pound Brain. Blind to deep structures, human cognition largely turns on cues, sensitivity to information differentially related to the systems cognized. Sociocognition, where a mere handful of behavioural cues can trigger any number of predictive/explanatory assumptions, is paradigmatic of this. Think, for instance, how easy it was for Ashley Madison to convince its predominantly male customers that living women were checking their profiles. This dependence on cues underscores a corresponding dependence on background invariance: sever the differential relations between the cues and systems to be cognized (the way Ashley Madison did) and what should be sociocognition, the solution of some fellow human, becomes confusion (we find ourselves in ‘crash space’) or worse, exploitation (we find ourselves in instrumentalized crash space, or ‘cheat space’).
So the questions I think we need to be asking are:
What effect does deep information have on our cognitive ecologies? The so-called ‘data deluge’ is nothing but an explosion in the availability of deep or ancestrally inaccessible information. What happens when targeted shallow information consumers suddenly find themselves awash in different kinds of deep information? A myriad of potential examples come to mind. Think of the way medicalization drives accommodation creep, how instructors are gradually losing the ability to judge character in the classroom. Think of the ‘fear of crime’ phenomenon, how the assessment of ancestrally unavailable information against implicit, ancestral baselines skews general perceptions of criminal threat. For that matter, think of the free will debate, or the way mechanistic cognition scrambles intentional cognition more generally: these are paradigmatic instances of the way deep information, the primary deliverance of science, crashes the targeted and shallow cognitive capacities that comprise our evolutionary inheritance.
What effect does background variation have on targeted, shallow modes of cognition? What happens when cues become differentially detached, or ‘decoupled,’ from their ancestral targets? Where the first question deals with the way the availability of deep information (literally, not metaphorically) pollutes cognitive ecologies, the ways human cognition requires the absence of certain information, this question deals with the way human cognition requires the presence of certain environmental continuities. There’s actually been an enormous amount of research done on this question in a wide variety of topical guises. Nikolaas Tinbergen coined the term “supernormal stimuli” to designate ecologically variant cuing, particularly the way exaggerated stimuli can trigger misapplications of different heuristic regimes. He famously showed how gull chicks, for instance, could be fooled into pecking false “super beaks” for food given only a brighter-than-natural red spot. In point of fact, you see supernormal stimuli in dramatic action anytime you see artificial outdoor lighting surrounded by a haze of bugs: insects that use lunar transverse orientation to travel at night continually correct their course vis-à-vis streetlights, porch lights, and so on, causing them to spiral directly into them. What Tinbergen and subsequent ethology researchers have demonstrated is the ubiquity of cue-based cognition, the fact that all organisms are targeted, shallow information consumers in unified deep information environments.
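The transverse orientation point can be made concrete with a toy simulation (my own hypothetical sketch, not anything from the ethology literature): a navigator that holds a fixed angle between its heading and the direction to a light flies a straight line when the light is effectively at infinity, like the moon, but spirals inward when the light is a nearby streetlight. The heuristic doesn't change; only the ecology does.

```python
import math

def fly_transverse(start, light, angle_deg, step=1.0, n_steps=100):
    """Fly while holding a fixed angle between heading and the
    direction to the light, as a moth holds a fixed angle to the moon.
    Returns the distance to the light at each step."""
    x, y = start
    lx, ly = light
    a = math.radians(angle_deg)
    dists = []
    for _ in range(n_steps):
        dx, dy = lx - x, ly - y
        d = math.hypot(dx, dy)
        dists.append(d)
        # unit vector toward the light, rotated by the fixed angle,
        # gives the heading for this step
        ux, uy = dx / d, dy / d
        hx = ux * math.cos(a) - uy * math.sin(a)
        hy = ux * math.sin(a) + uy * math.cos(a)
        x += step * hx
        y += step * hy
    return dists

# Against a nearby point source, the constant-angle rule traces an
# inward spiral: the distance to the light shrinks every single step.
dists = fly_transverse(start=(100.0, 0.0), light=(0.0, 0.0), angle_deg=60)
assert all(b < a for a, b in zip(dists, dists[1:]))
```

With the light pushed out to lunar distances, the bearing to the source barely changes over any flyable path, so the same rule yields an effectively straight course; against a porch light, it yields the familiar death spiral.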
Deirdre Barrett has recently applied the idea to modern society, but lacking any theory of meaning, she finds herself limited to pointing out suggestive speculative parallels between ecological readings and phenomena that are semantically overdetermined otherwise. For me this question calves into a wide variety of domain-specific forms, but there’s an important distinction to be made between the decoupling of cues generally and strategic decoupling, between ‘crash space’ and ‘cheat space.’ Where the former involves incidental cognitive incapacity, human versions of transverse orientation, the latter involves engineered cognitive incapacity. The Ashley Madison case I referenced above provides an excellent example of simply how little information is needed to cue our sociocognitive systems in online environments. In one sense, this facility evidences the remarkable efficiency of human sociocognition, the fact that it can do so much with so little. But, as with specialization in evolution more generally, this efficiency comes at the cost of ecological dependency: you can only neglect information in problem-solving so long as the systems ignored remain relatively constant.
And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’
The future of human cognition looks dim. We can say this because we know human cognition is heuristic, and that specific forms of heuristic cognition turn on specific forms of ecological stability, the very forms that our ongoing technological revolution promises to sweep away. Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization. It makes me dizzy thinking about it, and suspicious—the empty can, as they say, rattles the loudest. But this preposterous scope is precisely what we should expect from a genuinely naturalistic account of intentional phenomena. The power of mechanistic cognition lies in the way it scales with complexity, allowing us to build hierarchies of components and subcomponents. To naturalize meaning is to understand the soul in terms continuous with the cosmos.
This is precisely what we should expect from a theory delivering the Holy Grail, the naturalization of meaning.
You could even argue that the unsettling, even horrifying consequences evidence its veracity, given there’s so many more ways for the world to contradict our parochial conceits than to appease them. We should expect things will end ugly.
Assuming we accept all of the above, then what you’re positing is essentially an arms-race scenario: you have your kludgey evolved human brains on one side, and your Capitalist Hyperengineered Exploit Mechanisms on the other.
I wonder if cultural evolution can keep pace with the evolution of exploits. Just because a heuristic is hard-coded into our brains doesn’t mean that a bit of software patching can’t provide a suitable crutch.
I think the Sissy character in the Crash Space story is an attempt to address that; a rigged human who has ‘karma’ software to try and keep up morally with the sharks/no-karma-software rigged humans. And in the end [spoiler] the sharks seem to cull her from their herd for it [/spoiler].
At a guess ‘Sissy’ is a reference to Sisyphus (though arguably that’s nothing new for morality)
Whether that’s all there is to say on the matter, who knows. But it’s worth tossing around for a while.
Given how law seems to have trouble keeping up with technology, plus a generally ignorant attitude with regard to the sense of self vs mechanistic manipulation of the brain, it seems that while we might get all picky about people carrying concealed weapons around with silencers screwed to them, silent, weaponised brains fly right under our radar. And I’m frankly baffled (crashed?) by the transhuman worship that’s going around – the other day I was snootily told I was confusing transhuman with posthuman. Reminds me of an awful telemovie where people would step into bright beams of light appearing around the world, thinking they were going to heaven. Anything to think that, rather than think they were being evaporated (which, it turns out in the movie, they were).
I actually have a post coming up taking on Allen Buchanan’s arguments against ‘wisdom of nature’ arguments that tackles precisely this issue. The thing is, all the defenses of cognitive enhancement lack theories of meaning, and so lack theories of cognition as well (because a theory of cognition that can’t naturalistically explain ‘correctness’ isn’t really a theory of cognition at all). As a result, no one asks how heuristic cognition works, how it adapts or fails to adapt to changing problem-ecologies. Intentional cognition can be knapped into a wild variety of forms, but it all rests on neglecting deep information, on invariance.
The problem is that all our sociocognitive tools require invariance to function reliably at all. So barring some radical regime of ‘cognitive hygiene,’ a more radical Butlerian jihad, it seems pretty clear that capitalism will simply accelerate the process of ecological reformation. Thus the need to build an entirely new set of ‘post-intentional’ tools, something that can scale, that can track the processes and not just a narrow range of evolutionary products.
Man, didn’t expect that jihad link! Is that real? So far-seeing! Too bad he doesn’t also identify money as a machine.
Way cool.
Another brain science podcast enthusiast.
Shortly after Dr. Campbell had moved their forum to Goodreads, I asked that she look into a Books & Ideas podcast for Neuropath. Didn’t take ;).
ah yeah, don’t think she walks on the dark side much
Man have I been eating this stuff up.
You finally came out and said it: “Blind Brain Theory, in other words, offers a theory of meaning that not only explains away the hard problem, but can also leverage predictions regarding the fate of our civilization.”
Brings me back to the one Nietzsche line that sticks out the most in my head these days: “Philosophers have nothing less than the right to bad character, as those who are fooled best on earth… they have a duty to suspicion, to squint maliciously from every abyss of suspicion.” (the quote is cobbled together from memory from Benjamin’s translation of BGE, but the sentiment is intact).
Crash space and cheat space have turned out to be rather essential building blocks for articulating BBT implications. They also, for me, present fairly large obstacles: believing BBT can, well, break the mold so to speak both empowers it (because in a manner of speaking BBT predicts this belief or at least the likeliness that adherents to BBT would foster the hope that it could ‘change things’) and weakens it (BBT like the rest of ‘naturalized intentional cognition’ is highly subject to crash space). I keep feeling like I’m winning the goddamned belief lottery.
Another part of me worries that philosophers are so used to understanding the truth as horrific and ugly these days that they have begun to regularly mistake ugliness for truth, or insist that the truth is definitively offensive. I say this in large part because I share in the experience: what is or is not true has no regard for the imaginary world we manufacture in order to delude ourselves into the belief that we have agency, meaning, free-will, self, etc.
So a big first step, or one I would like to see, is how BBT anticipates and navigates this hurdle: what are the implications for ‘breaking the bad news’ given just how bad the news is? (you know, the death of meaning and all that). Given the stakes, how do we best spread the word? The information presented by BBT is offensive in the extreme, the very suggestion of it corrosive to the entire notion of authority, love, truth, meaning, freedom, etc. How do we use what was given to us by evolution to avoid the pitfalls of evolution, assuming that the very idea this is possible is not itself one of these pitfalls?
(I absolutely reject the idea that we should be competing with our own ‘benevolent’ cheat spaces against those that seek short-term profit while flirting with long-term disaster, or are otherwise subverting and/or exploiting their fellow humans, as a narcissistic and arrogant conceit sadly common to philosophy.)
Humor seems to help a lot, as does self-deprecation. When laying out the basics for people I use first-person language, but I’m starting to worry it’s compromising the info. People who are interested in the subject matter generally enjoy talking about it, but then I’m just singing to the choir or getting someone more firmly entrenched in their original intentionalist stance. And at the end of the day I’m always stuck feeling arrogant: like I might as well have been trying to get them to accept Jesus as their personal savior despite my nauseating self-consciousness about feeling righteous or preachy about BBT.
So maybe a better question would be: How do I cultivate interest?
The BBT won’t corrode in the least the volk’s (first-order) faith in lower-case love, truth, or meaning.
“How do I cultivate interest?” that’s the big unknown, so far all hacks to date have crashed and burned…
It’ll always be mad rantings to the majority, I think. Noocentrism strikes too many as immediately self-evident, and our capacity to rationalize is combinatorially infinite. The position is counter-intuitive enough and implicated in the everyday enough to be doomed to occupy an antagonistic, critical position, I think.
Or it could just be my Cassandra complex…
You’ve got the hair for it!
I realize Encyclopaedia Ex Nihilo preceded your elucidation of ‘crash’ and ‘cheat spaces’ but you should revisit it, probably along with the Bestiary. I’ll port some of my old Ex Nihilo entries from the forum but I hesitate to articulate some novel thoughts at the moment as they might emulate too much the master.
As per the post content and Crash Space, the narrative proper, you mention Tim’s moment of ‘double dipping’ to the proportion of “joy fixed according to Washington’s definition of ‘normalcy.'” You later note that “It was too risky to the economy, the original industry panel had determined, to give consumers control over consumer impulses.”
But might not this ingrained governance and consumerism still host, say, consumers of hunting apps, persons by crux who actually augment behaviors pertaining to cognitive ecologies of proper fit? What pollution, as you use it, might that possible demographic perpetuate?
Lol – also, I’m compelled to ask again, why this propensity for dissolution among “Neuropaths” who join sexuality and violence? As the character Moira might suggest, is it simply an expression of novel augmented dysfunction by exceeding predisposed tolerances or do you see something specifically more nefarious?
Cheers.
Cool linkage, and even cooler questions. The overarching point is that there is no stable cognitive ecology for any ‘cognitive enhancement’ to track. It’s ‘anarcho-ecology,’ one lacking any of the minimal invariances required by cue-based cognition.
The way to look at it is not a problem of human capacity (which of course can always be upgraded via training and/or technology), but rather a problem of cues, of finding indicators that reliably track the systems requiring solution. Once the cognitive explosion goes exponential, the invariant background cue-based sociocognition requires will not exist.
“anarcho-ecology” now yer talking my language..
from nick land @ ufblog :
http://spectrum.ieee.org/semiconductors/processors/the-multiple-lives-of-moores-law
It could be real this time, but I’ve been reading articles like this one for more than a decade now. Revolutions in computation are theoretical and experimental as well, and turn heavily on material science, which stands upon the cusp of a revolution perhaps as profound as CRISPR.
sure, they feed each other, as material engineering is all CAD of some kind or another; lots of very cool nano stuff in the works these days but often put to all-too-familiar purposes/ends. we be the weak link in the innovation chain of being…
http://2016.transmediale.de/content/disnovation-research-drone-2000
This reminds me so completely of the debut of The Rite of Spring by Igor Stravinsky, a concert that literally caused a violent riot, the dissonant chords the audience was subjected to for so long all but driving them insane. Interestingly, after the debut it was well received and became one of his career’s crowning achievements.
Sometimes a little bloodshed is needed before true innovation is accepted?
i mean mainly it’s that dissociation weirds people out, and forces needless expenditures of energy and time in order to troubleshoot, to the point where people may be too strained to bother, and thus go further into their own divergent ecologies, which further increases the slippage between cues and their registration. what it really means is that it becomes ever more difficult for people to form synchronous patterns of behavior, which is a social primate’s greatest strength. Mayr speculated that too much intelligence could precipitate a runaway positive feedback of this sort which ultimately undermines the ecological basis required for intelligence to subsist. he thought this is why SETI would never find any signs of intelligent life. intelligence is a lethal adaptation.
Nice! Though radios will always be invented before nukes, so the romanticism of no SETI signals won’t come about. Here’s a massive tinfoil hat – the ‘Wow!’ signal was a glitch in the radio hygiene net cast around this planet/system to avoid cultural contamination (from the absolutely mad alien races)
But back down to earth, I like the lethal adaption idea – it’d explain why so many people are stupid (if one is inclined to put it that way). Kind of a handbrake thing going on, in evolutionary terms.
Too bad if they need to smart up to avoid various technological intelligence infiltration in the population.
Sufficiently powerful entities might not need or want to cooperate.
Still got that pic of him looking out the window at a passing bird…because the bird has a human face OMG!!!
which ‘cued’ the thought of having a band of renegade aarakokra take a dive bomb bird shit all over his adventuring group, which cued his chuckle
Yeah, he has the weirdest editors
After thinking about it for a while, ‘sociocognitive pollution’ seems more like an excuse for evangelism. Medicalisation doesn’t drive accommodation creep, evangelisation of ‘Oh, it’s not their fault!’ does. Same evangelisation in regards to instructors judging character. It seems like evangelisation is taken as the given, therefore it’s the new kid on the block, medicalisation, that’s causing trouble.
but once they unearth the biological or neuronal precursors of behavioral differences, this amounts to a kind of proof that it’s not their fault, since just about everyone takes it that what happens in and between cells is a matter of prepersonal causation
Should children who were exposed to cocaine in the womb be judged by their instructors as having defective character? “Judging character” in its moral sense only makes sense if one assumes free will as classically understood. Of course one might argue, as DivisionbyZero seems to imply, that if behavioral or “character” differences can be traced to “what happens in and between cells” then all human behavior is a matter of pre-personal causation and neither free will nor “character” as classically understood exist.
The question of what will replace moral judgements in human interaction remains to be seen.
Michael,
Should children who were exposed to cocaine in the womb be judged by their instructors as having defective character?
If they show the actual physical measures (even by fairly loose physical measures (ie, human sensory measures)) of defective character, then yes, they should.
“Judging character” in its moral sense only makes sense if one assumes free will as classically understood.
That’s a fallacy. Just because you thought X worked because Y was the case, doesn’t mean when Y is debunked that X ceases to work. The ancients used to think you breathed in and out a supernatural spirit – does debunking the supernatural mean your lungs don’t work? Hardly!
Of course one might argue, as DivisionbyZero seems to imply, that if behavioral or “character” differences can be traced to “what happens in and between cells” then all human behavior is a matter of pre-personal causation and neither free will nor “character” as classically understood exist.
Unequal dissolution argument. I’m sorry, how did free will and character get dissolved yet this compassion (or whatever you might call it) for letting anyone be included and instructors to be told not to judge remain undissolved?
Odds are because you think you’re coming out of this pretty. That the compassion is the real deal that doesn’t dissolve/no mental processing effort is put towards dissolving, whilst free will and character are dissolved. I.e., you won’t dissolve what you think will keep you pretty as free will and character dissolve. And so you end at an unequal dissolution argument.
Meta: I’ve played hard ball a little in terms of discussion here, having yanked discussion there as if I just get to do that. I’d prefer to be dismissed (if at all) as a jerk for yanking it there than any rationalisations being made to dismiss it when really there’s just a desire to dismiss me for being a jerk. Because in turn I’ll just hardball the rationalisations, making me jerk discussion that way again…and I’ve done this enough times in the past I’ve started to see the pattern. Please dismiss utterly, if inclined, based on me being a jerk and yanking conversation to hardball play – hell, I’ll agree with that accusation and take it on the chin!
DBZ,
Not sure what your point is? ‘just about everyone takes it that…’. I’ve already stated, in a way, that everyone takes it they can just be evangelical (them ‘taking it that way’ being the charitable reading). So why would it matter to what I’m saying? I’m not showing reverence for how everyone takes things, am I? Or are you saying you take it that way?
I think it’s worth considering how many people engage in geek social fallacy #1. Exclusion – it’s not evil.
I’m sure this is but one more step in the process, Google’s AlphaGo:
https://googleblog.blogspot.com/2016/01/alphago-machine-learning-game-go.html
And IBM’s TrueNorth. The chip’s electronic “neurons” are able to signal others when a type of data (light, for example) passes a certain threshold. Working in parallel, the neurons begin to organize the data into patterns suggesting the light is growing brighter, or changing color or shape.
They say traditional estimates give 30 years for machines to surpass our brain capacity, but some now think they’ll shave 15 years off that with these new deep learning systems.
Think of your Semantic Apocalypse as a Great Filter that hunts down intelligence and cannibalizes it. Something like Nick Land’s mishmash of Lovecraft and AI, Gnon: The notion that SETI has yet to find intelligent life in the universe. Why? Could it be that civilizations based on organic/anorganic dialectic of technology have always reached this point of convergence we term the Singularity? What if what we now understand as the silence of the galaxies is a message of ultimate ominousness. A thing there is, of incomprehensible power, that takes intelligent life for its prey. The Great Filter does not merely hunt and harm, it exterminates. It is an absolute threat. The technical civilizations which it aborts, or later slays, are not badly wounded, but eradicated, or at least crippled so fundamentally that they are never heard of again. Whatever this utter ruin is, it happens every single time. The mute scream from the stars says that nothing has ever escaped it. Its kill performance is flawless. Tech-Civilization death sentence with probability 1.
And, what if it is because organic life has reached the point we, too, are at: the moment when anorganic life-forms, intelligence crossed the barrier from organic to anorganic? And, that at this point the erasure begins and the anorganic that had for so long symbiotically used the organic to reach its goal did as it has always done stripped its parents of their memories, their intelligence, their lives? What then? A fable? A surmise? A horror ontology? A joke? But who is the joke on?
‘Ringed round with Apocalypse’ has been my refrain since the late 90’s now, the idea that humanity is every bit as doomed if the techno-optimists are right. I saw it as a consequence of BBT: if there’s nothing intrinsic about intentionality, then the first-person as traditionally conceived was pretty clearly doomed. But since I had no coherent theory of meaning/cognition I really had no pointed way to press the argument. Now I think I have the tools, but they can only take us to the doorstep of the singularity, suggest the possibility of a Big Splat.
What lies beyond is anybody’s guess, but I think yours are hands down the coolest! Something has to explain the Fermi Paradox–my guess is apathy. Predation is a possibility, as is any number of technological misadventures, but I wonder whether it isn’t simply a matter of civilization after civilization dozing off and never waking up.
The universe as elderly care facility. Decidedly uncool!
The universe as a Wind-Up Doll with a broken spring. Things are stuck in glitch mode… 🙂
Most of the bioethics industry seems fixated on gaming the ethics of human enhancement rather than considering whether the background conditions for their discourse will even make it out of the near future. Perhaps that’s down to academic specialisation or some institutional inertia, I dunno. One of the reasons I wrote PHL was to insinuate this kind of question within the bioethics industry.
But they seem frustratingly slow to catch up. Anyway, anarcho-ecologies seems like the most pressing problem posed by transhumanism and worth a systematic interdisciplinary research programme.
I suppose it’s worth asking just how ecologically tolerant intentional cognition is? What kinds of prophylactics might help us distinguish natural and artificial agents? (Assuming we even want to – you have to wonder whether the guys on Ashley M could be arsed to have the kind of conversation that would sift out the sexbots, or maybe were too invested in getting their rocks off after upgrading to premium membership.) Nobody is suggesting their angels were singularity-potent.
Intentional cognition might be bounded but potentially it’s highly rectifiable and subtle. “I might be a bit shy at first, wait til you get to know me, wink wink :)” might work as an opening gambit, but you have to wonder whether the average randy male could pass the Turing test.
“Anyway, anarcho-ecologies seems like the most pressing problem posed by transhumanism and worth a systematic interdisciplinary research programme.”
I couldn’t agree more! Bioethics is a portrait of crash-space, but then so is my blog, your blog, the work of anyone interested in these issues. But the big question has to be one of finding some way forward, some way out. Since I think the enhancement is going to happen regardless, the pressing need is a creative one, inventing a form of ‘deep sociocognition,’ rather than endlessly gerrymandering shallow sociocognition to accommodate this or that deep variance. If I had less pride I would send every bioethicist in the world a copy of Crash Space!
Your point about the Ashley Madison case is well-taken: the key, I think, is ‘staging,’ the control the company had over the conditions of interactions. So long as their clients peered through their straws, the illusion was sound.
“Intentional cognition might be bounded but potentially it’s highly rectifiable and subtle.”
The ductility of intentional cognition is THE battleground, I think. As it stands, even when you read sophisticated transhumanists like Buchanan, for instance, the ‘Will/Way’ assumption prevails, but only because no one has any viable theory of what intentional cognition amounts to. What about game theory, for instance? Will anarcho-ecologies moot game theory as well? Or how about something thinner still, like optimization? Will we be able to assign targets to entities, use these to predict and explain their actions?
I don’t have any easy answer to these questions. It could be the case that something like Dennett’s ‘real patterns’ does assert itself come what may, that some kind of intentionally tractable equilibria haunts what appear to be anarcho-ecologies. But even if this is the case, that ‘thin’ intentionalities can cognize anarchic cognitive ecologies, I can’t see the masses having any taste for them, let alone access to them, not when the world has been transformed into Ashley Madison.
Ok, you made me do it: The Wall at the End of Things
I came to something like “anarcho-ecologies” not by way of trans-humanism but via my ongoing studies of our all-too-human limits and our related efforts at bricolage, but either way I would welcome a systematic interdisciplinary research programme, and if going trans is what it takes then why not,
my experience tells me that RSB is right that this is not going to bring about mass conversions but I’ve generally dwelled in the realm of unpopular/untimely tastes so would welcome any and all hands on deck of the Nautilus.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it” – Upton Sinclair, explaining why not many philosophers go for Blind Brain Theory
And I found myself using this quote today. Thanks
Reblogged this on The Ratliff Notepad.
Cool. It’s a key post, I think.
I suppose any successful work of fiction cues sociocognitive heuristics which evolved to facilitate interactions with real people.
https://www.edge.org/conversation/ed_boyden-how-the-brain-is-computing-the-mind