Optimally Engaged Experience
by rsbakker
To give you an idea as to how far the philosophical tradition has fallen behind:
The best bot writing mimics human interaction by creating emotional connection and engaging users in “real” conversation. Socrates and his buddies knew that stimulating dialogue, whether it was theatrical or critical, was important contributing to a fulfilling experience. We, as writers forging this new field of communication and expression, should strive to provide the same.
This signals the obsolescence of the tradition simply because it concretizes the radically ecological nature of human social cognition. Abstract argument is fast becoming commercial opportunity.
Sarah Wulfeck develops hybrid script/AI conversational user interfaces for a company called, accurately if shamelessly, Pullstring. Her thesis in this blog post is that the shared emphasis on dialogue one finds in the Socratic method and chatbot scripting is no coincidence. The Socratic method is “basically Internet Trolling, ancient Greek style,” she claims, insofar as “[y]ou assume the other participant in the conversation is making false statements, and you challenge those statements to find the inconsistencies.” Since developers can expect users to troll their chatbots in exactly this way, it’s important they possess the resources to play Socrates’ ancient game. Not only should a chatbot be able to answer questions in a ‘realistic’ manner, it should be able to ask them as well. “By asking the user questions and drawing out dialogue from your user, you’re making them feel “heard” and, ultimately, providing them with an optimally engaged experience.”
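The scripting pattern Wulfeck describes — answer what the script covers, and when it doesn’t, deflect by asking the user a question back so they feel “heard” — can be sketched in a few lines. This is a toy illustration only, not Pullstring’s actual tooling; the script table and phrasings are invented for the example.

```python
# Toy sketch of Wulfeck's pattern (hypothetical, not Pullstring's software):
# answer when a scripted response matches, otherwise mirror the user's
# words back as a question -- the 'make them feel heard' move.

SCRIPT = {
    "how are you": "I'm doing well, thanks for asking!",
    "what do you do": "I chat with people all day.",
}

def reply(user_line: str) -> str:
    """Return a scripted answer if one matches, else deflect with a question."""
    key = user_line.lower().strip("?!. ")
    if key in SCRIPT:
        return SCRIPT[key]
    # No scripted answer: ask the user to elaborate instead of admitting
    # the hole in the script.
    return f"That's interesting -- what makes you say '{user_line}'?"

print(reply("How are you?"))         # a scripted answer
print(reply("Chatbots are trolls"))  # a deflecting question
```

The point of the deflection branch is exactly the “hole-plugging” Wulfeck recommends: every unanticipated input becomes an occasion to draw out more dialogue rather than break the illusion.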
Thus the title.
What she’s referring to, here, is the level of what Don Norman calls ‘visceral design’:
Visceral design aims to get inside the user’s/customer’s/observer’s head and tug at his/her emotions either to improve the user experience (e.g., improving the general visual appeal) or to serve some business interest (e.g., emotionally blackmailing the customer/user/observer to make a purchase, to suit the company’s/business’s/product owner’s objectives).
The best way into a consumer’s wallet is to push their buttons—or in this case, pull their sociocognitive strings. The Socratic method, Wulfeck is claiming, renders the illusion of human cognition more seamless, thus cuing belief and, most importantly, trust, which for the vendor counts as ‘optimal engagement.’
Now it goes without saying that the Socratic method is way more than the character development tool Wulfeck makes of it here. Far from the diagnostic prosecutor immortalized by Plato, Wulfeck’s Socrates most resembles the therapeutic Socrates depicted by Xenophon. For her, the improvement of the user experience, not the provision of understanding, is the summum bonum. Chatbot development in general, you could say, is all about going through the motions of things that humans find meaningful. She’s interested in the Chinese Room version of the Socratic method, and no more.
The thing to recall, however, is that this industry is in its infancy, as are the technologies underwriting it. Here we are, at the floppy-disk stage, and our Chinese Rooms are already capable of generating targeted sociocognitive hallucinations.
Note the resemblance between this and the problem-ecology facing film and early broadcast television. “Once you’ve mapped out answers to background questions about your bot,” Wulfeck writes, “you need to prepare further by finding as many holes as you can ahead of time.” What she’s talking about is adding distinctions, complicating the communicative environment, in ways that make for a more seamless interaction. Adding wrinkles smooths the interaction. Complicating artificiality enables what could be called “artificiality neglect,” the default presumption that the interaction is a natural one.
As a commercial enterprise, the developmental goal is to induce trust, not to earn it. ‘Trust’ here might be understood as business-as-usual functioning for human-to-human interaction. The goal is to generate the kind of feedback the consumer would receive from a friend, and so cue business-as-usual friend behaviour. We rarely worry about, let alone question, the motives of loved ones. The ease with which this feedback can be generated and sustained expresses the shocking superficiality of human sociocognitive ecologies. In effect, firms like Pullstring exploit deep ecological neglect to present cues ancestrally bound to actual humans in circumstances with nary a human to be found. Just as film and television engineers optimize visual engagement by complicating their signal beyond a certain business-as-usual threshold, chatbot developers are optimizing social engagement in the same way. They’re attempting to achieve ‘critical social fusion,’ to present signals in ways allowing the parasitization of human cognitive ecologies. Where Pixar tricks us into hallucinating worlds, Pullstring (which, interestingly enough, was founded by former Pixar executives) dupes us into hallucinating souls.
Cognition consists in specialized sensitivities to signals, ‘cues,’ correlated to otherwise occluded systematicities in ways that propagate behaviour. The same way you don’t need to touch a thing to move it—you could use the proverbial 10ft pole—you don’t need to know a system to manipulate it. A ‘shallow cognitive ecology’ simply denotes our dependence on ‘otherwise occluded systematicities,’ the way certain forms of cognition depend on certain ancestral correlations obtaining. Since the facts of our shallow cognitive ecology also belong to those ‘otherwise occluded systematicities,’ we are all but witless to the ecological nature of our capacities.
Cues cue, whether ancestrally or artifactually sourced. There are endlessly more ways to artificially cue a cognitive system. Cheat space, the set of all possible artifactually sourced cuings, far exceeds the set of possible ancestral sourcings. It’s worth noting that this space of artifactual sourcing is the real frontier of techno-industrial exploitation. The battle isn’t for attention—at least not merely. After all, the ‘visceral level’ described above escapes attention altogether. The battle is for behaviour—our very being. We do as we are cued. Some cues require conscious attention, while a great many others do not.
As should be clear, Wulfeck’s Socratic method is a cheat space Socratic method. Trust requires critical social fusion, that a chatbot engage human interlocutors the way a human would. This requires asking and answering questions, making the consumer feel—to use Wulfeck’s own scarequotes—“heard.” The more seamlessly inhuman sources can replace human ones, the more effectively the consumer can be steered. The more likely they are to express gratitude.
Crash.
The irony of this is that the Socratic method is all about illuminating the ecological limits of philosophical reflection. “Core to the Socratic Method,” Wulfeck writes in conclusion, “is questioning, analyzing and ultimately, simplifying conversation.” But this is precisely what Socrates did not do, as well as why he was ultimately condemned to death by his fellow Athenians. Socrates problematized conversation, complicated issues that most everyone thought straightforward, simple. And he did this by simply asking his fellows, What are these tools we are using? Why do our intuitions crash the moment we begin interrogating them?
Plato’s Socrates, at least, was not so much out to cheat cognition as to crash it. Think of the revelation, the discovery that one need only ask second-order questions to baffle every interlocutor. What is knowledge? What is the Good? What is justice?
Crash. Crash. Crash.
We’re still rooting through the wreckage, congenitally thinking these breakdowns are a bug, something to be overcome, rather than an obvious clue to the structure of our cognitive ecologies—a structure that is being prospected as we speak. There’s gold in dem der blindnesses. The Socratic method, if anything, reveals the profundity of medial neglect, the blindness of cognition to the nature of cognition. It reveals, in other words, the very ignorance that makes Wulfeck’s cheat space ‘Socratic method’ just another way to numb us to the flickering lights.
To be human is to be befuddled, to be constantly bumping into your own horizons. I’m sure that chatbots, by the time they get to the gigabyte thumb-drive phase, will find some way of simulating this too. As Wulfeck herself writes, “It’s okay if your bot has to say “I don’t know,” just make sure it’s saying it in a satisfying and not dismissive way.”
before the engineering can get much better the social science will have to get better which will take something like a Kuhnian shift, for instance if you look at the kinds of testing clinicians use for neuropsych evals (crappy personality tests and so on) they are also quite far behind the times and despite all the scolding headlines social-psychology depts and the like remain largely unchanged.
Also engineers/startuppers (let alone sales-people, biz schoolers) tend towards very simplistic takes on neurology/behavior, see all the Facebook types going on and on about dopamine hits and the like. My bet is that it will take a govt like China to really push these things in a truly new direction, past pellets for pigeons.
Agreed. Did you get a chance to check out the article on ‘social credit scoring’ in Wired a couple months back, Dirk? I sometimes think the Chinese are the MOST naive when it comes to these issues.
I’ve been following the reporting but not sure I caught that one, right now they are really just in the big brother fear mongering mode but they are dumping a ton of money into AI and some of their technical grad schools are getting very good so while Stanford is invested in clickbait they might well come closest to fulfilling yer scenario, if they don’t I don’t think anyone will, basic research is getting thin on the ground…
With the social credit scoring stuff the approach seems so flat-footed: they’re like teachers who think grades incentivize prosocial behaviour in all students both in the classroom and out. Another example of the failure to think things through ecologically. Nothing stays buttoned down after this point, at least not with capital running the technological show. Every system is blind. Every system possesses a cheat space. Every cheat can be cheated. Pancakes are a woefully static metaphor to use to describe what’s happening to us: endlessly trammelled battlegrounds capture more of the offending dynamics I think.
“Another example of the failure to think things through ecologically” sure but this is my broader point to you along these lines truly ecological thinking evades almost everyone and so is not likely to ever be operationalized/institutionalized at a mass scale, no way to get the personnel let alone build the needed reflexivity into any system which could be scaled up, just check out the dead ends of cybernetics over the decades so much promise so painfully little in actual progress…
“But today, I see within us all (myself included) the replacement of complex inner density with a new kind of self-evolving under the pressure of information overload and the technology of the “instantly available”. A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become “pancake people”—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
https://www.edge.org/3rd_culture/foreman05/foreman05_index.html
Pretty cool. For my money, George Dyson’s response is the one.
indeed, not bad for the time and so much better than current sites like Aeon and the rest pushing out crap along the lines of:
http://newbooksnetwork.com/michael-ruse-on-purpose-princeton-up-2017/
timely meditations apparently
https://understandingsociety.blogspot.com/2018/02/folk-psychology-and-alexa.html
Yeah that is totally interesting. I think it’s just a further elaboration on that psychological experiment where people are shown random pictures or something under the assumption that there is a meaning behind the order of their group of pictures that are shown. That experiment that shows that people just make meaning and it really doesn’t matter what that meaning is or something like that.
But I think what is more significant is the assumption that we are talking about a common human being, as if all human beings are subject to this kind of chatbot experience.
I think we have to come to terms with this idea that there really is no common human being; there is only a human being that is assumed, for the purpose of these kinds of interactions. Because I could be responding to a chatbot right now and it really doesn’t do anything to the quality of my life or its experiences, or what I do in that life, whether I knew it was a chatbot or didn’t know it was a chatbot. I’m not going to go purchase anything nor make any sort of suggestive comments about what our interaction was that would suggest any sort of inauthenticity or anything like that, depending on whether it was a chatbot or not.
I think that in these kinds of experiments and results we need to qualify what really is going on, within the context that there is no “common human being” that these experiments are revealing. They are revealing only a certain type of humanity that is being used towards a project that we could call enlightenment or modernity or technology or progress, or any of those types of labels.
But it is interesting. I think it’s even more interesting that these types of experiments are assumed upon a common type of human. I think that’s the most interesting thing about these kinds of technological “developments”.
Thx.
depends on the ‘level’ of cognition/computation yer interested-in/describing and whether or not yer interested in the how/functions/anatomy or the content,for most of what we are/do there are major overlaps that really have nothing to do with the history of ideas but with evolution,
here I think RSB isn’t so much interested in basic science research as engineering/applications and to have major effects (enough of the population for significant impacts) one doesn’t need to engage everyone, think about the votes in the US presidential election or the impacts of social-media on print/tv news or the like.
The common human, though, would say ‘we’ have these various ‘new kinds’ of ‘deceptive’ interactions with the bot. The only deception that is going on is from the perspective of the makers of the bots: the ‘we’ that includes them in the humanity that is getting deceived or manipulated.
But human beings do this to each other all the time. Is it really nifty that we can create another machine that can implement a deception that was already implied in the human who was involved in the project? Not really, I think.
It’s just ‘more human’ stuff, more humans marvelling at how well they can deny their own involvement in outcomes.
you seem to miss the ways in which more matters, more interactions, more impact, etc and that it’s not just bots but all sorts of “nudges” and algorithms which are used to judge/manage us in relation to the data being gathered by our interactions. They don’t have to be sophisticated to hurt us,
It’s not that I don’t appreciate the ability to manipulate people and myself; it’s more that I just kind of accept that that is the reality of the situation.
This does not mean, then, that I am admitting some sort of defeat or that I’m powerless or anything.
But it does involve, or have, a certain “sane” response to actual situations.
The example that I think is very pertinent takes place in my own home. Now I must say that, regardless of what psychology might say about interpersonal relationships and healthy psychology or whatever the hell people want to say, the fact of any social relationship is that it exceeds whatever sort of application we want to apply to it, so far as what is healthy and good for the people within the relationship.
But anyways, the example I want to use is recycling. I do have a certain dread about the end of the world and about us polluting our planet and about being manipulated on the Internet and having too much of my information on the Internet. All of these things do go on in my mind and exhibit for me a certain kind of what I consider responsible attitude towards our world.
But the fact of the matter is I do not act 100% of the time in accordance with these principles that I hold so dear. I would say that half the time I do not recycle all the things that could be recycled. Now I am not saying this is something that I feel I need to improve on or I need to reflect upon my life in such a way that I could do a better job or something like that. I say this as reflecting the bare fact of the situation that when I’m cleaning up after dinner I don’t want to fucking walk out to the fucking recycling trashcan and throw away those stupid cans and bottles. Anyone can judge me as they like but the fact of the matter is that I don’t always store away the cans and bottles into the recycling.
Now even a very, very conservative estimate that extrapolates from my experience would tell us that most people in the world behave worse than I do.
So what I’m kind of saying so far is that all this technology and AI and bots being able to act like human beings is not an exceptional situation, and I feel it will only create damage inasmuch as anything else that human beings make creates damage in the world, whether with intention or without intention, whether by people making judgements upon other people about what they were supposed to do, or by people reflecting upon themselves and feeling guilty. Whatever the situation, anything we do is going to cause a relatively more or less harmful situation in our world.
I say that this is not exceptional. My position in the world is that we are not special as universal creatures or as universal objects.
I am being manipulated. And I say: so what? I do my best not to be manipulated, but the fact is that even my ability to act in such a way as to suppose that I might not be manipulated is itself most probably already planned for, or being taken into account towards manipulating me. Lol
Again I say this is not exceptional. This is not something that I need to retaliate against. But in actuality, when my daughter grows up and is mature, this situation is just something that she’s going to take in stride as part of her everyday experience. Something that I think is so terrible or contrary to what I consider a human being is actually going to be a grounding part of what it is to be human 50 years from now or 100 years from now.
OK I’m done. This voice dictation tends to make me ramble, so I apologize.
…it’s like the question implicit to Blade Runner. 😄
https://jacobinmag.com/2018/01/virginia-eubanks-interview-automating-inequality-poverty
Yes. Thank you. You are right. I was not thinking along those lines.
But
As I say of the Two Routes:
The truth is not real. And in reality one must engage fully, as if it is true. With no recourse out.
These routes do not reduce to, or negate, one or the other.
“It’s more convenient. It’s great to not have to carry around paper [food] stamps anymore. But also, my caseworker uses it to track all of my purchases.” And I had this look on my face, a look of total shock. And Dorothy said, “Oh. You didn’t know that, did you?” I did not. She said, “You all” — meaning middle-class people like me — “You all should be paying attention to what happens to us, because they’re coming for you next.”
It was always taken for granted that the poor have no right to privacy regarding how they use government benefits. The erosion of both privacy and the idea of privacy as a right started in the welfare system because the data were readily available. As the ability to gather data rose the socioeconomic ladder the destruction of privacy rights followed. Part of what makes the idea of Turing level chatbots so disturbing is what will be possible when corporations and governments can combine our deepest spiritual and moral aspirations with all the information already available about criminal histories, spending habits and the like.
And as we learned from our favorite whipping girl, Ashley Madison, the Turing test is a test of human gullibility as much as a test of technological power.
I dunno, viruses rely on a common type of human and they work pretty well.
They work on a common type, except those who don’t get the virus. For example: my wife and daughter both got this year’s flu virus, the same day even. Slammed them for about 4 days. Two weeks to completely get over it. I did nothing out of the ordinary to prevent myself from getting it. Took care of them. Didn’t even get a dry throat.
The point I try to make is that these ‘computerized events’ do nothing to affect the creature I am. As I said, even if I am responding to a bot right now, nothing in my life would change. And in fact my daughter’s generation will be so acclimated to such interactions it won’t matter at all what I thought about it. Their ‘normal’ will be my ‘just old’.
Perhaps I’m misreading or reading too much into it; but advertising does nothing for me or to me. It is not that I am ignorant of what effect it might have on me; it is more that there is a group of people who claim that I am being ignorant.
Again: the sensible answer should be “so what”?
Another example: I have no stake, in my existence, in whether some future humanity colonizes Mars. Yet I am supposed to ‘believe’ that that will be a great achievement, and therefore I am supposed to see that I am indeed a part of this “great human endeavor,” or else I am being ignorant.
I refuse to concede to this type of either or ideology. As if I wouldn’t find going to mars interesting. Just because I claim I have no stake in the activity considered a “human” goal.
Does that make sense?
I don’t understand a reference to ‘the sensible answer’ – such an answer would be for everyone, and for those who can be affected by semantic viruses ‘So what?’ is hardly a sensible answer?
I don’t know if Scott’s post seems like it requires, as part of the argument, having to accept oneself as vulnerable – I don’t think it requires it. But even if you think you’re immune: as much as you were looking after your daughter when a regular virus got to her, what about a semantic virus and its effect on finances?
The biology is common, and this constrains the efflorescent possibilities of plasticity/training, which, as dirk points out, are either wildly divergent or horrifically convergent (and everything in-between) depending on your level of analysis. Now that machine learning has been loosed upon the world, handling differences between individuals is simply a matter of exposure and resources. The patterns don’t have to be the same; they just have to be there.
people seem to struggle to grasp that in relation to engineering there is a kind of pragmatism where the mechanism just needs to achieve the particular end/task at hand to be ‘optimally engaged’, doesn’t need to be all too human. also as one of yer commenters failed to grasp in an exchange on yer last post when one moves from the ambiguities of verbal theories/propositions to actionable/mathematized engineering tasks/ends there are lot of assumptions that must be made/concretized.
Now often these assumptions aren’t particularly grounded (good luck getting engineers/coders to see that their “data driven” projects are putting their calculating power in service of their intuitions) and so the goal of a more autonomous (even reflexive) system is much harder than any moonshot.
for a more careful consideration see: https://www.youtube.com/watch?v=_sY911zqgys
In that respect they are like influenza viruses. It’s one thing not to have gotten the flu this season. It’s quite another thing to never have gotten the flu. Chatbots might become able to evolve faster than our intellectual immune systems can follow.
Plato’s Socrates, at least, was not so much out to cheat cognition as to crash it. Think of the revelation, the discovery that one need only ask second-order questions to baffle every interlocutor. What is knowledge? What is the Good? What is justice?
Crash. Crash. Crash.
Isn’t that what Wulfeck is referring to? To be the one triggering a crash is to be left in a strong position – to watch the spluttering, struggling, gasping of the other.
I mean, how can you argue any following statements from a guy that has just done that? Surely he sounds absolutely in the know as much as ones own thoughts are abject?
What distinguishes one crash from another?
‘What price can you put on love?’
[Okay, smartarsery aside, one might say the intent with the Socratic method crash is different, ie trying to lead to something different – like the difference between a blade cutting open skin with a serial killer vs the same blade and skin with a surgeon. But both cut just as much.
I’m trying to partially undermine any crash capacity my own post had (if I flatter myself to think it could have any capacity to do so) to partially avoid this issue. Some deliberate lameness, not just naturally occurring lameness.]
The Socratic ‘crash’ is only from a certain view, and a view that wants to be recognized for its ‘power to manifest a view’.
When you read Plato, the Socratic method referred to in this ‘crash’ is one that misses what Socrates is doing for the sake of the ‘inspired interpretation’ of that reader who sees the ‘Socratic method’ as crashing something; indeed, it is a ‘miss’ that is taken up by a bunch of people who likewise get together to ‘enforce’ this ‘missing interpretation’ so they can all revel in their ‘power to crash things’.
It’s like someone who reads the Cliffs Notes and then gets all the answers on the quiz right. The ‘A’ on the test only means something in certain circles and under particular conditions.
Done crashed me with how Socratic method isn’t about crashing, bro!
If I were to argue it’s not the Socratic method, I would hope the difference is here “[y]ou assume the other participant in the conversation is making false statements”. As in, just assuming the other guy is automatically making false statements – that’d be damn dogmatic!
Yea. Lol. The Socratic method, it would seem, once identified, stops progress along a certain line. I suppose then how one reacts defines whether there has been a crash. For the issue is whether a person thinks their thought is able to get outside of its own limit to be able to consider what is universal of humanity: in the case of the ‘method’, once I notice the questioning, a certain type of person would discount everything that comes after, seeing such ‘doubt’, its method and results, as ‘already found’. Crash!
The thing is, no argument can be made to describe to the crasher or the crashed that their reaction method is a mistake, because everyone is assumed to have a common inspirational resource for finding the truth of things, again, through this common resource that, again, everyone has access to given certain conditions (education, biology, intelligence, race, class). We simply have to put it all in the same pot to verify that the view that crashes is correct. No argument can be made to get the crasher to see beyond its proclaiming view. So anyone who says different becomes ‘ignorant’, or a number of other discounting and generally insulting remarks, rather than simply not contained in the category that is assumed common.
Ahh…. but I think this is what Dude is pointing to with the Bot thing, But I wouldn’t lump “all Philosophy” as behind; I call it “conventional”.
My point is that to lump it all together might have missed some loose ends, so ready to be the head of progress, such jumpers have missed something that just might come up and bite them: maybe knowledge of whether it is a bot falls under the same philosophical rubric that the jumpers are suggesting they are getting beyond, so they are not behind in the progressive move.
I think such a view forgets to check if the lug nuts are right before they accelerate back into the race. They might win, but then this race never ends, so what’s the probability of a tire falling off in such a progressive race? 😄
Crash.
Reblogged this on .
Danke. Love your handle, uteropensante.
Semantic apocalypse creep – https://www.buzzfeed.com/charliewarzel/the-terrifying-future-of-fake-news?utm_term=.ikRoJ1mKw#.hfzq90MZy
Don’t worry. We won’t really be screwed until they can spoof taste, touch and smell.
I think the general effect will be to drive us back to reliance on ‘in the flesh’ interactions. I think that would be a good thing for those capable of it. But on the other hand, I’m old enough to have grown up before the always on connectivity of smartphones existed, so I’m not as addicted to them as the media claim young people are. My hope of people rejecting this technology in a sort of peaceful Butlerian Jihad is probably baseless, but I do hope, and I think we might have about as much time to deal with this as we have with climate change.
…perhaps I am saying that it doesn’t matter whether I interact with another human being or a bot: the engagement towards capitalism occurs everywhere, even when we don’t expect it. That bots are joining the fray is only important for those who feel they are making a name for themselves, and perhaps in 100 years no one will give a crap. It’s just that right now, I just think it’s interesting. 🙂
Maybe it’s just me, but Sarah Wulfeck’s article reads a lot like the advice you get in creative writing classes. You could say chatbots are just fictional characters. In fact I wonder if chatbot creation might someday come to be considered an art form. What would a chatbot Kellhus or chatbot Achamian be like? Alternately, you could think of chatbots as attempts to create software that can pass the Turing test. Critics have said that the hallmark of a great literary character (Hamlet, Falstaff) is that they seem real enough that you can imagine how they might act in circumstances other than those of the work in which they originated. For that matter, imagine what narrative art forms might be possible if we can combine Turing level chat and virtual reality. These works of art/entertainment might prove to be more valuable than the ability of chat to sneak sports references into dishwasher sales pitches. Scott, maybe there’s a business opportunity here for novelists who can work with software engineers.
I think the same. It’s precisely the approach I take to my novels: reality is a function of detail. The difference lies in the dynamism and ecological upshot of the manipulation involved. Which is why I would sooner eat my teeth than trade jobs with Wulfeck! 😉
Re: Manipulation.
Wouldn’t a mutualism also apply in regards to classic fantasy writing – a willingness to undergo the manipulation that is fantasy, along with invoking that manipulation unto others? A ‘Do unto others…’ mutualism? Where as do the people working at Pullstring want to have what they make done unto them?
Fabulous stuff, Scott. Incidentally, an MA I’m teaching on moves without fuss from a block on Plato’s Meno to some material on Consciousness, based on Keith Frankish’s excellent book for the OU’s sadly defunct philosophy of mind module. So I’ve posted a brief summary and link to this post on our module forum. A perfect segue where none seemed possible. Best wishes, David
Way cool. Thanks David.
https://link.springer.com/article/10.1007/s11229-018-1716-9
General ecological information supports engagement with affordances for ‘higher’ cognition
Where to begin? If Jorge or Ochlo are around I’d be interested in understanding how something like this would read in actual neuro circles.
they might want to start @ http://www.uc.edu/cap/research.html
View at Medium.com
“What men often don’t get, and don’t hear enough, is that they are beautiful and dignified creatures. The intrinsic worth of men does not depend on women in any way, not for approval or by submission.”
What does “intrinsic worth” mean? Can things have value or worth other than their prices? If so, how can that value/worth be determined?
It probably means ‘loved (to some degree) just for existing’
To go back to money, it’s like being given some money to play at a game of poker/socialisation. Without any play into the game, an individual just abandons the game.
A man should value his own worth.
https://tricycle.org/magazine/treasures-translation/
Doing so means you’re not playing the same game – checkers at a chess game or vice versa. But granted if you’re not being dealt into a game what else is there but to value your own worth?
What indeed.
Also this reminds me of a Rick & Morty episode, where [spoilers] alien parasites enter their home and brainwash them into thinking they are friends and family in order to take their food, rapidly breeding during this time. When the family does cotton on, they even begin to turn on each other, beginning to think each other are the parasites. The parasites’ weakness is you have no unpleasant memories of them – real people will have generated some unpleasant memories, thus they are distinguishable from parasites. This is the way the parasites are eliminated (with one unfortunate false positive). But Wulfeck is too smart to let such a simple weakness slip in…
It might be a useful cultural touchstone to call upon in discussion of fiscal AI’s which are designed to act human in order to extract money.