Optimally Engaged Experience

by rsbakker

To give you an idea as to how far the philosophical tradition has fallen behind:

The best bot writing mimics human interaction by creating emotional connection and engaging users in “real” conversation. Socrates and his buddies knew that stimulating dialogue, whether it was theatrical or critical, was important contributing to a fulfilling experience. We, as writers forging this new field of communication and expression, should strive to provide the same.

This signals the obsolescence of the tradition simply because it concretizes the radically ecological nature of human social cognition. Abstract argument is fast becoming commercial opportunity.

Sarah Wulfeck develops hybrid script/AI conversational user interfaces for a company called, accurately if shamelessly, Pullstring. Her thesis in this blog post is that the shared emphasis on dialogue one finds in the Socratic method and chatbot scripting is no coincidence. The Socratic method is “basically Internet Trolling, ancient Greek style,” she claims, insofar as “[y]ou assume the other participant in the conversation is making false statements, and you challenge those statements to find the inconsistencies.” Since developers can expect users to troll their chatbots in exactly this way, its important they possess the resources to play Socrates’ ancient game. Not only should a chatbot be able to answer questions in a ‘realistic’ manner, it should be able to ask them as well. “By asking the user questions and drawing out dialogue from your user, you’re making them feel “heard” and, ultimately, providing them with an optimally engaged experience.”

Thus the title.

What she’s referring to, here, is the level of what Don Norman calls ‘visceral design’:

Visceral design aims to get inside the user’s/customer’s/observer’s head and tug at his/her emotions either to improve the user experience (e.g., improving the general visual appeal) or to serve some business interest (e.g., emotionally blackmailing the customer/user/observer to make a purchase, to suit the company’s/business’s/product owner’s objectives).

The best way into a consumer’s wallet is to push their buttons—or in this case, pull their sociocognitive strings. The Socratic method, Wulfeck is claiming, renders the illusion of human cognition more seamless, thus cuing belief and, most importantly, trust, which for the vendor counts as ‘optimal engagement.’

Now it goes without saying that the Socratic method is way more than the character development tool Wulfeck makes of it here. Far from the diagnostic prosecutor immortalized by Plato, Wulfeck’s Socrates most resembles the therapeutic Socrates depicted by Xenophon. For her, the improvement of the user experience, not the provision of understanding, is the summum bonum. Chatbot development in general, you could say, is all about going through the motions of things that humans find meaningful. She’s interested in the Chinese Room version of the Socratic method, and no more.

The thing to recall, however, is that this industry is in its infancy, as are the technologies underwriting it. Here we are, at the floppy-disk stage, and our Chinese Rooms are already capable of generating targeted sociocognitive hallucinations.

Note the resemblance between this and the problem-ecology facing film and early broadcast television. “Once you’ve mapped out answers to background questions about your bot,” Wulfeck writes, “you need to prepare further by finding as many holes as you can ahead of time.” What she’s talking about is adding distinctions, complicating the communicative environment, in ways that make for a more seamless interaction. Adding wrinkles smooths the interaction. Complicating artificiality enables what could be called “artificiality neglect,” the default presumption that the interaction is a natural one.

As a commercial enterprise, the developmental goal is to induce trust, not to earn it. ‘Trust’ here might be understood as business-as-usual functioning for human-to-human interaction. The goal is to generate the kind of feedback the consumer would receive from a friend, and so cue business-as-usual friend behaviour. We rarely worry, let alone question, the motives of loved ones. The ease with which this feedback can be generated and sustained expresses the shocking superficiality of human sociocognitive ecologies. In effect, firms like Pullstring exploit deep ecological neglect to present cues ancestrally bound to actual humans in circumstances with nary a human to be found. Just as film and television engineers optimize visual engagement by complicating their signal beyond a certain business-as-usual threshold, chatbot developers are optimizing social engagement in the same way. They’re attempting to achieve ‘critical social fusion,’ to present signals in ways allowing the parasitization of human cognitive ecologies.  Where Pixar tricks us into hallucinating worlds, Pullstring (which, interestingly enough, was founded by former Pixar executives) dupes us into hallucinating souls.

Cognition consists in specialized sensitivities to signals, ‘cues,’ correlated to otherwise occluded systematicities in ways that propagate behaviour. The same way you don’t need to touch a thing to move it—you could use the proverbial 10ft pole—you don’t need to know a system to manipulate it. A ‘shallow cognitive ecology’ simply denotes our dependence on ‘otherwise occluded systematicities,’ the way certain forms of cognition depend on certain ancestral correlations obtaining. Since the facts of our shallow cognitive ecology also belong to those ‘otherwise occluded systematicities,’ we are all but witless to the ecological nature of our capacities.

Cues cue, whether ancestrally or artifactually sourced. There are endlessly more ways to artificially cue a cognitive system. Cheat space, the set of all possible artifactually sourced cuings, far exceeds the set of possible ancestral sourcings. It’s worth noting that this space of artifactual sourcing is the real frontier of techno-industrial exploitation. The battle isn’t for attention—at least not merely. After all, the ‘visceral level’ described above escapes attention altogether. The battle is for behaviour—our very being. We do as we are cued. Some cues require conscious attention, while a great many others do not.

As should be clear, Wulfeck’s Socratic method is a cheat space Socratic method. Trust requires critical social fusion, that a chatbot engage human interlocutors the way a human would. This requires asking and answering questions, making the consumer feel—to use Wulfeck’s own scarequotes—“heard.” The more seamlessly inhuman sources can replace human ones, the more effectively the consumer can be steered. The more likely they will express gratitude.

Crash.

The irony of this is that the Socratic method is all about illuminating the ecological limits of philosophical reflection. “Core to the Socratic Method,” Wulfeck writes in conclusion, “is questioning, analyzing and ultimately, simplifying conversation.” But this is precisely what Socrates did not do, as well as why he was ultimately condemned to death by his fellow Athenians. Socrates problematized conversation, complicated issues that most everyone thought straightforward, simple. And he did this by simply asking his fellows, What are these tools we are using? Why do our intuitions crash the moment we begin interrogating them?

Plato’s Socrates, at least, was not so much out to cheat cognition as to crash it. Think of the revelation, the discovery that one need only ask second-order questions to baffle every interlocutor. What is knowledge? What is the Good? What is justice?

Crash. Crash. Crash.

We’re still rooting through the wreckage, congenitally thinking these breakdowns are a bug, something to be overcome, rather than an obvious clue to the structure of our cognitive ecologies—a structure that is being prospected as we speak. There’s gold in dem der blindnesses. The Socratic method, if anything, reveals the profundity of medial neglect, the blindness of cognition to the nature of cognition. It reveals, in other words, the very ignorance that makes Wulfeck’s cheat space ‘Socratic method’ just another way to numb us to the flickering lights.

To be human is to be befuddled, to be constantly bumping into your own horizons. I’m sure that chatbots, by time they get to the gigabyte thumb-drive phase, will find some way of simulating this too. As Wulfeck herself writes, “It’s okay if your bot has to say “I don’t know,” just make sure it’s saying it in a satisfying and not dismissive way.”