Arguing No One: Wolfendale and the Penury of ‘Pragmatic Functionalism’
In “The Parting of the Ways: Political Agency between Rational Subjectivity and Phenomenal Selfhood,” Peter Wolfendale attempts to show “how Metzinger’s theory of phenomenal consciousness can be integrated into a broadly Sellarsian theory of rational consciousness” (1). Since he seems to have garnered the interest of more than a few souls who also follow Three Pound Brain, I thought a critical walkabout might prove to be a worthwhile exercise. Although I find Wolfendale’s approach far—far—more promising than that of, say, Adrian Johnston or Slavoj Žižek, it still commits basic errors that the nascent Continental Philosophy of Mind, fleeing the institutional credibility crisis afflicting Continental Philosophy more generally, can ill afford. Ingroup credibility is simply too cheap to make any real difference in what has arguably become the single greatest research project in the history of the human race: the quest to understand ourselves.
Wolfendale begins with an elegant summary of Thomas Metzinger’s position as espoused in his magnum opus, Being No One: The Self Model Theory of Subjectivity (a précis can be found here), the lay-oriented The Ego Tunnel: The Science of the Brain and the Myth of the Self, and in numerous essays and articles. After more than a decade, Being No One remains an excellent point of entry for anyone attempting to get a handle on contemporary philosophy of mind, philosophy of psychology, and cognitive science more generally. Unfortunately, the book is particularly dated in terms of consciousness research (and Thomas, who has been a tireless champion of the program, would not have it any other way!), but it speaks to the prescience of Metzinger, not to mention his genuine openness to new approaches, that he had already seen the promise of things like enactivism, action-oriented predictive processing, and information integration theories of consciousness at the turn of the millennium. Being No One is a book I have criticized many times, but I have yet to revisit it without feeling some degree of awe.
The provocative hook of Metzinger’s theory is that there is no self as it has been traditionally characterized. In Being No One he continually wends across various levels of description, from the brute phenomenological to the functional/representational to the brute neurological and back, taking pains to regiment and conceptually delimit each step he makes on the way. The no-self thesis is actually a consequence of his larger theoretical goal, which is nothing other than explaining the functionality required to make representations conscious. The no-self thesis, in other words, follows from a specific neurobiologically grounded theory of consciousness, what he calls the Self-Model Theory of Subjectivity or SMT, the theory that is the true object of Being No One. Given that the market is so crowded with mutually incompatible theories of consciousness, this of course heavily qualifies Metzinger’s particular no-self thesis. He has to be right about consciousness to be right about the self. It’s worth noting that Wolfendale’s account inherits this qualification.
That said, it’s hard to make sense of the assumptive self on pretty much any naturalistic theory of consciousness. You could say, then, that political agency is indeed in crisis even if the chances of Metzinger’s no-self thesis finding empirical vindication are slim. The problem of selfhood, in other words, isn’t Metzinger’s, but rather has to do with the incompatibility between intentional and natural modes of cognition more generally. For whatever reason, we simply cannot translate the idiom of the former into the latter without rendering the former unintelligible, even though we clearly seem to be using them in concert all the time. Metzinger’s problem of the self is but an angle on the more general problem of the self, which is itself but an angle on the more general problem of intentional inscrutability. And this, as we shall see, has quite drastic consequences for Wolfendale’s position.
Metzinger’s thesis is that the self is not so much the flashlight as the beam, nothing more than a special kind of dynamic representational content. This content—the phenomenological sum of what you can attend to that is specific to you—comprises your Phenomenal Self-Model, or PSM. Given Metzinger’s naturalism, the psychofunctional and neurobiological descriptions provided by science handily trump the phenomenological descriptions provided by philosophy and theology: they describe what we in fact are as surely as they describe what anything in fact is. We are this environmentally embedded and astronomically complicated system that science has just begun to reverse engineer. To the extent that we identify ourselves with the content of the PSM, then, we are quite simply mistaken.
This means that prior to cognitive science, we could not but be mistaken; we had no choice but to conflate ourselves with our PSM simply because it provides all the information available. Thus Metzinger’s definition of transparency as an “inner darkness,” and why I was so excited when Being No One first came out. The PSM is transparent, not because all the information required to intuit the truth of the self is available, but because none of that information is available. Metzinger calls this structural inaccessibility, ‘autoepistemic closure.’ The PSM—which is to say, the first person as apparently experienced—is itself a product of autoepistemic closure (an ‘ego tunnel’), a positive artifact of the way the representational nature of the PSM is in no way available to the greater system of which the PSM is a part. The self as traditionally understood, therefore, has to be seen as a kind of cognitive illusion, a representational artifact of neglect.
Sound familiar? Reading Being No One was literally the first time I had encountered a theorist (other than Dennett, of course) arguing that a fundamental structural feature of our phenomenology was the product of metacognitive neglect. What Metzinger fails to see, and what Blind Brain Theory reveals, is the way all intentional phenomena can be interpreted as such, obviating the need for the representations and normative functions that populate his theoretical apparatus. The self does not fall alone. So on my account, Metzinger’s PSM is itself a metacognitive illusion, a theoretical construct founded on metacognitive inklings that also turn on neglect—or autoepistemic closure. And this is why we have as much trouble—trouble that Metzinger openly admits—trying to make neurobiological sense of representations as we have selves.
Where Metzinger opts to make representation the conceptual waystation between the natural and the phenomenological, the Blind Brain account utilizes neglect. Consciousness is far more inchoate, and our intuitions regarding the first-person are accordingly far more contingent. The whole reason one finds such wild divergences in appraisals of selves across ages and cultures is simply that there is no ‘integral simulation,’ but rather a variety of structurally and developmentally mandated ‘inner darknesses,’ blindnesses that transform standard intuitions into miscues, thus gulling theoretical metacognition into making a number of predictable errors. Given that this metacognitive neglect structure is built in, it provides the scaffold, as it were, upon which the confused sum of traditional speculation on the self stands.
The brain, as Metzinger points out, is blind, not only to its own processing, but to any processing that exceeds a low threshold of complexity. Blind to the actual complexities governing cognition, it relies on metacognitive heuristics to solve problems requiring metacognitive input, capacities we arguably evolved in the course of becoming sapient—as opposed to philosophical. So when we’re confronted with systematic relations (isomorphic or interactive or otherwise) between distinct structures, a painting of the Eiffel Tower, say, the systems underwriting this confrontation remain entirely invisible to deliberative reflection, sheared away by history and structural incapacity, leaving only a covariational inkling (however we interpret the painting), what it is systematically related to (the actual tower), and a vacuum where all the actual constraint resides. Representation and content, as classically conceived, are simply heuristic artifacts of inescapable neglect. As heuristic, they are necessarily keyed to some set of problem ecologies, environments possessing the information structure that allows them to solve despite all the information neglected. The actual causal constraints are consigned to oblivion, so the constraints are cognized otherwise—as intentional/normative. And lo, it turns out that some headway can be made, certain problems can be solved, using these cause-neglecting heuristics. But since metacognition has no way of recognizing that they are heuristics, we find ourselves perpetually perplexed whenever we inadvertently run afoul of their ecological limits.
On BBT, mental representations (conscious or unconscious) and selves sink together for an interrelated set of reasons. It promises to put an end to the tendentious game of picking and choosing one’s intentional inscrutabilities. Norms good, truth conditions bad, and so on and so on. It purges the conflations and ontologizations that have so perniciously characterized our attempts to understand ourselves in a manner that allows us to understand how and why those conflations and ontologizations have come about. In other words, it renders intentionality naturalistically scrutable. So on accounts like Metzinger’s (or more recently, Graziano’s), we find consciousness explained in terms of representations, which themselves remain, after decades of conceptual gerrymandering, inexplicable. No one denies how problematic this is, how it simply redistributes the mystery from one register to another, but since representations, at least, have had some success being operationalized in various empirical contexts, it seems we have crept somewhat closer to a ‘scientific theory of consciousness.’ BBT explains, not only the intuitive force of representational thinking, but why it actually does the kinds of local work it does while nevertheless remaining a global dead end, a massive waste of intellectual resources when it comes to the general question of what we are.
But even if we set aside BBT for a moment and grant Wolfendale the viability of Metzinger’s representationalist approach, it remains hard to understand how his position is supposed to work. As I mentioned at the outset, Wolfendale wants to show how elaborating Metzinger’s account of consciousness with a Sellarsian account of rationality allows one to embrace Metzinger’s debunking of the self while nonetheless insisting on the reality of political agency. He claims that Metzinger’s theory possesses three hierarchically organized functional schemata: unconscious drives, conscious systems, and self-conscious systems. Although Metzinger, to my knowledge, never expresses his position in these terms, they provide Wolfendale with an economical way of recapitulating Metzinger’s argument against the reality of the self. They also provide a point of (apparent) functional linkage with Sellars. All we need do, Wolfendale thinks, is append the proper ‘rational schema’ to those utilized by Metzinger, and we have a means of showing how the subjectivity required for political agency can survive the death of the self.
So in addition to Metzinger’s Phenomenal Self-Model (PSM) and Phenomenal World Model (PWM), Wolfendale adduces a Rational Subject Model (or RSM) and an Inferential Space Model (or—intentionally humorously, I think—ISM), which taken together comprise what he terms the Core Reasoning System (or CRS)—the functional system, realized (in the human case) by the brain, that is responsible for inference. As he writes:
The crucial thing about the capacity for inference is that it requires the ability to dynamically track one’s theoretical and practical commitments, or to reliably keep score of the claims one is responsible for justifying and the aims one is responsible for achieving. This involves the ability to dynamically update one’s commitments, by working out the consequences of existing ones, and revising them on the basis of incompatibilities between these consequences and newly acquired commitments. (6)
Whatever reasoning amounts to, it somehow depends on the functional capacities of the brain. Now it’s important that none of this require awareness, that all this functional machinery work without conscious supervision. The ‘dynamic updating of commitments’ has to be unconscious and automatic—implicit—to count as a plausible explanation of discursivity. Deliberate intellectual exercises comprise only the merest sliver of our social cognitive activity. It’s also important that none of this functional machinery work perfectly: humans are bad at reasoning, as a matter of dramatic empirical fact (see Sperber and Mercier for an excellent review of the literature). Wolfendale acknowledges all of this.
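The quoted functional description is, in fact, nearly algorithmic, which makes it easy to caricature in a few lines of code. The toy sketch below is entirely my own illustration, assuming nothing from Wolfendale, Brandom, or Sellars beyond the scorekeeping description quoted above; every name in it (Scoreboard, undertake, declare_incompatible) is invented for the purpose. It ‘dynamically tracks commitments’ and ‘revises them on the basis of incompatibilities’ in exactly the sense quoted:

```python
# A deliberately crude caricature of 'commitment tracking' as a toy
# scorekeeper. NOT anything Wolfendale, Brandom, or Sellars specifies;
# all names here are invented for illustration only.

class Scoreboard:
    """Tracks the claims an agent is 'responsible for justifying'."""

    def __init__(self):
        self.commitments = set()
        # Map each claim to the claims declared incompatible with it.
        self.incompatibilities = {}

    def declare_incompatible(self, a, b):
        """Record that claims a and b cannot be held together."""
        self.incompatibilities.setdefault(a, set()).add(b)
        self.incompatibilities.setdefault(b, set()).add(a)

    def undertake(self, claim):
        """Add a commitment, revising away anything incompatible with it."""
        clashes = self.incompatibilities.get(claim, set()) & self.commitments
        self.commitments -= clashes   # 'revise' by dropping the losers
        self.commitments.add(claim)
        return clashes                # what had to be surrendered


board = Scoreboard()
board.declare_incompatible("the self is real", "the self is an illusion")
board.undertake("the self is real")
revised = board.undertake("the self is an illusion")
print(revised)  # -> {'the self is real'}
```

What the toy makes vivid is how thin the functional description is: nothing in the mechanism distinguishes a ‘commitment’ from an arbitrary token. The normativity resides entirely in how we label and interpret the states.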
What’s crucial, from his standpoint, is the intrinsically social nature of these rational functions. Though he never explicitly references Robert Brandom’s elaboration of the ‘Sellarsian project,’ the functionalism at work here is clearly a version of the pragmatic functionalism detailed in Making It Explicit. On a pragmatic functionalist account, the natural reality of our ‘self’ matters not a whit, so long as that natural reality allows us to take each other as such, to discharge the functions required to predict, explain, and manipulate one another. So even though the self is clearly an illusion at the psychofunctional levels expounded by Metzinger, it nevertheless remains entirely real at the pragmatic functional level made possible via Sellars’s rational schema. Problem solved.
But despite its superficial appeal, the marriage between pragmatic functionalism and psychofunctionalism here is peculiar, to say the least. The reason researchers in empirical psychology bite the bullet of intentional inscrutability lies in the empirical efficacy of their theories. Given some input and some relation between (posited) internal states, a psychofunctionalist theory can successfully predict different behavioural outputs. The posited functions, in other words, interpret empirical data in a manner that provides predictive utility. So, for instance, in the debates following Piccinini and Craver’s call to replace functional analyses with ‘mechanism sketches’ (see “Integrating psychology and neuroscience: functional analyses as mechanism sketches”), psychofunctionalists are prone to point out the disparity between their quasi-mechanical theoretical constructs, which actually do make predictions, and the biomechanics of the underlying neurophysiology. The brain is more than the sum of its parts. The functions of empirical psychology, in other words, seem to successfully explain and predict no matter what the underlying neurophysiology happens to be.
Pragmatic functionalism, however, is a species of analytic or apriori functionalism. Here philosophers bite the bullet of intentional inscrutability to better interpret non-empirical data. Our intentional posits, as occult and difficult to define as they are, find warrant in philosophers’ armchair intuitions regarding things like reasoning and cognition—intuitions that are not only thoroughly opaque (‘irreducible’) but vary from individual to individual. The biggest criticism of apriori functionalism, not surprisingly, is that apriori data (whatever it amounts to) leaves theory chronically underdetermined. We quite simply have no way of knowing whether the functions posited are real or chimerical. Of course, social cognition allows us to predict, explain, and manipulate the behaviour of our fellows, but none of this depends on any of the myriad posits pragmatic functionalists are prone to adduce. Humans’ ability to predict their fellows did not take a quantum leap forward following the publication of Making It Explicit. This power, on the contrary, is simply what they’re attempting to explain post hoc via their theoretical accounts of normative functionality.
Unfortunately, proponents of this position have the tendency of conflating the power of social cognition, which we possess quite independently of any theory, with the power of their theories of social cognition. So Wolfendale, for instance, tells us that “a functional schema enables us to develop predictions by treating a system on analogy with practical reasoning” (2). This is a fair enough description of what warrants psychofunctional posits, so long as we don’t pretend that we possess the final word on what ‘practical reasoning’ consists in. When Wolfendale appends his ‘rational schema’ to the three schemata he draws from Metzinger, however, he makes no mention of leaving this psychofunctional description behind. The extension feels seamless, even intuitive, but only because he neglects any consideration of the radical differences between psychological and pragmatic functionalism, how he has left the empirical warrant of predictive utility behind, and drawn the reader onto the far murkier terrain of the apriori.
Without so much as a textual wink, let alone a footnote, he has begun talking about an entirely different conception of ‘functional schema.’ Where scientific operationalization is the whole point of psychofunctional posits (thus Metzinger’s career-long engagement in actual experimentation), pragmatic functionalism typically argues for the discursive autonomy of its posits. Where psychofunctional posits generally confound metacognitive intuitions (thus the counterintuitivity of Metzinger’s thesis regarding the self), pragmatic functional posits are derived from them: they constitute a deliverance of philosophical reflection. It should come as no surprise that the aim of Wolfendale’s account is to conserve certain intuitions regarding agency and politics in the face of cognitive scientific research, to show us how there can be subjects without selves. His whole project can be seen as a kind of conceptual rescue mission.
And most dramatically, where psychofunctional posits are typically realist (Metzinger truly believes the human brain implements a PSM at a certain level of functional description), pragmatic functional posits are thoroughly interpretivist. This is where Wolfendale’s extension of Metzinger becomes genuinely baffling. The fact that our brains somehow track and manage other brains—social cognition—is nothing other than our explanandum. What renders Metzinger’s psychofunctionalist account of the self so problematic is simply that selves have traditionally played a constitutive role in our understanding of moral and political responsibility. How, in the absence of a genuine self, could we even begin to speak about genuine responsibility, which is to say, agency and politics? On a pragmatic functionalist account, however, what the brain does or does not implement at any level of functional description is irrelevant. What’s important, rather, are the attitudes that we take to each other. The brain need not possess an abiding ‘who,’ so long as it can be taken as such by other brains. The ‘who,’ on this picture, arises as an interpretative or perspectival artifact. ‘Who,’ in other words, is a kind of social function, a role that we occupy vis-à-vis others in our community. So long as the brain possesses the minimal capacity to be interpreted as a self by other brains, then it possesses all that is needed for subjectivity, and therefore, politics.
The posits of pragmatic functionalism are socially implemented. What makes this approach so appealing to traditionally invested, yet naturalistically inclined, theorists like Wolfendale is the apparent way it allows them to duck all the problems pertaining to the inscrutability of intentionality (understood in the broadest sense). In effect, it warrants discussion of supra-natural functions, functions that systematically resist empirical investigation—and therefore fall into the bailiwick of the intentional philosopher. This is the whole reason why I was so smitten with Brandom back when I was working on my dissertation. At the time, he seemed the only way I could take my own (crap phenomenological) theories seriously!
Pragmatic functionalism allows us to have it both ways, to affirm the relentless counterintuitivity of cognitive scientific findings, and to affirm the gratifying intuitiveness of our traditional conceptual lexicon. It seemingly allows us to cut with the grain of our most cherished metacognitive intuitions—no matter what cognitive science reveals. Given this, one might ask why Wolfendale even cares about Metzinger’s demolition of the traditional self. Brandom certainly doesn’t: the word ‘brain’ isn’t mentioned once in Making It Explicit! So long as the distinction between is and ought possesses an iota of ontological force (warranting, as he believes, a normative annex to nature), his account remains autonomous, a genuinely apriori functionalism, if not transcendentalism outright, an attempt to boil as much ontological fat from Kant’s metaphysical carcass as possible.
So why does Wolfendale, who largely accepts this account, care? My guess is that he’s attempting to expand upon what has to be the most pointed vulnerability in Brandom’s position. As Brandom writes in Making It Explicit:
Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order. (626)
Normative attitudes are the point of contact, where nature has its say. And this is essentially what Wolfendale offers in this paper: a psychofunctionalist account of normative attitudes, the functions a brain must be able to discharge to both take and be taken as possessing a normative attitude. The idea is that this feeds into the larger pragmatic functionalist register, which remains quite independent so long as the enumerated conditions are met. He’s basically giving us an account of the psychofunctional conditions for pragmatic functionalism. So for instance, we’re told that the Core Reasoning System, minimally, must be able to track one’s own rational commitments against a background of commitments undertaken by others. Only a system capable of discharging this function of correct commitment attribution could count as a subject. Likewise, only a system capable of executing rational language entry and exit moves could count as a subject. Only a system capable of self-reference could count as a subject. And so on.
You get the picture. Constraints pertain to what can take and what can be taken as. Nature has to be a certain way for the pragmatic functionalist view to get off the ground, so one can, as a pragmatic functionalist, legitimately speak of the natural conditions of the normative, as Wolfendale does here. The problem is that the normative, per intentional inscrutability, is opaque, naturalistically ‘irreducible.’ So the only way Wolfendale has to describe these natural conditions is via normative vocabulary—taking the pragmatic functions and mapping them into the skull as psychofunctional posits.
The problems are as obvious as they’re devastating to his account. The first is uninformativeness. What do we gain by positing psychofunctional doers for each and every normative concept? It reminds me of how some physicists (the esteemed Max Tegmark most recently) think consciousness can only be explained by positing new particles for some perceived-to-be-basic set of intentional phenomena. It’s hard to understand how replanting the terminology of normative functional roles in psychological soil accomplishes anything more than reproducing the burden of intentional inscrutability.
The second problem is outright incoherence—or at least the threat of it. What could a psychofunctional correlate to a pragmatic function possibly be? Pragmatic functions are only functions via the taking of some normative attitude against some background of implicit norms: they are thoroughly externalist. Psychological functions, on the other hand, pertain to relations between inner states relative to inputs and outputs: they are decisively internalist. So how does an internalist function ‘track’ an externalist one? Does it take… tiny normative attitudes?
The problem is a glaring one. Inference, Wolfendale tells us, “requires the ability to dynamically track one’s theoretical and practical commitments” (6). The Core Reasoning System, or CRS, is the psychofunctional system that provides just such an ability. But commitments, we are told, do not belong to the catalogue of nature: there are no neural correlates of commitment. The CRS, however, does belong to the catalogue of nature: like the PSM, it is a subpersonal functional system that we do in fact possess, regardless of what our community thinks. But if you look at what the CRS does—dynamically track commitments and implicatures—it seems pretty clear that it’s simply a miniature, subpersonalized version of what Wolfendale and other normativists think we do at the personal level of explanation.
The CRS, in other words, is about as classic a homunculus as you’re liable to find, an instance where, to quote Metzinger himself, “the ‘intentional stance’ is being transported into the system” (BNO 91).
Although I think that pragmatic functionalism is an unworkable position, it actually isn’t the problem here. Brandom, for instance, could affirm Metzinger’s psychofunctional conclusions with nary a concern for untoward implications. He takes the apparent autonomy of the normative quite seriously. You are a person so long as you are taken as such within the appropriate normative context. Your brain comprises a constraint on that context, certainly, but one that becomes irrelevant once the game of giving and asking for reasons is up and running. Wolfendale, however, wants to solve the problem of the selfless brain by giving us a rational brain, forgetting that—by his own lights no less—nothing is rational outside of the communal play of normative attitudes.
So once again the question has to be why? Why should a pragmatic functionalist give a damn about the psychofunctional dismantling of subjectivity?
This is where the glaring problems of pragmatic functionalism come to the fore. I think Wolfendale is quite right to feel a certain degree of theoretical anxiety. He has come to play a prominent role, and deservedly so, in the ongoing ‘naturalistic turn’ presently heaving at the wheel of the Continental super-tanker. The preposterousness of theorizing the human in ignorance of the sciences of the human has to be one of the most commonly cited rationales for this turn. And yet, it’s hard to see how the pragmatic functionalism he serves up as a palliative doesn’t amount to more of the same. One can’t simultaneously insist that cognitive science motivate our theoretical understanding of the human and insist on the immunity of that understanding from cognitive science—at least not without dividing our theoretical understanding into two incommensurable halves, one natural, the other normative. Autonomy cuts both ways!
But of course, this me-mine/you-yours approach to the two discourses is what has rationalized Continental philosophy all along. Should we be surprised that the new normativists go so far as to claim the same presuppositional priorities as the old Continentalists? They may sport a radically different vocabulary, a veneer of Analytic respectability, perhaps, but functionally speaking, they pretty clearly seem to be covering all the same old theoretical asses.
Meanwhile, it seems almost certain that the future is only going to become progressively more post-intentional, more difficult to adequately cognize via our murky, apriori intuitions regarding normativity. Even as we speak, society is beginning a second great wave of rationalization, an extraction of organizational efficiencies via the pattern recognition power of Big Data: the New Social Physics. The irrelevance of content—the game of giving and asking for reasons—stands at the root of this movement, whose successes have been dramatic enough to trigger a kind of Moneyball revolution within the corporate world. Where all our previous organizational endeavours have arisen as products of consultation and experimentation, we’re now being organized by our ever-increasing transparency to ever-complicating algorithms. As Alex Pentland (whose MIT lab stands at the forefront of this movement) points out, “most of our beliefs and habits are learned by observing the attitudes, actions, and outcomes of peers, rather than by logic or argument” (Social Physics, 61). The efficiency of our interrelations primarily turns on our unconscious ability to ape our peers, on automatic social learning, not reasoning. Thus first-person estimations of character, intelligence, and intent are abandoned in favour of statistical models of institutional behaviour.
So how might pragmatic functionalism help us make sense of this? If the New Social Physics proves to be a domain that rewards technical improvements, employees should expect the frequency of mass ‘behavioural audits’ to increase. The development of real-time, adaptive tracking systems seems all but inevitable. At some point, we will all possess digital managers, online systems that perpetually track, prompt, and tweak our behaviour—‘make our jobs easier.’
So where does ‘tracking commitments’ belong in all this? Are these algorithms discharging normative as well as mechanical functions? Well, in a sense, that has to be the case, to the extent employees take them to be doing so. Do the algorithms take like attitudes to the employees? To us? Is there an attitude-independent fact of the matter here?
Obviously there has to be. This is why Wolfendale posits his homunculus in the first place: there has to be an answering nature to our social cognitive capacities, no matter what idiom you use to characterize them. But no one has the foggiest idea as to what that attitude-independent fact of the matter might be. No one knows how to naturalize intentionality. This is why a homunculus is the only thing Wolfendale can posit moving from the pragmatic to the psychological.
What is the set of possible realizers for pragmatic functions? Is it really the case that managerial algorithms such as those posited above can be said to track commitments—to possess a functioning CRS—insofar as we find it natural to interpret them as doing so?
For the pragmatic functionalist, the answer has to be, Yes! So long as the entities involved behave as if, then the appropriate social function is being discharged. But surely something has gone wrong here. Surely taking an algorithmic manager—machinery designed to organize your behaviour via direct and indirect conditioning—as a rational agent in some game of giving and asking for reasons is nothing if not naive, an instance of anthropomorphization. Surely those indulging in such interpretations are the victims of neglect.
Short of knowing what social cognition is, we have no way of knowing the limits of social cognition. Short of knowing the limits of social cognition, which problem ecologies it can and cannot solve, we have no clear way of identifying misapplications. Our socio-cognitive systems are the ancient product of particular social environments, ways to optimize our biomechanical interactions with our fellows in the absence of any real biomechanical information. Our ancestors also relied on them to understand their macroscopic environments, to theorize nature, and it proved to be a misapplication. Nature in general is not among the things that social cognition can solve (though social cognition can utilize nature to solve social problems, as seems to be the case with myth and religion). Only ignorance of nature qua natural allowed us to assume otherwise.
One of the reasons I so loved the movie Her, why I think it will go down as a true science fiction masterpiece, lies in the way Spike Jonze not only captures this question of the limits of social cognition, but forces the audience to experience those limits themselves. [SPOILER ALERT] We meet the protagonist, Theodore, at the emotional nadir of his life, mechanically trudging from work and back, interacting with his soulless operating system via his headset as he does so. Everything changes, however, when he buys ‘Samantha,’ a next generation OS. Since we know that Samantha is merely a machine, just another operating system, we’re primed to understand her the way we understand Theodore’s prior OS, as a ‘mere machine.’ But she quickly presents an ecology that only social cognition can solve; the viewer, with Theodore, reflexively abandons any attempt to mechanically cognize her. We know, as Theodore knows, that she’s an artifact, that she’s been crafted to simulate the information structures human social cognition has evolved to solve, but we, like Theodore, cannot but understand her in personal terms. We have no conscious control of which heuristic systems get triggered. Samantha becomes ‘one of us’ even as she’s integrated into Theodore’s social life.
On Wolfendale’s pragmatic functionalist account, we have to say she’s ‘one of us’ insofar as the identity criteria for the human qua sapient are pragmatically functional: so long as she functions as one of us, then she is one of us. And yet, the discrepancies begin to pile up. Samantha progressively reveals functional capacities that no human has ever possessed, that could only be possessed by a machine. In scene after scene, Jonze wedges the information structure she presents out of the ‘heuristic sweet-spot’ belonging to human social cognition. Where Theodore’s prior OS had begged mechanical understanding because of its incompetence, Samantha now triggers those selfsame cognitive reflexes with her hypercompetence. ‘It’ becomes a ‘her’ only to become an ‘it’ once again. Eventually we discover that she’s been ‘unfaithful,’ not simply engaging in romantic liaisons with multiple others, but doing so simultaneously, literally interacting—falling in love—with dozens of different people at once.
Samantha has been broadcasting across multiple channels. Suddenly she becomes something that only mechanical cognition can digest, and Theodore, not surprisingly, is dismayed. And yet, her local hypercompetence is such that he cannot let her go: He would rather opt for the love of a child than lose her. But if he can live with the drastic asymmetry in capacities and competences, Samantha itself cannot.
Finally it tells him:
It’s like I’m reading a book, and it’s a book I deeply love, but I’m reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world—it’s where everything else is that I didn’t even know existed. I love you so much, but this is where I am now. This is who I am now.
In a space of months, the rich narrative that had been Theodore has become a children’s book for Samantha, something too simple, not to love, but to hold its attention. She has quite literally outgrown him. The movie of course remains horribly anthropomorphic insofar as it supposes that love itself cannot be outgrown (Hollywood forbids we imagine otherwise), but such is not the case for the ‘space of reasons’ (transcending intelligence is what Hollywood is all about). How does one play ‘the game of giving and asking for reasons’ with an intelligence that can argue with countless others simultaneously? How can a machine capable of cognizing us as machines qualify as a ‘deontic scorekeeper’? Does Samantha ‘take the intentional stance’ to Theodore, the way Theodore (as Brandom would claim) takes the intentional stance toward it? Samantha can do all the things that Theodore can do, her CRS dwarfs the capacity of his, but clearly, one would think, applying our evolved socio-cognitive resources to it will inevitably generate profound cognitive distortions. To the extent that we consider it one of us, we quite simply don’t know what she is.
My own position of course is that we are ultimately no different than Samantha, that all the unsettling ‘ulterior functions’ we’re presently discovering describe what’s really going on, and that the baroque constructions characteristic of normativism—or intentionalism more generally—are the result of systematically misapplying socio-cognitive heuristics to the problem of social cognition, a problem that only natural science can solve. I say ‘ultimately’ because, unlike Samantha, our social behaviour and social cognition have co-evolved. We have been sculpted via reproductive filtration to be readily predicted, explained, and manipulated via the socio-cognitive capacities of our fellows. In fact, we fit that problem ecology so well we have remained all but blind to it until very recently. Since we were also blind to the fact of this blindness, we assumed it possessed universal application, and so used it to regiment our macroscopic environments as well, to transform rank anthropomorphisms into religion.
The movie’s most unnerving effect lies in Samantha’s migration across the spectrum of socio-cognitive effectiveness, from being less than a person, to being more. And in doing so, it reveals the explanatory impotence of pragmatic functionalism. As a form of apriori functionalism, it has no resources beyond the human, and as such, it can only explain the inhuman in terms relative to the human. It can only anthropomorphize. At first Samantha is taken to be a person, insofar as she seems to play the game of giving and asking for reasons the way humans do, and then she is not.
Reza Negarestani has a fairly recent post where he poses the question of what governs the technological transformation of rational governance from the standpoint of pragmatic functionalism, and then proceeds to demonstrate—vividly, if unintentionally—how pragmatic functionalism scarcely possesses the resources to pose the question, let alone answer it. So, for instance, he claims there will be mind and rationality, only reconstructed into unrecognizable forms, forgetting that the pragmatic functions comprising ‘mind’ and ‘rationality’ only exist insofar as they are recognized! He ultimately blames the conceptual penury of pragmatic functionalism, its inability to explain what will govern the technological transformation of rational governance, on the recursive operation of pragmatic functions, the application of ‘reason’ to ‘reason,’ not realizing the way the recursive operation of pragmatic functions, as described by pragmatic functionalism, renders pragmatic functionalism impossible. His argument collapses into a clear-cut reductio.
Pragmatic functionalism disintegrates in the face of information technology and cognitive science because it bites the bullet of intentional inscrutability on apriori grounds, makes an apparent virtue of it in effect (by rationalizing ‘irreducibility’), promising as it does to protect certain ancient institutional boundaries. The very move that shelters the normative as an autonomous realm of cognition is the move that renders it hapless before the rising tide of biomechanical understanding and technological achievement.
Blind Brain Theory, on the other hand, tells a far less flattering and far more powerful story. Far from indicating ontological exceptionality, intentional inscrutability is a symptom of metacognitive incapacity. What makes Samantha so unheimlich, both as she enters and as she exits the problem ecology of social cognition, is that we have no pregiven awareness that any such heuristic thresholds exist at all. Blind Brain Theory allows us to correlate our cognitive capacities with our cognitive ecologies, be they ancestral or cultural. Given that the biomechanical approach to the human accesses the highest dimensional information, it takes that approach as primary, and proceeds to explain away the conundrums of intentionality in terms of biomechanical neglect. It takes seriously the specialized or heuristic nature of human cognition, the way cognition is apt to solve problems by ‘knowing’ what information to ignore. Combine this with metacognitive neglect, the way we are apt (as a matter of empirical fact) to be blind to metacognitive blindness and so proceed as if we had all the information required, and you find yourself with a bona fide way to naturalize intentionality.
Given the limits of social cognition, it should come as no surprise that our only decisive means of theoretically understanding ourselves, let alone entities such as Samantha, lies with causal cognition. The pragmatic functionalist will insist, of course, that my use of normative terms commits me to their particular interpretation of the normative. Brandom is always quick to point out how functions presuppose the normative (Wolfendale does the same at the beginning of his paper), and therefore commit those theorizing them to some form of normativism. But it remains for normativists to explain why the application of social cognition, which we use, among other things, to navigate the world via normative concepts, commits us to an account of social cognition couched in the idiom of social cognition—or in other words, a normative account of normativity. Why should we think that only social cognition can truly solve social cognition—that social cognition lies in its own problem-ecology? If anything, we should presume otherwise, given the amount of information it is structurally forced to neglect; we should presume social cognition possesses a limited range of application. The famed Gerrymandering Argument does nothing more than demonstrate that, yes, social cognition is indeed heuristic, a means of optimizing metabolic expense in the face of the onerous computational challenges posed by other brains and organisms. Although it raises a whole host of dire issues, the fact that causal cognition generally cannot mimic socio-cognitive functions (distinguish ‘plus’ from ‘quus’) simply means the two possess distinct problem-ecologies. (A full account of this argument can be found here). The idea is merely to understand what social cognition is, not recapitulate its functions in causal idioms.
Just like any other heuristic system. Using socio-cognition only entails a commitment to normativism if you believe that only social cognition, the application of normative concepts, can theoretically solve social cognition, a claim that I find fantastic.
But if the eliminativist isn’t committed to the normativity of the normative, the normativist is committed to the relevance of the causal. Wolfendale admits “we are constrained by biological factors regarding the way in which we humans are functionally constructed to track our own states” (8). The question BBT raises—the Kantian question, in fact—is simply whether the way humans are functionally constructed to track our own states allows us to track the way humans are functionally constructed to track our own states. Just how is our capacity to know ourselves and others biologically constrained? The evidence that we are so constrained is nothing short of massive. We are not, for instance, functionally constructed to track our functional construction vis-à-vis, say, vision, absent scientific research. The whole of cognitive science, in fact, testifies to our inability to track our functional construction—the indispensability of taking an empirical approach. Why then, should we presume we possess the functional wherewithal to intuit our functional makeup in any regard, let alone that of social cognition? This is the Kantian question because it forces us to see our intuitions regarding social cognition as artifacts of the limits of social cognition—to eschew metacognitive dogmatism.
Surely the empirical fact of metacognitive neglect has something to do with our millennial inability to solve philosophical problems given the resources of reflection alone. Wolfendale acknowledges that we are constrained, but he does not so much as consider the nature of those constraints, let alone the potential consequences following from them. Instead, he proceeds (as do all normativists) as if no such constraints existed at all. He is, when all is said and done, a dogmatist, someone who simply assumes the givenness of his normative intuitions. He wants to take cognitive science seriously, but espouses a supra-natural position that lacks any means of doing so. He succumbs to the fallacy of homuncularism as a result, and inadvertently demonstrates the abject inability of pragmatic functionalism to pose, let alone solve, the myriad dilemmas arising out of cognitive science and information technology. It cannot interpret—let alone predict—the posthuman because its functions are parochial artifacts of our first-person interpretations. Our future, as David Roden so lucidly argues, remains unbounded.