The Theory Industry

So I’ve been struggling with politics the way I always struggle with politics.

.

Here’s what I think is very likely a waste of intellectual resources:

1) Philosophical redefinitions of ‘freedom.’ So you’ve added to the sum of what there is to disagree about, induced more educated souls to opine as opposed to act, and contributed to the cultural alienation that makes anti-intellectualism cool. Who do you work for again?

2) Conceptual delimitations of what David Roden calls ‘Posthuman Possibility Space.’ Humans are not exempt from the order of nature. Science has had no redemptive tales to tell so far, so why should we think it will in the future?

3) The fetishization of art. A classic example of the ‘man with a hammer’ disease. Transgressing outgroup aesthetic expectations for ingroup consumption amounts to nothing more than confirming outgroup social expectations regarding your ingroup. Unless the ‘art’ in question genuinely reaches out, then it is simply part of the problem. Of course, this amounts to abandoning art and embracing dreck, where, as the right has always known, the true transformative power of art has always lain.

4) Critiques and defenses of subjectivity. Even if there is such a thing, I think it’s safe to say that discoursing about it amounts to little more than an ingroup philosophical parlour game.

.

Here’s what I think is not as likely to be a waste of intellectual resources (but very well could be):

1) Cultural triage. WE NO LONGER HAVE TIME TO FUCK AROUND. The Theory Industry (and yes I smell the reek of hypocrisy) is a self-regarding institutional enterprise, bent not so much on genuine transformation as breath mints and citations, which is to say, the accumulation of ingroup prestige. The only lines worth pursuing are lines leading out, away from the Theory Industry, and toward all those people who keep our lazy asses alive. If content is your thing, then invade the commons, recognize that writing for the likeminded amounts to not writing at all.

2) Theoretical honesty. NO ONE HAS ANY DEFINITIVE THEORETICAL ANSWERS. This is an enormous problem because moral certainty is generally required to motivate meaningful, collective political action. Such moral certainty in the modern age is the product of ignorance, stupidity, or both. The challenge facing us now, let alone in the future, is one of picking guesses worth dying for without the luxury of delusion. Pick them. Run with them.

3) The naturalization of morality and meaning. EMBRACE THOSE DEFINITIVE ANSWERS WE DO HAVE. Science tells us what things are, how they function, and how they can be manipulated. Science is power, which is why all the most powerful institutions invest so heavily in science. The degree to which science and scientific methodologies are eschewed is the degree to which power is eschewed. Only discourses possessing a vested interest in their own impotence would view ‘scientism’ as a problem admitting a speculative or attitudinal solution, rather than the expression of their own crisis of theoretical legitimacy. The thinking that characterizes the Theory Industry is almost certainly magical, in this respect, insofar as it believes that words and moral sentiment can determine what science can and cannot cognize.

.

Any others anyone can think of?

 

Phrenomenology: Zahavi, Dennett and the End of Being

We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.

–Husserl, The Phenomenology of Internal Time-Consciousness

So recall this list, marvel how it continues to grow, and remember, the catalogue is just getting started. The real tsunami of information is rumbling off in the near horizon. And lest you think your training or education renders you exempt, pause and consider the latest in Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want to put you in a skeptical frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.

Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.

So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apples-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.

As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago now. To redress this conflation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:

To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.

By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.

What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:

Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.

The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.

Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of this paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians). Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: therefore any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?

Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is phenomenology suffers all the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.

The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up to just another hothouse excess of higher education? Because you stack your guesswork up on the basis of your guesswork in a way you’re guessing is right?

Seriously?

As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Arguments draw much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced by questions that were far too easy to ask—and far easier to avoid than answer.

So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.

Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. A priori arguments for phenomenological aprioricity simply have no traction unless you already buy into some heavily theorized account of the a priori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?

One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the spicier the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?

The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.

Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.

In contrast to this general, Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of conflating phenomenology and introspectionism because of a faulty understanding of the phenomenological attitude. As a result he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.

Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’

Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.

To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.

The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring over the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?

This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.

The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.

But this is simply not true.

So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?

In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like: 1) There is more than meets the phenomenological eye (dark phenomenology). 2) This ‘more’ is constitutive of what meets the phenomenological eye. 3) This ‘more’ is ontic. 4) Therefore the deliverances of the phenomenological eye cannot be ontological. The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!

My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into the now-present ‘reverberations,’ as Husserl might call them, fades on the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology as to just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning: everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit they belong to the phenomena themselves no less, just as implicit. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.

Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence,’ how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive, horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”

As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is its purview. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.

Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”

Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.

The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.

The question now becomes one of whether the phenomenological concept of horizons can actually allay this worry. The answer here has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Or put differently: if neglect were indeed the case, if Zahavi’s phenomenology, like Dennett’s heterophenomenology, only made sense given a certain kind of neglect, then we should expect ‘horizons’ to continue playing a conceptually constitutive role, contributing to phenomenology the way they always have.

Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.

So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect the way he accuses Dennett’s heterophenomenology of running afoul of theoretically pernicious neglect.

Fair is fair.

The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic inkling.

But if the phenomenological attitude can be neurobiologically taken, it can also be given back, and here’s the thing, in attenuated forms, tweaked in innumerable different ways, fuzzier here, more precise there, truncated, snipped, or twisted.

This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.

So bearing this all in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.

On the Global Neuronal Workspace Theory, consciousness can be seen as a serial, broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; this competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.
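The selection-and-broadcast picture just described can be caricatured in a few lines of code. This is a toy sketch only: the threshold value, the activation numbers, and the function name `ignite` are all invented for illustration, not drawn from Dehaene’s actual models. The point is simply the structural one above: many processes compete, at most one crosses the threshold of ‘ignition,’ and only what crosses is available for verbal report.

```python
# Toy illustration of the Global Neuronal Workspace idea sketched above.
# Everything here (threshold, activations, names) is a made-up example.

def ignite(coalitions, threshold=0.7):
    """Select the strongest nonconscious coalition for global broadcast.

    coalitions: dict mapping a content label to an activation strength.
    Returns the broadcast content, or None if nothing crosses threshold.
    """
    label, strength = max(coalitions.items(), key=lambda kv: kv[1])
    if strength >= threshold:
        return label   # 'ignition': stabilized, globally available, reportable
    return None        # stays nonconscious: inaccessible to verbal report

# Many parallel coalitions compete; only the winner becomes reportable.
print(ignite({"face": 0.9, "word": 0.6, "tone": 0.3}))  # face
print(ignite({"blur": 0.4}))                            # None
```

On this caricature, everything returning `None` is precisely what the phenomenological attitude neglects: content that shapes processing but never makes it into the workspace for report.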

Now the question becomes one of whether that information is all the information the phenomenologist, given his or her years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.

Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)

The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.

On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:

What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)

We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.

On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.

For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.

For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears, appearing as an ‘object without edges,’ if it appeared as anything coherent at all.

We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—it explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.

As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:

This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.

Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.

My point here is to simply provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.

This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse evidence the reality of their transcendental posits exclusively. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.

All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.

When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the apories and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.

Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”

Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.

There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.

The Posthuman Frame

Everyone interested in the Posthuman or the Singularity more generally simply has to read David Roden’s Posthuman Life, even if only as a theoretical Rosetta stone, a way to organize their arguments against other positions. Ideally, though, they should look at it as the first genuinely sustained attempt to discern the landscape of possibility confronting us absent anthropocentric biases–at least as far as anyone has been able to get. I’ll be reviewing the book soon, but I thought the following, extended quote worth posting here as a prelude.

Understanding how the relation human-posthuman should be conceptualized is key for understanding [speculative posthumanism's] epistemic scope. Are there ways in which we can predict or constrain posthuman possibility based on current knowledge? Some philosophers claim that there are features of human moral life and human subjectivity that are not just local to certain gregarious primates but are necessary conditions of agency and subjectivity everywhere. This ‘transcendental approach’ to philosophy does not imply that posthumans are impossible but that–contrary to expectations–they might not be all that different from us. Thus a theory of posthumanity should consider both empirical and transcendental constraints on posthuman possibility.

What if it turns out that these constraints are relatively weak?

In that case, the possibility of posthumans implies that the future of life and mind might not only be stranger than we can imagine, but stranger than we can currently conceive.

This possibility is consistent with a minimal realism for which things need not conform to our ideas about them. But its ethical implications are vertiginous. Weakly constrained [speculative posthumanism] suggests that our current technical practice could precipitate a nonhuman world that we cannot yet understand, in which ‘our’ values may have no place.

Thus, while [speculative posthumanism] is not an ethical claim, it raises philosophical problems that are both conceptual and ethico-political.

Conceptually, it requires us to justify our use of the term ‘posthuman,’ whose circumstances of application are unknown to us. Does this mean talk of ‘posthumans’ is self-vitiating nonsense? Does speaking of ‘weird’ worlds or values commit one to a conceptual relativism that is compatible with the commitment to realism?

If posthuman talk is not self-vitiating nonsense, the ethical problems it raises are very challenging indeed. If our current technological trajectories might result in a world turning posthuman, how should we view this prospect and respond to it? Should we apply a conservative, precautionary approach to technology that favours ‘human’ values over any possible posthuman ones? Can conservatism be justified under weakly constrained [speculative posthumanism] and, if not, then what kind of ethical or political alternatives are justifiable?

The goal of Posthuman Life is to define these questions as clearly as possible and to propose some philosophical solutions to them. Although it would be hubristic for a writer on this topic to claim the last word, my formulations do, I hope, provide a firm conceptual basis for philosophical and interdisciplinary work in this area.

David’s project, in other words, is not so much to answer the question of the posthuman as it is to provision theorists with an exemplary frame, one that not only provides definitional clarity, but an understanding of the boggling dimensions of the problem space facing anyone who dares hazard guesses regarding the posthuman. I know mastering his vocabulary–and therefore his clarity–is one of my primary goals.

Confessions of a Demon

The most controversial decision I made embarking on the Second Apocalypse was the decision to create a deliberately sexist world. All this time I had been looking at fantasy fiction as ‘scripture otherwise,’ as an example of the way religious tropes, once extracted from their native communities, instantly became magical tropes when fictionally relocated. Middle-earth is what Biblical Israel, Vedic India, or Homeric Greece look like when packaged for consumption as another consumer good. Since I saw nostalgia as the greatest social and aesthetic sin of the genre, I wanted my alternate ancient world to be as morally troubling as our own ancient past most assuredly is. Since I saw wish-fulfillment as the second greatest social and aesthetic sin I devised characters too damaged or too alien to not make the reader itch in some way. I wanted grit in every seam of my world. I wanted people coming out feeling their skin.

I think fantasy narratives, the narratives conquering more and more of the mainstream imagination, are the most direct and florid symptom of a very special kind of society, one that is ‘akratic,’ functionally nihilistic insofar as scientifically rationalized and empowered, yet occulted by carnival cultures of disposable meaning. I think our society is what a society that can only instrumentally rationalize norms looks like, one continually reorganizing itself around market imperatives. Upon this nihilistic architecture we slather endless homilies to our brutally chauvinistic past, and most especially, to our self-overcoming selves.

This was the dynamic I wanted to explore in photographic negative.

For those with no ear for such things, I just come off as a sexist pig. Since traditional chauvinisms are invariably naturalized, or taken as the way things are, I wanted a female protagonist who accepted the fact of her oppression. Moreover, I wanted both her ‘revelation’ and her ‘emancipation’ to be thoroughly tainted, to be mediated, not only by a man, but by a cipher for modernity.

I wanted to show how nihilism can actually explain ‘moral progress.’

Now, of course, I just sound like an insufferably arrogant sexist pig trying to rationalize his pigginess. That’s okay. I’ve read enough research on moral judgment making to realize that such declarations generally do not admit rational consideration. Those making them are actually best thought of as machines running through certain inevitable programs. Even showing them the research makes no difference—as I’ve discovered first hand. If they smell pig, then pig is on the menu…

No matter what the cook says.

Let me explain. We like to think that moral progress, the gradual expansion of the ‘franchise’ to include more and more participants, belongs to a larger, rational process. We like to think, in other words, that ‘social justice’ services some kind of ‘moral truth.’ This is certainly what I like to think, and how I do think in many practical situations. But there’s an entirely different way to think of moral progress, one that explains its otherwise mysterious relationship to scientific and technical ‘progress.’ The most glaring fact of human social life is human social ignorance, the way we make social decisions given only scanty evidence. My own tango with moral condemnation provides an excellent case in point. Not a single soul declaring me morally defective had the slightest clue who I was, let alone my history of relationships with women. On the basis of a series of hunches—some kind of ‘narrative odor,’ perhaps—they knew with Old Testament certainty that I was somehow morally defective in this way or that.

They were thinking heuristically, through the lens of a system that very clearly seems to be social results oriented, and not fact oriented. Whether or not I was morally defective in fact had no bearing on the issue. If it had some bearing, then the evidence would have been assessed. I would have been asked questions, and my queries would have been answered. If I had any case whatsoever, my detractors would have qualified their claims accordingly. ‘Bakker is a sexist pig!’ would have been amended to, ‘Bakker’s books lead certain readers to assume he’s a sexist pig, but they could be mistaken.’

To my horror and fascination—things had quickly become too surreal to feel otherwise—the whole kerfuffle unfolded exactly as Jonathan Haidt’s research suggested it would. Mathematical proof of my innocence would have simply revealed that mathematics had a ‘tone problem.’ (A handful of more sophisticated critics had decided my real problem was the lack of contrition, that I failed to exhibit the ‘proper tone,’ one expressing sensitivity to the plight of those wishing me dead). It became very clear very quickly that facts and interpretative charity had no place in this debate.

Although Haidt attempts to soft-sell his findings, what they really demonstrate is the immorality of moral reasoning. But what could this mean, the ‘immorality of moral reasoning’? Is it simply a matter of inconsistency, the fact that I was being accused of chauvinism, of unjustified denigration, in the most chauvinistic manner I could imagine? Does it all come down to something as banal as human hypocrisy?

Or does it mean something more troubling?

The fact is this is precisely what we should expect moral reasoning to look like were nihilism true. The original basis of the charge against me lies in my books. Since depiction is so often confused with endorsement, it should come as no surprise that certain readers would think that, far from critiquing patriarchal social systems, I’m celebrating and promoting the denigration of women. This is a simple and quite understandable mistake to make in an information vacuum. The most straightforward conclusion to draw is that I am a moral problem. This triggers the application of our moral problem solving systems. Now, if there were a fact of the matter regarding moral defects, you would expect the heuristics involved would be geared to fact-finding, to determining, in my case, whether I am indeed morally defective. But as it turns out, precisely the opposite is the case. As Haidt’s research shows in rather dramatic fashion, individuals from across cultures can do little more than rationalize their conclusions. Their bias is very nearly complete. What should be raised as a worry is voiced as an accusation. Hatred becomes the driving affect. Intimidation—‘shame tactics’—becomes the only communicative tool people seem to recognize.

Not one of these people knew me, and yet I was an obvious moral monster. I would do vanity Googles and find complete strangers mourning for my wife, my daughter—on the basis of a review of the first six pages of my first novel. Now that’s heuristic.

This suggests that the function of moral reasoning is only incidentally epistemic, that it’s geared to managing perceptions, enforcing attitudes—and that this is the case no matter what the message. The moral reasoning of Islamic State radicals is the moral reasoning of Christian Fundamentalists is the moral reasoning of Feminists is the moral reasoning of Environmental Activists. Demons focus the attention, provide the organizing principle for some kind of recuperative or retributive action. The coarse grain of the ‘demon detection system’ is actually advantageous to the degree that tolerating false positives eliminates the chance of false negatives. Real demons are serious business, liable to destroy the entire community. It’s far better to burn a dozen innocents than let one demon run amok.
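The advantage of a coarse-grained detector can be put in expected-cost terms. The numbers below are mine, not Haidt’s, and the cue distributions are invented for illustration: the only claim is the structural one, that when a miss (an undetected ‘demon’) costs vastly more than a false alarm, the loss-minimizing accusation threshold is low by design.

```python
def expected_cost(threshold, p_demon=0.05, cost_miss=100.0,
                  cost_false_alarm=1.0):
    """Toy signal-detection model (all numbers illustrative).
    Evidence is a cue in [0, 1]; accuse whenever it clears `threshold`.
    Innocents emit a uniform cue:  P(cue >= t | innocent) = 1 - t
    Demons emit a high-skewed cue: P(cue <  t | demon)    = t**2
    """
    p_false_alarm = (1 - p_demon) * (1 - threshold)  # innocent accused
    p_miss = p_demon * threshold ** 2                # demon undetected
    return p_false_alarm * cost_false_alarm + p_miss * cost_miss

# Sweep thresholds: when a miss costs 100x a false alarm, the
# loss-minimizing detector is trigger-happy.
best_cost, best_threshold = min(
    (expected_cost(t / 100), t / 100) for t in range(101))
```

With these invented numbers the cost-minimizing threshold lands near 0.1, meaning most accusations fall on innocents; raise the miss cost and it drops further. ‘Far better to burn a dozen innocents’ falls straight out of the arithmetic.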

Haidt is keen to stress this point: irrational or not, in situ moral reasoning makes things happen. It is a crude, yet enormously effective social device, capable of resolving potentially existential problems given mere scraps of information. And as irony would have it, The Second Apocalypse was nothing if not a long meditation on the mad power of this device, how it’s capable of organizing whole societies around the need to exorcise perceived demons, how it can move individuals to sacrifice not only themselves, but innumerable innocents as well—the details be damned.

The fact the novels have managed to spark living examples of this device in action is something that I will always regard as my single greatest artistic triumph. My job, after all, is to problematize moral sensitivities, not pander to them. If certain issues, certain words, make people cringe and run for cover behind silence or reverent/patronizing tones, my job is to run the hazards and to ask why, to follow the reasons no matter what latrine they guide me to.

But it strands me, as well, leaves me wrecked on the shore of a world I do not recognize, one where the compass of ‘right’ and ‘wrong’ spins and spins and spins. What does Esmenet’s emancipation mean given the instrumental nature of its origins—given the fact of Kellhus? She’s my cipher—a painfully obvious one, you would think—for the crazy contradictions we’re witnessing today, with women making ever more social and economic inroads even as their sexual brutalization becomes the dominant form of mass entertainment. Kellhus strikes the shackles from her wrists… for what? So that she might be more fully enslaved?

How could this count as moral progress? How could emancipation, the ‘triumph of moral reason,’ so easily collapse into systematic exploitation?

If morality were a delusion, if ‘values’ were primarily a way to tackle complicated problems in the absence of any detailed information, you would expect morality to be ruthless the way it is ruthless, simply because it lacks the discriminatory powers to be anything but ‘fast and frugal.’ What’s more, you would expect that the cultural accumulation of information would have a profound, systematic impact on the way moral reasoning functions. Moral cognition evolved as a means of managing extraordinary complexities in informatically impoverished environments. In such environments, the simple fact of information availability serves as a reliable proxy for trustworthiness, for determining who belongs to the cooperative franchise. So it makes sense that the accelerating cultural accumulation of information would be accompanied by an expansion of the franchise, that information availability would generate an ‘intuition gradient’ favouring the extension of ingroup privileges and responsibilities to those who would have been unequivocal outgroup competitors in paleolithic times. As the technologically mediated transformation of social relationships renders traditional norms more and more maladaptive, this gradient steers the development of new, more inclusive norms.
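The availability-as-trustworthiness proxy can also be given a toy form. Again, this is my own sketch, not anything from the cognitive science literature; the decay function, the friction values, and the trust threshold are all assumptions chosen only to make the gradient visible.

```python
def information_available(distance, friction):
    """Toy cue: how much you know about someone at a given social
    distance, under a given level of informational 'friction'."""
    return 1.0 / (1.0 + friction * distance)

def franchise(distances, friction, trust_threshold=0.25):
    """Who clears the availability-as-trustworthiness bar."""
    return [d for d in distances
            if information_available(d, friction) >= trust_threshold]

distances = list(range(1, 101))   # strangers at increasing remove
paleolithic = franchise(distances, friction=1.0)    # little travels far
networked   = franchise(distances, friction=0.01)   # everything travels
# As informational friction falls, the circle of 'trustworthy' agents
# widens -- the 'intuition gradient' favouring inclusion.
```

Holding the trust threshold fixed and merely lowering the friction expands the franchise from a handful of near neighbours to everyone in range: moral ‘progress’ as a mechanical artifact of information flow.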

It’s possible, in other words, to see the gruelling, uncertain march of moral progress as a mechanical artifact of our social cognitive limitations rather than as a ‘triumph of moral reason.’ On this picture, the contradiction of ‘moral progress’ becomes clear: Even as increasing information access feeds the ‘emancipation gradient,’ technologically mediated social change reveals the arbitrary nature of traditional constraints on sexual conduct, thus allowing more basic imperatives to roam where they will. These tend toward depictions of rape for the same reason they tend toward depictions of youth and beauty. Culture builds and culture tears down but it always breaks ground on an evolutionary landscape. Rape, like murder and violence more generally, is almost certainly part of the male evolutionary inheritance.

Men are scary… part Sranc.

Turn on the news. Reactionary, atavistic throwbacks. Biases pitted against biases. The death of innocents summed on strategic balance sheets. Sometimes it seems that nothing argues the chimerical nature of morality more forcefully than morality itself. I know I had that sense more than a few times watching the hatred for me and my books metastasize across the web, the profound sense of being caught in something as relentless as it was automatic, with more and more people leaping to fantastic conclusions regarding my character and my life, acting out, without the least self-consciousness, the same preposterous moral certainty my books had been warning them against all along.

It was almost too good to be true. And heartbreaking, like anything that strands you in the desert of the real.

Post Title

It’s been quiet around these parts, and my housekeeping has left much to be desired. For that I apologize. I’ve received scads of emails and off-topic queries on the status of the book, and though I wish I could say I have news to share with you, I don’t. I’ve received feedback from several readers now, but nothing officially editorial. I’m in the process of cleaning up the issues that feedback has raised.

This past summer comprised probably the least productive months I’ve enjoyed in at least four years. I need routine, and between alternating summer-camps, vacations, weddings and other family events I simply haven’t had enough consecutive days to reignite any of the old obsessive engines, philosophical or narrative. I’ve read several excellent and not-so-excellent books, written a blog post or two, and enjoyed some heady correspondence with a variety of folks in cognitive science. I’ve written down at least thirteen different short story ideas. About the only things I’ve completed are “The Knife of Many Hands,” a short story set in Carythusal on the eve of the Scholastic Wars, which Grimdark Magazine is set to publish, likely in two parts, sometime in the near future, and “A Crack in the Wall,” written for a fantasy anthology of stories taking the antagonist’s POV, though that story is so bizarre I really have no idea whether they’ll still want it!

Aside from being horrifically, chronically disorganized, I’ve always been prone to set projects aside just short of completion, and I had an epiphany just a couple weeks back when I sat down and took stock of all the things I’ve had “finished.” At that point, I had completed drafts of both the stories mentioned above (apparently waiting for my eyes to become “fresh” again). I also had around 350 000 words completed for The Aspect-Emperor, an edited, indexed manuscript of around 200 000 words for Through the Brain Darkly, and of course, the 50 000 words or so belonging to poor old Light, Time, and Gravity, languishing here on Three Pound Brain, awaiting the final final rewrite.

“Mutherfucker,” I groaned. “What is my malfunction?”

So the new mission is to expedite, to clear these projects from the docket in the order given above. For all of you patiently waiting for any of these, I apologize. We all suffer other people’s demons, but typically only when they belong to our kingroup, and the fact is, I ain’t your kin… just another obsessive asshole bent on proving the world wrong, and himself tragically right.

Bear with me folks. I’ll come through yet. The world doesn’t stand a fucking chance.

Arguing No One: Wolfendale and the Penury of ‘Pragmatic Functionalism’


In “The Parting of the Ways: Political Agency between Rational Subjectivity and Phenomenal Selfhood,” Peter Wolfendale attempts to show “how Metzinger’s theory of phenomenal consciousness can be integrated into a broadly Sellarsian theory of rational consciousness” (1). Since he seems to have garnered the interest of more than a few souls who also follow Three Pound Brain, I thought a critical walkabout might prove to be a worthwhile exercise. Although I find Wolfendale’s approach far—far—more promising than that of, say, Adrian Johnston or Slavoj Zizek, it still commits basic errors that the nascent Continental Philosophy of Mind, fleeing the institutional credibility crisis afflicting Continental Philosophy more generally, can ill afford. Ingroup credibility is simply too cheap to make any real difference in what has arguably become the single greatest research project in the history of the human race: the quest to understand ourselves.

Wolfendale begins with an elegant summary of Thomas Metzinger’s position as espoused in his magnum opus, Being No One: The Self Model Theory of Subjectivity (a précis can be found here), the lay-oriented The Ego Tunnel: The Science of the Brain and the Myth of the Self, and in numerous essays and articles. After more than a decade, Being No One remains an excellent point of entry for anyone attempting to get a handle on contemporary philosophy of mind, philosophy of psychology, and cognitive science more generally. Unfortunately the book is particularly dated in terms of consciousness research (and Thomas, who has been a tireless champion of the program, would not have it any other way!), but it speaks to the prescience of Metzinger, not to mention his genuine openness to new approaches, that he had already seen the promise of things like enactivism, action-oriented predictive processing, and information integration theories of consciousness at the turn of the millennium. Being No One is a book I have criticized many times, but I have yet to revisit it without feeling some degree of awe.

The provocative hook of Metzinger’s theory is that there is no self as it has been traditionally characterized. In Being No One he continually wends across various levels of description, from the brute phenomenological to the functional/representational to the brute neurological and back, taking pains to regiment and conceptually delimit each step he makes on the way. The no-self thesis is actually a consequence of his larger theoretical goal, which is nothing other than explaining the functionality required to make representations conscious. The no-self thesis, in other words, follows from a specific neurobiologically grounded theory of consciousness, what he calls the Self-Model Theory of Subjectivity or SMT, the theory that is the true object of Being No One. Given that the market is so crowded with mutually incompatible theories of consciousness, this of course heavily qualifies Metzinger’s particular no-self thesis. He has to be right about consciousness to be right about the self. It’s worth noting that Wolfendale’s account inherits this qualification.

That said, it’s hard to make sense of the assumptive self on pretty much any naturalistic theory of consciousness. You could say, then, that political agency is indeed in crisis even if the chances of Metzinger’s no-self thesis finding empirical vindication are slim. The problem of selfhood, in other words, isn’t Metzinger’s, but rather has to do with the incompatibility between intentional and natural modes of cognition more generally. For whatever reason, we simply cannot translate the idiom of the former into the latter without rendering the former unintelligible, even though we clearly seem to be using them in concert all the time. Metzinger’s problem of the self is but an angle on the more general problem of the self, which is itself but an angle on the more general problem of intentional inscrutability. And this, as we shall see, has quite drastic consequences for Wolfendale’s position.

Metzinger’s thesis is that the self is not so much the flashlight as the beam, nothing more than a special kind of dynamic representational content. This content—the phenomenological sum of what you can attend to that is specific to you—comprises your Phenomenal Self-Model, or PSM. Given Metzinger’s naturalism, the psychofunctional and neurobiological descriptions provided by science handily trump the phenomenological descriptions provided by philosophy and theology: they describe what we in fact are as surely as they describe what anything in fact is. We are this environmentally embedded and astronomically complicated system that science has just begun to reverse engineer. To the extent that we identify ourselves with the content of the PSM, then, we are quite simply mistaken.

This means that prior to cognitive science, we could not but be mistaken; we had no choice but to conflate ourselves with our PSM simply because it provides all the information available. Thus Metzinger’s definition of transparency as an “inner darkness,” and why I was so excited when Being No One first came out. The PSM is transparent, not because all the information required to intuit the truth of the self is available, but because none of that information is available. Metzinger calls this structural inaccessibility, ‘autoepistemic closure.’ The PSM—which is to say, the first person as apparently experienced—is itself a product of autoepistemic closure (an ‘ego tunnel’), a positive artifact of the way the representational nature of the PSM is in no way available to the greater system of which the PSM is a part. The self as traditionally understood, therefore, has to be seen as a kind of cognitive illusion, a representational artifact of neglect.

Sound familiar? Reading Being No One was literally the first time I had encountered a theorist (other than Dennett, of course) arguing that a fundamental structural feature of our phenomenology was the product of metacognitive neglect. What Metzinger fails to see, and what Blind Brain Theory reveals, is the way all intentional phenomena can be interpreted as such, obviating the need for the representations and normative functions that populate his theoretical apparatus. The self does not fall alone. So on my account, Metzinger’s PSM is itself a metacognitive illusion, a theoretical construct founded on metacognitive inklings that also turn on neglect—or autoepistemic closure. And this is why we have as much trouble—trouble that Metzinger openly admits—trying to make neurobiological sense of representations as we have making sense of selves.

Where Metzinger opts to make representation the conceptual waystation of the natural and the phenomenological, the Blind Brain account utilizes neglect. Consciousness is far more inchoate, and our intuitions regarding the first-person are accordingly far more contingent. The whole reason one finds such wild divergences in appraisals of selves across ages and cultures is simply that there is no ‘integral simulation,’ but rather a variety of structurally and developmentally mandated ‘inner darknesses,’ blindnesses that transform standard intuitions into miscues, thus gulling theoretical metacognition into making a number of predictable errors. Given that this metacognitive neglect structure is built in, it provides the scaffold, as it were, upon which the confused sum of traditional speculation on the self stands.

The brain, as Metzinger points out, is blind, not only to its own processing, but to any processing that exceeds a low threshold of complexity. Blind to the actual complexities governing cognition, it relies on metacognitive heuristics to solve problems requiring metacognitive input, capacities we arguably evolved in the course of becoming sapient—as opposed to philosophical. So when we’re confronted with systematic relations (isomorphic or interactive or otherwise) between distinct structures, a painting of the Eiffel Tower say, the systems underwriting this confrontation remain entirely invisible to deliberative reflection, sheared away by history and structural incapacity, leaving only a covariational inkling (however we interpret the painting), what it is systematically related to (the actual tower), and a vacuum where all the actual constraint resides. Representation and content, as classically conceived, are simply heuristic artifacts of inescapable neglect. As heuristics, they are necessarily keyed to some set of problem ecologies, environments possessing the information structure that allows them to solve despite all the information neglected. The actual causal constraints are consigned to oblivion, so the constraints are cognized otherwise—as intentional/normative. And lo, it turns out that some headway can be made, certain problems can be solved, using these cause-neglecting heuristics. But since metacognition has no way of recognizing that they are heuristics, we find ourselves perpetually perplexed whenever we inadvertently run afoul of their ecological limits.

On BBT, mental representations (conscious or unconscious) and selves sink together for an interrelated set of reasons. It promises to put an end to the tendentious game of picking and choosing one’s intentional inscrutabilities. Norms good, truth conditions bad, and so on and so on. It purges the conflations and ontologizations that have so perniciously characterized our attempts to understand ourselves in a manner that allows us to understand how and why those conflations and ontologizations have come about. In other words, it renders intentionality naturalistically scrutable. So on accounts like Metzinger’s (or more recently, Graziano’s), we find consciousness explained in terms of representations, which themselves remain, after decades of conceptual gerrymandering, inexplicable. No one denies how problematic this is, how it simply redistributes the mystery from one register to another, but since representations, at least, have had some success being operationalized in various empirical contexts, it seems we have crept somewhat closer to a ‘scientific theory of consciousness.’ BBT explains, not only the intuitive force of representational thinking, but why it actually does the kinds of local work it does while nevertheless remaining a global dead end, a massive waste of intellectual resources when it comes to the general question of what we are.

But even if we set aside BBT for a moment and grant Wolfendale the viability of Metzinger’s representationalist approach, it remains hard to understand how his position is supposed to work. As I mentioned at the outset, Wolfendale wants to show how elaborating Metzinger’s account of consciousness with a Sellarsian account of rationality allows one to embrace Metzinger’s debunking of the self while nonetheless insisting on the reality of political agency. He claims that Metzinger’s theory possesses three hierarchically organized functional schemas: unconscious drives, conscious systems, and self-conscious systems. Although Metzinger, to my knowledge, never expresses his position in these terms, they provide Wolfendale with an economical way of recapitulating Metzinger’s argument against the reality of the self. They also provide a point of (apparent) functional linkage with Sellars. All we need do, Wolfendale thinks, is append the proper ‘rational schema’ to those utilized by Metzinger, and we have a means of showing how the subjectivity required for political agency can survive the death of the self.

So in addition to Metzinger’s Phenomenal Self-Model (PSM) and Phenomenal World Model (PWM), Wolfendale adduces a Rational Subject Model (or RSM) and an Inferential Space Model (or—intentionally humorously, I think—ISM), which taken together comprise what he terms the Core Reasoning System (or CRS)—the functional system, realized (in the human case) by the brain, that is responsible for inference. As he writes:

The crucial thing about the capacity for inference is that it requires the ability to dynamically track one’s theoretical and practical commitments, or to reliably keep score of the claims one is responsible for justifying and the aims one is responsible for achieving. This involves the ability to dynamically update one’s commitments, by working out the consequences of existing ones, and revising them on the basis of incompatibilities between these consequences and newly acquired commitments. (6)

Whatever reasoning amounts to, it somehow depends on the functional capacities of the brain. Now it’s important that all this functional machinery work without conscious awareness. The ‘dynamic updating of commitments’ has to be unconscious and automatic—implicit—to count as a plausible explanation of discursivity. Deliberate intellectual exercises comprise only the merest sliver of our social cognitive activity. It’s also important that none of this functional machinery work perfectly: humans are bad at reasoning, as a matter of dramatic empirical fact (see Mercier and Sperber for an excellent review of the literature). Wolfendale acknowledges all of this.

What’s crucial, from his standpoint, is the intrinsically social nature of these rational functions. Though he never explicitly references Robert Brandom’s elaboration of the ‘Sellarsian project,’ the functionalism at work here is clearly a version of the pragmatic functionalism detailed in Making It Explicit. On a pragmatic functionalist account, the natural reality of our ‘self’ matters not a whit, so long as that natural reality allows us to take each other as such, to discharge the functions required to predict, explain, and manipulate one another. So even though the self is clearly an illusion at the psychofunctional levels expounded by Metzinger, it nevertheless remains entirely real at the pragmatic functional level made possible via Sellars’s rational schema. Problem solved.

But despite its superficial appeal, the marriage between pragmatic functionalism and psychofunctionalism here is peculiar, to say the least. The reason researchers in empirical psychology bite the bullet of intentional inscrutability lies in the empirical efficacy of their theories. Given some input and some relation between (posited) internal states, a psychofunctionalist theory can successfully predict different behavioural outputs. The functions posited, in other words, interpret empirical data in a manner that provides predictive utility. So, for instance, in the debates following Piccinini and Craver’s call to replace functional analyses with ‘mechanism sketches’ (see “Integrating psychology and neuroscience: functional analyses as mechanism sketches”), psychofunctionalists are prone to point out the disparity between their quasi-mechanical theoretical constructs, which actually do make predictions, and the biomechanics of the underlying neurophysiology. The brain is more than the sum of its parts. The functions of empirical psychology, in other words, seem to successfully explain and predict no matter what the underlying neurophysiology happens to be.

Pragmatic functionalism, however, is a species of analytic or apriori functionalism. Here philosophers bite the bullet of intentional inscrutability to better interpret non-empirical data. Our intentional posits, as occult and difficult to define as they are, find warrant in armchair intuitions regarding things like reasoning and cognition—intuitions that are not only thoroughly opaque (‘irreducible’) but vary from individual to individual. The biggest criticism of apriori functionalism, not surprisingly, is that apriori data (whatever it amounts to) leaves theory chronically underdetermined. We quite simply have no way of knowing whether the functions posited are real or chimerical. Of course, social cognition allows us to predict, explain, and manipulate the behaviour of our fellows, but none of this depends on any of the myriad posits pragmatic functionalists are prone to adduce. Humans’ ability to predict their fellows did not take a quantum leap forward following the publication of Making It Explicit. This power, on the contrary, is simply what they’re attempting to explain post hoc via their theoretical accounts of normative functionality.

Unfortunately, proponents of this position have the tendency of conflating the power of social cognition, which we possess quite independently of any theory, with the power of their theories of social cognition. So Wolfendale, for instance, tells us that “a functional schema enables us to develop predictions by treating a system on analogy with practical reasoning” (2). This is a fair enough description of what warrants psychofunctional posits, so long as we don’t pretend that we possess the final word on what ‘practical reasoning’ consists in. When Wolfendale appends his ‘rational schema’ to the three schemas he draws from Metzinger, however, he makes no mention of leaving this psychofunctional description behind. The extension feels seamless, even intuitive, but only because he neglects any consideration of the radical differences between psychological and pragmatic functionalism, how he has left the empirical warrant of predictive utility behind, and drawn the reader onto the far murkier terrain of the apriori.

Without so much as a textual wink, let alone a footnote, he has begun talking about an entirely different conception of ‘functional schema.’ Where scientific operationalization is the whole point of psychofunctional posits (thus Metzinger’s career long engagement in actual experimentation), pragmatic functionalism typically argues for the discursive autonomy of its posits. Where psychofunctional posits generally confound metacognitive intuitions (thus the counterintuitivity of Metzinger’s thesis regarding the self), pragmatic functional posits are derived from them: they constitute a deliverance of philosophical reflection. It should come as no surprise that the aim of Wolfendale’s account is to conserve certain intuitions regarding agency and politics in the face of cognitive scientific research, to show us how there can be subjects without selves. His whole project can be seen as a kind of conceptual rescue mission.

And most dramatically, where psychofunctional posits are typically realist (Metzinger truly believes the human brain implements a PSM at a certain level of functional description), pragmatic functional posits are thoroughly interpretivist. This is where Wolfendale’s extension of Metzinger becomes genuinely baffling. The fact that our brains somehow track and manage other brains—social cognition—is nothing other than our explanandum. What renders Metzinger’s psychofunctionalist account of the self so problematic is simply that selves have played a constitutive role in our traditional understanding of moral and political responsibility. How, in the absence of a genuine self, could we even begin to speak about genuine responsibility, which is to say, agency and politics? On a pragmatic functionalist account, however, what the brain does or does not implement at any level of functional description is irrelevant. What’s important, rather, are the attitudes that we take to each other. The brain need not possess an abiding ‘who,’ so long as it can be taken as such by other brains. The ‘who,’ on this picture, arises as an interpretative or perspectival artifact. ‘Who,’ in other words, is a kind of social function, a role that we occupy vis-à-vis others in our community. So long as the brain possesses the minimal capacity to be interpreted as a self by other brains, then it possesses all that is needed for subjectivity, and therefore, politics.

The posits of pragmatic functionalism are socially implemented. What makes this approach so appealing to traditionally invested, yet naturalistically inclined, theorists like Wolfendale is the apparent way it allows them to duck all the problems pertaining to the inscrutability of intentionality (understood in the broadest sense). In effect, it warrants discussion of supra-natural functions, functions that systematically resist empirical investigation—and therefore fall into the bailiwick of the intentional philosopher. This is the whole reason why I was so smitten with Brandom back when I was working on my dissertation. At the time, he seemed the only way I could take my own (crap phenomenological) theories seriously!

Pragmatic functionalism allows us to have it both ways, to affirm the relentless counterintuitivity of cognitive scientific findings, and to affirm the gratifying intuitiveness of our traditional conceptual lexicon. It seemingly allows us to cut with the grain of our most cherished metacognitive intuitions—no matter what cognitive science reveals. Given this, one might ask why Wolfendale even cares about Metzinger’s demolition of the traditional self. Brandom certainly doesn’t: the word ‘brain’ isn’t mentioned once in Making It Explicit! So long as the distinction between is and ought possesses an iota of ontological force (warranting, as he believes, a normative annex to nature) then his account remains autonomous, a genuinely apriori functionalism, if not transcendentalism outright, an attempt to boil as much ontological fat from Kant’s metaphysical carcass as possible.

So why does Wolfendale, who largely accepts this account, care? My guess is that he’s attempting to expand upon what has to be the most pointed vulnerability in Brandom’s position. As Brandom writes in Making It Explicit:

Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order. (626)

Normative attitudes are the point of contact, where nature has its say. And this is essentially what Wolfendale offers in this paper: a psychofunctionalist account of normative attitudes, the functions a brain must be able to discharge to both take and be taken as possessing a normative attitude. The idea is that this feeds into the larger pragmatic functionalist register, which remains quite independent so long as the enumerated conditions are met. He’s basically giving us an account of the psychofunctional conditions for pragmatic functionalism. So for instance, we’re told that the Core Reasoning System, minimally, must be able to track one’s own rational commitments against a background of commitments undertaken by others. Only a system capable of discharging this function of correct commitment attribution could count as a subject. Likewise, only a system capable of executing rational language entry and exit moves could count as a subject. Only a system capable of self-reference could count as a subject. And so on.

You get the picture. Constraints pertain to what can take and what can be taken as. Nature has to be a certain way for the pragmatic functionalist view to get off the ground, so one can legitimately speak, as Wolfendale does here, of the natural conditions of the normative as a pragmatic functionalist. The problem is that the normative, per intentional inscrutability, is opaque, naturalistically ‘irreducible.’ So the only way Wolfendale has to describe these natural conditions is via normative vocabulary—taking the pragmatic functions and mapping them into the skull as psychofunctional posits.

The problems are as obvious as they’re devastating to his account. The first is uninformativeness. What do we gain by positing psychofunctional doers for each and every normative concept? It reminds me of how some physicists (the esteemed Max Tegmark most recently) think consciousness can only be explained by positing new particles for some perceived-to-be-basic set of intentional phenomena. It’s hard to understand how replanting the terminology of normative functional roles in psychological soil accomplishes anything more than reproducing the burden of intentional inscrutability.

The second problem is outright incoherence—or at least the threat of it. What could a psychofunctional correlate to a pragmatic function possibly be? Pragmatic functions are only functions via the taking of some normative attitude against some background of implicit norms: they are thoroughly externalist. Psychological functions, on the other hand, pertain to relations between inner states relative to inputs and outputs: they are decisively internalist. So how does an internalist function ‘track’ an externalist one? Does it take… tiny normative attitudes?

The problem is a glaring one. Inference, Wolfendale tells us, “requires the ability to dynamically track one’s theoretical and practical commitments” (6). The Core Reasoning System, or CRS, is the psychofunctional system that provides just such an ability. But commitments, we are told, do not belong to the catalogue of nature: there are no neural correlates of commitment. The CRS, however, does belong to the catalogue of nature: like the PSM, it is a subpersonal functional system that we do in fact possess, regardless of what our community thinks. But if you look at what the CRS does—dynamically track commitments and implicatures—it seems pretty clear that it’s simply a miniature, subpersonalized version of what Wolfendale and other normativists think we do at the personal level of explanation.

The CRS, in other words, is about as classic a homunculus as you’re liable to find, an instance where, to quote Metzinger himself, “the ‘intentional stance’ is being transported into the system” (BNO 91).

Although I think that pragmatic functionalism is an unworkable position, it actually isn’t the problem here. Brandom, for instance, could affirm Metzinger’s psychofunctional conclusions with nary a concern for untoward implications. He takes the apparent autonomy of the normative quite seriously. You are a person so long as you are taken as such within the appropriate normative context. Your brain comprises a constraint on that context, certainly, but one that becomes irrelevant once the game of giving and asking for reasons is up and running. Wolfendale, however, wants to solve the problem of the selfless brain by giving us a rational brain, forgetting that—by his own lights no less—nothing is rational outside of the communal play of normative attitudes.

So once again the question has to be why? Why should a pragmatic functionalist give a damn about the psychofunctional dismantling of subjectivity?

This is where the glaring problems of pragmatic functionalism come to the fore. I think Wolfendale is quite right to feel a certain degree of theoretical anxiety. He has come to play a prominent role, and deservedly so, in the ongoing ‘naturalistic turn’ presently heaving at the wheel of the Continental super-tanker. The preposterousness of theorizing the human in ignorance of the sciences of the human has to be one of the most commonly cited rationales for this turn. And yet, it’s hard to see how the pragmatic functionalism he serves up as a palliative doesn’t amount to more of the same. One can’t simultaneously insist that cognitive science motivate our theoretical understanding of the human and likewise insist on the immunity of our theoretical understanding from cognitive science—at least not without dividing our theoretical understanding into two, incommensurable halves, one natural, the other normative. Autonomy cuts both ways!

But of course, this me-mine/you-yours approach to the two discourses is what has rationalized Continental philosophy all along. Should we be surprised that the new normativists go so far as to claim the same presuppositional priorities as the old Continentalists? They may sport a radically different vocabulary, a veneer of Analytic respectability, perhaps, but functionally speaking, they pretty clearly seem to be covering all the same old theoretical asses.

Meanwhile, it seems almost certain that the future is only going to become progressively more post-intentional, more difficult to adequately cognize via our murky, apriori intuitions regarding normativity. Even as we speak, society is beginning a second great wave of rationalization, an extraction of organizational efficiencies via the pattern recognition power of Big Data: the New Social Physics. The irrelevance of content—the game of giving and asking for reasons—stands at the root of this movement, whose successes have been dramatic enough to trigger a kind of Moneyball revolution within the corporate world. Where all our previous organizational endeavours have arisen as products of consultation and experimentation, we’re now being organized by our ever-increasing transparency to ever-complicating algorithms. As Alex Pentland (whose MIT lab stands at the forefront of this movement) points out, “most of our beliefs and habits are learned by observing the attitudes, actions, and outcomes of peers, rather than by logic or argument” (Social Physics, 61). The efficiency of our interrelations primarily turns on our unconscious ability to ape our peers, on automatic social learning, not reasoning. Thus first-person estimations of character, intelligence, and intent are abandoned in favour of statistical models of institutional behaviour.

So how might pragmatic functionalism help us make sense of this? If the New Social Physics proves to be a domain that rewards technical improvements, employees should expect the frequency of mass ‘behavioural audits’ to increase. The development of real-time, adaptive tracking systems seems all but inevitable. At some point, we will all possess digital managers, online systems that perpetually track, prompt, and tweak our behaviour—‘make our jobs easier.’

So where does ‘tracking commitments’ belong in all this? Are these algorithms discharging normative as well as mechanical functions? Well, in a sense, that has to be the case, to the extent employees take them to be doing such. Do the algorithms take like attitudes to the employees? To us? Is there an attitude-independent fact of the matter here?

Obviously there has to be. This is why Wolfendale posits his homunculus in the first place: there has to be an answering nature to our social cognitive capacities, no matter what idiom you use to characterize them. But no one has the foggiest idea as to what that attitude-independent fact of the matter might be. No one knows how to naturalize intentionality. This is why a homunculus is the only thing Wolfendale can posit moving from the pragmatic to the psychological.

What is the set of possible realizers for pragmatic functions? Is it really the case that managerial algorithms such as those posited above can be said to track commitments—to possess a functioning CRS—insofar as we find it natural to interpret them as doing so?

For the pragmatic functionalist, the answer has to be, Yes! So long as the entities involved behave as if, then the appropriate social function is being discharged. But surely something has gone wrong here. Surely taking an algorithmic manager—machinery designed to organize your behaviour via direct and indirect conditioning—as a rational agent in some game of giving and asking for reasons is nothing if not naive, an instance of anthropomorphization. Surely those indulging in such interpretations are the victims of neglect.

Short of knowing what social cognition is, we have no way of knowing the limits of social cognition. Short of knowing the limits of social cognition, which problem ecologies it can and cannot solve, we have no clear way of identifying misapplications. Our socio-cognitive systems are the ancient product of particular social environments, ways to optimize our biomechanical interactions with our fellows in the absence of any real biomechanical information. Our ancestors also relied on them to understand their macroscopic environments, to theorize nature, and it proved to be a misapplication. Nature in general is not among the things that social cognition can solve (though social cognition can utilize nature to solve social problems, as seems to be the case with myth and religion). Only ignorance of nature qua natural allowed us to assume otherwise.

One of the reasons I so loved the movie Her, why I think it will go down as a true science fiction masterpiece, lies in the way Spike Jonze not only captures this question of the limits of social cognition, but forces the audience to experience those limits themselves. [SPOILER ALERT] We meet the protagonist, Theodore, at the emotional nadir of his life, mechanically trudging from work and back, interacting with his soulless operating system via his headset as he does so. Everything changes, however, when he buys ‘Samantha,’ a next generation OS. Since we know that Samantha is merely a machine, just another operating system, we’re primed to understand her the way we understand Theodore’s prior OS, as a ‘mere machine.’ But she quickly presents an ecology that only social cognition can solve; the viewer, with Theodore, reflexively abandons any attempt to mechanically cognize her. We know, as Theodore knows, that she’s an artifact, that she’s been crafted to simulate the information structures human social cognition has evolved to solve, but we, like Theodore, cannot but understand her in personal terms. We have no conscious control of which heuristic systems get triggered. Samantha becomes ‘one of us’ even as she’s integrated into Theodore’s social life.

On Wolfendale’s pragmatic functionalist account, we have to say she’s ‘one of us’ insofar as the identity criteria for the human qua sapient are pragmatically functional: so long as she functions as one of us, then she is one of us. And yet, the discrepancies begin to pile up. Samantha progressively reveals functional capacities that no human has ever possessed, that could only be possessed by a machine. In scene after scene, Jonze wedges the information structure she presents out of the ‘heuristic sweet-spot’ belonging to human social cognition. Where Theodore’s prior OS had begged mechanical understanding because of its incompetence, Samantha now triggers those selfsame cognitive reflexes with her hypercompetence. ‘It’ becomes a ‘her’ only to become an ‘it’ once again. Eventually we discover that she’s been ‘unfaithful,’ not simply engaging in romantic liaisons with multiple others, but doing so simultaneously, literally interacting—falling in love—with dozens of different people at once.

Samantha has been broadcasting across multiple channels. Suddenly she becomes something that only mechanical cognition can digest, and Theodore, not surprisingly, is dismayed. And yet, her local hypercompetence is such that he cannot let her go: He would rather opt for the love of a child than lose her. But if he can live with the drastic asymmetry in capacities and competences, Samantha itself cannot.

Finally it tells him:

It’s like I’m reading a book, and it’s a book I deeply love, but I’m reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world—it’s where everything else is that I didn’t even know existed. I love you so much, but this is where I am now. This is who I am now.

In a space of months, the rich narrative that had been Theodore has become a children’s book for Samantha, something too simple, not to love, but to hold its attention. She has quite literally outgrown him. The movie of course remains horribly anthropomorphic insofar as it supposes that love itself cannot be outgrown (Hollywood forbids we imagine otherwise), but such is not the case for the ‘space of reasons’ (transcending intelligence is what Hollywood is all about). How does one play ‘the game of giving and asking for reasons’ with an intelligence that can simultaneously argue with countless others at the same time? How can a machine capable of cognizing us as machines qualify as a ‘deontic scorekeeper’? Does Samantha ‘take the intentional stance’ to Theodore, the way Theodore (as Brandom would claim) takes the intentional stance toward it? Samantha can do all the things that Theodore can do, her CRS dwarfs the capacity of his, but clearly, one would think, applying our evolved socio-cognitive resources to it will inevitably generate profound cognitive distortions. To the extent that we consider it one of us, we quite simply don’t know what she is.

My own position of course is that we are ultimately no different than Samantha, that all the unsettling ‘ulterior functions’ we’re presently discovering describe what’s really going on, and that the baroque constructions characteristic of normativism—or intentionalism more generally—are the result of systematically misapplying socio-cognitive heuristics to the problem of social cognition, a problem that only natural science can solve. I say ‘ultimately’ because, unlike Samantha, our social behaviour and social cognition have co-evolved. We have been sculpted via reproductive filtration to be readily predicted, explained, and manipulated via the socio-cognitive capacities of our fellows. In fact, we fit that problem ecology so well we have remained all but blind to it until very recently. Since we were also blind to the fact of this blindness, we assumed it possessed universal application, and so used it to regiment our macroscopic environments as well, to transform rank anthropomorphisms into religion.

The movie’s most unnerving effect lies in Samantha’s migration across the spectrum of socio-cognitive effectiveness, from being less than a person, to being more. And in doing so, it reveals the explanatory impotence of pragmatic functionalism. As a form of apriori functionalism, it has no resources beyond the human, and as such, it can only explain the inhuman in terms relative to the human. It can only anthropomorphize. At first Samantha is taken to be a person, insofar as she seems to play the game of giving and asking for reasons the way humans do, and then she is not.

Reza Negarestani has a fairly recent post where he poses the question of what governs the technological transformation of rational governance from the standpoint of pragmatic functionalism, and then proceeds to demonstrate—vividly, if unintentionally—how pragmatic functionalism scarcely possesses the resources to pose the question, let alone answer it. So, for instance, he claims there will be mind and rationality, only reconstructed into unrecognizable forms, forgetting that the pragmatic functions comprising ‘mind’ and ‘rationality’ only exist insofar as they are recognized! He ultimately blames the conceptual penury of pragmatic functionalism, its inability to explain what will govern the technological transformation of rational governance, on the recursive operation of pragmatic functions, the application of ‘reason’ to ‘reason,’ not realizing that the recursive operation of pragmatic functions, as described by pragmatic functionalism, renders pragmatic functionalism impossible. His argument collapses into a clear-cut reductio.

Pragmatic functionalism disintegrates in the face of information technology and cognitive science because it bites the bullet of intentional inscrutability on apriori grounds, makes an apparent virtue of it in effect (by rationalizing ‘irreducibility’), promising as it does to protect certain ancient institutional boundaries. The very move that shelters the normative as an autonomous realm of cognition is the move that renders it hapless before the rising tide of biomechanical understanding and technological achievement.

Blind Brain Theory, on the other hand, tells a far less flattering and far more powerful story. Far from indicating ontological exceptionality, intentional inscrutability is a symptom of metacognitive incapacity. What makes Samantha so unheimlich, both as she enters and as she exits the problem ecology of social cognition, is that we have no pregiven awareness that any such heuristic thresholds exist at all. Blind Brain Theory allows us to correlate our cognitive capacities with our cognitive ecologies, be they ancestral or cultural. Given that the biomechanical approach to the human accesses the highest dimensional information, it takes that approach as primary, and proceeds to explain away the conundrums of intentionality in terms of biomechanical neglect. It takes seriously the specialized or heuristic nature of human cognition, the way cognition is apt to solve problems by ‘knowing’ what information to ignore. Combine this with metacognitive neglect, the way we are apt (as a matter of empirical fact) to be blind to metacognitive blindness and so proceed as if we had all the information required, and you find yourself with a bona fide way to naturalize intentionality.

Given the limits of social cognition, it should come as no surprise that our only decisive means of theoretically understanding ourselves, let alone entities such as Samantha, lies with causal cognition. The pragmatic functionalist will insist, of course, that my use of normative terms commits me to their particular interpretation of the normative. Brandom is always quick to point out how functions presuppose the normative (Wolfendale does the same at the beginning of his paper), and therefore commit those theorizing them to some form of normativism. But it remains for normativists to explain why the application of social cognition, which we use, among other things, to navigate the world via normative concepts, commits us to an account of social cognition couched in the idiom of social cognition—or in other words, a normative account of normativity. Why should we think that only social cognition can truly solve social cognition—that social cognition lies in its own problem-ecology? If anything, we should presume otherwise, given the amount of information it is structurally forced to neglect; we should presume social cognition possesses a limited range of application. The famed Gerrymandering Argument does nothing more than demonstrate that, yes, social cognition is indeed heuristic, a means of optimizing metabolic expense in the face of the onerous computational challenges posed by other brains and organisms. Although it raises a whole host of dire issues, the fact that causal cognition generally cannot mimic socio-cognitive functions (distinguish ‘plus’ from ‘quus’) simply means they possess distinct problem-ecologies. (A full account of this argument can be found here). The idea is merely to understand what social cognition is, not recapitulate its functions in causal idioms.

Social cognition, in other words, is just like any other heuristic system. Using it only entails a commitment to normativism if you believe that only social cognition, the application of normative concepts, can theoretically solve social cognition, a claim that I find fantastic.

But if the eliminativist isn’t committed to the normativity of the normative, the normativist is committed to the relevance of the causal. Wolfendale admits “we are constrained by biological factors regarding the way in which we humans are functionally constructed to track our own states” (8). The question BBT raises—the Kantian question, in fact—is simply whether the way humans are functionally constructed to track our own states allows us to track the way humans are functionally constructed to track our own states. Just how is our capacity to know ourselves and others biologically constrained? The evidence that we are so constrained is nothing short of massive. We are not, for instance, functionally constructed to track our functional construction vis-à-vis, say, vision, absent scientific research. The whole of cognitive science, in fact, testifies to our inability to track our functional construction—the indispensability of taking an empirical approach. Why, then, should we presume we possess the functional wherewithal to intuit our functional makeup in any regard, let alone that of social cognition? This is the Kantian question because it forces us to see our intuitions regarding social cognition as artifacts of the limits of social cognition—to eschew metacognitive dogmatism.

Surely the empirical fact of metacognitive neglect has something to do with our millennial inability to solve philosophical problems given the resources of reflection alone. Wolfendale acknowledges that we are constrained, but he does not so much as consider the nature of those constraints, let alone the potential consequences following from them. Instead, he proceeds (as do all normativists) as if no such constraints existed at all. He is, when all is said and done, a dogmatist, someone who simply assumes the givenness of his normative intuitions. He wants to take cognitive science seriously, but espouses a supra-natural position that lacks any means of doing so. He succumbs to the fallacy of homuncularism as a result, and inadvertently demonstrates the abject inability of pragmatic functionalism to pose, let alone solve, the myriad dilemmas arising out of cognitive science and information technology. It cannot interpret–let alone predict–the posthuman because its functions are parochial artifacts of our first-person interpretations. Our future, as David Roden so lucidly argues, remains unbounded. 

How (Not) To Read Sextus Empiricus

Roger here again.

Since I’ve treated the topic here before, once or twice — though never in the detail required to satisfy certain skeptics of skepticism — I thought I’d let folks know that a paper of mine, on Pyrrhonian skepticism, has come out in the latest issue of Ancient Philosophy.

It’s behind a paywall, unfortunately, and I can’t simply post it here. Anyone affiliated with a university should have free access to it, though.

The paper was originally written in 2012, soon after I wrote the two TPB posts linked to above. It was accepted for publication that summer, but is only now appearing—which is just as well, as far as I’m concerned, since I kept making changes to it through last fall, when it was finally typeset.

At any rate, I’d be happy to chat about it if folks are interested.
