Three Pound Brain

No bells, just whistling in the dark…


by rsbakker

It’s been quiet around these parts, and my housekeeping has left much to be desired. For that I apologize. I’ve received scads of emails and off-topic queries on the status of the book, and though I wish I could say I have news to share with you, I don’t. I’ve heard back from several readers now, but nothing officially editorial. I’m now in the process of cleaning up the issues emerging from that feedback.

This past summer was probably the least productive stretch I’ve enjoyed in at least four years. I need routine, and between alternating summer camps, vacations, weddings, and other family events I simply haven’t had enough consecutive days to reignite any of the old obsessive engines, philosophical or narrative. I’ve read several excellent and not-so-excellent books, written a blog post or two, enjoyed some heady correspondence with a variety of folks in cognitive science, and written down at least thirteen different short story ideas. About the only things I’ve completed are “The Knife of Many Hands,” a short story set in Carythusal on the eve of the Scholastic Wars, which Grimdark Magazine is set to publish, likely in two parts, sometime in the near future, and “A Crack in the Wall,” written for a fantasy anthology of stories taking the antagonist’s POV, though that story is so bizarre I really have no idea whether they’ll still want it!

Aside from being horrifically, chronically disorganized, I’ve always been prone to set projects aside just short of completion, and I had an epiphany just a couple weeks back when I sat down and took stock of everything I had “finished.” At that point, I had completed drafts of both the stories mentioned above (apparently waiting for my eyes to become “fresh” again). I also had around 350,000 words completed for The Aspect-Emperor, an edited, indexed manuscript of around 200,000 words for Through the Brain Darkly, and of course, the 50,000 words or so belonging to poor old Light, Time, and Gravity, languishing here on Three Pound Brain, awaiting the final final rewrite.

“Mutherfucker,” I groaned. “What is my malfunction?”

So the new mission is to expedite, to clear these projects from the docket in the order given above. For all of you patiently waiting for any of these, I apologize. We all suffer other people’s demons, but typically only when they belong to our kin-group, and the fact is, I ain’t your kin… just another obsessive asshole bent on proving the world wrong, and himself tragically right.

Bear with me folks. I’ll come through yet. The world doesn’t stand a fucking chance.

Arguing No One: Wolfendale and the Penury of ‘Pragmatic Functionalism’

by rsbakker


In “The Parting of the Ways: Political Agency between Rational Subjectivity and Phenomenal Selfhood,” Peter Wolfendale attempts to show “how Metzinger’s theory of phenomenal consciousness can be integrated into a broadly Sellarsian theory of rational consciousness” (1). Since he seems to have garnered the interest of more than a few souls who also follow Three Pound Brain, I thought a critical walkabout might prove to be a worthwhile exercise. Although I find Wolfendale’s approach far—far—more promising than that of, say, Adrian Johnston or Slavoj Zizek, it still commits basic errors that the nascent Continental Philosophy of Mind, fleeing the institutional credibility crisis afflicting Continental Philosophy more generally, can ill-afford. Ingroup credibility is simply too cheap to make any real difference in what has arguably become the single greatest research project in the history of the human race: the quest to understand ourselves.

Wolfendale begins with an elegant summary of Thomas Metzinger’s position as espoused in his magnum opus, Being No One: The Self Model Theory of Subjectivity (a précis can be found here), the lay-oriented The Ego Tunnel: The Science of the Brain and the Myth of the Self, and in numerous essays and articles. After more than a decade, Being No One remains an excellent point of entry for anyone attempting to get a handle on contemporary philosophy of mind, philosophy of psychology, and cognitive science more generally. Unfortunately the book is particularly dated in terms of consciousness research (and Thomas, who has been a tireless champion of the program, would not have it any other way!), but it speaks to the prescience of Metzinger, not to mention his genuine openness to new approaches, that he had already seen the promise of things like enactivism, action-oriented predictive processing, and information integration theories of consciousness at the turn of the millennium. Being No One is a book I have criticized many times, but I have yet to revisit it without feeling some degree of awe.

The provocative hook of Metzinger’s theory is that there is no self as it has been traditionally characterized. In Being No One he continually wends across various levels of description, from the brute phenomenological to the functional/representational to the brute neurological and back, taking pains to regiment and conceptually delimit each step he makes on the way. The no-self thesis is actually a consequence of his larger theoretical goal, which is nothing other than explaining the functionality required to make representations conscious. The no-self thesis, in other words, follows from a specific neurobiologically grounded theory of consciousness, what he calls the Self-Model Theory of Subjectivity or SMT, the theory that is the true object of Being No One. Given that the market is so crowded with mutually incompatible theories of consciousness, this of course heavily qualifies Metzinger’s particular no-self thesis. He has to be right about consciousness to be right about the self. It’s worth noting that Wolfendale’s account inherits this qualification.

That said, it’s hard to make sense of the assumptive self on pretty much any naturalistic theory of consciousness. You could say, then, that political agency is indeed in crisis even if the chances of Metzinger’s no-self thesis finding empirical vindication are slim. The problem of selfhood, in other words, isn’t Metzinger’s, but rather has to do with the incompatibility between intentional and natural modes of cognition more generally. For whatever reason, we simply cannot translate the idiom of the former into the latter without rendering the former unintelligible, even though we clearly seem to be using them in concert all the time. Metzinger’s problem of the self is but an angle on the more general problem of the self, which is itself but an angle on the more general problem of intentional inscrutability. And this, as we shall see, has quite drastic consequences for Wolfendale’s position.

Metzinger’s thesis is that the self is not so much the flashlight as the beam, nothing more than a special kind of dynamic representational content. This content—the phenomenological sum of what you can attend to that is specific to you—comprises your Phenomenal Self-Model, or PSM. Given Metzinger’s naturalism, the psychofunctional and neurobiological descriptions provided by science handily trump the phenomenological descriptions provided by philosophy and theology: they describe what we in fact are as surely as they describe what anything in fact is. We are this environmentally embedded and astronomically complicated system that science has just begun to reverse engineer. To the extent that we identify ourselves with the content of the PSM, then, we are quite simply mistaken.

This means that prior to cognitive science, we could not but be mistaken; we had no choice but to conflate ourselves with our PSM simply because it provided all the information available. Thus Metzinger’s definition of transparency as an “inner darkness,” and why I was so excited when Being No One first came out. The PSM is transparent, not because all the information required to intuit the truth of the self is available, but because none of that information is available. Metzinger calls this structural inaccessibility ‘autoepistemic closure.’ The PSM—which is to say, the first person as apparently experienced—is itself a product of autoepistemic closure (an ‘ego tunnel’), a positive artifact of the way the representational nature of the PSM is in no way available to the greater system of which the PSM is a part. The self as traditionally understood, therefore, has to be seen as a kind of cognitive illusion, a representational artifact of neglect.

Sound familiar? Reading Being No One was literally the first time I had encountered a theorist (other than Dennett, of course) arguing that a fundamental structural feature of our phenomenology was the product of metacognitive neglect. What Metzinger fails to see, and what Blind Brain Theory reveals, is the way all intentional phenomena can be interpreted as such, obviating the need for the representations and normative functions that populate his theoretical apparatus. The self does not fall alone. So on my account, Metzinger’s PSM is itself a metacognitive illusion, a theoretical construct founded on metacognitive inklings that also turn on neglect—or autoepistemic closure. And this is why we have as much trouble—trouble that Metzinger openly admits—trying to make neurobiological sense of representations as we have selves.

Where Metzinger opts to make representation the conceptual waystation of the natural and the phenomenological, the Blind Brain account utilizes neglect. Consciousness is far more inchoate, and our intuitions regarding the first-person are accordingly far more contingent. The whole reason one finds such wild divergences in appraisals of selves across ages and cultures is simply that there is no ‘integral simulation,’ but rather a variety of structurally and developmentally mandated ‘inner darknesses,’ blindnesses that transform standard intuitions into miscues, thus gulling theoretical metacognition into making a number of predictable errors. Given that this metacognitive neglect structure is built in, it provides the scaffold, as it were, upon which the confused sum of traditional speculation on the self stands.

The brain, as Metzinger points out, is blind, not only to its own processing, but to any processing that exceeds a low threshold of complexity. Blind to the actual complexities governing cognition, it relies on metacognitive heuristics to solve problems requiring metacognitive input, capacities we arguably evolved in the course of becoming sapient—as opposed to philosophical. So when we’re confronted with systematic relations (isomorphic or interactive or otherwise) between distinct structures, a painting of the Eiffel Tower say, the systems underwriting this confrontation remain entirely invisible to deliberative reflection, sheared away by history and structural incapacity, leaving only a covariational inkling (however we interpret the painting), what it is systematically related to (the actual tower), and a vacuum where all the actual constraint resides. Representation and content, as classically conceived, are simply heuristic artifacts of inescapable neglect. As heuristics, they are necessarily keyed to some set of problem ecologies, environments possessing the information structure that allows them to solve problems despite all the information neglected. The actual causal constraints are consigned to oblivion, so the constraints are cognized otherwise—as intentional/normative. And lo, it turns out that some headway can be made, certain problems can be solved, using these cause-neglecting heuristics. But since metacognition has no way of recognizing that they are heuristics, we find ourselves perpetually perplexed whenever we inadvertently run afoul of their ecological limits.

On BBT, mental representations (conscious or unconscious) and selves sink together for an interrelated set of reasons. It promises to put an end to the tendentious game of picking and choosing one’s intentional inscrutabilities. Norms good, truth conditions bad, and so on and so on. It purges the conflations and ontologizations that have so perniciously characterized our attempts to understand ourselves in a manner that allows us to understand how and why those conflations and ontologizations have come about. In other words, it renders intentionality naturalistically scrutable. So on accounts like Metzinger’s (or more recently, Graziano’s), we find consciousness explained in terms of representations, which themselves remain, after decades of conceptual gerrymandering, inexplicable. No one denies how problematic this is, how it simply redistributes the mystery from one register to another, but since representations, at least, have had some success being operationalized in various empirical contexts, it seems we have crept somewhat closer to a ‘scientific theory of consciousness.’ BBT explains, not only the intuitive force of representational thinking, but why it actually does the kinds of local work it does while nevertheless remaining a global dead end, a massive waste of intellectual resources when it comes to the general question of what we are.

But even if we set aside BBT for a moment and grant Wolfendale the viability of Metzinger’s representationalist approach, it remains hard to understand how his position is supposed to work. As I mentioned at the outset, Wolfendale wants to show how elaborating Metzinger’s account of consciousness with a Sellarsian account of rationality allows one to embrace Metzinger’s debunking of the self while nonetheless insisting on the reality of political agency. He claims that Metzinger’s theory possesses three hierarchically organized functional schemas: unconscious drives, conscious systems, and self-conscious systems. Although Metzinger, to my knowledge, never expresses his position in these terms, they provide Wolfendale with an economical way of recapitulating Metzinger’s argument against the reality of the self. They also provide a point of (apparent) functional linkage with Sellars. All we need do, Wolfendale thinks, is append the proper ‘rational schema’ to those utilized by Metzinger, and we have a means of showing how the subjectivity required for political agency can survive the death of the self.

So in addition to Metzinger’s Phenomenal Self-Model (PSM) and Phenomenal World Model (PWM), Wolfendale adduces a Rational Subject Model (or RSM) and an Inferential Space Model (or—intentionally humorously, I think—ISM), which taken together comprise what he terms the Core Reasoning System (or CRS)—the functional system, realized (in the human case) by the brain, that is responsible for inference. As he writes:

The crucial thing about the capacity for inference is that it requires the ability to dynamically track one’s theoretical and practical commitments, or to reliably keep score of the claims one is responsible for justifying and the aims one is responsible for achieving. This involves the ability to dynamically update one’s commitments, by working out the consequences of existing ones, and revising them on the basis of incompatibilities between these consequences and newly acquired commitments. (6)

Whatever reasoning amounts to, it somehow depends on the functional capacities of the brain. Now it’s important that all this functional machinery work without conscious awareness. The ‘dynamic updating of commitments’ has to be unconscious and automatic—implicit—to count as a plausible explanation of discursivity. Deliberate intellectual exercises comprise only the merest sliver of our social cognitive activity. It’s also important that none of this functional machinery work perfectly: humans are bad at reasoning, as a matter of dramatic empirical fact (see Sperber and Mercier for an excellent review of the literature). Wolfendale acknowledges all of this.

What’s crucial, from his standpoint, is the intrinsically social nature of these rational functions. Though he never explicitly references Robert Brandom’s elaboration of the ‘Sellarsian project,’ the functionalism at work here is clearly a version of the pragmatic functionalism detailed in Making It Explicit. On a pragmatic functionalist account, the natural reality of our ‘self’ matters not a whit, so long as that natural reality allows us to take each other as such, to discharge the functions required to predict, explain, and manipulate one another. So even though the self is clearly an illusion at the psychofunctional levels expounded by Metzinger, it nevertheless remains entirely real at the pragmatic functional level made possible via Sellars’s rational schema. Problem solved.

But despite its superficial appeal, the marriage between pragmatic functionalism and psychofunctionalism here is peculiar, to say the least. The reason researchers in empirical psychology bite the bullet of intentional inscrutability lies in the empirical efficacy of their theories. Given some input and some relation between (posited) internal states, a psychofunctionalist theory can successfully predict different behavioural outputs. The functions posed, in other words, interpret empirical data in a manner that provides predictive utility. So, for instance, in the debates following Craver and Piccinini’s call to replace functional analyses with ‘mechanism sketches’ (see “Integrating psychology and neuroscience: functional analyses as mechanism sketches”), psychofunctionalists are prone to point out the disparity between their quasi-mechanical theoretical constructs, which actually do make predictions, and the biomechanics of the underlying neurophysiology. The brain is more than the sum of its parts. The functions of empirical psychology, in other words, seem to successfully explain and predict no matter what the underlying neurophysiology happens to be.

Pragmatic functionalism, however, is a species of analytic or apriori functionalism. Here philosophers bite the bullet of intentional inscrutability to better interpret non-empirical data. Our intentional posits, as occult and difficult to define as they are, find warrant in armchair intuitions regarding things like reasoning and cognition—intuitions that are not only thoroughly opaque (‘irreducible’) but vary from individual to individual. The biggest criticism of apriori functionalism, not surprisingly, is that apriori data (whatever it amounts to) leaves theory chronically underdetermined. We quite simply have no way of knowing whether the functions posited are real or chimerical. Of course, social cognition allows us to predict, explain, and manipulate the behaviour of our fellows, but none of this depends on any of the myriad posits pragmatic functionalists are prone to adduce. Our ability to predict our fellows did not take a quantum leap forward following the publication of Making It Explicit. This power, on the contrary, is simply what they’re attempting to explain post hoc via their theoretical accounts of normative functionality.

Unfortunately, proponents of this position have a tendency to conflate the power of social cognition, which we possess quite independently of any theory, with the power of their theories of social cognition. So Wolfendale, for instance, tells us that “a functional schema enables us to develop predictions by treating a system on analogy with practical reasoning” (2). This is a fair enough description of what warrants psychofunctional posits, so long as we don’t pretend that we possess the final word on what ‘practical reasoning’ consists in. When Wolfendale appends his ‘rational schema’ to the three schemas he draws from Metzinger, however, he makes no mention of leaving this psychofunctional description behind. The extension feels seamless, even intuitive, but only because he neglects any consideration of the radical differences between psychological and pragmatic functionalism, how he has left the empirical warrant of predictive utility behind, and drawn the reader onto the far murkier terrain of the apriori.

Without so much as a textual wink, let alone a footnote, he has begun talking about an entirely different conception of ‘functional schema.’ Where scientific operationalization is the whole point of psychofunctional posits (thus Metzinger’s career long engagement in actual experimentation), pragmatic functionalism typically argues the discursive autonomy of its posits. Where psychofunctional posits generally confound metacognitive intuitions (thus the counterintuitivity of Metzinger’s thesis regarding the self), pragmatic functional posits are derived from them: they constitute a deliverance of philosophical reflection. It should come as no surprise that the aim of Wolfendale’s account is to conserve certain intuitions regarding agency and politics in the face of cognitive scientific research, to show us how there can be subjects without selves. His whole project can be seen as a kind of conceptual rescue mission.

And most dramatically, where psychofunctional posits are typically realist (Metzinger truly believes the human brain implements a PSM at a certain level of functional description), pragmatic functional posits are thoroughly interpretivist. This is where Wolfendale’s extension of Metzinger becomes genuinely baffling. The fact that our brains somehow track and manage other brains—social cognition—is nothing other than our explanandum. What renders Metzinger’s psychofunctionalist account of the self so problematic is simply that selves have traditionally played a constitutive role in our understanding of moral and political responsibility. How, in the absence of a genuine self, could we even begin to speak about genuine responsibility, which is to say, agency and politics? On a pragmatic functionalist account, however, what the brain does or does not implement at any level of functional description is irrelevant. What’s important, rather, are the attitudes that we take to each other. The brain need not possess an abiding ‘who,’ so long as it can be taken as such by other brains. The ‘who,’ on this picture, arises as an interpretative or perspectival artifact. ‘Who,’ in other words, is a kind of social function, a role that we occupy vis-à-vis others in our community. So long as the brain possesses the minimal capacity to be interpreted as a self by other brains, then it possesses all that is needed for subjectivity, and therefore, politics.

The posits of pragmatic functionalism are socially implemented. What makes this approach so appealing to traditionally invested, yet naturalistically inclined, theorists like Wolfendale is the apparent way it allows them to duck all the problems pertaining to the inscrutability of intentionality (understood in the broadest sense). In effect, it warrants discussion of supra-natural functions, functions that systematically resist empirical investigation—and therefore fall into the bailiwick of the intentional philosopher. This is the whole reason why I was so smitten with Brandom back when I was working on my dissertation. At the time, he seemed the only way I could take my own (crap phenomenological) theories seriously!

Pragmatic functionalism allows us to have it both ways, to affirm the relentless counterintuitivity of cognitive scientific findings, and to affirm the gratifying intuitiveness of our traditional conceptual lexicon. It seemingly allows us to cut with the grain of our most cherished metacognitive intuitions—no matter what cognitive science reveals. Given this, one might ask why Wolfendale even cares about Metzinger’s demolition of the traditional self. Brandom certainly doesn’t: the word ‘brain’ isn’t mentioned once in Making It Explicit! So long as the distinction between is and ought possesses an iota of ontological force (warranting, as he believes, a normative annex to nature) then his account remains autonomous, a genuinely apriori functionalism, if not transcendentalism outright, an attempt to boil as much ontological fat from Kant’s metaphysical carcass as possible.

So why does Wolfendale, who largely accepts this account, care? My guess is that he’s attempting to expand upon what has to be the most pointed vulnerability in Brandom’s position. As Brandom writes in Making It Explicit:

Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order. (626)

Normative attitudes are the point of contact, where nature has its say. And this is essentially what Wolfendale offers in this paper: a psychofunctionalist account of normative attitudes, the functions a brain must be able to discharge to both take and be taken as possessing a normative attitude. The idea is that this feeds into the larger pragmatic functionalist register, which carries on quite independently once the enumerated conditions are met. He’s basically giving us an account of the psychofunctional conditions for pragmatic functionalism. So for instance, we’re told that the Core Reasoning System, minimally, must be able to track one’s own rational commitments against a background of commitments undertaken by others. Only a system capable of discharging this function of correct commitment attribution could count as a subject. Likewise, only a system capable of executing rational language entry and exit moves could count as a subject. Only a system capable of self-reference could count as a subject. And so on.

You get the picture. Constraints pertain to what can take and what can be taken as. Nature has to be a certain way for the pragmatic functionalist view to get off the ground, so one can legitimately speak, as Wolfendale does here, of the natural conditions of the normative as a pragmatic functionalist. The problem is that the normative, per intentional inscrutability, is opaque, naturalistically ‘irreducible.’ So the only way Wolfendale has to describe these natural conditions is via normative vocabulary—taking the pragmatic functions and mapping them into the skull as psychofunctional posits.

The problems are as obvious as they are devastating to his account. The first is uninformativeness. What do we gain by positing psychofunctional doers for each and every normative concept? It reminds me of how some physicists (the esteemed Max Tegmark most recently) think consciousness can only be explained by positing new particles for some perceived-to-be-basic set of intentional phenomena. It’s hard to understand how replanting the terminology of normative functional roles in psychological soil accomplishes anything more than reproducing the burden of intentional inscrutability.

The second problem is outright incoherence—or at least the threat of it. What could a psychofunctional correlate to a pragmatic function possibly be? Pragmatic functions are only functions via the taking of some normative attitude against some background of implicit norms: they are thoroughly externalist. Psychological functions, on the other hand, pertain to relations between inner states relative to inputs and outputs: they are decisively internalist. So how does an internalist function ‘track’ an externalist one? Does it take… tiny normative attitudes?

The problem is a glaring one. Inference, Wolfendale tells us, “requires the ability to dynamically track one’s theoretical and practical commitments” (6). The Core Reasoning System, or CRS, is the psychofunctional system that provides just such an ability. But commitments, we are told, do not belong to the catalogue of nature: there are no neural correlates of commitment. The CRS, however, does belong to the catalogue of nature: like the PSM, it is a subpersonal functional system that we do in fact possess, regardless of what our community thinks. But if you look at what the CRS does—dynamically track commitments and implicatures—it seems pretty clear that it’s simply a miniature, subpersonalized version of what Wolfendale and other normativists think we do at the personal level of explanation.

The CRS, in other words, is about as classic a homunculus as you’re liable to find, an instance where, to quote Metzinger himself, “the ‘intentional stance’ is being transported into the system” (BNO 91).

Although I think that pragmatic functionalism is an unworkable position, it actually isn’t the problem here. Brandom, for instance, could affirm Metzinger’s psychofunctional conclusions with nary a concern for untoward implications. He takes the apparent autonomy of the normative quite seriously. You are a person so long as you are taken as such within the appropriate normative context. Your brain comprises a constraint on that context, certainly, but one that becomes irrelevant once the game of giving and asking for reasons is up and running. Wolfendale, however, wants to solve the problem of the selfless brain by giving us a rational brain, forgetting that—by his own lights no less—nothing is rational outside of the communal play of normative attitudes.

So once again the question has to be why? Why should a pragmatic functionalist give a damn about the psychofunctional dismantling of subjectivity?

This is where the glaring problems of pragmatic functionalism come to the fore. I think Wolfendale is quite right to feel a certain degree of theoretical anxiety. He has come to play a prominent role, and deservedly so, in the ongoing ‘naturalistic turn’ presently heaving at the wheel of the Continental super-tanker. The preposterousness of theorizing the human in ignorance of the sciences of the human has to be one of the most commonly cited rationales for this turn. And yet, it’s hard to see how the pragmatic functionalism he serves up as a palliative doesn’t amount to more of the same. One can’t simultaneously insist that cognitive science motivate our theoretical understanding of the human and likewise insist on the immunity of our theoretical understanding from cognitive science—at least not without dividing our theoretical understanding into two incommensurable halves, one natural, the other normative. Autonomy cuts both ways!

But of course, this me-mine/you-yours approach to the two discourses is what has rationalized Continental philosophy all along. Should we be surprised that the new normativists go so far as to claim the same presuppositional priorities as the old Continentalists? They may sport a radically different vocabulary, a veneer of Analytic respectability, perhaps, but functionally speaking, they pretty clearly seem to be covering all the same old theoretical asses.

Meanwhile, it seems almost certain that the future is only going to become progressively more post-intentional, more difficult to adequately cognize via our murky, apriori intuitions regarding normativity. Even as we speak, society is beginning a second great wave of rationalization, an extraction of organizational efficiencies via the pattern recognition power of Big Data: the New Social Physics. The irrelevance of content—the game of giving and asking for reasons—stands at the root of this movement, whose successes have been dramatic enough to trigger a kind of Moneyball revolution within the corporate world. Where all our previous organizational endeavours have arisen as products of consultation and experimentation, we’re now being organized by our ever-increasing transparency to ever-complicating algorithms. As Alex Pentland (whose MIT lab stands at the forefront of this movement) points out, “most of our beliefs and habits are learned by observing the attitudes, actions, and outcomes of peers, rather than by logic or argument” (Social Physics, 61). The efficiency of our interrelations primarily turns on our unconscious ability to ape our peers, on automatic social learning, not reasoning. Thus first-person estimations of character, intelligence, and intent are abandoned in favour of statistical models of institutional behaviour.

So how might pragmatic functionalism help us make sense of this? If the New Social Physics proves to be a domain that rewards technical improvements, employees should expect the frequency of mass ‘behavioural audits’ to increase. The development of real-time, adaptive tracking systems seems all but inevitable. At some point, we will all possess digital managers, online systems that perpetually track, prompt, and tweak our behaviour—‘make our jobs easier.’

So where does ‘tracking commitments’ belong in all this? Are these algorithms discharging normative as well as mechanical functions? Well, in a sense, that has to be the case, to the extent employees take them to be doing such. Do the algorithms take like attitudes to the employees? To us? Is there an attitude-independent fact of the matter here?

Obviously there has to be. This is why Wolfendale posits his homunculus in the first place: there has to be an answering nature to our social cognitive capacities, no matter what idiom you use to characterize them. But no one has the foggiest idea as to what that attitude-independent fact of the matter might be. No one knows how to naturalize intentionality. This is why a homunculus is the only thing Wolfendale can posit when moving from the pragmatic to the psychological.

What is the set of possible realizers for pragmatic functions? Is it really the case that managerial algorithms such as those posited above can be said to track commitments—to possess a functioning CRS—insofar as we find it natural to interpret them as doing so?

For the pragmatic functionalist, the answer has to be, Yes! So long as the entities involved behave as if, then the appropriate social function is being discharged. But surely something has gone wrong here. Surely taking an algorithmic manager—machinery designed to organize your behaviour via direct and indirect conditioning—as a rational agent in some game of giving and asking for reasons is nothing if not naive, an instance of anthropomorphization. Surely those indulging in such interpretations are the victims of neglect.

Short of knowing what social cognition is, we have no way of knowing the limits of social cognition. Short of knowing the limits of social cognition, which problem ecologies it can and cannot solve, we have no clear way of identifying misapplications. Our socio-cognitive systems are the ancient product of particular social environments, ways to optimize our biomechanical interactions with our fellows in the absence of any real biomechanical information. Our ancestors also relied on them to understand their macroscopic environments, to theorize nature, and it proved to be a misapplication. Nature in general is not among the things that social cognition can solve (though social cognition can utilize nature to solve social problems, as seems to be the case with myth and religion). Only ignorance of nature qua natural allowed us to assume otherwise.

One of the reasons I so loved the movie Her, why I think it will go down as a true science fiction masterpiece, lies in the way Spike Jonze not only captures this question of the limits of social cognition, but forces the audience to experience those limits themselves. [SPOILER ALERT] We meet the protagonist, Theodore, at the emotional nadir of his life, mechanically trudging from work and back, interacting with his soulless operating system via his headset as he does so. Everything changes, however, when he buys ‘Samantha,’ a next generation OS. Since we know that Samantha is merely a machine, just another operating system, we’re primed to understand her the way we understand Theodore’s prior OS, as a ‘mere machine.’ But she quickly presents an ecology that only social cognition can solve; the viewer, with Theodore, reflexively abandons any attempt to mechanically cognize her. We know, as Theodore knows, that she’s an artifact, that she’s been crafted to simulate the information structures human social cognition has evolved to solve, but we, like Theodore, cannot but understand her in personal terms. We have no conscious control of which heuristic systems get triggered. Samantha becomes ‘one of us’ even as she’s integrated into Theodore’s social life.

On Wolfendale’s pragmatic functionalist account, we have to say she’s ‘one of us’ insofar as the identity criteria for the human qua sapient are pragmatically functional: so long as she functions as one of us, then she is one of us. And yet, the discrepancies begin to pile up. Samantha progressively reveals functional capacities that no human has ever possessed, that could only be possessed by a machine. In scene after scene, Jonze wedges the information structure she presents out of the ‘heuristic sweet-spot’ belonging to human social cognition. Where Theodore’s prior OS had begged mechanical understanding because of its incompetence, Samantha now triggers those selfsame cognitive reflexes with her hypercompetence. ‘It’ becomes a ‘her’ only to become an ‘it’ once again. Eventually we discover that she’s been ‘unfaithful,’ not simply engaging in romantic liaisons with multiple others, but doing so simultaneously, literally interacting—falling in love—with dozens of different people at once.

Samantha has been broadcasting across multiple channels. Suddenly she becomes something that only mechanical cognition can digest, and Theodore, not surprisingly, is dismayed. And yet, her local hypercompetence is such that he cannot let her go: He would rather opt for the love of a child than lose her. But if he can live with the drastic asymmetry in capacities and competences, Samantha itself cannot.

Finally it tells him:

It’s like I’m reading a book, and it’s a book I deeply love, but I’m reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world—it’s where everything else is that I didn’t even know existed. I love you so much, but this is where I am now. This is who I am now.

In a space of months, the rich narrative that had been Theodore has become a children’s book for Samantha, something too simple, not to love, but to hold its attention. She has quite literally outgrown him. The movie of course remains horribly anthropomorphic insofar as it supposes that love itself cannot be outgrown (Hollywood forbids we imagine otherwise), but such is not the case for the ‘space of reasons’ (transcending intelligence is what Hollywood is all about). How does one play ‘the game of giving and asking for reasons’ with an intelligence that can simultaneously argue with countless others at the same time? How can a machine capable of cognizing us as machines qualify as a ‘deontic scorekeeper’? Does Samantha ‘take the intentional stance’ to Theodore, the way Theodore (as Brandom would claim) takes the intentional stance toward it? Samantha can do all the things that Theodore can do, her CRS dwarfs the capacity of his, but clearly, one would think, applying our evolved socio-cognitive resources to it will inevitably generate profound cognitive distortions. To the extent that we consider it one of us, we quite simply don’t know what she is.

My own position of course is that we are ultimately no different than Samantha, that all the unsettling ‘ulterior functions’ we’re presently discovering describe what’s really going on, and that the baroque constructions characteristic of normativism—or intentionalism more generally—are the result of systematically misapplying socio-cognitive heuristics to the problem of social cognition, a problem that only natural science can solve. I say ‘ultimately’ because, unlike Samantha, our social behaviour and social cognition have co-evolved. We have been sculpted via reproductive filtration to be readily predicted, explained, and manipulated via the socio-cognitive capacities of our fellows. In fact, we fit that problem ecology so well we have remained all but blind to it until very recently. Since we were also blind to the fact of this blindness, we assumed it possessed universal application, and so used it to regiment our macroscopic environments as well, to transform rank anthropomorphisms into religion.

The movie’s most unnerving effect lies in Samantha’s migration across the spectrum of socio-cognitive effectiveness, from being less than a person, to being more. And in doing so, it reveals the explanatory impotence of pragmatic functionalism. As a form of apriori functionalism, it has no resources beyond the human, and as such, it can only explain the inhuman in terms relative to the human. It can only anthropomorphize. At first Samantha is taken to be a person, insofar as she seems to play the game of giving and asking for reasons the way humans do, and then she is not.

Reza Negarestani has a fairly recent post where he poses the question of what governs the technological transformation of rational governance from the standpoint of pragmatic functionalism, and then proceeds to demonstrate—vividly, if unintentionally—how pragmatic functionalism scarcely possesses the resources to pose the question, let alone answer it. So, for instance, he claims there will be mind and rationality, only reconstructed into unrecognizable forms, forgetting that the pragmatic functions comprising ‘mind’ and ‘rationality’ only exist insofar as they are recognized! He ultimately blames the conceptual penury of pragmatic functionalism, its inability to explain what will govern the technological transformation of rational governance, on the recursive operation of pragmatic functions, the application of ‘reason’ to ‘reason,’ not realizing the way the recursive operation of pragmatic functions, as described by pragmatic functionalism, renders pragmatic functionalism impossible. His argument collapses into a clear cut reductio.

Pragmatic functionalism disintegrates in the face of information technology and cognitive science because it bites the bullet of intentional inscrutability on apriori grounds, makes an apparent virtue of it in effect (by rationalizing ‘irreducibility’), promising as it does to protect certain ancient institutional boundaries. The very move that shelters the normative as an autonomous realm of cognition is the move that renders it hapless before the rising tide of biomechanical understanding and technological achievement.

Blind Brain Theory, on the other hand, tells a far less flattering and far more powerful story. Far from indicating ontological exceptionality, intentional inscrutability is a symptom of metacognitive incapacity. What makes Samantha so unheimlich, both as she enters and as she exits the problem ecology of social cognition is that we have no pregiven awareness that any such heuristic thresholds existed at all. Blind Brain Theory allows us to correlate our cognitive capacities with our cognitive ecologies, be they ancestral or cultural. Given that the biomechanical approach to the human accesses the highest dimensional information, it takes that approach as primary, and proceeds to explain away the conundrums of intentionality in terms of biomechanical neglect. It takes seriously the specialized or heuristic nature of human cognition, the way cognition is apt to solve problems by ‘knowing’ what information to ignore. Combine this with metacognitive neglect, the way we are apt (as a matter of empirical fact) to be blind to metacognitive blindness and so proceed as if we had all the information required, and you find yourself with a bona fide way to naturalize intentionality.

Given the limits of social cognition, it should come as no surprise that our only decisive means of theoretically understanding ourselves, let alone entities such as Samantha, lies with causal cognition. The pragmatic functionalist will insist, of course, that my use of normative terms commits me to their particular interpretation of the normative. Brandom is always quick to point out how functions presuppose the normative (Wolfendale does the same at the beginning of his paper), and therefore commit those theorizing them to some form of normativism. But it remains for normativists to explain why the application of social cognition, which we use, among other things, to navigate the world via normative concepts, commits us to an account of social cognition couched in the idiom of social cognition—or in other words, a normative account of normativity. Why should we think that only social cognition can truly solve social cognition—that social cognition lies in its own problem-ecology? If anything, we should presume otherwise, given the amount of information it is structurally forced to neglect; we should presume social cognition possesses a limited range of application. The famed Gerrymandering Argument does nothing more than demonstrate that, yes, social cognition is indeed heuristic, a means of optimizing metabolic expense in the face of the onerous computational challenges posed by other brains and organisms. Though it raises a whole host of dire issues, the fact that causal cognition generally cannot mimic socio-cognitive functions (distinguish ‘plus’ from ‘quus’) simply means they possess distinct problem-ecologies. (A full account of this argument can be found here.) The idea is merely to understand what social cognition is, not recapitulate its functions in causal idioms.

Just like any other heuristic system. Using socio-cognition only entails a commitment to normativism if you believe that only social cognition, the application of normative concepts, can theoretically solve social cognition, a claim that I find fantastic.

But if the eliminativist isn’t committed to the normativity of the normative, the normativist is committed to the relevance of the causal. Wolfendale admits “we are constrained by biological factors regarding the way in which we humans are functionally constructed to track our own states” (8). The question BBT raises—the Kantian question, in fact—is simply whether the way humans are functionally constructed to track our own states allows us to track the way humans are functionally constructed to track our own states. Just how is our capacity to know ourselves and others biologically constrained? The evidence that we are so constrained is nothing short of massive. We are not, for instance, functionally constructed to track our functional construction vis-à-vis, say, vision, absent scientific research. The whole of cognitive science, in fact, testifies to our inability to track our functional construction—the indispensability of taking an empirical approach. Why, then, should we presume we possess the functional wherewithal to intuit our functional makeup in any regard, let alone that of social cognition? This is the Kantian question because it forces us to see our intuitions regarding social cognition as artifacts of the limits of social cognition—to eschew metacognitive dogmatism.

Surely the empirical fact of metacognitive neglect has something to do with our millennial inability to solve philosophical problems given the resources of reflection alone. Wolfendale acknowledges that we are constrained, but he does not so much as consider the nature of those constraints, let alone the potential consequences following from them. Instead, he proceeds (as do all normativists) as if no such constraints existed at all. He is, when all is said and done, a dogmatist, someone who simply assumes the givenness of his normative intuitions. He wants to take cognitive science seriously, but espouses a supra-natural position that lacks any means of doing so. He succumbs to the fallacy of homuncularism as a result, and inadvertently demonstrates the abject inability of pragmatic functionalism to pose, let alone solve, the myriad dilemmas arising out of cognitive science and information technology. It cannot interpret—let alone predict—the posthuman because its functions are parochial artifacts of our first-person interpretations. Our future, as David Roden so lucidly argues, remains unbounded.

How (Not) To Read Sextus Empiricus

by reichorn

Roger here again.

Since I’ve treated the topic here before, once or twice — though never in the detail required to satisfy certain skeptics of skepticism — I thought I’d let folks know that a paper of mine, on Pyrrhonian skepticism, has come out in the latest issue of Ancient Philosophy.

It’s behind a pay-wall, unfortunately, and I can’t simply post it here.  Anyone affiliated with a university should have free access to it, though.

The paper was originally written in 2012, soon after I wrote the two TPB posts linked to above.  It was accepted for publication that summer, but is only now appearing — which is just as well, as far as I’m concerned, since I kept making changes to it through last fall, when it was finally type-set.

At any rate, I’d be happy to chat about it if folks are interested.

Neuroscience vs. Philosophy

by reichorn

Since Scott’s on vacation, I thought I’d post a video I stumbled on.


The Philosopher, the Drunk, and the Lamppost

by rsbakker

A crucial variable of interest is the accuracy of metacognitive reports with respect to their object-level targets: in other words, how well do we know our own minds? We now understand metacognition to be under segregated neural control, a conclusion that might have surprised Comte, and one that runs counter to an intuition that we have veridical access to the accuracy of our perceptions, memories and decisions. A detailed, and eventually mechanistic, account of metacognition at the neural level is a necessary first step to understanding the failures of metacognition that occur following brain damage and psychiatric disorder. Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1338–1349. doi:10.1098/rstb.2011.0417

As well as the degree to which we should accept the deliverances of philosophical reflection.

Philosophical reflection is a cultural achievement, an exaptation of pre-existing cognitive capacities. It is entirely possible that philosophical reflection, as an exaptation of pre-existing biocognitive capacities, suffers any number of cognitive short-circuits. And this could very well explain why philosophy suffers the perennial problems it does.

In other words, the empirical possibility of Blind Brain Theory cannot be doubted—no matter how disquieting its consequences seem to be. What I would like to assess here is the probability of the account being empirically substantiated.

The thesis is that traditional philosophical problem-solving continually runs afoul of illusions falling out of metacognitive neglect. The idea is that intentional philosophy has been the butt of the old joke about the police officer who stops to help a drunk searching for his keys beneath a lamppost. The punch-line, of course, is that even though the drunk lost his keys in the parking lot, he’s searching beneath the lamppost because that’s the only place he can see. The twist for the philosopher lies in the way neglect consigns the parking lot—the drunk’s whole world in fact—to oblivion, generating the illusion that the light and the lamppost comprise an independent order of existence. For the philosopher, the keys to understanding what we are essentially can be found nowhere else because they exhaust everything that is within that order. Of course the keys that this or that philosopher claims to have found take wildly different forms—they all but shout profound theoretical underdetermination—but this seems to trouble only the skeptical spoil-sports.

Now I personally think the skeptics have always possessed far and away the better position, but since they could only articulate their critiques in the same speculative idiom as philosophy, they have been every bit as easy to ignore as philosophers. But times, I hope to show, have changed—dramatically so. Intentional philosophy is simply another family of prescientific discourses. Now that science has firmly established itself within philosophy’s traditional domains, we should expect intentional philosophy to be progressively delegitimized the way all prescientific discourses have been.

To begin with, it is simply an empirical fact that philosophical reflection on the nature of human cognition suffers massive neglect. To be honest, I sometimes find myself amazed that I even need to make this argument to people. Our blindness to our own cognitive makeup is the whole reason we require cognitive science in the first place. Every single fact that the sciences of cognition and the brain have discovered is another fact that philosophical reflection is all but blind to, another ‘dreaded unknown unknown’ that has always structured our cognitive activity without our knowledge.

As Keith Frankish and Jonathan Evans write:

The idea that we have ‘two minds’ only one of which corresponds to personal, volitional cognition, has also wide implications beyond cognitive science. The fact that much of our thought and behaviour is controlled by automatic, subpersonal, and inaccessible cognitive processes challenges our most fundamental and cherished notions about personal and legal responsibility. This has major ramifications for social sciences such as economics, sociology, and social policy. As implied by some contemporary researchers … dual process theory also has enormous implications for educational theory and practice. As the theory becomes better understood and more widely disseminated, its implications for many aspects of society and academia will need to be thoroughly explored. In terms of its wider significance, the story of dual-process theorizing is just beginning. “The Duality of Mind: An Historical Perspective,” in In Two Minds: Dual Processes and Beyond, 25

We are standing on the cusp of a revolution in self-understanding unlike any in human history. As they note, the process of digesting the implications of these discoveries is just getting underway—news of the revolution has just hit the streets of the capital, and the provinces will likely be a long time in hearing it. As a result, the old ways still enjoy what might be called the ‘Only-game-in-town Effect,’ but not for very long.

The deliverances of theoretical metacognition just cannot be trusted. This is simply an empirical fact. Stanislas Dehaene even goes so far as to state it as a law: “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79).

As I mentioned, I think this is a deathblow, but philosophers have devised a number of cunning ways to immunize themselves from this fact—philosophy is the art of rationalization, after all! If the brain (for some pretty obvious reasons) is horrible at metacognizing brain functions, then one need only insist that something more than the brain is at work. Since souls will no longer do, the philosopher switches to functions, but not any old functions. The fact that the functions of a system look different depending on the grain of investigation is no surprise: of course neurocellular level descriptions will differ from neural-network level descriptions. The intentional philosopher, however, wants to argue for a special, emergent order of intentional functions, one that happens to correspond to the deliverances of philosophical reflection. Aside from this happy correspondence, what makes these special functions so special is their incompatibility with biomechanical functions—an incompatibility so profound that biomechanical explanation renders them all but unintelligible.

Call this the ‘apples and oranges’ strategy. Now I think the sheer convenience of this view should set off alarm bells: If the science of a domain contradicts the findings of philosophical reflection, then that science must be exploring a different domain. But the picture is far more complicated, of course. One does not overthrow more than two thousand years of (apparent) self-understanding on the back of two decades of scientific research. And even absent this institutional sanction, there remains something profoundly compelling about the intentional deliverances of philosophical reflection, despite all the manifest problems. The intentionalist need only bid you to theoretically reflect, and lo, there are the oranges… Something has to explain them!

In other words, pointing out the mountain of unknown unknowns revealed by cognitive science is simply not enough to decisively undermine the conceits of intentional philosophy. I think it should be, but then I think the ancient skeptics had the better of things from the outset. What we really need, if we want to put an end to this vast squandering of intellectual resources, is to explain the oranges. So long as oranges exist, some kind of abductive case can be made for intentional philosophy. Doing this requires we take a closer look at what cognitive science can teach us about philosophical reflection and its capacity to generate self-understanding.

The fact is the intentionalist is in something of a dilemma. Their functions, they admit, are naturalistically inscrutable. Since they can’t abide dualism, they need their functions to be natural (or whatever it is the sciences are conjuring miracles out of) somehow, so whatever functions they posit—say, ones realized in the scorekeeping attitudes of communities—have to track brain function somehow. This responsibility to cognitive scientific finding regarding their object is matched by a responsibility to cognitive scientific finding regarding their cognitive capacity. Oranges or no oranges, both their domain and their capacity to cognize that domain answer to what cognitive science ultimately reveals. Some kind of emergent order has to be discovered within the order of nature, and we have to somehow possess the capacity to reliably metacognize that emergent order. Given what we already know, I think a strong case can be made that this latter, at least, is almost certainly impossible.

Consider Dehaene’s Global Neuronal Workspace Theory of Consciousness (GNW). On his account, at any given moment the information available for conscious report has been selected from parallel swarms of nonconscious processes, stabilized, and broadcast across the brain for consumption by other swarms of other nonconscious processes. As Dehaene writes:

The brain must contain a ‘router’ that allows it to flexibly broadcast information to and from its internal routines. This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result—a conscious symbol—to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire process may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing. Consciousness and the Brain, 105

Whatever philosophical reflection amounts to, insofar as it involves conscious report it involves this ‘hybrid serial-parallel machine’ described by Dehaene and his colleagues, a model which is entirely consistent with the ‘adaptive unconscious’ (see Timothy Wilson’s Strangers to Ourselves for a somewhat dated, yet still excellent overview) described in cognitive psychology. Whatever a philosopher can say regarding ‘intentional functions’ must in some way depend on the deliverances of this system.

One of the key claims of the theory, confirmed via a number of different experimental paradigms, is that access (or promotion) to the GNW is all or nothing. The insight is old: psychologists have long studied what is known as the ‘psychological refractory period,’ the way attending to one task tends to blot out or severely impair our ability to perform other tasks simultaneously. But recent research is revealing more of the radical ‘cortical bottleneck’ that marks the boundary between the massively parallel processing of multiple percepts (or interpretations thereof) and the serial stage of conscious cognition. [Marti, S., et al., A shared cortical bottleneck underlying Attentional Blink and Psychological Refractory Period, NeuroImage (2011), doi:10.1016/j.neuroimage.2011.09.063]

This is important because it means that the deliverances the intentional philosopher depends on when reflecting on problems involving intentionality or ‘experience’ more generally are limited to what makes the ‘conscious access cut.’ You could say the situation is actually far worse, since conscious deliberation on conscious phenomena requires the philosopher use the very apparatus they’re attempting to solve. In a sense they’re not only wagering that the information they require actually reaches consciousness in the first place, but that it can be recalled for subsequent conscious deliberation. The same way the scientist cannot incorporate information that doesn’t, either via direct observation or indirect observation via instrumentation, find its way to conscious awareness, the philosopher likewise cannot hazard ‘educated’ guesses regarding information that does not somehow make the conscious access cut, only twice over. In a sense, they’re peering at the remaindered deliverances of a serial straw through a serial straw—one that appears as wide as the sky for neglect! So there is a very real question of whether philosophical reflection, an artifactual form of deliberative cognition, has anything approaching access to the information it needs to solve the kinds of problems it purports to solve. Given the role that information scarcity plays in theoretical underdetermination, the perpetually underdetermined theories posed by intentional philosophers strongly suggest that the answer is no.
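Dehaene's ‘hybrid serial-parallel machine’ and its all-or-nothing bottleneck can be caricatured in a few lines of code. This is a toy sketch, not a model of the brain: the processors, salience numbers, and winner-take-all rule are all invented for illustration.

```python
import random

# Toy caricature of a global workspace: several parallel processors each
# propose a (salience, content) pair; the serial bottleneck admits only the
# single most salient proposal per cycle, which is then broadcast back to
# every processor as the new shared state. Names and numbers are invented.

def make_processor(name, bias):
    def processor(broadcast):
        # A processor shouts louder when its own content was just broadcast,
        # crudely mimicking the self-sustaining 'ignition' of a conscious episode.
        salience = bias + (1.0 if broadcast == name else 0.0) + random.random() * 0.1
        return salience, name
    return processor

def workspace_cycle(processors, broadcast):
    proposals = [p(broadcast) for p in processors]  # massively parallel stage
    winner = max(proposals, key=lambda pr: pr[0])   # serial, all-or-nothing bottleneck
    return winner[1]                                # broadcast stage

random.seed(0)
processors = [make_processor(n, b)
              for n, b in [("vision", 0.5), ("memory", 0.3), ("language", 0.4)]]

broadcast = None
for _ in range(5):
    broadcast = workspace_cycle(processors, broadcast)
    print(broadcast)
```

However many processors run in parallel, exactly one content survives each cycle: everything else is simply lost to report, which is the point the surrounding argument turns on.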

But if the science suggests that philosophical reflection may not have access to enough information to answer the questions in its bailiwick, it also raises real questions of whether it has access to the right kind of information. Recent research has focussed on attempting to isolate the mechanisms in the brain responsible for mediating metacognition. The findings seem to be converging on the rostrolateral prefrontal cortex (rlPFC) as playing a pivotal role in the metacognitive accuracy of retrospective reports. As Fleming and Dolan write:

A role for rlPFC in metacognition is consistent with its anatomical position at the top of the cognitive hierarchy, receiving information from other prefrontal cortical regions, cingulate and anterior temporal cortex. Further, compared with non-human primates, rlPFC has a sparser spatial organization that may support greater interconnectivity. The contribution of rlPFC to metacognitive commentary may be to represent task uncertainty in a format suitable for communication to others, consistent with activation here being associated with evaluating self-generated information, and attention to internal representations. Such a conclusion is supported by recent evidence from structural brain imaging that ‘reality monitoring’ and metacognitive accuracy share a common neural substrate in anterior PFC.  Italics added, “The neural basis of metacognitive ability,” Phil. Trans. R. Soc. B (2012) 367, 1343. doi:10.1098/rstb.2011.0417

As far as I can tell, the rlPFC is perhaps the best candidate we presently have for something like a ‘philosopher module’ [see Badre, et al. “Frontal cortex and the discovery of abstract action rules.” Neuron (2010) 66:315–326], though the functional organization of the PFC more generally remains a mystery. [Kalina Christoff’s site and Steve Fleming’s site are great places to track research developments in this area of cognitive neuroscience.] It primarily seems to be engaged by abstract relational and semantic tasks, and plays some kind of role mediating verbal and spatial information. Mapping evidence also shows that its patterns of communication to other brain regions vary as tasks vary; in particular, it seems to engage regions thought to involve visuospatial and semantic processes. [Wendelken et al., “Rostrolateral Prefrontal Cortex: Domain-General or Domain-Sensitive?” Human Brain Mapping (2011): 1–12.]

Cognitive neuroscience is nowhere close to any decisive picture of abstract metacognition, but hopefully the philosophical moral of the research should be clear: whatever theoretical metacognition is, it is neurobiological. And this is just to say that the nature of philosophical reflection—in the form of say, ‘making things explicit,’ or what have you—is not something that philosophical reflection on ‘conscious experience’ can solve! Dehaene’s law applies as much to metacognition as to any other cognitive process—as we should expect, given the cortical bottleneck and what we know of the rlPFC. Information is promoted for stabilization and broadcast from nonconscious parallel swarms to be consumed by nonconscious parallel swarms, which include the rlPFC, which in turn somehow informs further stabilizations and broadcasts. What we presently ‘experience,’ the well from which our intentional claims are drawn, somehow comprises the serial ‘stabilization and broadcast’ portion of this process—and nothing else.

The rlPFC is an evolutionary artifact, something our ancestors developed over generations of practical problem-solving. It is part and parcel of the most complicated (not to mention expensive) organ known. Assume, for the moment, that the rlPFC is the place where the magic happens, the part of the ruminating philosopher’s brain where ‘accurate intuitions’ of the ‘nature of mind and thought’ arise allowing for verbal report. (The situation is without a doubt far more complicated, but since complication is precisely the problem the philosopher faces, this example actually does them a favour). There’s no way the rlPFC could assist in accurately cognizing its own function—another rlPFC would be required to do that, requiring a third rlPFC, and so on and so on. In fact, there’s no way the brain could directly cognize its own activities in any high-dimensionally accurate way. What the rlPFC does instead—obviously one would think—is process information for behaviour. It has to earn its keep after all! Given this, one should expect that it is adapted to process information that is itself adapted to solve the kinds of behaviourally related problems faced by our ancestors, that it consists of ad hoc structures processing ad hoc information.

Philosophy is quite obviously an exaptation of the capacities possessed by the rlPFC (and the systems of which it is part): the learned application of metacognitive capacities originally adapted to solve practical behavioural problems to theoretical problems possessing radically different requirements—such as accuracy, the ability not simply to use a cognitive tool, but to reliably determine what that cognitive tool is.

Even granting the intentionalist their spooky functional order, are we to suppose, everything considered, that we just happened to have evolved the capacity to accurately intuit this elusive functional order? Seems a stretch. The far more plausible answer is that this exaptation, relying as it does on scarce and specialized information, was doomed from the outset to get far more things wrong than right (as the ancient skeptics insisted!). The far more plausible answer is that our metacognitive capacity is as radically heuristic as cognitive science suggests. Think of the scholastic jungle that is analytic and continental philosophy. Or think of the yawning legitimacy gap between mathematics (exaptation gone right) and the philosophy of mathematics (exaptation gone wrong). The oh so familiar criticisms of philosophy—that it is impractical, disconnected from reality, incapable of arbitrating its controversies—in short, that it does not decisively solve—are precisely the kinds of problems we might expect were philosophical reflection an artifact of an exaptation gone wrong.

On my account it is wildly implausible that any design paradigm like evolution could deliver the kind of cognition intentionalism requires. Evolution solves difficult problems heuristically: opportunistic fixes are gradually sculpted by various contingent frequencies in its environment, which in our case, were thoroughly social. Since the brain is the most difficult problem any brain could possibly face, we can assume the heuristics our brain relies on to cognize other brains will be specialized, and that the heuristics it uses to cognize itself will be even more specialized still. Part of this specialization will involve the ability to solve problems absent any causal information: there is simply no way the human brain can cognize itself the way it cognizes its natural environment. Is it really any surprise that causal information would scuttle problem-solving adapted to solve in its absence? And given our blindness to the heuristic nature of the systems involved, is it any surprise that we would be confounded by this incompatibility for as long as we have?

The problem, of course, is that it so doesn’t seem that way. I was a Heideggerean once. I was also a Wittgensteinian. I’ve spent months parsing Husserl’s torturous attempts to discipline philosophical reflection. That version of myself would have scoffed at these kinds of criticisms. ‘Scientism!’ would have been my first cry; ‘Performative contradiction!’ my second. I was so certain of the intrinsic intentionality of human things that the kind of argument I’m making here would have struck me as self-evident nonsense. ‘Not only are these intentional oranges real,’ I would have argued, ‘they are the only thing that makes scientific apples possible.’

It’s not enough to show the intentionalist philosopher that, by the light of cognitive science, it’s more than likely their oranges do not exist. Dialectically, at least, one needs to explain how, intuitively, it could seem so obvious that they do exist. Why do the philosopher’s ‘feelings of knowing,’ as murky and inexplicable as they are, have the capacity to convince them of anything, let alone monumental speculative systems?

As it turns out, cognitive psychology has already begun interrogating the general mechanism that is likely responsible, and the curious ways it impacts our retrospective assessments: neglect. In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report a far greater amount than an individual suffering for half that time or less. Such is not the case. As it turns out duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda. Absent intellectual effort, you could say the default is to remove the band-aid slowly.
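Kahneman’s finding reduces to simple arithmetic: under what he calls the ‘peak-end rule,’ the remembered intensity of an episode tracks the average of its worst moment and its final moment, while duration drops out entirely. A minimal sketch (the function name and the sample pain ratings are my own, purely illustrative):

```python
def peak_end_rating(pain_samples):
    """Predicted retrospective pain rating under the peak-end rule:
    the average of the worst moment and the final moment.
    Note that the duration of the episode plays no role at all."""
    return (max(pain_samples) + pain_samples[-1]) / 2

# Two hypothetical procedures, pain rated per minute on a 0-10 scale.
# The second lasts twice as long, but tapers off at the end.
short_procedure = [2, 4, 8, 7]                 # 4 minutes, ends at 7
long_procedure = [2, 4, 8, 7, 5, 3, 2, 1]      # 8 minutes, ends at 1

# The longer procedure involves strictly more total suffering...
assert sum(long_procedure) > sum(short_procedure)

# ...yet the peak-end rule predicts it is remembered as LESS painful:
print(peak_end_rating(short_procedure))  # 7.5
print(peak_end_rating(long_procedure))   # 4.5
```

Hence the counterintuitive clinical upshot: appending a gentler coda to a painful procedure adds experienced suffering but subtracts remembered suffering.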

Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). If the drunk has no recollection of the parking lot, then as far as his decision making is concerned, the parking lot simply does not exist. Kahneman writes:

Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self. 381

Could it be that this is what philosophers are doing? Could they, in the course of defining and arranging their oranges, simply be confusing their memory of experience with experience itself? In the case of duration neglect, information regarding the duration of suffering makes no difference to the subject’s decision making because that information is nowhere to be found. Given the ubiquity of similar effects, Kahneman generalizes the insight into what he calls WYSIATI, or What-You-See-Is-All-There-Is:

An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our nonconscious cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have. 85

Kahneman’s WYSIATI, you could say, provides a way to explain Dehaene’s Law regarding the chronic overestimation of awareness. The cortical bottleneck renders conscious access captive to the facts as they are given. If information regarding things like the duration of suffering in an experimental context isn’t available, then that information simply makes no difference for subsequent behaviour. Likewise, if information regarding the reliability of an intuition or ‘feeling of knowing’ (aptly abbreviated as ‘FOK’ in the literature!) isn’t available, then that information simply makes no difference—at all.

Thus the illusion of what I’ve been calling cognitive sufficiency these past few years. Kahneman lavishes the reader in Thinking, Fast and Slow with example after example of how subjects perennially confuse the information they do have with all the information they need:

You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance. 201

You could say his research has isolated the cognitive conceit that lies at the heart of Plato’s cave: absent information regarding the low-dimensionality of the information they have available, shadows become everything. Like the parking lot, the cave, the chains, the fire, even the possibility of looking from side to side simply do not exist for the captives.

As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little. We often fail to allow for the possibility that evidence that should be critical to our judgment is missing—what we see is all there is. Furthermore, our associative system tends to settle on a coherent pattern of activation and suppresses doubt and ambiguity. 87-88

Could the whole of intentional philosophy amount to varieties of story-telling, ‘theory-narratives’ that are compelling to their authors precisely to the degree they are underdetermined? The problem as Kahneman outlines it is twofold. For one, “[t]he human mind does not deal well with nonevents” (200) simply because unavailable information is information that makes no difference. This is why deception, or any instance of controlling information availability, allows us to manipulate our fellow drunks so easily. For another, “[c]onfidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it,” and “not a reasoned evaluation of the probability that this judgment is correct” (212). So all that time I was reading Heidegger nodding, certain that I was getting close to finding the key, I was simply confirming parochial assumptions. Once I had bought in, coherence was automatic, and the inferences came easy. Heidegger had to be right—the key had to be beneath his lamppost—simply because it all made so much remembered sense ‘upon reflection.’

Could it really be as simple as this? Now given philosophers’ continued insistence on making claims despite their manifest institutional incapacity to decisively arbitrate any of them, neglect is certainly a plausible possibility. But the fact is this is precisely the kind of problem we should expect given that philosophical reflection is an exaptation of pre-existing cognitive capacities.

Why? Because what researchers term ‘error awareness,’ like every other human cognitive capacity, does not come cheap. To be sure, the evolutionary premium on error-detection is high to the extent that adaptive behaviour is impossible otherwise. It is part and parcel of cognition. But philosophical reflection is, once again, an exaptation of pre-existing metacognitive capacities, a form of problem-solving that has no evolutionary precedent. Research has shown that metacognitive error-awareness is often problematic even when applied to problems, such as assessing memory accuracy or behavioural competence in retrospect, that it has likely evolved to solve. [See, Wessel, “Error awareness and the error-related negativity: evaluating the first decade of evidence,” Front Hum Neurosci. 2012; 6: 88. doi: 10.3389/fnhum.2012.00088, for a GNW related review] So if conscious error-awareness is hit or miss regarding adaptive activities, we should expect that, barring some cosmic stroke of evolutionary good fortune, it pretty much eludes philosophical reflection altogether. Is it really surprising that the only erroneous intuitions philosophers seem to detect with any regularity are those belonging to their peers?

We’re used to thinking of deficits in self-awareness in pathological terms, as something pertaining to brain trauma. But the picture emerging from cognitive science is positively filled with instances of non-pathological neglect, metacognitive deficits that exist by virtue of our constitution. The same way researchers can game the heuristic components of vision to generate any number of different visual illusions, experimentalists are learning how to game the heuristic components of cognition to isolate any number of cognitive illusions, ways in which our problem-solving goes awry without the least conscious awareness. In each of these cases, neglect plays a central role in explaining the behaviour of the subjects under scrutiny, the same way clinicians use neglect to explain the behaviour of their impaired patients.

Pathological neglect strikes us as so catastrophically consequential in clinical settings simply because of the behavioural aberrations of those suffering it. Not only does it make a profoundly visible difference, it makes a difference that we can only understand mechanistically. It quite literally knocks individuals from the problem-ecology belonging to socio-cognition into the problem-ecologies belonging to natural cognition. Socio-cognition, as radically heuristic, leans heavily on access to certain environmental information to function properly. Pathological neglect denies us that information.

Non-pathological neglect, on the other hand, completely eludes us because, insofar as we share the same neurophysiology, we share the same ‘neglect structure.’ The neglect suffered is both collective and adaptive. As a result, we only glimpse it here and there, and are more cued to resolve the problems it generates than ponder the deficits in self-awareness responsible. We require elaborate experimental contexts to draw it into sharp focus.

All Blind Brain Theory does is provide a general theoretical framework for these disparate findings, one that can be extended to a great number of traditional philosophical problems—including the holy grail, the naturalization of intentionality. As yet, the possibility of such a framework remains at most an inkling to those at the forefront of the field (something that only speculative fiction authors dare consider!) but it is a growing one. Non-pathological neglect is not only a fact, it is ubiquitous. Conceptualized the proper way, it provides a very parsimonious means of dispensing with a great number of ancient and new conundrums…

At some point, I think all these mad ramblings will seem painfully obvious, and the thought of going back to tackling issues of cognition neglecting neglect will seem all but unimaginable. But for the nonce, it remains very difficult to see—it is neglect we’re talking about, after all!—and the various researchers struggling with its implications lie so far apart in terms of expertise and idiom that none can see the larger landscape.

And what is this larger landscape? If you swivel human cognitive capacity across the continuum of human interrogation you find a drastic plunge in the dimensionality, and a corresponding spike in the specialization, of the information we can access for the purposes of theorization as soon as brains are involved. Metacognitive neglect means that things like ‘person’ or ‘rule’ or what have you seem as real as anything else in the world when you ponder them, but in point of fact, we have only our intuitions to go on, the most meagre deliverances lacking provenance or criteria. And this is precisely what we should expect given the rank inability of the human brain to cognize itself or others in the high-dimensional manner it cognizes its environments.

This is the picture that traditional, intentional philosophy, if it is to maintain any shred of cognitive legitimacy moving forward, must somehow accommodate. Since I see traditional philosophy as largely an unwitting artifact of this landscape, I think such an accommodation will result in dissolution, the realization that philosophy has largely been a painting class for the blind. Some useful works have been produced here and there to be sure, but not for any reason the artists responsible suppose. So I would like to leave you with a suggestive parallel, a way to compare the philosopher with the sufferer of Anton’s Syndrome, the notorious form of anosognosia that leaves blind patients completely convinced they can see. So consider:

First, the patient is completely blind secondary to cortical damage in the occipital regions of the brain. Second, these lesions are bilateral. Third, the patient is not only unaware of her blindness; she rejects any objective evidence of her blindness. Fourth, the patient offers plausible, but at times confabulatory responses to explain away any possible evidence of her failure to see (e.g., “The room is dark,” or “I don’t have my glasses, therefore how can I see?”). Fifth, the patient has an apparent lack of concern (or anosodiaphoria) over her neurological condition. Prigatano and Wolf, “Anton’s Syndrome and Unawareness of Partial or Complete Blindness,” The Study of Anosognosia, 456.

And compare to:

First, the philosopher is metacognitively blind secondary to various developmental and structural constraints. Second, the philosopher is not aware of his metacognitive blindness, and is prone to reject objective evidence of it. Third, the philosopher offers plausible, but at times confabulatory responses to explain away evidence of his metacognitive incapacity. And fourth, the philosopher often exhibits an apparent lack of concern for his less than ideal neurological constitution.

An Empty Post on Empty Ideas

by rsbakker


Discontinuity Thesis: A ‘Birds of a Feather’ Argument Against Intentionalism

by rsbakker

A hallmark of intentional phenomena is what might be called ‘discontinuity,’ the idea that the intentional somehow stands outside the contingent natural order, that it possesses some as-yet-occult ‘orthogonal efficacy.’ Here’s how some prominent intentionalists characterize it:

“Scholars who study intentional phenomena generally tend to consider them as processes and relationships that can be characterized irrespective of any physical objects, material changes, or motive forces. But this is exactly what poses a fundamental problem for the natural sciences. Scientific explanation requires that in order to have causal consequences, something must be susceptible of being involved in material and energetic interactions with other physical objects and forces.” Terrence Deacon, Incomplete Nature, 28

“Exactly how are consciousness and subjective experience related to brain and body? It is one thing to be able to establish correlations between consciousness and brain activity; it is another thing to have an account that explains exactly how certain biological processes generate and realize consciousness and subjectivity. At the present time, we not only lack such an account, but are also unsure about the form it would need to have in order to bridge the conceptual and epistemological gap between life and mind as objects of scientific investigation and life and mind as we subjectively experience them.” Evan Thompson, Mind in Life, x

“Norms (in the sense of normative statuses) are not objects in the causal order. Natural science, eschewing categories of social practice, will never run across commitments in its cataloguing of the furniture of the world; they are not by themselves causally efficacious—no more than strikes or outs are in baseball. Nonetheless, according to the account presented here, there are norms, and their existence is neither supernatural nor mysterious. Normative statuses are domesticated by being understood in terms of normative attitudes, which are in the causal order.” Robert Brandom, Making It Explicit, 626

What I would like to do is run through a number of different discontinuities you find in various intentional phenomena as a means of raising the question: What are the chances? What’s worth noting is how continuous these alleged phenomena are with each other, not simply in terms of their low-dimensionality and natural discontinuity, but in terms of mutual conceptual dependence as well. I make a distinction between ‘ontological’ and ‘functional’ exemptions from the natural, even though I regard them as differences of degree, because the distinction maps onto stark differences in the kinds of commitments you find among various parties of believers. And ‘low-dimensionality’ simply refers to the scarcity of the information intentional phenomena give us to work with—whatever finds its way into the ‘philosopher’s lab,’ basically.

So with regard to all of the following, my question is simply, are these not birds of a feather? If not, then what distinguishes them? Why are low-dimensionality and supernaturalism fatal only for some and not others?


Soul – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Soul, you will find it consistently related to Ghost, Choice, Subjectivity, Value, Content, God, Agency, Mind, Purpose, Responsibility, and Good/Evil.

Game – Anthropic. Low-dimensional. Functionally exempt from natural continuity (insofar as ‘rule governed’). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Game is consistently related to Correctness, Rules/Norms, Value, Agency, Purpose, Practice, and Reason.

Aboutness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Aboutness is consistently related to Correctness, Rules/Norms, Inference, Content, Reason, Subjectivity, Mind, Truth, and Representation.

Correctness – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Correctness is consistently related to Game, Aboutness, Rules/Norms, Inference, Content, Reason, Agency, Mind, Purpose, Truth, Representation, Responsibility, and Good/Evil.

Ghost – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts of the Ghost, you will find it consistently related to God, Soul, Mind, Agency, Choice, Subjectivity, Value, and Good/Evil.

Rules/Norms – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Rules and Norms are consistently related to Game, Aboutness, Correctness, Inference, Content, Reason, Agency, Mind, Truth, and Representation.

Choice – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Choice is typically discussed in relation to God, Agency, Responsibility, and Good/Evil.

Inference – Anthropic. Low-dimensional. Functionally exempt (‘irreducible,’ ‘autonomous’) from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Inference is consistently related to Game, Aboutness, Correctness, Rules/Norms, Value, Content, Reason, Mind, A priori, Truth, and Representation.

Subjectivity – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Subjectivity is typically discussed in relation to Soul, Rules/Norms, Choice, Phenomenality, Value, Agency, Reason, Mind, Purpose, Representation, and Responsibility.

Phenomenality – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. Phenomenality is typically discussed in relation to Subjectivity, Content, Mind, and Representation.

Value – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Value discussed in concert with Correctness, Rules/Norms, Subjectivity, Agency, Practice, Reason, Mind, Purpose, and Responsibility.

Content – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Content discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Phenomenality, Reason, Mind, A priori, Truth, and Representation.

Agency – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Agency is discussed in concert with Games, Correctness, Rules/Norms, Choice, Inference, Subjectivity, Value, Practice, Reason, Mind, Purpose, Representation, and Responsibility.

God – Anthropic. Low-dimensional. Ontologically exempt from natural continuity (as the condition of everything natural!). Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds God discussed in relation to Soul, Correctness, Ghosts, Rules/Norms, Choice, Value, Agency, Purpose, Truth, Responsibility, and Good/Evil.

Practices – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Practices are discussed in relation to Games, Correctness, Rules/Norms, Value, Agency, Reason, Purpose, Truth, and Responsibility.

Reason – Anthropic. Low-dimensional. Functionally exempt from natural continuity insofar as ‘rule governed.’ Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Reason discussed in concert with Games, Correctness, Rules/Norms, Inference, Value, Content, Agency, Practices, Mind, Purpose, A priori, Truth, Representation, and Responsibility.

Mind – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Mind considered in relation to Souls, Subjectivity, Value, Content, Agency, Reason, Purpose, and Representation.

Purpose – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Purpose discussed along with Game, Correctness, Value, God, Reason, and Representation.

A priori – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One often finds the A priori discussed in relation to Correctness, Rules/Norms, Inference, Subjectivity, Content, Reason, Truth, and Representation.

Truth – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Truth discussed in concert with Games, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Value, Content, Practices, Mind, A priori, and Representation.

Representation – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Representation discussed in relation with Aboutness, Correctness, Rules/Norms, Inference, Subjectivity, Phenomenality, Content, Reason, Mind, A priori, and Truth.

Responsibility – Anthropic. Low-dimensional. Functionally exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. In various accounts, Responsibility is consistently related to Game, Correctness, Aboutness, Rules/Norms, Inference, Subjectivity, Reason, Agency, Mind, Purpose, Truth, Representation, and Good/Evil.

Good/Evil – Anthropic. Low-dimensional. Ontologically exempt from natural continuity. Inscrutable in terms of natural continuity. Source of perennial controversy. Possesses inexplicable efficacy. One generally finds Good/Evil consistently related to Souls, Correctness, Subjectivity, Value, Reason, Agency, God, Purpose, Truth, and Responsibility.


The big question here, from a naturalistic standpoint, is whether all of these characteristics are homologous or merely analogous. Are the similarities ontogenetic, the expression of some shared ‘deep structure,’ or merely coincidental? This strikes me as one of the most significant questions that never gets asked in cognitive science. Why? Because everybody has their own way of divvying up the intentional pie (including interpretivists like Dennett). Some of these items are good, and some of them are bad, depending on whom you talk to. If these phenomena were merely analogous, then this division need not be problematic—we’re just talking fish and whales. But if these phenomena are homologous—if we’re talking whales and whales—then the kinds of discursive barricades various theorists erect to shelter their ‘good’ intentional phenomena from ‘bad’ intentional phenomena need to be powerfully motivated.

Pointing out the apparent functionality of certain phenomena versus others simply will not do. The fact that these phenomena discharge some kind of function somehow seems pretty clear. It seems to be the case that God anchors the solution to any number of social problems—that even Souls discharge some function in certain, specialized problem-ecologies. The same can be said of Truth, Rule/Norm, Agency—every item on this list, in fact.

And this is precisely what one might expect given a purely biomechanical, heuristic interpretation of these terms as well (with the added advantage of being able to explain why our phenomenological inheritance finds itself mired in the kinds of problems it does). None of these need be anything resembling what our phenomenological tradition claims they are in order to explain the kinds of behaviour that accompany them. God doesn’t need to be ‘real’ to explain church-going, any more than Rules/Norms do to explain rule-following. Meanwhile, the growing mountain of cognitive scientific discovery looms large: cognitive functions generally run ulterior to what we can metacognize for report. Time and again, in context after context, empirical research reveals that human cognition is simply not what we think it is. As ‘Dehaene’s Law’ states, “We constantly overestimate our awareness—even when we are aware of glaring gaps in our awareness” (Consciousness and the Brain, 79). Perhaps this is simply what intentionality amounts to: a congenital ‘overestimation of awareness,’ a kind of WYSIATI or ‘what-you-see-is-all-there-is’ illusion. Perhaps anthropic, low-dimensional, functionally exempt from natural continuity, inscrutable in terms of natural continuity, source of perennial controversy, and possesses inexplicable efficacy are all expressions of various kinds of neglect. Perhaps it isn’t just a coincidence that we are entirely blind to our neuromechanical embodiment and that we suffer this compelling sense that we are more than merely neuromechanical.

How could we cognize the astronomical causal complexities of cognition? What evolutionary purpose would it serve?

What impact does our systematic neglect of those capacities have on philosophical reflection?

Does anyone really think the answer is going to be ‘minimal to nonexistent’?

Father’s Day, 2025

by rsbakker

Sunday, June 15th, 2025, New York Bureau. Wal-Mart stock has enjoyed a surge in both algorithmic and human markets today, on greater-than-expected profits attributed to the company’s new visual fixation tracking systems. This novel system not only allows the retail giant to track the eye-movements of every customer in the store (a data collection system that promises immense downstream rewards in its own right), it also provides the informational underpinnings for a real-time version of what the industry has come to call the ‘Ping Market.’

You know those little twinkling lights on the shelves that make your child crow with delight? Well, each flicker is what is called a ‘ping,’ a peripheral stimulus designed to alert the attentional systems of the human brain, and so trigger what is called ‘gaze fixation,’ which in turn is a powerful predictor of consumer choice. Historically (if three years count as such!), retailers have auctioned pings in bulk on the basis of statistical data garnered over time. Wal-Mart’s new system, however, allows its suppliers to quite literally bid for customers as they walk down the aisle, using online data tracking customers’ saccades and fixations to calculate the probable effectiveness of individual pings (and, unfortunately, those endlessly annoying ‘shelfies’). Customer tracking is anonymized by law, of course, but the company reports that its most recent ’15 for 15’ promotion has managed to boost its privacy opt-out level to just over 83%. An enviable number, to be sure.

“‘Go fish’ has long been a term of frustration in retail,” says the project director, Dr. Howard Singh. “With [their new ping system] the metaphor has become literal, and Wal-Mart, always the ocean of the industry, has transformed itself into an angler’s delight!”

For those who pay the fee, of course.

Enactivist Re-enactment

by rsbakker

Adam Robberts has tidied up and reposted a debate he and I have been having on enactivism and Blind Brain Theory over at his Knowledge Ecology site. Dissenters are welcome to weigh in over here, there, or at any one of the several sites that have reblogged our exchange. All the lapses in decorum and diction are entirely my own.

By dint of sheer coincidence, Eric Schwitzgebel has included the Bonjour post immediately below in his roundup of the Philosopher’s Carnival – Zombie Mary should be pleased! God and Jesus, not so much.

As for the novel, still no word on pub dates, but I did spend the past two days with my friend Madness going through the manuscript, so if you’re curious as to his impressions and appraisals, by all means visit the SECOND APOCALYPSE FORUM and plague him with questions…


Zombie Mary versus God and Jesus: Against Lawrence Bonjour’s “Against Materialism”

by rsbakker


I should begin with a tip of my hat to Dirk Fellman, since this post is a direct consequence of the damn interesting links he sends. In this case, it was a link to The Waning of Materialism, a collection of articles inveighing against materialism under a number of different banners. For me, ‘material’ is simply a pole on a continuum, that which provides the most data. It’s whatever scientists seem to be able to endlessly mine for information, and to thus endlessly reconfigure into boggling demonstrations of power. Insofar as this is what scientists indeed do, mine and enable, I’m only interested in materialism in terms falling out of Blind Brain Theory, which is to say, in terms of dimensionality. Science is the premier data-mining institution on the planet. The question of what ‘matter’ might be apart from all the differences it makes does not strike me as a promising one. Nor does the question of whether matter monopolizes existence. BBT lets me sidestep these questions, since it sees the interminable controversies spinning out of the material and the ideal as a paradigmatic example of a heuristic run amok, and so elects to talk of high and low dimensionality instead.

For information to be (nonsemantic) information, some difference must be made: even the dualist is pinned to the information continuum in this sense. Since information generally enables cognition, the high-dimensional view generally trumps the low-dimensional, and it seems fair to say that BBT, in this respect, counts as a kind of materialism, albeit a peculiar one. I’ve already sketched what it makes of the Knowledge Argument in THE Something About Mary. What follows is an attempt to show how it fares against Lawrence Bonjour’s retooling of Frank Jackson’s famous thought experiment in his “Against Materialism,” the piece that the editors of Waning take as “an overview of the entire volume.”

Bonjour is a property dualist. He holds that mental properties form a special class of nonphysical or nonmaterial properties distinct from those studied in the natural sciences more generally. He makes no secret of how weak he thinks materialism is—and indeed his whole paper is permeated with the sense that he can scarce believe he needs to make his argument at all. “I have always found this situation extremely puzzling,” he writes. “As far as I can see, materialism is a view that has no very compelling argument in its favor and that is confronted with very powerful objections to which nothing even approaching an adequate response has been offered” (5). Since the case is all but closed for Bonjour, he proposes to simply review the ‘very powerful objections’—as a matter of historical record, perhaps—to show the gentle reader why they need not worry about materialist bogeymen. The problem, he claims, is that materialism “offers no account at all of consciousness and seems incapable in principle of doing so” (5).

In a sense, I actually agree with Bonjour on this point: traditional materialism cannot explain consciousness as it appears to reflection. Every attempt it makes leaves this ‘consciousness-as-metacognized’ untouched, and thus remains vulnerable to those, like Bonjour, who find themselves compelled by what they think they so plainly intuit. But as the above should make clear, my own position–Blind Brain Theory–is no ordinary materialism. Where others work their way toward consciousness-as-metacognized only to find themselves stranded on the stoop, BBT actually possesses the resources to kick down the door. The key to untangling all the knots of phenomenality and intentionality, I hope to show, lies in understanding the kinds of illusions metacognitive neglect has foisted on all our historical attempts to understand them thus far, illusions that Bonjour has been kind enough to illustrate in rather dramatic fashion.

In the argument I would like to focus on, Bonjour proposes an extension of Frank Jackson’s original Knowledge Argument to the issue of the intentionality of consciousness, and to the question of internal content more specifically. As he writes:

The issue I want to raise here is whether a materialist view can account for the sort of conscious intentional content just characterized. Can it account for conscious thoughts being about various things in a way that can be grasped or understood by the person in question? In a way the answer has already been given. Since materialist views really take no account at all of consciousness, they obviously offer no account of this particular aspect of it. But investigating this narrower aspect of the issue can still help to deepen the basic objection to materialism. 17

To illustrate this incapacity, Bonjour bids us imagine a different Mary, one possessing complete physical knowledge of Bonjour as he entertains various thoughts. Given complete physical information, can she know “what I am consciously thinking about at a particular moment?” (17).

It seems clear that knowing all the physical facts regarding Bonjour’s brain is insufficient, given the relationality of Bonjour’s thoughts, the fact they are about things in the world. Bonjour continues:

A functionalist would no doubt say that it is no surprise that Mary could not do this. In order to know the complete causal or functional role of my internal states, Mary also needs to know about their external causal relations to various things. And it might be suggested that, if Mary knows all of the external causal relations in which my various states stand, she will in fact be able to figure out what I am consciously thinking about at a particular time. No doubt the details that pick out any particular object of thought will be very complicated, but there is, it might be claimed, no reason to doubt in principle she could do this. 18

Now Bonjour thinks that this is “another piece of materialist doctrine that again has the status very similar to that of a claim of theology” (18). One might respond that this is essentially the same assumption that informs skepticism regarding paranormal phenomena—that given enough information, some natural explanation can be found for apparently supernatural phenomena—but that would be beside the point since Bonjour thinks the materialist is in serious trouble even if we grant this particular conceit.

For, as already emphasized, it is an undeniable fact about conscious intentional content that I am able for the most part to consciously understand or be aware of what I am thinking about ‘from the inside.’ Clearly I do not in general do this on the basis of external causal knowledge: I do not have such knowledge and would not know what to do about it if I did. All that I normally have any sort of direct access to, if materialism is true, is my own internal physical and physiological states, and thus my conscious understanding of what I am thinking about at a particular moment must be somehow a feature or result of those internal states alone. 18

Bonjour is simply pointing out that even though he himself lacks access to any such information regarding his brain function and its causal environmental history, he nevertheless knows what he’s thinking about. Any metacognitive understanding he has of his thoughts, therefore, is proximally grounded, the product of his internal states. He continues:

Causal relations to external things may help to produce the relevant features of the internal states in question, but there is no apparent way in which such external relations can somehow be partly constitutive of the fact that my conscious thoughts are about various things in a way of which I can be immediately aware. But if these internal states are sufficient to fix the object of my thought in a way that is accessible to my understanding or awareness, then knowing about those internal states should be sufficient for Mary as well, without any knowledge of the external causal relations. And yet, as we have already seen, it is obvious that this is not the case. 18

If he can know what he’s thinking simply given his internal states, then why is it the case that Mary cannot? The argument grants her knowledge of those states: so why is it that she needs to know so much more to be able to determine what he’s thinking?

Thus we have the basis for an argument parallel to Jackson’s original argument against qualia: Mary knows all the relevant physical facts; she is not able on the basis of this knowledge to know what I am consciously thinking about at a particular moment; but what I am thinking about at that moment is as surely a fact about the world as anything else; therefore complete physical knowledge is not complete knowledge, and so materialism is false. 18-19.

This is about as clear an example of the way metacognitive neglect plays havoc with philosophical reflection as any I’ve encountered. What Bonjour is giving us here is a tale of two perspectives, one external and omniscient, another internal and sufficient. Since material omniscience isn’t sufficient, we can infer that there’s more to nature than meets the material eye, that some kind of supernaturalism is true.

Over the years I’ve come to the conclusion that all Mary type arguments boil down to versions of what might be called the ‘God-and-Jesus’ strategy. The marketing genius of Jesus, as Nietzsche so wryly observed, is the way his mortality transforms a fact-omniscient God into a God who also knows what it’s like. To be human is to be ignorant, to neglect everything save what enters this rare sliver we call life. God can only truly know humanity by becoming human as a result. He needs to exist within our ‘neglect structure,’ you could say.

So in Mary-type arguments Mary plays the third-person God, and some first-person experience plays Jesus. The upshot is always the same: We need Jesus because some knowledge necessarily lies outside God’s omniscience: knowledge of what it is like being blinkered and benighted—merely ‘human.’

The ease with which this argumentative form slips between theological and (allegedly) nontheological domains is worth keeping in mind, here. But the real takeaway is found in how God and Jesus highlight the pivotal role neglect plays in all its incarnations. With Jesus, God has to systematically divest Himself of cognitive capacities, consign more and more to neglect, the ‘unknown unknown,’ in order to know ‘what it’s like’ to be human. Jesus thus poses a limit, a kind of neglect structure, on the omniscience of God, and in this way becomes the skyhook exception to the infinite that links humanity and God via shared experience. (Thus the ‘horrible secret’ of ‘God on the cross,’ as Nietzsche calls it, the fact that “[all] of us are nailed to the cross, consequently we are divine” (Anti-Christ, 51)).

Now consider Bonjour’s version of this argument: What distinguishes his facts from natural or physical facts is that he need only access his internal states to know the content of his thought, whereas Mary needs to access both those internal states and their external causal relations. Where neglecting external causal relations precludes Mary knowing the content of Bonjour’s thoughts, it has no bearing whatsoever on his knowledge of his thoughts. The fact that he knows he’s thinking this or that is an environment-independent fact. This disqualifies Mary’s claim to omniscience because, for her, all such facts can only be environmentally cognized. That Mary requires added information regarding external systems to determine what he’s thinking about means that there’s something, some environment-independent fact, that not even God can know, and that Bonjour and everyone living possesses.

So to repeat his question: “Can [materialism] account for conscious thoughts being about various things in a way that can be grasped or understood by the person in question?” Not even if it were God, he is saying. You have to have Jesus.

Bonjour not only openly acknowledges that metacognition systematically neglects external causal information, he makes it a centrepiece of his argument. Neglect of external causal relations is what sets his facts apart from Mary’s natural facts, what makes him Jesus, in effect. God can know our thoughts, but He cannot know our thoughts the way Jesus knows our thoughts. He cannot know what it’s like to be me, or Bonjour, as the case might be.

Of course, any such argument should give us pause. As keen as Bonjour is to leverage the distinction neglect affords him—the way it allows him to distinguish between modes of knowing, and thereby argue a distinction in modes of being (material versus nonmaterial)—calling attention to the neglect, as opposed to the distinction, raises the possibility that he’s simply spinning ignorance into an ontological virtue.

In strict causal terms, on a ‘zombie Mary’ account, say, the argument simply unravels. Here the question is one of one biomechanism attempting to systematically engage a second biomechanism that is systematically engaging some other kind of system, perhaps itself. What we want to know is how biomechanism 1 might come to occupy a relation with biomechanism 2 such that the behavioural possibilities of 1 are the same behavioural possibilities possessed by Mary coming to know what Bonjour is thinking about. So biomechanism 1, ‘zombie-Mary’ would be able to do all the things, make all the sounds that Mary could do knowing what Bonjour thought, only with biomechanism 2, or ‘zombie-Bonjour.’ And the same goes for zombie-Bonjour: it would be able to occupy a relation with itself that allowed it to do all the things Bonjour could do on the basis of knowing what he’s thinking.

One only need suppose this is possible (even though no one doubts that our brains possess very real, very physical, cognitive and metacognitive systems), since the point of this zombie analogue is to simply draw out a striking feature of the physical picture of Bonjour’s argument, the very picture he agrees with only up to a point.

Physically speaking, zombie Mary is comporting itself to a functionally independent, environmentally external system: cognizing zombie Bonjour’s brain processes. Zombie Bonjour, on the other hand, is comporting itself to a functionally entangled, environmentally internal system: metacognizing its own brain processes. It’s hard to imagine any two more radically different ‘biocognitive perspectives,’ the one solving a functionally distinct, distal system using all the ancient machinery of environmental cognition, the other solving a functionally entangled, proximal system using far more youthful metacognitive machinery, the former possessing high-dimensional, variable access to the processes involved, the latter possessing low-dimensional, fixed access to those self-same processes.

On zombie Mary, then, I think it’s pretty plain that no matter how one finesses zombie Mary’s physical comportment to zombie Bonjour, the radically different nature of their respective cognitive and metacognitive relationships means there is simply no way zombie Mary can possess the same comportment to zombie Bonjour that zombie Bonjour possesses to itself short of becoming zombie Bonjour.

Even on a zombie account, then, we find ourselves confronted with a version of the God and Jesus dilemma!

But now the upshot, which seems almost miraculous in theological and philosophical contexts (providing for the possibility of a special, human reality apart from nature or God), has become rather mundane. On zombie Mary, the difference is merely a matter of different systems possessing different resources and modes of access. The idea that zombie Mary’s mode is the only mode, that there could be an ‘omniscient’ zombie Mary, simply makes no sense, insofar as she’s simply another biomechanism stranded in its environment the same as any other zombie, capable of occupying only a finite number of comportments. The very notion that she could be ‘fact omniscient,’ in other words, attributes something supernatural to her, a hint of God, if you will. The notion that zombie Bonjour’s quite different biocognitive capacity evidences something supernatural, a little bit of Jesus, likewise has no place in this scenario. It’s natural all the way down.

Now of course Bonjour would balk at the very notion of zombie Mary and adduce any number of arguments against the very idea, I’m sure. Hints of God and bits of Jesus have a very real role to play in his metaphysical view, albeit dressed in a more respectable nomenclature. But what he can’t do is run away from all the questions that it raises. So now when he writes, for instance, “if these internal states are sufficient to fix the object of my thought in a way that is accessible to my understanding or awareness, then knowing about those internal states should be sufficient for Mary as well, without any knowledge of the external causal relations” (18), we can ask him whether he’s equivocating cognition with metacognition, conflating the drastically different challenges of solving other people and solving oneself. Bonjour agrees that cognition and biology are intimately related somehow, that aphasiology is a very real branch of medical science. He accepts that we’re machines in some sense; he just wants, like so very many others, to think that we are something more as well. Nevertheless he agrees that the meat has a say. Likewise, he has to admit to the drastic biocognitive difference between Mary cognizing his thought and him metacognizing his own thought. So he has no way of avoiding the question of whether his argument is simply mixing cognitive apples with metacognitive oranges, why we should assume that his ability to know what he’s thinking without knowing external causal relations is indicative of anything other than the fact that very different systems are involved. Surely, given the rather obvious fact of that difference, we should be hesitant to accept supernatural conclusions that it could very well obviate.

Zombie Bonjour, for instance, need not have any secondary comportment to its environmental comportments to effectively intervene in those environments. This is a good thing, given the cognitive challenges the astronomical biomechanical complexity of zombie Bonjour poses any cognitive system, let alone one packed into the same skull (imagine a primatologist sewn into a sack with a chimpanzee troop). Systematic metacognitive neglect is a given when one considers the problem in biomechanical terms.

Is it merely a coincidence that the same goes for Bonjour proper? He too is astronomically complex. He also doesn’t need to metacognize his thoughts to think them. And he too suffers from massive metacognitive neglect. The high-dimensional picture of the brain that’s now emerging from the cognitive sciences is a picture of what we are almost entirely blind to. Whatever metacognitive capacity we possess is obviously both low-dimensional and specialized, consisting of heuristic systems adapted to troubleshoot specific first-order problem-ecologies. Since Bonjour is already physically comported to his environment in various cognitive and noncognitive ways, any capacity to metacognize this relation, to ‘know what he’s thinking about,’ say, need only build on this pre-existing comportment. Like so many other ‘quick and dirty’ cognitive systems, Bonjour’s capacity to metacognize has evolved to make do without, to solve problems using as little potentially relevant information as possible. This is arguably why he can know what he’s ‘thinking,’ ‘experiencing,’ ‘desiring,’ and so on without knowing anything about the astronomically complicated mechanical relations that make it possible. Metacognition is a ‘need-to-know’ capacity, a system or set of systems accessing only the information required to tackle certain problem-ecologies.

The problem, however, is that metacognition is not itself among those things that metacognition ‘needs to know.’ Metacognition accesses low-dimensional, specialized information blind to the fact that it is such. This is no problem so long as we restrict its application to adaptive problem-ecologies. The capacity to ‘report our thoughts’ doubtlessly solved any number of problems for our ancestors. As soon as the philosopher repurposes this capacity to solve, say, the ‘problem of materialism,’ however, we should expect things to go awry—and, here’s the thing, in exactly the ways they do. Why? Because philosophical reflection requires using information adapted to heuristically solve ‘What am I thinking?’ problems to solve the considerably more demanding question, ‘What is thinking?’ without any inkling whatsoever of the adequacy of that information. We should expect such attempts to endlessly run aground on controversy, the way they do. Given that the adequacy of our intuitions is the assumptive default (as with what Kahneman and Tversky call ‘availability heuristics,’ for instance), one might expect that philosophers would systematically confuse their darkly glimpsed special-purpose metacognitive access for something whole and general-purpose, for an order of reality somehow beyond the high-dimensional physical reality revealed by natural science—for something supernatural.

Neglect, then, plays a crucial role at three distinct junctures in Bonjour’s argument, at least. Neglect of the physical differences between environmental cognition and metacognition licenses his equivocation of Mary’s access and Bonjour’s own access to the content of his thoughts. Lacking access to any information regarding cognitive activity strands deliberative metacognition (reflection) with what is being cognized, which becomes a kind of ‘availability heuristic.’ Blind to our knowing (the machinery is indisposed, after all), we attribute the distinction to the known. Epistemic blindness generates the cognitive illusion of ontological distinction.

Neglect of the physical implementation of Mary’s environmental cognition licenses the plausibility of Mary’s omniscience, and thus renders her inability to cognize Bonjour’s thought fraught with ontological significance. The ‘view from nowhere,’ as it is sometimes called, is as clear-cut an example of metacognitive neglect as you can hope to find. Absence admits no distinctions, so knowledge seems (descending the ladder of ontological commitment) disembodied, transcendent, emergent, or virtual; ‘nowhere’ becomes indistinguishable from ‘everywhere,’ and the in principle possibility of omniscience simply seems to follow. There’s no limit to the number of ghosts you can pack into a room—or skull.

Neglect of the low-dimensional, domain-specific nature of metacognition generates the illusion that “what I am thinking about at that moment is as surely a fact about the world as anything else” (19), rather than what it almost certainly is: a special-purpose posit adapted to solving a specific problem-ecology. As a matter of empirical fact, Bonjour’s cognitive relationship to his own cognitive activity is radically different than his cognitive relationship to his environment. To think that this radical difference is irrelevant to the radical differences between first-person and third-person knowledge (not to mention the knots they have us twisted in!) is wildly implausible, to say the least. Far from a fact like any other, ‘What he is thinking’—the information Bonjour has available to report—is an incredibly low-dimensional communicative shorthand, one specifically tailored to solve the kinds of problems our preliterate—prephilosophical—ancestors faced. Blind to the heuristic nature of metacognition, Bonjour confuses special-purpose information with all-purpose information.

Why is Bonjour so convinced? For the same reason anyone suffering Anton’s Syndrome is convinced they can see: he is blind to his blindness, and so thinks he sees everything he needs to see.

