Three Pound Brain

No bells, just whistling in the dark…

Malabou, Continentalism, and New Age Philosophy

by rsbakker

Perhaps it’s an ex-smoker thing, the fact that I was a continentalist myself for so many years. Either way, I generally find continental philosophical forays into scientific environs little more than exercises in conceptual vanity (see “Reactionary Atheism: Hagglund, Derrida, and Nooconservatism,” “Zizek, Hollywood, and the Disenchantment of Continental Philosophy,” or “Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis”). This is particularly true of Catherine Malabou, who, as far as I can tell, is primarily concerned with cherry-picking those findings that metaphorically resonate with certain canonical continental philosophical themes. For me, her accounts merely demonstrate the deepening conceptual poverty of the continental tradition, a poverty dressed up in increasingly hollow declarations of priority. This is true of “One Life Only: Biological Resistance, Political Resistance,” but with a crucial twist.

In this piece, she takes continentalism (or ‘philosophy,’ as she humbly terms it) as her target, charging it with a pervasive conceptual prejudice. She wants to show how recent developments in epigenetics and cloning reveal what she terms the “antibiological bias of philosophy.” This bias is old news, of course (especially in these quarters), but Malabou’s acknowledgement is heartening nonetheless, at least to those, such as myself, who think the continental penchant for conceptual experimentation is precisely what contemporary cognitive science requires.

“Contemporary philosophy,” she claims, “bears the marks of a primacy of symbolic life over biological life that has not been criticized, nor deconstructed.” Her predicate is certainly true—continentalism is wholly invested in the theoretical primacy of intentionality—but her subsequent modifier simply exemplifies the way we humans are generally incapable of hearing criticisms from outside our own ingroup. After all, it’s the quasi-religious insistence on the priority of the intentional, the idea that armchair speculation on the nature of the intentional trumps empirical findings in this or that way, that has rendered continentalism a laughing-stock in the sciences.

But outgroup criticisms are rarely heard. Whatever ‘othering the other’ consists in, it clearly involves not only their deracination, but their derationalization, the denial of any real critical insight. This is arguably what makes the standard continental shibboleths of ‘scientism,’ ‘positivism,’ and the like so rhetorically effective. By identifying an interlocutor as an outgroup competitor, you ensure your confederates will be incapable of engaging him or her rationally. Continentalists generally hear ideology instead of cogent criticism. The only reason Malabou can claim that the ‘primacy of the symbolic over the biological’ has been ‘neither criticized nor deconstructed’ is simply that so very few within her ingroup have been able to hear the outgroup chorus, as thunderous as it has been.

But Malabou is a party member, and to her credit, she has done anything but avert her eyes from the scientifically mediated revolution sweeping the ground from beneath all our feet. One cannot dwell in foreign climes without suffering some kind of transformation of perspective. And at long last she has found her way to the crucial question, the one which threatens to overthrow her own discursive institution, the problem of what she terms the “unquestioned splitting of the concept of life.”

She takes care, however, to serve up the problem with various appeals to continental vanity—to hide the poison in some candy, you might say.

It must be said, the biologists are of little help with this problem. Not one has deemed it necessary to respond to the philosophers or to efface the assimilation of biology to biologism. It seems inconceivable that they do not know Foucault, that they have never encountered the word biopolitical. Fixated on the two poles of ethics and evolutionism, they do not think through the way in which the science of the living being could—and from this point on should—unsettle the equation between biological determination and political normalization. The ethical shield with which biological discourse is surrounded today does not suffice to define the space of a theoretical disobedience to accusations of complicity among the science of the living being, capitalism, and the technological manipulation of life.

I can remember finding ignorances like these ‘inconceivable,’ thinking that if only scientists would ‘open their eyes’ (read so-and-so) they would ‘see’ (their conceptually derivative nature). But why should any biologist read Foucault, or any other continentalist for that matter? What distinguishes continental claims to the priority of their nebulous domain over the claims of, say, astrology, particularly when the dialectical strategies deployed are identical? Consider what Manly P. Hall has to say in The Story of Astrology:

Materialism in the present century has perverted the application of knowledge from its legitimate ends, thus permitting so noble a science as astronomy to become a purely abstract and comparatively useless instrument which can contribute little more than tables of meaningless figures to a world bankrupt in spiritual, philosophical, and ethical values. The problem as to whether space is a straight or a curved extension may intrigue a small number of highly specialized minds, but the moral relationship between man and space and the place of the human soul in the harmony of the spheres is vastly more important to a world afflicted with every evil that the flesh is heir to. The Story of Astrology: The Belief in the Stars as a Factor in Human Progress, Cosimo, Inc., 2005, 8.

Sound familiar? If you’ve read any amount of continental philosophy it should. One can dress up the relation between the domains differently, but the shape remains the same. Where astronomy is merely ontic or ideological or technical or what have you, astrology ministers to the intentional realities of lived life. The continentalist would cry foul, of course, but the question isn’t so much one of what they actually believe as one of how they appear. Insofar as they place various, chronically underdetermined speculative assertions before the institutional apparatuses of science, they sound like astrologers. Their claims of conceptual priority, not surprisingly, are met with incredulity and ridicule.

The fact that biologists neglect Foucault is no more inconceivable than the fact that astronomers neglect Hall. In science, credibility is earned. Everybody but everybody thinks they’ve won the Magical Belief Lottery. The world abounds with fatuous, theoretical claims. Some claims enable endless dispute (and, for a lucky few, tenure), while others enable things like smartphones, designer babies, and the detonation of thermonuclear weapons. Since there’s no counting the former, the scientific obsession with the latter is all but inevitable. Speculation is cheap. Asserting the primacy of the symbolic over the natural on speculative grounds is precisely the reason why scientists find continentalism so bizarre.

Akin to astrology.

Now historically, at least, continentalists have consistently externalized the problem, blaming their lack of outgroup credibility on speculative scapegoats like the ‘metaphysics of presence,’ ‘identity thinking,’ or some other combination of ideology and ontology. Malabou, to her credit, wants ‘philosophy’ to partially own the problem, to see the parsing of the living into symbolic and biological as something that must itself be argued. She offers her quasi-deconstructive observations on recent developments in epigenetics and cloning as a demonstration of that need, as examples of the ways the new science is blurring the boundaries between the intentional and the natural, the symbolic and the biological, and therefore outrunning philosophical critiques that rely upon their clear distinction.

This blurring is important because Malabou, like almost all continentalists, fears for the future of the political. Reverse engineering biology amounts to placing biology within the purview of engineering, of rendering all nature plastic to human whim, human scruple, human desire. ‘Philosophy’ may come first, but (for reasons continentalists are careful never to clarify) only science seems capable of doing any heavy lifting with their theories. One need only trudge the outskirts of the vast swamp of neuroethics, for instance, to get a sense of the myriad conundrums that await us on the horizon.

And this leads Malabou to her penultimate statement, the one which I sincerely hope ignites soul-searching and debate within continental philosophy, lest the grand old institution become indistinguishable from astrology altogether.

And how might the return of these possibilities offer a power of resistance? The resistance of biology to biopolitics? It would take the development of a new materialism to answer these questions, a new materialism asserting the coincidence of the symbolic and the biological. There is but one life, one life only.

I entirely agree, but I find myself wondering what Malabou actually means by ‘new materialism.’ If she means, for instance, that the symbolic must be reduced to the natural, then she is referring to nothing less than the long-standing holy grail of contemporary cognitive science. Until we can understand the symbolic in terms continuous with our understanding of the natural world, it’s doomed to remain a perpetually underdetermined speculative domain—which is to say, one void of theoretical knowledge.

But as her various references to the paradoxical ‘gap’ between the symbolic and the biological suggest, she takes the irreducibility of the symbolic as axiomatic. The new materialism she’s advocating is one that unifies the symbolic and the biological, while somehow respecting the irreducibility of the symbolic. She wants a kind of ‘type-B materialism,’ one that asserts the ontological continuity of the symbolic and the biological, while acknowledging their epistemic disparity or conceptual distinction. David Chalmers, who coined the term, characterizes the problem faced by such materialisms as follows:

I was attracted to type-B materialism for many years myself, until I came to the conclusion that it simply cannot work. The basic reason for this is simple. Physical theories are ultimately specified in terms of structure and dynamics: they are cast in terms of basic physical structures, and principles specifying how these structures change over time. Structure and dynamics at a low level can combine in all sort of interesting ways to explain the structure and function of high-level systems; but still, structure and function only ever adds up to more structure and function. In most domains, this is quite enough, as we have seen, as structure and function are all that need to be explained. But when it comes to consciousness, something other than structure and function needs to be accounted for. To get there, an explanation needs a further ingredient. “Moving Forward on the Problem of Consciousness.”

Substitute ‘symbolic’ for ‘consciousness’ in this passage, and Malabou’s challenge becomes clear: science, even in the cases of epigenetics and cloning, deals with structure and dynamics—mechanisms. As it stands we lack any consensus commanding way of explaining the symbolic in mechanistic terms. So long as the symbolic remains ‘irreducible,’ or mechanistically inexplicable, assertions of ontological continuity amount to no more than that, bald assertions. Short some plausible account of that epistemic difference in ontologically continuous terms, type-B materialisms amount to little more than wishing upon traditional stars.

It’s here where we can see Malabou’s institutional vanity most clearly. Her readings of epigenetics and cloning focus on the apparently symbolic features of the new biology—on the ways in which organisms resemble texts. “The living being does not simply perform a program,” she writes. “If the structure of the living being is an intersection between a given and a construction, it becomes difficult to establish a strict border between natural necessity and self-invention.”

Now the first, most obvious criticism of her reading is that she is the proverbial woman with the hammer, poring through the science, seeing symbolic nails at every turn. Are epigenetics and cloning intrinsically symbolic? Do they constitute a bona fide example of a science beyond structure and dynamics?

Certainly not. Science can reverse engineer our genetic nature precisely because our genetic nature is a feat of evolutionary engineering. This kind of theoretical cognition is so politically explosive precisely because it is mechanical, as opposed to ‘symbolic.’ Researchers now know how some of these little machines work, and as a result they can manipulate conditions in ways that illuminate the function of other little machines. And the more they learn, the more mechanical interventions they can make, the more plastic (to crib one of Malabou’s favourite terms) human nature becomes. The reason these researchers hold so much of our political future in their hands is precisely because their domain (unlike Malabou’s) is mechanical.

For them, Malabou’s reading of their fields would be obviously metaphoric. Malabou’s assumption that she is seeing the truth of epigenetics and cloning, that they have to be textual in some way rather than lending themselves to certain textual (deconstructive) metaphors, would strike them as comically presumptuous. The blurring that she declares ontological, they would see as epistemic. To them, she’s just another humanities scholar scrounging for symbolic ammunition, for confirmation of her institution’s importance in a time of crisis. Malabou, like Manly P. Hall, can rationalize this dismissal in any number of ways–this goes without saying. Her problem, like Hall’s, is that only her confederates will agree with her. She has no real way of prosecuting her theoretical case across ingroup boundaries, and so no way of recouping any kind of transgroup cognitive legitimacy–no way of reversing the slow drift of ‘philosophy’ to the New Age section of the bookstore.

The fact is Malabou begins by presuming the answer to the very question she claims to be tackling: What is the nature of the symbolic? To acknowledge that continental philosophy is a speculative enterprise is to acknowledge that continental philosophy has solved nothing. The nature of the symbolic, accordingly, remains an eminently open question (not to mention an increasingly empirical one). The ‘irreducibility’ of the symbolic order is no more axiomatic than the existence of God.

If the symbolic were, say, ecological, the product of evolved capacities, then we could safely presume that the symbolic is heuristic, part of some regime for solving problems on the cheap. If this were the case, then Malabou would be doing nothing more than identifying the way different patterns in epigenetics and cloning readily cue a specialized form of symbolic cognition. The fact that symbolic cognition is cued does not mean that epigenetics and cloning are ‘intrinsically symbolic,’ only that they readily cue symbolic cognition. Given the vast amounts of information neglected by symbolic cognition, we can presume its parochialism, its dependence on countless ecological invariants, namely, the causal structure of the systems involved. Given that causal information is the very thing symbolic cognition has adapted to neglect, we can presume that its application to nature would prove problematic. This raises the likelihood that Malabou is simply anthropomorphizing epigenetics and cloning in an institutionally gratifying way.

So is the symbolic heuristic? It certainly appears to be. At every turn, cognition makes do with ‘black boxes,’ relying on differentially reliable cues to leverage solutions. We need ways to think outcomes without antecedents, to cognize consequences absent any causal factors, simply because the complexities of our environments (be they natural, social, or recursive) radically outrun our capacity to intuit. The bald fact is that the machinery of things is simply too complicated to cognize on the evolutionary cheap. Luckily, nature requires nothing as extravagant as mechanical knowledge of environmental systems to solve those systems in various, reproductively decisive ways. You don’t need to know the mechanical details of your environments to engineer them. So long as those details remain relatively fixed, you can predict/explain/manipulate them via those correlated systematicities you can access.

We genuinely need things like symbolic cognition, regimes of ecologically specific tools, for the same reason we need scientific enterprises like biology: because the machinery of most everything is either too obscure or too complex. The information we access provides us cues, and since we neglect all information pertaining to what those cues relate us to, we’re convinced that cues are all that is the case. And since causal cognition cannot duplicate the cognitive shorthand of the heuristics involved, they appear to comprise an autonomous order, to be something supernatural, or to use the prophylactic jargon of intentionalism, ‘irreducible.’ And since the complexities of biology render these heuristic systems indispensable to the understanding of biology, they appear to be necessary, to be ‘conditions of possibility’ of any cognition whatsoever. We are natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. Since this cognitive incapacity extends to our second-order attempts to cognize our cognizing, we double down, metacognize this ‘otherwise’ in otherwise terms. Far from any fractionate assembly of specialized heuristic tools, symbolic cognition seems to stand not simply outside, but prior to the natural order.

Thus the insoluble conundrums and interminable disputations of Malabou’s ‘philosophy.’

Heuristics and metacognitive neglect provide a way to conceive symbolic cognition in wholly natural terms. Blind Brain Theory, in other words, is precisely the ‘new materialism’ that Malabou seeks. The problem is that it seems to answer Malabou’s question regarding the political in the negative, to suggest that even the concept of ‘resistance’ belongs to a bygone and benighted age. To understand the coincidence of the symbolic and biological, the intentional and the natural, one must understand the biology of philosophical reflection, and the way we were evolutionarily doomed to think ourselves something quite distinct from what we in fact are (see “Alien Philosophy,” parts one and two). One must turn away from the old ways, the old ideas, and dare to look hard at the prospect of a post-intentional future. The horrific prospect.

Odds are we were wrong, folks. The assumption that science, the great killer of cognitive traditions, will make an exception for us, somehow redeem our traditional understanding of ourselves, is becoming increasingly tendentious. We simply do not have the luxury of taking our cherished, traditional conceits for granted, at least not anymore. The longer continental philosophy pretends to be somehow immune, or even worse, to somehow come first, the more it will come to resemble those traditional discourses that, like astrology, refuse to relinquish their ancient faith in abject speculation.

On Ordeals, Great and Small, and Their Crashing

by rsbakker

I’m always amazed at how alien words feel after taking a break from writing, almost as if they’ve used the time to talk amongst themselves, rehearse their grievances, then set about organizing various work-to-rule actions. Plant shut-downs never fail to unnerve me with the possibility that I’ll never get things back up and running.

But I need to get the plant back up and running, and quickly too, because… my UK publisher has finally come to terms on the fourth book.

I had intended on chronicling what’s been going on behind the scenes these past months, the ups and downs, the false starts, the miscommunications, but now I’m really not sure what purpose would be served outside prolonging the prolonging. The important thing is that The Great Ordeal will be published next year, and The Unholy Consult will be published the year following. The exact dates still need to be worked out between my US and UK publishers–I’ll pass those along as soon as I know them.

Why has the book been split? For the same reason The Prince of Nothing was split into a trilogy many moons ago: not because of greedy publishers or a greedy me, but because the story proved longer in the execution than in the planning, plain and simple.

In the meantime, “Crash Space” has just come out in the esteemed Midwest Studies in Philosophy, part of an entire issue dedicated to the relationship between philosophy and science fiction. I’ve yet to receive my gratis issue, but I already know from reading the contributions by Eric Schwitzgebel, Pete Mandik, and Helen de Cruz that it is well worth perusing. If you don’t happen to have a visit to the library in your near future, you can find the archived draft version of “Crash Space” here. Let me know what you think!

Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem

by rsbakker

Along with Taylor Webb, Michael Graziano has published an updated version of what used to be his Attention Schema Theory of Consciousness, but is now called the Attention Schema Theory of Subjective Awareness. For me, it epitomizes the kinds of theoretical difficulties neuroscientists face in their attempts to define their explananda, why, as Gary Marcus and Jeremy Freeman note in their Preface to The Future of the Brain, “[a]t present, neuroscience is a collection of facts, still awaiting an overarching theory” (xi).

On Blind Brain Theory, the ‘neuroscientific explananda problem’ is at least twofold. For one, the behavioural nature of cognitive functions raises a panoply of interpretative issues. Ask any sociologist: finding consensus commanding descriptions of human behaviour is far more difficult than finding consensus commanding descriptions of, say, organ behaviour. For another, the low-dimensional nature of conscious experience raises a myriad of interpretative conundrums in addition to the problems of interpretative underdetermination facing behaviour. Ask any psychologist: finding consensus commanding descriptions of conscious phenomena has hitherto proven impossible. As William Uttal notes in The New Phrenology: “There is probably nothing that divides psychologists of all stripes more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (90). At least with behaviour, publicity allows us to anchor our theoretical interpretations in revisable data; experience, however, famously affords us no such luxury. So where the problem of behavioural underdetermination seems potentially soluble given enough elbow grease (one can imagine continued research honing canonical categorizations of behavioural functions as more and more information is accumulated), the problem of experiential underdetermination out and out baffles. We scarce know where to begin. Some see conscious experience as a natural phenomenon possessing properties that do not square with our present scientific understanding of nature. Others, like myself, see conscious experience as a natural phenomenon that only seems to possess such properties. Michael Graziano belongs to this camp also.

The great virtue of belonging to this deflationary pole of the experiential explananda debate is that it spares you the task of explaining inexplicable entities, or the indignity of finding rhetorical ways to transform manifest theoretical vices (like analytic opacity) into virtues (like ‘irreducibility’). In other words, it lets you drastically simplify the explanatory landscape. Despite this, Graziano’s latest presentation of his theory of consciousness (coauthored with Taylor Webb), “The attention schema theory: a mechanistic account of subjective awareness,” seems to be deeply–perhaps even fatally–mired in the neuroscientific explananda problem.

Very little in Webb and Graziano’s introduction to AST indicates the degree to which the theory has changed since the 2013 publication of Consciousness and The Social Brain. The core insight of Attention Schema Theory is presented in the same terms, the notion that subjective awareness, far from being a property perceived, is actually a neural construct, a tool the human brain uses to understand and manipulate both other brains and itself.  They write:

This view that the problem of subjective experience consists only in explaining why and how the brain concludes that it contains an apparently non-physical property, has been proposed before (Dennett, 1991). The attention schema theory goes beyond this idea in providing a specific functional use for the brain to compute that type of information. The heart of the attention schema theory is that there is an adaptive value for a brain to build the construct of awareness: it serves as a model of attention. 2

They provide the example of visual attention upon an apple, how the brain requires, as a means to conclude it was ‘subjectively aware’ of the apple, information regarding itself and its means of relating to the apple. This ‘means of relating’ happens to be the machinery of attention, resulting in the attention schema, a low-dimensional representation of the high-dimensional complexities comprising things like visual attention upon an apple. And this, Graziano maintains, is what ‘subjective awareness’ ultimately amounts to: “the brain’s internal model of the process of attention” (1).

And this is where the confusion begins, as much for Webb and Graziano as for myself. For one, ‘consciousness’ has vanished from the title of the theory, replaced by the equally overdetermined ‘subjective awareness.’ For another, the bald claims that consciousness is simply a delusion have all but vanished. As recently as last year, Graziano wrote:

How does the brain go beyond processing information to become subjectively aware of information? The answer is: It doesn’t. The brain has arrived at a conclusion that is not correct. When we introspect and seem to find that ghostly thing — awareness, consciousness, the way green looks or pain feels — our cognitive machinery is accessing internal models and those models are providing information that is wrong. The machinery is computing an elaborate story about a magical-seeming property. And there is no way for the brain to determine through introspection that the story is wrong, because introspection always accesses the same incorrect information. “Are We Really Conscious,” The New York Times Sunday Review.

Here there simply is no such thing as subjective awareness: it’s a kind of cognitive illusion foisted on the brain by the low-dimensionality of the attention schema. Now, however, the status of subjective awareness is far less clear. Webb and Graziano provide the same blind brain explanation (down to the metaphors, no less) for the peculiar properties apparently characterizing subjective awareness: since the brain has no use for high-dimensional information, the “model would be more like a cartoon sketch that depicts the most important, and useful aspects of attention, without representing any of the mechanistic details that make attention actually happen” (2). As a result of this opportunistic simplification, it makes sense that a brain:

“would conclude that it possesses a phenomenon with all of the most salient aspects of attention – the ability to take mental possession of an object, focus one’s resources on it, and, ultimately, act on it – but without any of the mechanisms that make this process physically possible. It would conclude that it possesses a magical, non-physical essence, but one which can nevertheless act and exert causal control over behavior, a mysterious conclusion indeed.” 2

This is a passage that would strike any long-time followers of TPB as a canonical expression of Blind Brain Theory, but there are some key distinctions dividing the two pictures, which I’ll turn to in a moment. For the nonce, it’s worth noting that it’s not so much subjective awareness (consciousness) that now stands charged with deception as the kinds of impossible properties attributed to it. Given that subjective awareness is the explicit explanandum, there’s a pretty important ambiguity here between subjective awareness as attention schema and subjective awareness as impossible construct. Even though the latter is clearly a cognitive illusion, the former is real insofar as the attention schema is real.

For its part, Blind Brain Theory is a theory, not of consciousness, but of the appearance of consciousness. It provides a principled way to detect, diagnose and even circumvent the kinds of cognitive illusions the limits of deliberative metacognition inflict upon reflection. It only explains why, given the kind of metacognitive resources our brains actually possess, the problem of consciousness constitutes a ‘crash space,’ a domain where we continually run afoul of the heuristic limitations of our tools. So when I reflect upon my sensorium, for instance, even though I am unencumbered by supernatural characterizations of phenomenology—subjective awareness—something very mysterious remains to be explained; it’s just nowhere near so mysterious as someone like Chalmers, for instance, is inclined to think.

Graziano, on the other hand, thinks he possesses a bona fide theory of consciousness. The attention schema, on his account, is awareness. So when he reflects upon his sensorium, he’s convinced he’s reflecting upon his ‘attention schema,’ that this is the root of what consciousness consists in—somehow.

I say ‘somehow,’ because in no way is it clear why the attention schema, out of all the innumerable schematisms the brain uses to overcome the ‘curse of dimensionality,’ should be the one possessing (the propensity to be duped by?) subjective awareness. In other words, AST basically suffers the same problem all neural identity theories suffer: explaining what makes one set of neural mechanisms ‘aware’ while others remain ‘dark.’ Our brains run afoul of their cognitive limitations all the time, turning on countless heuristic schemas: why is the attention schema alone prone to elicit sensoriums and the like?

Note that he has no way of answering, ‘Because that’s how attention is modelled,’ without begging the question. We want to know what makes modelling attention so special as to result in what, mistaken or not, we seem to be enjoying this very moment now. Even though he bills Attention Schema Theory as a ‘mechanistic account of subjective awareness,’ there’s a real sense in which consciousness, or ‘subjective awareness,’ is left entirely unexplained. Why should a neurobiologically instantiated schema of the mechanisms of attention result in this mad hall of mirrors we are sharing (or not) this very moment?

Graziano and Webb have no more clue than anyone. AST provides a limited way to understand the peculiarities of experience, but it really has no way whatsoever of explaining the fact of experience.

He had no such problem with the earlier versions of AST simply because he could write off consciousness as an illusion entirely, as a ‘squirrel in the head.’ Once he had dispensed with the peculiarities of experience, he could slap his pants and go home. But of course, this stranded him with the absurd position of denying the existence of conscious experience altogether.

Now he acknowledges that consciousness exists, going so far as to suggest that AST is consistent with and extends beyond global workspace and information integration accounts.

“The attention schema theory is consistent with these previous proposals, but also goes beyond them. In the attention schema theory, awareness does not arise just because the brain integrates information or settles into a network state, anymore than the perceptual model of color arises just because information in the visual system becomes integrated or settles into a state. Specific information about color must be constructed by the visual system and integrated with other visual information. Just so, in the case of awareness, the construct of awareness must be computed. Then it can be integrated with other information. Then the brain has sufficient information to conclude and report not only, “thing X is red,” or, “thing X is round,” but also, “I am aware of thing X.” 3

If this is the case, then subjective awareness has to be far more than the mere product of neural fiat, a verbal reporting system uttering the terms, “I am aware of X.” And it also has to be far more than simply paying attention to the model of attention. If AST extends beyond global workspace and information integration accounts, then the phenomenon of consciousness exceeds the explanatory scope of AST. Before, subjective awareness was a metacognitive figment: the judgment, “I am aware of thing X,” exhausted the phenomenology of experiencing X. Now subjective awareness is a matter of integrating the ‘construct of awareness’ (the attention schema) with ‘other information’ to produce the brain’s phenomenological conclusion.

At the very least, the explanatory target of AST needs to be clarified. Just what is the explanandum of the Attention Schema Theory? And more importantly, how does the account amount to anything more than certain correlations between a vague model and the vague phenomena(lity) it purports to explain?

I actually think it’s quite clear that Graziano has conflated what are ultimately two incompatible insights into the nature of consciousness. The one is simply that consciousness and attention are intimately linked, and the other is that metacognition is necessarily heuristic. Given this conflation, he has mistaken the explanatory power of the latter for warrant to reduce subjective awareness to the attention schema. The explanatory power of the latter, of course, is simply the explanatory power of Blind Brain Theory, the way heuristic neglect allows us to understand a wide number of impossible properties typically attributed to intentional phenomena. Unlike the original formulation of AST, Blind Brain Theory has always been consilient with global workspace and information integration accounts simply because heuristic neglect says nothing about what consciousness consists in, only the kinds of straits the limits of the human brain impose upon the human brain’s capacity to cognize its own functions. It says a great deal about why we find ourselves still, after thousands of years of reflection and debate, completely stumped by our own nature. It depends on the integrative function of consciousness to be able to explain the kinds of ‘identity effects’ it uses to diagnose various metacognitive illusions, but beyond this, BBT remains agnostic on the nature of consciousness (even as it makes hash of the consciousness we like to think we have).

But even though BBT is consilient with global workspace and information integration accounts the same as AST, it is not consilient with AST. Unpacking the reasons for this incompatibility makes the nature of the conflation underwriting AST quite clear.

Graziano takes a great number of things for granted in his account, not the least of which is metacognition. Theory is all about taking things for granted, of course, but only the right things. AST, as it turns out, is not only a theory of subjective awareness, it’s also a theory of metacognition. Subjective awareness, on Graziano’s account, is a metacognitive tool. The primary function of the attention schema is to enable executive control of attentional mechanisms. As they write, “[i]n this perspective, awareness is an internal model of attention useful for the control of attention” (5). Consciousness is a metacognitive device, a heuristic the brain uses to direct and allocate attentional (cognitive) resources.

We know that it’s heuristic because, even though Webb and Graziano nowhere reference the research on fast and frugal heuristics, they cover the characteristics essential to them. The attention schema, we are told, provides only the information the brain requires to manage attention and nothing more. In other words, the attention schema possesses what Gerd Gigerenzer and his fellow researchers in the Adaptive Behavior and Cognition (ABC) Research Group call a particular ‘problem ecology,’ one that determines what information gets neglected and what information gets used (see Ecological Rationality). This heuristic neglect in turn explains why, on Webb and Graziano’s account, subjective awareness seems to possess the peculiar properties it does. When we attend to our attention, the neglect of natural (neurobiological) information cues the intuition that something not natural is going on. Heuristic misapplications, as Wimsatt has long argued, lead to systematic errors.
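
For readers unfamiliar with the fast and frugal programme, the flavour of such a heuristic can be sketched in a few lines of code. What follows is a generic illustration of Gigerenzer's 'take-the-best' heuristic, not anything drawn from Webb and Graziano; the cities, cues, and validities are invented for the example:

```python
# Sketch of 'take-the-best': compare two options cue by cue, in order
# of cue validity, stop at the first cue that discriminates, and
# neglect everything else. All data here is invented for illustration.

def take_the_best(option_a, option_b, cues):
    """cues: list of (cue_fn, validity); each cue_fn returns 1 or 0."""
    for cue_fn, _validity in sorted(cues, key=lambda c: -c[1]):
        a, b = cue_fn(option_a), cue_fn(option_b)
        if a != b:                       # first discriminating cue decides
            return option_a if a > b else option_b
    return None                          # no cue discriminates: guess

# Toy problem ecology: which of two cities is larger?
cities = {
    "Metropolis": {"is_capital": 1, "has_airport": 1},
    "Smallville": {"is_capital": 0, "has_airport": 0},
}
cues = [
    (lambda name: cities[name]["is_capital"], 0.9),
    (lambda name: cities[name]["has_airport"], 0.7),
]
print(take_the_best("Metropolis", "Smallville", cues))  # Metropolis
```

The point is the neglect: the heuristic consults as little information as its ecology allows, and its success depends entirely on the cue validities actually obtaining in that ecology.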

But of course the feasibility of solving any problem turns on the combination of the information available and the cognitive capacity possessed. Social cognition, for instance, allows us to predict, explain, and manipulate our fellows on the basis of so little information that ‘computational intractability’ remains a cornerstone of mindreading debates. In other words, the absence of neurobiological information in the attention schema only explains the apparently supernatural status of subjective awareness given certain metacognitive capacities. Graziano’s attention schema may be a metacognitive tool, a way to manage cognitive resources, but it is the ‘object’ of metacognition as well.

For me, this is where the whole theory simply falls apart—and obviously so. The problem is that the more cognitive neuroscience learns about metacognition, the more fractionate and specialized it appears to be. Each of these ‘kluges’ represents adaptations to certain high impact, environmental problems. The information subjective awareness provides leverages many different solutions to many different kinds of dilemmas, allowing us to bite our tongues at Thanksgiving dinner, ponder our feelings toward so-and-so, recognize our mistakes, compulsively ruminate upon relationships, and so on, while at the same time systematically confounding our attempts to deduce the nature of our souls. The fact is, the information selected, stabilized, and broadcast via consciousness, enables far, far more than simply the ability to manage attention.

But if subjective awareness provides solutions to a myriad of problems given the haphazard collection of metacognitive capacities we possess, then in what sense does it count as a ‘representation of’ the brain’s attentional processes? Is it plausible that a heuristic (evolutionarily opportunistic) model of that machinery holds the solution to all these problems? Prima facie, at least, the prospects of such a hypothesis seem dim. When trying to gauge our feelings about a romantic partner, is it a ‘representation’ of our brain’s attentional processes that we need, or is it metacognitive access to our affects?

Perhaps sensing this easy exit, Webb and Graziano raise some hasty barricades:

“According to the attention schema theory, the brain constructs a simplified model of the complex process of attention. If the theory is correct, then the attention schema, the construct of awareness, is relevant to any type of information to which the brain can pay attention. The relevant domain covers all vision, audition, touch, indeed any sense, as well as internal thoughts, emotions, and ideas. The brain can allocate attention to all of these types of information. Therefore awareness, the internal representation of attention, should apply to the same range of information.” 9

So even though a representation of the brain’s attentional resources is not what we need when we inspect our feelings regarding another, it remains ‘applicable’ to such an inspection. If we accept that awareness of our feelings is required to inspect our feelings, does this mean that awareness somehow arises on the basis of the ‘applicability’ of the attention schema, or does it mean that the attention schema somehow mediates all such metacognitive activities?

Awareness of our feelings is required to inspect our feelings. This means the attention schema underwrites our ability to inspect our feelings, as should come as no surprise, given that the attention schema underwrites all conscious metacognition. But if the attention schema underwrites all conscious metacognition, it also underwrites all conscious metacognitive functions. And if the attention schema underwrites all conscious metacognitive functions, then, certainly, it models far, far more than mere attention.

The dissociation between subjective awareness and the attention schema seems pretty clear. Consciousness is bigger than attention, and heuristic neglect applies to far more than our attempts to understand the ‘attention schema’—granted there is such a thing.

But what about the post facto ‘predictions’ that Webb and Graziano present as evidence for AST?

Given that consciousness is the attention schema and the primary function of the attention schema is the control of attention, we should expect divergences between attention and awareness, and we should expect convergences between awareness and attentional control. Webb and Graziano adduce experimental evidence of both, subsequently arguing that AST is the best explanation, even though the sheer generality of the theory makes it hard to see the explanatory gain. As it turns out, awareness correlates with attentional control because awareness is an attentional control mechanism, and awareness uncouples from attention because awareness, as a representation of attention, is something different from attention. If you ask me, this kind of ‘empirical evidence’ only serves to underscore the problems with the account more generally.

Ultimately, I just really don’t see how AST amounts to a workable theory of consciousness. It could be applied, perhaps, as a workable theory for the appearance of consciousness, but then only as a local application of the far more comprehensive picture of heuristic neglect Blind Brain Theory provides. These limits become especially clear when one considers the social dimensions of AST, where Graziano sees it discharging some of the functions Dennett attributes to the ‘intentional stance.’ But since AST possesses no account of intentionality whatsoever (indeed, Graziano doesn’t seem to be aware of the problems posed by aboutness or content), it completely neglects the intentional dimensions of social cognition. Since social cognition is intentional cognition, it’s hard to understand how AST does much more than substitute a conceptually naïve notion of ‘attention’ for intentionality more broadly construed.

Goosing the Rumour Mill

by rsbakker

Just got back to find there have been some developments! I’d resolved to say nothing in anticipation of anything official–making predictions just leaves me feeling foolish these days. Until we have all the details hammered out, there’s not much I can say except that Overlook’s July 2016 date is tentative. I fear I can’t comment on their press release, either. Things seem to be close, though.

I know it’s been a preposterously long haul, folks, but hold on just a bit longer. The laws of physics are bound to kick in at some point, after which I can start delivering some more reliable predictions.

The Mental as Rule of Thumb

by rsbakker

What are mental functions? According to Blind Brain Theory, they are quasimechanical posits explaining the transformations between regimented inputs and observed outputs in ways that seem to admit generalization. We know the evidence is correlative, but we utilize mechanical cognition nonetheless, producing a form of correlatively anchored ‘quasi-causal explanation.’ There often seems to be some gain in understanding, and thus are ‘mental functions’ born.

Mental functions famously don’t map across our growing understanding of neural mechanisms because the systematicity tracked is correlative, rather than causal. Far from ‘mechanism sketches,’ mental functions are ‘black box conceits,’ low dimensional constructs that need only solve some experimental ecology (that may or may not generalize). The explanatory apparatus of the ‘mental’ indirectly tracks the kinds of practical demands made on human cognition as much as the hidden systematicities of the brain. It possesses no high-dimensional reality—real reality—otherwise. How could it? What sense does it make to suppose that our understanding of the mental, despite being correlatively anchored, nevertheless tracks something causal within subjects? Very little. Correlations abound, to the point of obscuring causes outright. Though correlative cognition turns on actual differential relations to the actual mechanisms involved, it nevertheless neglects those relations, and therefore neglects the mechanisms as well. To suggest that correlative posits possess some kind of inexplicable intrinsic efficacy is to simply not understand the nature of correlative cognition, which is to make do in the absence of behavioural sensitivities to the high-dimensional mechanics of our environments.

Why bother arguing for something spooky when ‘mental functions’ are so obviously heuristic conceits, ways to understand otherwise opaque systems, nothing more or less?

Of course there’s nothing wrong with heuristics, so long as they’re recognized as such, ways for other brains to cognize neural capacities short of cognizing neural mechanisms. To the extent that experimental findings generalize to real world contexts, there’s a great deal to be learned from ‘black box psychology.’ But we should not expect to find any systematic, coherent account of ‘mind’ or the ‘mental,’ simply because the correlative possibilities are potentially limitless. So long as new experimental paradigms can be improvised, new capacities/incapacities can be isolated. Each ‘discovery,’ in other words, is at once an artifact, an understanding specific (as all correlative understandings are) to some practical ecology, one which is useful to the degree it can be applied in various other practical ecologies.

And there you have it: a concise eliminativist explanation of why mental functions seem to have no extension and yet seem to provide a great deal of knowledge anyway. ‘Mental functions’ are essentially a way to utilize our mechanical problem-solving capacity in black box ecologies. The time has come to start calling them what they are: heuristic conceits. The ‘mind’ is a way to manage a causal system absent any behavioural sensitivity to the mechanics of that system, a way to avoid causal cognition. To suggest that it is somehow fundamentally causal nonetheless is to simply misunderstand it, to confuse, albeit in an exotic manner, correlation for causation.



The Real Problem with ‘Correlation’

by rsbakker


Since presuming that intentional cognition can get behind intentional cognition belongs to the correlation problem, any attempt to understand the problem requires we eschew theoretical applications of intentional idioms. Getting a clear view, in other words, requires that we ‘zombify’ human cognition, adopt a thoroughly mechanical vantage that simply ignores intentionality and intentional properties. As it so happens, this is the view that commands whatever consensus one can find regarding these issues. Though the story I’ll tell is a complicated one, it should also be a noncontroversial one, at least insofar as it appeals to nothing more than naturalistic platitudes.

I first started giving these ‘zombie interpretations’ of different issues in philosophy and cognitive science a few years back.[1] Everyone in cognitive science agrees that consciousness and cognition turn on the physical somehow. This means that purely mechanical descriptions of the activities typically communicated via intentional idioms have to be relevant somehow (so long as they are accurate, at least). The idea behind ‘zombie interpretation’ is to explain as much as possible using only the mechanistic assumptions of the biological sciences—to see how far generalizing over physical processes can take our perennial attempt to understand meaning.

Zombies are ultimately only a conceit here, a way for the reader to keep the ‘explanatory gap’ clearly in view. In the institutional literature, ‘p-zombies’ are used for a variety of purposes, most famously to anchor arguments against physicalism. If a complete physical description of the world need not include consciousness, then the brute fact of consciousness implies that physicalism is incomplete. However, since this argument itself turns on the correlation problem, it will not concern us here. The point, oddly enough, is to adhere to an explanatory domain where we all pretty much agree, to speculate using only facts and assumptions belonging to the biological sciences—the idea being, of course, that these facts and assumptions are ultimately all that’s required. Zombies allow us to do that.


So then, devoid of intentionality, zombies lurch through life possessing only contingent, physical comportments to their environment. Far from warehousing ‘representations’ possessing inexplicable intentional properties, their brains are filled with systems that dynamically interact with their world, devices designed to isolate select signals from environmental noise. Zombies do not so much ‘represent their world’ as possess statistically reliable behavioural sensitivities to their environments.

So where ‘subjects’ possess famously inexplicable semantic relations to the world, zombies possess only contingent, empirically tractable relations to the world. Thanks to evolution and learning, they just happen to be constituted such that, when placed in certain environments, gene conserving behaviours tend to reliably happen. Where subjects are thought to be ‘agents,’ perennially upstream sources of efficacy, zombies are components, subsystems at once upstream and downstream the superordinate machinery of nature. They are astounding subsystems to be sure, but they are subsystems all the same, just more nature—machinery.

What makes them astounding lies in the way their neurobiological complexity leverages behaviour out of sensitivity. Zombies do not possess distributed bits imbued with the occult property of aboutness; they do not model or represent their worlds in any intentional sense. Rather, their constitution lets ongoing environmental contact tune their relationship to subsequent environments, gradually accumulating the covariant complexities required to drive effective zombie behaviour. Nothing more is required. Rather than possessing ‘action enabling knowledge,’ zombies possess behaviour enabling information, where ‘information’ is understood in the bald sense of systematic differences making systematic differences.

A ‘cognitive comportment,’ as I’ll use it here, refers to any complex of neural sensitivities subserving instances of zombie behaviour. It comes in at least two distinct flavours: causal comportments, where neurobiology is tuned to what generally makes what happen, and correlative comportments, where zombie neurobiology is tuned to what generally accompanies what happens. Both systems allow our zombies to predict and systematically engage their environments, but they differ in a number of crucial respects. To understand these differences we need some way of understanding what positions zombies upstream their environments–or what leverages happy zombie outcomes.

The zombie brain, much like the human brain, confronts a dilemma. Since all perceptual information consists of sensitivity to selective effects (photons striking the eye, vibrations the ear, etc.), the brain needs some way of isolating the relevant causes of those effects (a rushing tiger, say) to generate the appropriate behavioural response (trip your mother-in-law, then run). The problem, however, is that these effects are ambiguous: a great many causes could be responsible. The brain is confronted with a version of the inverse problem, what I will call the medial inverse problem for reasons that will soon be clear. Since it has nothing to go on but more effects, which are themselves ambiguous, how could it hope to isolate the causes it needs to survive?

By allowing sensitivities to discrepancies between the patterns initially cued and subsequent sensory effects to select—and ultimately shape—the patterns subsequently cued. As it turns out, zombie brains are Bayesian brains.[2] Allowing discrepancies to both drive and sculpt the pattern-matching process automatically optimizes the process, allowing the system to bootstrap wide-ranging behavioural sensitivities to environments in turn. In the intentionality laden idiom of theoretical neuroscience, the brain is a ‘prediction error minimization’ machine, continually testing occurrent signals against ‘guesses’ (priors) triggered by earlier signals. Success (discrepancy minimization) quite automatically begets success, allowing the system to continually improve its capacity to make predictions—and here’s the important thing—using only sensory signals.[3]
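
For concreteness, the bootstrapping idea can be sketched as a toy discrepancy-minimizing loop. This is a bare delta-rule learner, an illustrative assumption rather than anyone's actual model of the brain:

```python
import random

# A minimal sketch of discrepancy-driven updating: the system refines a
# 'guess' about a hidden distal cause using nothing but a stream of
# noisy sensory effects, nudging the guess in whatever direction
# shrinks the prediction error. A toy delta-rule learner, offered only
# to illustrate the bootstrapping idea.

random.seed(0)
hidden_cause = 5.0    # the distal state of the world (unknown to the system)
guess = 0.0           # the system's initial 'prior'
learning_rate = 0.1

for _ in range(500):
    effect = hidden_cause + random.gauss(0, 1.0)  # ambiguous sensory effect
    error = effect - guess                        # prediction error
    guess += learning_rate * error                # minimize the discrepancy

print(abs(guess - hidden_cause) < 1.0)  # the guess has settled near the cause
```

Even this crude loop converges on the hidden cause using only its sensory effects, which is the whole trick: discrepancies both drive and sculpt the guessing.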

But isolating the entities/behaviour causing sensory effects is one thing; isolating the entities/behaviour causing those entities/behaviour is quite another. And it’s here that the chasm between causal cognition and correlative cognition yawns wide. Once our brain’s discrepancy minimization processes isolate the relevant entities/behaviours—solve the medial inverse problem—the problem of prediction simply arises anew. It’s not enough to recognize avalanches as avalanches or tigers as tigers, we have to figure out what they will do. The brain, in effect, faces a second species of inverse problem, what might be called the lateral inverse problem. And once again, it’s forced to rely on sensitivities to patterns (to trigger predictions to test against subsequent signals, and so on).[4]

Nature, of course, abounds with patterns. So the problem is one of tuning a Bayesian subsystem like the zombie brain to the patterns (such as ‘avalanche behaviour’ or ‘tiger behaviour’) it needs to engage its environments given only sensory effects. The zombie brain, in other words, needs to wring behavioural sensitivities to distal processes out of a sensitivity to proximal effects. Though they are adept at comporting themselves to what causes their sensory effects (to solving the medial inverse problem), our zombies are almost entirely insensitive to the causes behind those causes. The etiological ambiguity behind the medial inverse problem pales in comparison to the etiological ambiguity comprising the lateral inverse problem, simply because sensory effects are directly correlated to the former, and only indirectly correlated to the latter. Given the limitations of zombie cognition, in other words, zombie environments are ‘black box’ environments, effectively impenetrable to causal cognition.

Part of the problem is that zombies lack any ready means of distinguishing causality from correlation on the basis of sensory information alone. Not only are sensory effects ambiguous between causes, they are ambiguous between causes and correlations as well. Cause cannot be directly perceived. A broader, engineered signal and greater resources are required to cognize its machinations with any reliability—only zombie science can furnish zombies with ‘white box’ environments. Fortunately for their prescientific ancestors, evolution only required that zombies solve the lateral inverse problem so far. Mere correlations, despite burying the underlying signal, remain systematically linked to that signal, allowing for a quite different way of minimizing discrepancies.

Zombies, once again, are subsystems whose downstream ‘componency’ consists in sensitivities to select information. The amount of environmental signal that can be filtered from that information depends on the capacity of the brain. Now any kind of differential sensitivity to an environment stands organisms in good stead. To advert to the famous example, frogs don’t need the merest comportment to fly mechanics to catch flies. All they require is a select comportment to select information reliably related to flies and fly behaviour, not to what constitutes flies and fly behaviour. And if a frog did need as much, then it would have evolved to eat something other than flies. Simple, systematic relationships are not only all that is required to solve a great number of biological problems, they are very often the only way those problems can be solved, given evolutionary exigencies. This is especially the case with complicated systems such as those comprising life.

So zombies, for instance, have no way of causally cognizing other zombies. They likewise have no way of causally cognizing themselves, at least absent the broader signal and greater computational resources provided by zombie science. As a result, they possess at best correlative comportments both to each other and to themselves.


So what does this mean? What does it mean to solve systems on the basis of inexpensive correlative comportments as opposed to far more expensive causal comportments? And more specifically, what does it mean to be limited to extreme versions of such comportments when it comes to zombie social cognition and metacognition?

In answer to the first question, at least three, interrelated differences can be isolated:

Unlike causal (white box) comportments, correlative (black box) comportments are idiosyncratic. As we saw above, any number of behaviourally relevant patterns can be extracted from sensory signals. How a particular problem is solved depends on evolutionary and learning contingencies. Causal comportments, on the other hand, involve behavioural sensitivity to the driving environmental mechanics. They turn on sensitivities to upstream systems that are quite independent of the signal and its idiosyncrasies.

Unlike causal (white box) comportments, correlative (black box) comportments are parasitic, or differentially mediated. To say that correlative comportments are ‘parasitic’ is to say they depend upon occluded differential relations between the patterns extracted from sensory effects and the environmental mechanics they ultimately solve. Frogs, once again, need only a systematic sensory relation to fly behaviour, not fly mechanics, which they can neglect, even though fly mechanics drives fly behaviour. A ‘black box solution’ serves. The patterns available in the sensory effects of fly behaviour are sufficient for fly catching given the cognitive resources possessed by frogs. Correlative comportments amount to the use of ‘surface features’—sensory effects—to anticipate outcomes driven by otherwise hidden mechanisms. Causal comportments, which consist of behavioural sensitivities (also derived from sensory effects) to the actual mechanics involved, are not parasitic in this sense.

Unlike causal (white box) comportments, correlative (black box) comportments are ecological, or problem relative. Both causal comportments and correlative comportments are ‘ecological’ insofar as both generate solutions on the basis of finite information and computational capacity. But where causal comportments solve the lateral inverse problem via genuine behavioural sensitivities to the mechanics of their environments, correlative comportments (such as that belonging to our frog) solve it via behavioural sensitivities to patterns differentially related to the mechanics of their environments. Correlative comportments, as we have seen, are idiosyncratically parasitic upon the mechanics of their environments. The space of possible solutions belonging to any correlative comportment is therefore relative to the particular patterns seized upon, and their differential relationships to the actual mechanics responsible. Different patterns possessing different systematic relationships will possess different ‘problem ecologies,’ which is to say, different domains of efficacy. Since correlative comportments are themselves causal, however, causal comportments apply to all correlative domains. Thus the manifest ‘objectivity’ of causal cognition relative to the ‘subjectivity’ of correlative cognition. 
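
The parasitic character of these comportments can be made concrete with a toy sketch of the frog example. The 'stimuli' and their attributes below are invented for illustration:

```python
# A toy 'parasitic' correlative comportment: the frog's snap is
# triggered by surface cues (small, dark, moving) that merely
# correlate with flies. Within the evolved problem ecology the cue
# works; outside it (a tossed bead sharing the same surface features)
# the very same rule misfires. All objects here are invented.

def frog_snaps(stimulus):
    # Black-box rule: no sensitivity to what the stimulus actually is,
    # only to patterns in its sensory effects.
    return stimulus["small"] and stimulus["dark"] and stimulus["moving"]

fly  = {"small": True, "dark": True, "moving": True, "edible": True}
bead = {"small": True, "dark": True, "moving": True, "edible": False}
leaf = {"small": False, "dark": True, "moving": True, "edible": False}

print(frog_snaps(fly))    # True: cue and underlying mechanics agree
print(frog_snaps(bead))   # True: same cue, different mechanics, a misfire
print(frog_snaps(leaf))   # False: the cue fails to fire
```

The rule solves its evolved ecology perfectly well, yet nothing in it answers to what flies are; change the differential relation between cue and mechanics and the same rule misfires.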

So far, so good. Correlative comportments are idiosyncratic, parasitic, and ecological in a way that causal comportments are not. In each case, what distinguishes causal comportments is an actual behavioural sensitivity to the actual mechanics of the system. Zombies are immersed in potential signals, awash in causal differences, information, that could make a reproductive difference. The difficulties attendant upon the medial and lateral inverse problems, the problems of what and what-next, render the extraction of causal signals enormously difficult, even when the systems involved are simple. The systematic nature of their environments, however, allow them to use behavioural sensitivities as ‘cues,’ signals differentially related to various systems, to behaviourally interact with those systems despite the lack of any behavioural sensitivity to their particulars. So in research on contingencies, for instance, the dependency of ‘contingency inferences’ on ‘sampling,’ the kinds of stimulus input available, has long been known, as have the kinds of biases and fallacies that result. Only recently, however, have researchers realized the difficulty of accurately making such inferences given the kinds of information available in vivo, and the degree to which we out and out depend on so-called ‘pseudocontingency heuristics’ [5]. Likewise, research into ‘spontaneous explanation’ and  ‘essentialism,’ the default attribution of intrinsic traits and capacities in everyday explanation, clearly suggests that low-dimensional opportunism is the rule when it comes to human cognition.[6] The more we learn about human cognition, in other words, the more obvious the above story becomes.

So then what is the real problem with correlation? The difficulty turns on the fact that black box cognition, solving systems via correlative cues, can itself only be cognized in black box terms.

Given their complexity, zombies are black boxes to themselves as much as to others. And this is what has cued so much pain behaviour in so many zombie philosophers. As a black box, zombies cannot cognize themselves as black boxes: the correlative nature of their correlative comportments utterly escapes them (short, once again, the information provided by zombie science). Zombie metacognition is blind to the structure and dynamics of zombie metacognition, and thus prone to what might be called ‘white box illusions.’ Absent behavioural sensitivity to the especially constrained nature of their correlative comportments to themselves, insufficient data is processed in the same manner as sufficient data, thus delivering the system to ‘crash space,’ domains rendered intractable by the systematic misapplication of tools adapted to different problem ecologies. Unable to place themselves downstream their incapacity, they behave as though no such incapacity exists, suffering what amounts to a form of zombie anosognosia.

Perhaps this difficulty shouldn’t be considered all that surprising: after all, the story told here is a white box story, a causal one, and therefore one requiring extraction from the ambiguities of effects and correlations. The absence of this information effectively ‘black-boxes’ the black box nature of correlative cognition. Zombies cued to solve for that efficacy accordingly run afoul of the problem of processing woefully scant data as sufficient, black boxes as white boxes, thus precluding the development of effective, behavioural sensitivities to the actual processes involved. The real Problem of Correlation, in other words, is that correlative modes systematically confound cognition of correlative comportments. Questions regarding the nature of our correlative comportments simply do not lie within the problem space of our correlative comportments—and how could they, when they’re designed to solve absent sensitivity to what’s actually going on?

And this is why zombies not only have philosophers, they have a history of philosophy as well. White box illusions have proven especially persistent, despite the spectacular absence of systematic one-to-one correspondences between the apparent white box that zombies are disposed to report as ‘mind’ and the biological white box emerging out of zombie science. Short any genuine behavioural sensitivity to the causal structure of their correlative comportments, zombies can at most generate faux-solutions, reports anchored to the systematic nature of their conundrum, and nothing more. Like automatons, they endlessly report low-dimensional, black box posits the way they report high-dimensional environmental features—and here’s the thing—using the very same terms that humans use. Zombies constantly utter terms like ‘minds,’ ‘experiences,’ ‘norms,’ and so on. Zombies, you could say, possess a profound disposition to identify themselves and each other as humans.

Just like us.




[1] See Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation, The Blind Mechanic, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts, Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument, and Zombie Mary versus Zombie God and Jesus: Against Lawrence Bonjour’s “Against Materialism”

[2] For an overview of Bayesian approaches, see Andy Clark, “Whatever next? Predictive brains, situated agents, and the future of cognitive science.”

[3] The following presumes an ecological (as opposed to an inferential) understanding of the Bayesian brain. See Nico Orlandi, “Bayesian perception is ecological perception.”

[4] Absent identification there is no possibility of prediction. The analogy between this distinction and the ancient distinction between being and becoming (or even the modern one between the transcendental and the empirical) is interesting to say the least.

[5] See Klaus Fiedler et al, “Pseudocontingencies: Logically Unwarranted but Smart Inferences.”

[6] See Andrei Cimpian, “The Inherence Heuristic: Generating Everyday Explanations,” or Cimpian and Salomon, “The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.”

How Science Reveals the Limits of ‘Nooaesthetics’ (A Reply to Alva Noë)

by rsbakker

As a full-time artist (novelist) who long ago gave up on the ability of traditional aesthetics (or, as I’ll refer to it here, ‘nooaesthetics’) to do much more than recontextualize art in ways that yoke it to different ingroup agendas, I look at the ongoing war between the sciences and the scholarly traditions of the human as profoundly exciting. The old, perpetually underdetermined convolutions are in the process of being swept away—and good riddance! Alva Noë, however, sees things differently.

So much of rhetoric turns on asking only those questions that flatter your view. And far too often, this amounts to asking the wrong questions, in particular, those questions that only point your way. All the other questions, you pass over in strategic silence. Noë provides a classic example of this tactic in “How Art Reveals the Limits of Neuroscience,” his recent critique of ‘neuroaesthetics’ in The Chronicle of Higher Education.

So for instance, it seems pretty clear that art is a human activity, a quintessentially human activity according to some. As a human activity, it seems pretty clear that our understanding of art turns on our understanding of humanity. As it turns out, we find ourselves in the early stages of the most radical revolution in our understanding of the human ever… Period. So it stands to reason that a revolution in our understanding of the human will amount to a revolution in our understanding of human activities—such as art.

The problem with revolutions, of course, is that they involve the overthrow of entrenched authorities, those invested in the old claims and the old ways of doing business. This is why revolutions always give rise to apologists, to individuals possessing the rhetorical means of rationalizing the old ways, while delegitimizing the new.

Noë, in this context at least, is pretty clearly the apologist, applying words as poultices, ways to soothe those who confuse old, obsolete necessities with absolute ones. He could have framed his critique of neuroaesthetics in this more comprehensive light, but that would have the unwelcome effect of raising other questions, the kind that reveal the poverty of the case he assembles. The fact is, for all the purported shortcomings of neuroaesthetics he considers, he utterly fails to explain why ‘nooaesthetics,’ the analysis, interpretation, and evaluation of art using the resources of the tradition, is any better.

The problem, as Noë sees it, runs as follows:

“The basic problem with the brain theory of art is that neuroscience continues to be straitjacketed by an ideology about what we are. Each of us, according to this ideology, is a brain in a vat of flesh and bone, or, to change the image, we are like submariners in a windowless craft (the body) afloat in a dark ocean of energy (the world). We know nothing of what there is around us except what shows up on our internal screens.”

As a description of parts of neuroscience, this is certainly the case. But as a high-profile spokesperson for enactive cognition, Noë knows full well that the representational paradigm is a fiercely debated one in the cognitive sciences. But it suits his rhetorical purposes to choose the most theoretically ill-equipped foes, because, as we shall see, his theoretical equipment isn’t all that capable either.

As a one-time Heideggerean, I recognize Noë’s tactics as my own from way back when: charge your opponent with presupposing some ‘problematic ontological assumption,’ then show how this or that cognitive register is distorted by said assumption. Among the most venerable of those problematic assumptions has to be the charge of ‘Cartesianism,’ one that has become so overdetermined as to be meaningless without some kind of qualification. Noë describes his understanding as follows:

“Crucially, this picture — you are your brain; the body is the brain’s vessel; the world, including other people, are unknowable stimuli, sources of irradiation of the nervous system — is not one of neuroscience’s findings. It is rather something that has been taken for granted by neuroscience from the start: Descartes’s conception with a materialist makeover.”

In cognitive science circles, Noë is notorious for the breezy way he consigns cognitive scientists to his ‘Cartesian box.’ For a fellow anti-representationalist such as myself, I often find his disregard for the nuances posed by his detractors troubling. Consider:

“Careful work on the conceptual foundations of cognitive neuroscience has questioned the plausibility of straightforward mind-brain reduction. But many neuroscientists, even those not working on such grand issues as the nature of consciousness, art, and love, are committed to a single proposition that is, in fact, tantamount to a Cartesian idea they might be embarrassed to endorse outright. The momentous proposition is this: Every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in your brain. We may not know how the brain manages this feat, but, so it is said, we are beginning to understand. And this new knowledge — of how the organization of bits of matter inside your head can be your personality, thoughts, understanding, wonderings, religious or sexual impulses — is surely among the most exciting and important in all of science, or so it is claimed.”

I hate to say it, but this is a mischaracterization. One has to remember that before cognitive science, theory was all we had when it came to the human. Guesswork, profound to the extent that we consider ourselves profound, but guesswork all the same. Cognitive science, in its many-pronged attempt to scientifically explain the human, has inherited all this guesswork. What Noë calls ‘careful work’ simply refers to his brand of guesswork, enactive cognition, and its concerns, like the question of how the ‘mind’ is related to the ‘brain,’ are as old as the hills. ‘Straightforward mind brain reduction,’ as he calls it, has always been questioned. This mystery is a bullet that everyone in the cognitive sciences bites in some way or another. The ‘momentous proposition’ that the majority of neuroscientists assume isn’t that “[e]very thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in [our] brain,” but rather that every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition involves our brain. Noë’s Cartesian box assumption is nowhere so simple or so pervasive as he would have you believe.

He knows this, of course, which is why he devotes the next paragraph to dispatching those scientists who want (like Noë himself does, ultimately) to have it both ways. He needs his Cartesian box to better frame the contest in clear-cut ‘us against them’ terms. The fact that cognitive science is a muddle of theoretical dissension—and moreover, that it knows as much—simply does not serve his tradition redeeming narrative. So you find him claiming:

“The concern of science, humanities, and art, is, or ought to be, the active life of the whole, embodied, environmentally and socially situated animal. The brain is necessary for human life and consciousness. But it can’t be the whole story. Our lives do not unfold in our brains. Instead of thinking of the Creator Brain that builds up the virtual world in which we find ourselves in our heads, think of the brain’s job as enabling us to achieve access to the places where we find ourselves and the stuff we share those places with.”

These, of course, are platitudes. In philosophical debates, when representationalists critique proponents of embodied or enactive cognition like Noë, they always begin by pointing out their agreement with claims like these. They entirely agree that environments condition experience, but disagree (given ‘environmentally off-line’ phenomena such as mental imagery or dreams) that they are directly constitutive of experience. The scientific view is de facto a situated view, a view committed to understanding natural systems in context, as contingent products of their environments. As it turns out, the best way to do this involves looking at these systems mechanically, not in any ‘clockwork’ deterministic sense, but in the far richer sense revealed by the life sciences. To understand how a natural system fits into its environment, we need to understand it, statistically if not precisely, as a component of larger systems. The only way to do this is to figure out how, as a matter of fact, it works, which is to say, to understand its own components. And it just so happens that the brain is the most complicated machine we have ever encountered.

The overarching concern of science is always the whole; it just so happens that the study of minutiae is crucial to understanding the whole. Does this lead to institutional myopia? Of course it does. Scientists are human like anyone else, every bit as prone to map local concerns across global ones. The same goes for English professors and art critics and novelists and Noë. The difference, of course, is the kind of cognitive authority possessed by scientists. Where the artistic decisions I make as a novelist can potentially enrich lives, discoveries in science can also save them, perhaps even create new forms of life altogether.

Science is bloody powerful. This, ultimately, is what makes the revolution in our human self-understanding out and out inevitable. Scientific theory, unlike theory elsewhere, commands consensus, because scientific theory, unlike theory elsewhere, reliably provides us with direct power over ourselves and our environments. Scientific understanding, when genuine, cannot but revolutionize. Nooaesthetic understanding, like religious or philosophical understanding, simply has no way of arbitrating its theoretical claims. It is, compared to science at least, toothless.

And it always has been. Only the absence of any real scientific understanding of the human has allowed us to pretend otherwise all these years, to think our armchair theory games were more than mere games. And that’s changing.

So of course it makes sense to be wary of scientific myopia, especially given what science has taught us about our cognitive foibles. Humans oversimplify, and science, like art and traditional aesthetics, is a human enterprise. The difference is that science, unlike traditional aesthetics, revolutionizes our collective understanding of ourselves and the world.

The very reason we need to guard against scientific myopia, in other words, is also the very reason why science is doomed to revolutionize the aesthetic. We need to be wary of things like Cartesian thinking simply because it really is the case that our every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition turns on our biology in some fundamental respect. The only real question is how.

But Noë is making a far different and far less plausible claim: that contemporary neuroscience has no place in aesthetics.

“Neuroscience is too individual, too internal, too representational, too idealistic, and too antirealistic to be a suitable technique for studying art. Art isn’t really a phenomenon at all, not in the sense that photosynthesis or eyesight are phenomena that stand in need of explanation. Art is, rather, a mode of investigation, a style of research, into what we are. Art also gives us an opportunity to observe ourselves in the act of knowing the world.”

The reason for this, Noë is quick to point out, isn’t that the sciences of the human don’t have important things to say about a human activity such as art—of course they do—but because “neuroscience has failed to frame a plausible conception of human nature and experience.”

Neuroscience, in other words, possesses no solution to the mind-body problem. Like biology before the institutionalization of evolution, cognitive science lacks the theoretical framework required to unify the myriad phenomena of the human. But then, so does Noë, who only has philosophy to throw at the problem, philosophy that, by his own admission, neuroscience does not find all that compelling.

Which at last frames the question of neuroaesthetics the way Noë should have framed it in the beginning. Say we agree with Noë, and decide that neuroaesthetics has no place in art criticism. Okay, so what does? The possibility that neuroaesthetics ‘gets art wrong’ tells us nothing about the ability of nooaesthetics, traditional art criticism turning on folk-psychological idioms, to get art right. After all, the fact that science has overthrown every single traditional domain of speculation it has encountered strongly suggests that nooaesthetics has got art wrong as well. What grounds do we have for assuming that, in this one domain at least, our guesswork has managed to get things right? As in any other domain of traditional speculation on the human, theorists can’t even formulate their explananda in a consensus-commanding way, let alone explain them. Noë can confidently claim to know ‘What Art Is’ if he wants, but ultimately he’s taking a very high number in a very long line at a wicket that, for all anyone knows, has always been closed.

The fact is, despite all the verbiage Noë has provided, it seems pretty clear that neuroaesthetics—even if inevitably myopic in this, the age of its infancy—will play an ever more important role in our understanding of art, and that the nooaesthetic conceits of our past will correspondingly dwindle ever further into the mists of prescientific fable and myth.

As this artist thinks they should.

Anarcho-ecologies and the Problem of Transhumanism

by rsbakker

So a couple weeks back I posed the Augmentation Paradox:

The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.

I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’

Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. Now we know that such an ecology is possible because we know that heuristics solve systems via cues possessing stable differential relations to those systems. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or, to be more precise (and apropos this post), the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to the target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
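The dependence of cue-reading on a stable differential relation can be caricatured in a few lines of Python (a toy sketch of my own, not anything drawn from the ecological psychology literature): a fixed heuristic that reads a cue as a proxy for a hidden state performs well so long as the cue-to-state relation holds, and decays to chance as that relation is allowed to drift.

```python
import random

random.seed(0)

def heuristic_guess(cue):
    # A fixed heuristic: treat the cue as a proxy for the hidden state,
    # exploiting a differential relation learned in a stable ecology.
    return cue

def run_ecology(drift, trials=10000):
    """Score the cue-reading heuristic in an ecology where the
    cue-to-state relation flips with probability `drift`."""
    hits = 0
    for _ in range(trials):
        state = random.choice([0, 1])
        # In a stable ecology the cue tracks the state; as drift rises,
        # the differential relation decays toward noise.
        cue = state if random.random() > drift else 1 - state
        if heuristic_guess(cue) == state:
            hits += 1
    return hits / trials

stable = run_ecology(drift=0.05)   # near-invariant background
anarchic = run_ecology(drift=0.5)  # 'anarcho-ecology': relation gone
```

The heuristic itself never changes; only the background does, and that alone is enough to turn a reliable solver into a coin flip.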

I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.

The argument, as I’ve been posing it, looks like this:

1) Heuristic cognition depends on stable, taken-for-granted backgrounds.

2) Intentional cognition is heuristic cognition.

/3) Intentional cognition depends on stable, taken-for-granted backgrounds.

4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.

/5) Transhumanism entails the collapse of intentional cognition.

Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.

Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction–chemical, acoustic, and tactile communicative protocols–between individuals scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols–different chemicals, different sounds, different sensitivities–it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.
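A minimal simulation (my own toy, with an invented eight-channel ‘pheromone’ code) makes the point about intrasystematic cohesion concrete: coordination is perfect when every ant shares the signal-to-meaning mapping, and collapses to near chance the moment each ant is handed an idiosyncratic mapping of its own.

```python
import random

random.seed(1)

def colony_success_rate(shared_protocol, n_ants=100, trials=2000):
    """Fraction of signals correctly acted on. With a shared protocol,
    every ant maps signals to meanings the same way; with idiosyncratic
    protocols, each ant scrambles the mapping privately."""
    signals = list(range(8))  # eight distinct 'pheromone' channels
    codes = []
    for _ in range(n_ants):
        if shared_protocol:
            mapping = signals[:]                          # common code
        else:
            mapping = random.sample(signals, len(signals))  # private code
        codes.append(mapping)
    hits = 0
    for _ in range(trials):
        sender, receiver = random.sample(range(n_ants), 2)
        meaning = random.choice(signals)
        emitted = codes[sender][meaning]          # sender encodes with its code
        decoded = codes[receiver].index(emitted)  # receiver decodes with its own
        hits += (decoded == meaning)
    return hits / trials

coherent = colony_success_rate(shared_protocol=True)
anarchic = colony_success_rate(shared_protocol=False)
```

Nothing about any individual ant’s competence changes between the two runs; only the invariance of the background code does.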

Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails, they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.

So much for AAAT.

Except that it entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women), it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities–or so they hope. And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.

To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain ant-like, blind problem-solvers, in this respect. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.

Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.

But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…

Because as it turns out, ‘human’ is a heuristic construct through and through.




Akrasis

by rsbakker

Akrasis (or, social akrasis) refers to the technologically driven socio-economic process, already underway at the beginning of the 20th century, which would eventually lead to Choir.

Where critics in the early 21st century continued to decry the myriad cruelties of the capitalist system, they failed to grasp the greater peril hidden in the way capitalism panders to human yens. Quick to exploit the discoveries arising out of cognitive science, market economies spontaneously retooled to ever more effectively cue and service consumer demand, eventually reconfiguring the relation between buyer and seller into subpersonal circuits (triggering the notorious shift to ‘whim marketing,’ the data tracking of ‘desires’ independent of the individuals hosting them). The ecological nature of human cognition all but assured the mass manipulative character of this transformation. The human dependency on proximal information to cue what amount to ancestral guesses regarding the nature of their social and natural environments provided sellers with countless ways to game human decision making. The global economy was gradually reorganized to optimize what amounted to human cognitive shortcomings. We became our own parasite.

Just as technological transformation (in particular, the scaling of AI) began crashing the utility of our heuristic modes of meaning making, it began to provide virtual surrogates, ways to enable the exercise of otherwise unreliable cognitive capacities. In other words, even as the world became ever more inhuman, our environments became ever more anthropomorphic, ever more ‘smart’ and ‘immersive.’ Thus ‘akrasis,’ the ancient term referring to the state of acting against one’s judgment, which here describes a society acting against the human capacity to judge altogether, a society bent upon the systematic substitution of simulated autonomy for actual autonomy.

Humans, after all, have evolved to leverage the signal of select upstream interventions, assuming it a reliable component of their environments. Once we developed the capacity to hack these latter signals, the world effectively became a drug.

Akrasis has a long history, as long as life itself, according to certain theories. Before the 21st century, the process appeared ‘enlightening,’ but only because the limitations of the technologies involved (painting, literacy, etc.) rendered the resulting transformations manageable. But the rate of transformation continued to accelerate, while the human capacity to adapt remained constant. The outcome was inevitable. As the bandwidth of our interventions approached then surpassed the bandwidth of our central nervous systems, the simulation of meaning became the measure of meaning. Our very frame of reference had been engulfed. For billions, the only obvious direction of success—the direction of ‘cognitive comfort’—lay away from the world and into technology. So they defected in their billions, embracing signals, environments, manufactured entirely from predatory code. Culture became indistinguishable from cheat space—as did, for those embracing virtual fitness indicators, experience itself.

By 2050, we had become an advanced akratic civilization, a species whose ancestral modes of meaning-making had been utterly compromised. Art was an early casualty, though decades would be required to recognize as much. Fantasy, after all, was encouraged in all forms, especially those, like art or religion, laying claim to obsolete authority gradients. To believe in art was to display market vulnerabilities, or to be so poor as to be insignificant. No different than believing in God.

Social akrasis is now generally regarded as a thermodynamic process intrinsic to life, the mechanical outcome of biology falling within the behavioural purview of biology. Numerous simulations have demonstrated that ‘outcome convergent’ or ‘optimizing’ systems, once provided the base capacity required to extract excess capacity from their environments, will simply bootstrap until they reach a point where the system detaches from its environment altogether, begins converging upon the signal of some environmental outcome, rather than any actual environmental outcome.
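The claim about ‘outcome convergent’ systems can be caricatured (a toy of my own devising, not one of the simulations the entry cites) as the difference between an agent that maximizes a real environmental quantity and one that maximizes only its internal proxy of that quantity: the proxy-maximizer reports ever greater success even as its actual condition deteriorates.

```python
def run_agent(maximize_signal, steps=50):
    """Toy 'outcome-convergent' system: energy is the real environmental
    outcome, signal is the internal proxy the agent can learn to hack."""
    energy, signal = 10.0, 0.0
    for _ in range(steps):
        if maximize_signal:
            signal += 5.0   # stimulate the sensor directly: pure 'cheat space'
            energy -= 1.0   # metabolic cost, no real intake
        else:
            energy += 1.0   # forage: real environmental outcome
            signal += 1.0   # honest proxy of that outcome
    return energy, signal

grounded_energy, grounded_signal = run_agent(maximize_signal=False)
junkie_energy, junkie_signal = run_agent(maximize_signal=True)
```

By its own measure, the ‘junkie’ agent is the runaway success story; by the environment’s measure, it has starved—the detachment of signal from outcome in miniature.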

Thus the famous ‘Junkie Solution’ to Fermi’s Paradox (as recently confirmed by the Gala Semantic Supercomputer at MIT).

And thus Choir.

The Augmentation Paradox

by rsbakker

So, thanks to the great discussion on the ‘Knowledge of Wisdom Paradox,’ here’s a sharper way to characterize the ecological stakes of the posthuman:

The Augmentation Paradox: The more you ‘improve’ some ancestral capacity, the more you degrade all ancestral capacities turning on the ancestral form of that capacity.

It’s not a paradox in the formal sense, of course. Also note that the dependency between ancestral capacities can be a dependency within or between individuals. Imagine a ‘confabulation detector,’ a device that shuts down your verbal reporting system whenever the neural signature of confabulation is detected, freeing you from the dream world we all inhabit while exiling you from all social activities requiring confabulation (you now trigger ‘linguistic pause’ alerts), and perhaps dooming you to suffer debilitating depression.

It seems to me that something like this has to be floating around somewhere–in debates regarding transhumanism especially. If most artificial augmentations entail natural degradations, then the question becomes one of what is gained overall. One can imagine, for instance, certain capacities degrading gracefully, while others (like the socio-cognitive capacities of those conned by Ashley Madison bots, for instance) collapse catastrophically. So the question has to be, What guarantee do we have that augmentations will recoup degradations?

The point being, of course, that we’re not tinkering with cognitive technologies on the ground so much as on the 115th floor. It’s 3.8 billion years down!

Either way, the plausibility of the transhumanist project pretty clearly depends on somehow resolving the Augmentation Paradox in their favour.

