Three Pound Brain

No bells, just whistling in the dark…

Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than yet another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

“On Alien Philosophy”

by rsbakker


The Journal of Consciousness Studies published “On Alien Philosophy” today–a nice way to ring in my 50th year on this planet! The quotable version can be found here, but I’ve also uploaded the preprint version (with a handful of errors, including one on Dennett caught by Dennett himself no less) here. This paper has it all, only laid out in a way that saddles critics with an enormous abductive challenge. Quibbling with this or that atom of my argument is easy–too easy. The challenge is to do so in a manner that explains as much, as parsimoniously.

Abstract: Given a sufficiently convergent cognitive biology, we might suppose that aliens would likely find themselves perplexed by many of the same kinds of problems that inform our traditional and contemporary philosophical debates. In particular, we can presume that ‘humanoid’ aliens would be profoundly stumped by themselves, and that they would possess a philosophical tradition organized around ‘hard problems’ falling out of their inability to square their scientific self-understanding with their traditional and/or intuitive self-understanding. As speculative as any such consideration of ‘alien philosophy’ must be, it provides a striking, and perhaps important, way to recontextualize contemporary human debates regarding cognition and consciousness.

Having contributed my bit to the great endeavour to unravel the mysteries of consciousness and cognition, I now turn to more traditional methods of unravelling consciousness and cognition… gracefully, or not. 50 deserves a hangover.

Scripture become Philosophy become Fantasy

by rsbakker


Cosmos and History has published “From Scripture to Fantasy: Adrian Johnston and the Problem of Continental Fundamentalism” in their most recent edition, which can be found here. This is a virus that needs to infect as many continental philosophy graduate students as possible, lest the whole tradition be lost to irrelevance. The last millennium’s radicals have become this millennium’s Pharisees with frightening speed, and now only the breathless have any hope of keeping pace.

ABSTRACT: Only the rise of science allowed us to identify scriptural ontologies as fantastic conceits, as anthropomorphizations of an indifferent universe. Now that science is beginning to genuinely disenchant the human soul, history suggests that traditional humanistic discourses are about to be rendered fantastic as well. Via a critical reading of Adrian Johnston’s ‘transcendental materialism,’ I attempt to show both the shape and the dimensions of the sociocognitive dilemma presently facing Continental philosophers as they appear to their outgroup detractors. Trusting speculative a priori claims regarding the nature of processes and entities under scientific investigation already excludes Continental philosophers from serious discussion. Using such claims, as Johnston does, to assert the fundamentally intentional nature of the universe amounts to anthropomorphism. Continental philosophy needs to honestly appraise the nature of its relation to the scientific civilization it purports to decode and guide, lest it become mere fantasy, or worse yet, conceptual religion.

KEYWORDS: Intentionalism; Eliminativism; Humanities; Heuristics; Speculative Materialism

All transcendental indignation welcome! I was a believer once.

Reactionary Atheism: Hagglund, Derrida, and Nooconservatism*

by rsbakker

The difference between the critic and the apologist in philosophy, one would think, is the difference between conceiving philosophy as refuge, a post hoc means to rationalize and so recuperate what we cherish or require, and conceiving philosophy as exposure, an ad hoc means to mutate thought and so see our way through what we think we cherish or require. Now in Continental philosophy so-called, the overwhelming majority of thinkers would consider themselves critics and not apologists. They would claim to be proponents of exposure, of the new, and deride the apologist for abusing reason in the service of wishful thinking.

But this, I hope to show, is little more than a flattering conceit. We are all children of Hollywood, all prone to faux-renegade affectations. Nowadays ‘critic,’ if anything, simply names a new breed of apologist. This is perhaps inevitable, in a certain sense. The more cognitive science learns regarding reason, the more intrinsically apologetic it seems to become, a confabulatory organ primarily adapted to policing and protecting our parochial ingroup aspirations. But it is also the case that thought (whatever the hell it is) has been delivered to a radically unprecedented juncture, one that calls its very intelligibility into question. Our ‘epoch of thinking’ teeters upon the abyssal, a future so radical as to make epic fantasy of everything we are presently inclined to label ‘human.’ Whether it acknowledges as much or not, all thought huddles in the shadow of the posthuman–the shadow of its end.

I’ve been thumping this particular tub for almost two decades now. It has been, for better or worse, the thematic impetus behind every novel I have written and every paper I have presented. And at long last, what was once a smattering of voices has become a genuine chorus (for reasons quite independent of my tub thumping I’m sure). Everyone agrees that something radical is happening. Also, everyone agrees that this ‘something’ turns on the ever-expanding powers of science–and the sciences of the brain in particular. This has led to what promises to become one of those generational changes in philosophical thinking, at least in its academic incarnation. Though winded, thought is at last attempting to pace the times we live in. But I fear that it’s failing this attempt, that, far from exposing itself to the most uncertain future humanity has ever known, materially let alone intellectually, it is rather groping for ways to retool and recuperate a philosophical heritage that the sciences are transforming into mythology as we speak. It is attempting to inoculate thought as it exists against the sweeping transformations engulfing its social conditions. To truly expose thought, I want to argue, is to be willing to let it die…

Or become inhuman.

My position is quite simple: Now that science is overcoming the neural complexities that have for so long made an intentional citadel out of the soul, it will continue doing what it has always done, which is offer sometimes simple, sometimes sophisticated, mechanical explanations of what it finds, and so effectively ‘disenchant’ the brain the way it has the world. This first part, at least, is uncontroversial. The real question has to do with the ‘disenchantment,’ which is to say the degree to which these mechanical explanations will be commensurate with our intentional self-understanding, or what Sellars famously called the ‘manifest image.’ Since there are infinitely more ways for our mechanistic scientific understanding to contradict our intentional prescientific understanding than to confirm it, we should, all things being equal, expect that the latter will be overthrown. Indeed, we already have a growing mountain of evidence trending in this direction. Given our apologetic inclinations, however, it should come as no surprise that the literature is rife with arguments why all things are not equal. Aside from an ingrained suspicion of happy endings, especially where science is concerned (I’m inclined to think it will cut our throats), the difficulty I have with such arguments lies in their reliance on metacognitive intuition. For the life of me, I cannot understand why we are in any better position peering into our souls than our ancestors were peering into the heavens. Why should the accumulation of scientific information be any friendlier to our traditional, prescientific assumptions this one time around?

I simply don’t think the human, or for that matter, any of the concepts science has chased from the world into the shadows of the human brain, will prove to be the miraculous exception. Science will rewrite ‘rules’ the way it has orbits, ‘meanings’ the way it has planets, and so on, doing what it has done so many times in the past: taking simplistic, narcissistic notions founded on spare and fragmentary information and replacing them with portraits of breathtaking causal complexity.

This is why I’m so suspicious of the ongoing ‘materialist turn’ in Continental philosophy, why I see it more as a crypto-apologetic attempt to rescue traditional conceptual conceits than any genuine turn away from ‘experience.’ This is how I read Zizek’s The Parallax View several weeks back, and this is how I propose to read Martin Hagglund’s project in his recent (and quite wonderfully written) Radical Atheism: Derrida and the Time of Life. Specifically, I want to take issue with his materialist characterization of Derrida’s work, even though this seems to be the aspect of his book that has drawn the most praise. Aaron Hodges, in “Martin Hagglund’s Speculative Materialism,” contends that Radical Atheism has “effectively dealt the coup de grace to any understanding of deconstructive logic that remains under the sway of idealist interpretation.” Even John Caputo, in his voluminous counterargument, concedes that Hagglund’s Derrida is a materialist Derrida; he just happens to think that there are other Derridas as well.

Against the grain of Radical Atheism’s critical reception, then, I want to argue that no Derrida, Hagglund’s or otherwise, can be ‘materialist’ in any meaningful sense and remain recognizable as a ‘Derrida.’ He simply is not, as Hagglund claims, a philosopher of ‘ultratranscendence’ (as Hagglund defines the term). Derrida is not the author of any singular thought ‘beyond’ the empirical and the transcendental. Nor does he, most importantly, provide any way to explain the fundamental ‘synthesis,’ as Hagglund calls it, required to make sense of experience.

To evidence this last point, I will rehearse the explanation of ‘synthesis’ provided by the Blind Brain Theory (BBT). I will then go on to flex a bit of theoretical muscle, to demonstrate the explanatory power of BBT, the way it can ‘get behind’ and explicate philosophical positions even as notoriously arcane as Husserlian phenomenology or Derridean deconstruction. This provides us with the conceptual resources required to see the extent of Derrida’s noocentrism, the way he remains, despite the apparent profundity of his aleatory gestures, thoroughly committed to the centrality of meaning–the intentional. Far from ‘radical,’ I will contend, Derrida remains a nooconservative thinker, one thoroughly enmeshed in the very noocentric thinking Hagglund and so many others seem to think he has surpassed.

For those not familiar with Radical Atheism, I should note the selective, perhaps even opportunistic, nature of the reading I offer. From the standpoint of BBT, the distinction between deconstruction and negative theology is the distinction between deflationary conceptions of intentionality in its most proximal and distal incarnations. Thus the title of the present piece, ‘Reactionary Atheism.’ To believe in meaning of any sort is to have faith in some version of ‘God.’ Finite or infinite, mortal or immortal, the intentional form is conserved–and as I hope to show, that form is supernatural. BBT is a genuinely post-intentional theoretical position. According to it, there are no ‘meaning makers,’ objective or subjective. According to it, you are every bit as mythological as the God you would worship or honour. In this sense, the contest between atheistic and apophatic readings of Derrida amounts to little more than another intractable theological dispute. On the account offered here, both houses are equally poxed.

My reading therefore concentrates on the first two chapters of Radical Atheism, where Hagglund provides an interpretation of how (as Derrida himself claims) trace and differance arise out of his critique of Husserl’s Phenomenology of Internal Time-consciousness. Since Hagglund’s subsequent defence of ‘radical atheism’ turns on the conclusions he draws from this interpretation–namely, the ‘ultratranscendental’ status of trace and differance and the explanation of synthesis they offer–undermining these conclusions serves to undermine Hagglund’s thesis as a whole.


Atheism as traditionally understood, Hagglund begins, does not question the desire for God or immortality and so leaves ‘mortal’ a privative concept. To embrace atheism is to settle for mere mortality. He poses radical atheism as Derrida’s alternative, the claim that the conceptual incoherence of the desire for God and immortality forces us to affirm its contrary, the mortal:

The key to radical atheism is what I analyze as the unconditional affirmation of survival. This affirmation is not a matter of choice that some people make and others do not: it is unconditional because everyone is engaged by it without exception. Whatever one may want or whatever one may do, one has to affirm the time of survival, since it opens the possibility to live on–and thus to want something or to do something–in the first place. This unconditional affirmation of survival allows us to read the purported desire for immortality against itself. The desire to live on after death is not a desire for immortality, since to live on is to remain subjected to temporal finitude. The desire for survival cannot aim at transcending time, since the given time is the only chance for survival. There is thus an internal contradiction in the so-called desire for immortality. Radical Atheism, 2

Time becomes the limit, the fundamental constraint, the way, Hagglund argues, to understand how the formal commitments at the heart of Derrida’s work render theological appropriations of deconstruction unworkable. To understand deconstruction, you need to understand Derrida’s analysis of temporality. And once you understand Derrida’s analysis of temporality, he claims, you will see that deconstruction entails radical atheism, the incoherence of desiring immortality.

Although Hagglund will primarily base his interpretation of deconstructive temporality on a reading of Speech and Phenomena, it is significant, I think, that he begins with a reading of “Ousia and Gramme,” which is to say, a reading of Derrida’s reading of Heidegger’s reading of Hegel! In “Ousia and Gramme,” Derrida is concerned with the deconstructive revision of the Heideggerean problematic of presence. The key to this revision, he argues, lies in one of the more notorious footnotes in Being and Time, where Heidegger recapitulates the parallels between Hegel’s and Aristotle’s considerations of temporality. This becomes “the hidden passageway that makes the problem of presence communicate with the problem of the written trace” (Margins of Philosophy, 34). Turning from Heidegger’s reading of Hegel, Derrida considers what Aristotle himself has to say regarding time in Physics (4:10), keen to emphasize Aristotle’s concern with the aporias that seem to accompany any attempt to think the moment. The primary problem, as Aristotle sees it, is the difficulty of determining whether the now, which divides the past from the future, is always one and the same or distinct, for the now always seems to somehow be the same now, even as it is unquestionably a different now. The lesson that Derrida eventually draws from this has to do with the way Heidegger, in his attempt to wrest time from the metaphysics of presence, ultimately commits the very theoretical sins that he imputes to Hegel and Aristotle. As he writes: “To criticize the manipulation or determination of any one of these concepts from within the system always amounts, and let this expression be taken with its full charge of meaning here, to going around in circles: to reconstituting, according to another configuration, the same system” (60). The lesson, in other words, is that there is no escaping the metaphysics of presence. Heidegger’s problem isn’t that he failed to achieve what he set out to achieve–How could it be when such failure is constitutive of philosophical thought?–but that he thought, if only for a short time, that he had succeeded.

The lesson that Hagglund draws from “Ousia and Gramme,” however, is quite different:

The pivotal question is what conclusion to draw from the antinomy between divisible time and indivisible presence. Faced with the relentless division of temporality, one must subsume time under a nontemporal presence in order to secure the philosophical logic of identity. The challenge of Derrida’s thinking stems from his refusal of this move. Deconstruction insists on a primordial division and thereby enables us to think the radical irreducibility of time as constitutive of any identity. Radical Atheism, 16-17

If there is one thing about Hagglund’s account that almost all his critics agree on, it is his clarity. But even at this early juncture, it should be clear that this purported ‘clarity’ possesses a downside. Derrida raises and adapts the Aristotelian problem of divisibility in “Ousia and Gramme” to challenge, not simply Heidegger’s claim to primordiality, but all claims to primordiality. And he criticizes Heidegger, not for thinking time in terms of presence, but for believing it was possible to think time in any other way. Derrida is explicitly arguing that ‘refusing this move’ is simply not possible, and he sees his own theoretical practice as no exception. His ‘challenge,’ as Hagglund calls it, lies in conceiving presence as something at once inescapable and impossible. Hagglund, in other words, distills his ‘pivotal question’ via a reading of “Ousia and Gramme” that pretty clearly runs afoul of the very theoretical perils it warns against. We will return to this point in due course.

Having isolated the ‘pivotal,’ Hagglund turns to the ‘difficult’:

The difficult question is how identity is possible in spite of such division. Certainly, the difference of time could not even be marked without a synthesis that relates the past to the future and thus posits an identity over time. Philosophies of time-consciousness have usually solved the problem by anchoring the synthesis in a self-present subject, who relates the past to the future through memories and expectations that are given in the form of the present. The solution to the problem, however, must assume that the consciousness that experiences time in itself is present and thereby exempt from the division of time. Hence, if Derrida is right to insist that the self-identity of presence is impossible a priori, then it is all the more urgent to account for how the synthesis of time is possible without being grounded in the form of presence. 17

Identity has to come from somewhere. And this is where Derrida, according to Hagglund, becomes a revolutionary part of the philosophical solution. “For philosophical reason to advocate endless divisibility,” he writes, “is tantamount to an irresponsible empiricism that cannot account for how identity is possible” (25). This, Hagglund contends, is Derrida’s rationale for positing the trace. The nowhere of the trace becomes the ‘from somewhere’ of identity, the source of ‘originary synthesis.’ Hagglund offers Derrida’s account of the spacing of time and the temporalizing of space as a uniquely deconstructive account of synthesis, which is to say, an account of synthesis that does not “subsume time under a nontemporal presence in order to secure the philosophical logic of identity” (16).

Given the centrality of the trace to his thesis, critics of Radical Atheism were quick to single it out for scrutiny. Where Derrida seems satisfied with merely gesturing to the natural, and largely confining actual applications of trace and differance to semantic contexts, Hagglund presses further: “For Derrida, the spacing of time is an ‘ultratranscendental’ condition from which nothing can be exempt” (19). And when he says ‘nothing,’ Hagglund means nothing, arguing that everything from the ideal to “minimal forms of life” answers to the trace and differance. Hagglund was quick to realize the problem. In a 2011 Journal of Philosophy interview, he writes, “[t]he question then, is how one can legitimize such a generalization of the structure of the trace. What is the methodological justification for speaking of the trace as a condition for not only language and experience but also processes that extend beyond the human and even the living?”

Or to put the matter more simply, just what is ‘ultratranscendental’ supposed to mean?

Derrida, for his part, saw trace and differance as (to use Gasche’s term) ‘quasi-transcendental.’ Derrida’s peculiar variant of contextualism turns on his account of trace and differance. Where pragmatic contextualists are generally fuzzy about the temporality implicit to the normative contexts they rely upon, Derrida actually develops what you could call a ‘logic of context’ using trace and differance as primary operators. This is why his critique of Husserl in Speech and Phenomena is so important. He wants to draw our eye to the instant-by-instant performative aspect of meaning. When you crank up the volume on the differential (as opposed to recuperative) passage of time, it seems to be undeniably irreflexive. Deconstruction is a variant of contextualism that remains ruthlessly (but not exclusively) focussed on the irreflexivity of semantic performances, dramatizing the ‘dramatic idiom’ through readings that generate creativity and contradiction. The concepts of trace and differance provide synchronic and diachronic modes of thinking this otherwise occluded irreflexivity. What renders these concepts ‘quasi-transcendental,’ as opposed to transcendental in the traditional sense, is nothing other than trace and differance. Where Hegel temporalized the krinein of Critical Philosophy across the back of the eternal, conceiving the recuperative role of the transcendental as a historical convergence upon his very own philosophy, Derrida temporalizes the krinein within the aporetic viscera of this very moment now, overturning the recuperative role of the transcendental, reinterpreting it as interminable deflection, deferral, divergence–and so denying his thought any self-consistent recourse to the transcendental. The concept DIFFERANCE can only reference differance via the occlusion of differance. “The trace,” as Derrida writes, “is produced as its own erasure” (“Ousia and Gramme,” 65). One can carve out a place for trace and differance in the ‘system space’ of philosophical thinking, say their ‘quasi-transcendentality’ (as Gasche does in The Tain of the Mirror, for instance) resides in the way they name both the condition of possibility and impossibility of meaning and life, or one can, as I would argue Derrida himself did, evince their ‘quasi-transcendentality’ through actual interpretative performances. One can, in other words, either refer or revere.

Since second-order philosophical accounts are condemned to the former, it has become customary in the philosophical literature to assign content to the impossibility of stable content assignation, to represent the way performance, or the telling, cuts against representation, or the told. (Deconstructive readings, you could say, amount to ‘toldings,’ readings that stubbornly refuse to allow the antinomy of performance and representation to fade into occlusion). This, of course, is one of the reasons late 20th century Continental philosophy came to epitomize irrationalism for so many in the Anglo-American philosophical community. It’s worth noting, however, that in an important sense, Derrida agreed with these worries: this is why he prioritized demonstrations of his position over schematic statements, drawing cautionary morals as opposed to traditional theoretical conclusions. As a way of reading, deconstruction demonstrates the congenital inability of reason and representation to avoid implicitly closing the loop of contradiction. As a speculative account of why reason and representation possess this congenital inability, deconstruction explicitly closes that loop itself.

Far from being a theoretical virtue, then, ‘quasi-transcendence’ names a liability. Derrida is trying to show philosophy that inconsistency, far from being a distal threat requiring some kind of rational piety to avoid, is maximally proximal, internal to its very practice. The most cursory survey of intellectual history shows that every speculative position is eventually overthrown via the accumulation of interpretations. Deconstruction, in this sense, can be seen as a form of ‘interpretative time-travel,’ a regimented acceleration of processes always already in play, a kind of ‘radical translation’ put into action in the manner most violent to theoretical reason. The only way Derrida can theoretically describe this process, however, is by submitting to it–which is to say, by failing the way every other philosophy has failed. ‘Quasi-transcendence’ is his way of building this failure in, a double gesture of acknowledging and immunizing; his way of saying, ‘In speaking this, I speak what cannot be spoken.’

(This is actually the insight that ended my tenure as a ‘Branch Derridean’ what seems so long ago, the realization that theoretical outlooks that manage to spin virtue out of their liabilities result in ‘performative first philosophy,’ positions tactically immune to criticism because they incorporate some totalized interpretation of critique, thus rendering all criticisms of their claims into exemplifications of those claims. This is one of the things I’ve always found the most fascinating about deconstruction: the way it becomes (for those who buy into it) a performative example of the very representational conceit it sets out to demolish.)

‘Quasi-transcendental,’ then, refers to ‘concepts’ that can only be shown. So what, then, does Hagglund mean by ‘ultratranscendental’ as opposed to ‘transcendental’ and ‘quasi-transcendental’? The first thing to note is that Hagglund, like Gasche and others, is attempting to locate Derrida within the ‘system space’ of philosophy and theory more generally. For him (as opposed to Derrida), deconstruction implies a distinct position that rationalizes subsequent theoretical performances. As far as I can tell, he views the recursive loop of performance and representation, telling and told, as secondary. The ultratranscendental is quite distinct from the quasi-transcendental (though my guess is that Hagglund would dispute this). For Hagglund, rather, the ultratranscendental is thought through the lens of the transcendental more traditionally conceived:

On the one hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way up to and including the ideal itself. The spacing of time is the condition not only for everything that can be cognized and experienced, but also for everything that can be thought and desired. On the other hand, the spacing of time has an ultratranscendental status because it is the condition for everything all the way down to minimal forms of life. As Derrida maintains, there is no limit to the generality of differance and the structure of the trace applies to all fields of the living. Radical Atheism, 19

The ultratranscendental, in other words, is simply an ‘all the way’ transcendental, as much a condition of possibility of life as a condition of possibility of experience. “The succession of time,” Hagglund states in his Journal of Philosophy interview, “entails that every moment negates itself–that it ceases to be as soon as it comes to be–and therefore must be inscribed as trace in order to be at all.” Trace and differance, he claims, are logical as opposed to ontological implications of succession, and succession seems to be fundamental to everything.

This is what warrants the extension of trace and differance from the intentional (the kinds of contexts in which Derrida was prone to deploy them) to the natural. And this is why Hagglund is convinced he’s offering a materialist reading of Derrida, one that allows him to generalize Derrida’s arche-writing to an ‘arche-materiality’ consonant with philosophical naturalism. But when you turn to his explicit statements to this effect, you find that the purported, constitutive generality of the trace, what makes it ultratranscendental, becomes something quite different:

This notion of the arche-materiality can accommodate the asymmetry between the living and the nonliving that is integral to Darwinian materialism (the animate depends upon the inanimate but not the other way around). Indeed, the notion of arche-materiality allows one to account for the minimal synthesis of time–namely, the minimal recording of temporal passage–without presupposing the advent or existence of life. The notion of arche-materiality is thus metatheoretically compatible with the most significant philosophical implications of Darwinism: that the living is essentially dependent on the nonliving, that animated intention is impossible without mindless, inanimate repetition, and that life is an utterly contingent and destructible phenomenon. Unlike current versions of neo-realism or neo-materialism, however, the notion of arche-materiality does not authorize its relation to Darwinism by constructing an ontology or appealing to scientific realism but rather articulating a logical infrastructure that is compatible with its findings. Journal of Philosophy

The important thing to note here is how Hagglund is careful to emphasize that the relationship between arche-materiality and Darwinian naturalism is one of compatibility. Arche-materiality, here, is posited as an alternative way to understand the mechanistic irreflexivity of the life sciences. This is more than a little curious given the ‘ultratranscendental’ status he wants to accord to the former. If it is the case that trace and differance understood as arche-materiality are merely compatible with rather than anterior to and constitutive of the mechanistic, Darwinian paradigm of the life sciences, then how could they be ‘ultratranscendental,’ which is to say, constitutive, in any sense? As an alternative, one might wonder what advantages, if any, arche-materiality has to offer theory. The advantages of mechanistic thinking should be clear to anyone who has seen a physician. So the question becomes: what kind of conceptual work do trace and differance actually do?

Hagglund, in effect, has argued himself into the very bind which I fear is about to seize Continental philosophy as a whole. He recognizes the preposterous theoretical hubris involved in arguing that the mechanistic paradigm depends on arche-materiality, so he hedges, settles for ‘compatibility’ over anteriority. In a sense, he has no choice. Time is itself the object of scientific study, and a divisive one at that. Asserting that trace and differance are constitutive of the mechanistic paradigm places his philosophical speculation on firmly empirical ground (physics and cosmology, to be precise)–a place he would rather not be (and for good reason!).

But this requires that he retreat from his earlier claims regarding the ultratranscendental status of trace and differance, that he rescind the claim that they constitute an ‘all the way down’ condition. He could claim they are merely transcendental in the Kantian, or ‘conditions of experience,’ sense, but then that would require abandoning his claim to materialism, and so strand him with the ‘old Derrida.’ So instead he opts for ‘compatibility,’ and leaves the question of theoretical utility, the question of why we should bother with arcane speculative tropes like trace and differance given the boggling successes of the mechanistic paradigm, unasked.

One could argue, however, that Hagglund has already given us his answer: trace and differance, he contends, allow us to understand how reflexivity arises from irreflexivity absent the self-present subject. This is their signature contribution. As he writes:

The synthesis of the trace follows from the constitution of time we have considered. Given that the now can appear only by disappearing–that it passes away as soon as it comes to be–it must be inscribed as a trace in order to be at all. This is the becoming-space of time. The trace is necessarily spatial, since spatiality is characterized by the ability to remain in spite of temporal succession. Spatiality is thus the condition for synthesis, since it enables the tracing of relations between past and future. Radical Atheism, 18

But as far as ‘explanations’ are concerned, it remains unclear how this can be anything other than a speculative posit. The synthesis of now moments occurs somehow. Since the past now must be recuperated within future nows, it makes sense to speak of some kind of residuum or ‘trace.’ If this synthesis isn’t the product of subjectivity, as Kant and Husserl would have it, then it has to be the product of something. The question is why this ‘something’ need have anything to do with space. Why does the fact that the trace (like the Dude) ‘abides’ have anything to do with space? The fact that both are characterized by immunity to succession implies, well… nothing. The trace, you could say, is ‘spatial’ insofar as it possesses location. But it remains entirely unclear how spatiality ‘enables the tracing of relations between past and future,’ and so becomes the ‘condition for synthesis.’

Hagglund’s argument simply does not work. I would be inclined to say the same of Derrida, if I actually thought he was trying to elaborate a traditional theoretical position in the system space of philosophy. But I don’t: I think the aporetic loop he establishes between deconstructive theory and practice is central to understanding his corpus. Derrida takes the notion of quasi-transcendence (as opposed to ultratranscendence) quite seriously. ‘Trace’ and ‘differance’ are figures as much as concepts, which is precisely why he resorts to a pageant of metaphors in his subsequent work, ‘originary supplements’ such as spectres, cinders, gifts, pharmakons, and so on. The same can be said of ‘arche-writing’ and, yes, even ‘spacing’: Derrida literally offers these as myopic and defective ways of thinking some fraction of the unthinkable. Derrida has no transcendental account of how reflexivity arises from irreflexivity, only a myriad of quasi-transcendental ways we might think the relation of reflexivity and irreflexivity. The most he would say is that trace and differance allow us to understand how the irreflexivity characteristic of mechanism operates both on and within the synthesis of experience.

At the conclusion of “Freud and the Scene of Writing,” Derrida discusses the ‘radicalization of the thought of the trace,’ adding parenthetically, “a thought because it escapes the binarism and makes binarism possible on the basis of a nothing” (Writing and Difference, 230). This, once again, is what makes the trace and differance ‘quasi-transcendental.’ Our inability to think the contemporaneous, irreflexive origin of our thinking means that we can only think that irreflexivity under ‘erasure,’ which is to say, in terms at once post hoc and ad hoc. Given that trace and differance refer to the irreflexive, procrustean nature of representation (or ‘presence’), the fact that being ‘vanishes’ in the disclosure of beings, it seems to make sense that we should wed our every reference to them with an admission of the vehicular violence involved, the making present (via the vehicle of thought) of what can never be, nor ever has been, present.

In positioning Derrida’s thought beyond the binarism of transcendental and empirical, Hagglund is situating deconstruction in the very place Derrida tirelessly argues thought cannot go. As we saw above, Hagglund thinks advocating ‘endless divisibility’ is ‘philosophically irresponsible’ given the fact of identity (Radical Atheism, 25). What he fails to realize is that this is precisely the point: preaching totalized irreflexivity is a form of ‘irresponsible empiricism’ for philosophical reason. Trace and differance, as more than a few Anglo-American philosophical commentators have noted, are rationally irresponsible. No matter how fierce the will to hygiene and piety, reason is always besmirched and betrayed by its occluded origins. Thus the aporetic loop of theory and practice, representation and performance, reflexivity and irreflexivity–and, lest we forget, interiority and exteriority…

Which is to say, the aporetic loop of spacing. As we’ve seen, Hagglund wants to argue that spacing constitutes a solution to the fundamental philosophical problem of synthesis. If this is indeed the cornerstone of Derrida’s philosophy as he claims, then the ingenious Algerian doesn’t seem to think it bears making explicit. If anything, the sustained, explicit considerations of temporality that characterize his early work fade into the implicit background of his later material. This is because Derrida offers spacing, not as an alternate, nonintentional explanation of synthesis, but rather as a profound way to understand the aporetic form of that synthesis:

Even before it ‘concerns’ a text in narrative form, double invagination constitutes the story of stories, the narrative of narrative, the narrative of deconstruction in deconstruction: the apparently outer edge of an enclosure [cloture], far from being simple, simply external and circular, in accordance with the philosophical representation of philosophy, makes no sign beyond itself, toward what is utterly other, without becoming double or dual, without making itself be ‘represented,’ refolded, superimposed, re-marked within the enclosure, at least in what the structure produces as an effect of interiority. But it is precisely this structure-effect that is being deconstructed here. “More Than One Language,” 267-8

The temporal assumptions Derrida isolates in his critique of Husserl are clearly implicit here, but it’s the theme of spacing that remains explicit. What Derrida is trying to show us, over and over again, is a peculiar torsion in what we call experience: the ‘aporetic loop’ I mentioned above. Its most infamous statement is “there is nothing outside the text” (Of Grammatology, 158) and its most famous image is that of the “labyrinth which includes in itself its own exits” (Speech and Phenomena, 104). Derrida never relinquishes the rhetoric of space because the figure it describes is the figure of philosophy itself, the double-bind where experience makes possible the world that makes experience possible.

What Hagglund calls synthesis is at once the solution and the dilemma. It relates to the outside by doubling, becoming ‘inside-outside,’ thus exposing itself to what lies outside the possibility of inside-outside (and so must be thought under erasure). Spacing refers to the interiorization of exteriority via the doubling of interiority. The perennial philosophical sin (the metaphysics of presence) is to confuse this folding of interiority for all there is, for inside and outside. So to take Kant as an example, positing the noumenal amounts to a doubling of interiority: the binary of empirical and transcendental. What Derrida is attempting is nothing less than a thinking that remains, as much as possible, self-consciously open to what lies outside the inside-outside, the ‘nothing that makes such binarisms possible.’ Since traditional philosophy can only think this via presence, which is to say, via another doubling, the generation of another superordinate binary (the outside-outside versus the inside-outside (or as Hagglund would have it, the ultratranscendental versus the transcendental/empirical)), it can only remain unconsciously open to this absolute outside. Thus Derrida’s retreat into performance.

Far from any ‘philosophical solution’ to the ‘philosophical problem of synthesis,’ spacing provides a quasi-transcendental way to understand the dynamic and aporetic form of that synthesis, giving us what seems to be the very figure of philosophy itself, as well as a clue as to how thinking might overcome the otherwise all-conquering illusion of presence. Consider the following passage from “Differance,” a more complete version of the quote Hagglund uses to frame his foundational argument in Radical Atheism:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

Here we clearly see the movement of ‘double invagination’ described above, the way the ‘interval’ divides presence from itself both within itself and without, generating the aporetic figure of experience/world that would for better or worse become Derrida’s lifelong obsession. The division within is what opens the space (as inside/outside), while the division without, the division that outruns the division within, is what makes this space the whole of space (because of the impossibility of any outside inside/outside). Hagglund wants to argue “that an elaboration of Derrida’s definition allows for the most rigourous thinking of temporality by accounting for an originary synthesis without grounding it in an indivisible presence” (Radical Atheism, 18). Not only is his theoretical, ultratranscendental ‘elaboration’ orthogonal to Derrida’s performative, quasi-transcendental project, his rethinking of temporality (despite its putative ‘rigour’), far from explaining synthesis, ultimately re-inscribes him within the very metaphysics of presence he seeks to master and chastise. The irony, then, is that even though Hagglund utterly fails to achieve his thetic goals, there is a sense in which he unconsciously (and inevitably) provides a wonderful example of the very figure Derrida is continually calling to our attention. The problem of synthesis is the problem of presence, and it is insoluble, insofar as any theoretical solution, for whatever reason, is doomed to merely reenact it.

Derrida does not so much pose a solution to the problem of synthesis as he demonstrates the insolubility of the problem given the existing conceptual resources of philosophy. At most Derrida is saying that whatever brings about synthesis does so in a way that generates presence as deconstructively conceived, which is to say, structured as inside/outside, self/other, experience/world–at once apparently complete and ‘originary’ and yet paradoxically fragmentary and derivative. Trace and differance provide him with the conceptual means to explore the apparent paradoxicality at the heart of human thought and experience at a particular moment of history:

Differance is neither a word nor a concept. In it, however, we see the juncture–rather than the summation–of what has been most decisively inscribed in the thought of what is conveniently called our ‘epoch’: the difference of forces in Nietzsche, Saussure’s principle of semiological difference, difference as the possibility of [neurone] facilitation, impression and delayed effect in Freud, difference as the irreducibility of the trace of the other in Levinas, and the ontic-ontological difference in Heidegger. Speech and Phenomena, 130

It is this last ‘difference,’ the ontological difference, that Derrida singles out for special consideration. Differance, he continues, is strategic, a “provisionally privileged” way to track the “closure of presence” (131). In fact, if anything is missing in an exegetical sense from Hagglund’s consideration of Derrida it has to be Heidegger, who edited The Phenomenology of Internal Time-consciousness and, like Derrida, arguably devised his own philosophical implicature via a critical reading of Husserl’s account of temporality. In this sense, you could say that trace and differance are not the result of a radicalization of Husserl’s account of time, but rather a radicalization of a radicalization of that account. It is the ontological difference, the difference between being and beings, that makes presence explicit as a problem. Differance, you could say, strategically and provisionally renders the problem of presence (or ‘synthesis’) dynamic, conceives it as an effect of the trace. Where the ontological difference allows presence to hang pinned in philosophical system space for quick reference and retrieval, differance ‘references’ presence as a performative concern, as something pertaining to this very moment now. Far from providing the resources to ‘solve’ presence, differance expands the problem it poses by binding (and necessarily failing to bind) it to the very kernel of now.

Contra Hagglund, trace and differance do not possess the resources to even begin explaining synthesis in any meaningful sense of the term ‘explanation.’ To think that they do, I have argued, is to misconceive both the import and the project of deconstruction. But this does not mean that presence/synthesis is in fact insoluble. As the above quote suggests, Derrida himself understood the ‘epochal’ (as opposed to ‘ultratranscendental’) nature of the problematic motivating trace and differance. A student of intellectual history, he understood the contingency of the resources we are able to bring to any philosophical problem. He did not, as Adorno did working through the same conceptual dynamics via negative dialectics and identity thinking, hang his project from the possibility of some ‘Messianic moment,’ but this doesn’t mean he didn’t think the radical exposure whose semantic shadow he tirelessly attempted to chart wasn’t itself radically exposed.

And as it so happens, we are presently living through what is arguably the most revolutionary philosophical epoch of all, the point when the human soul, so long sheltered by the mad complexities of the brain, is at long last yielding to the technical and theoretical resources of the natural sciences. What Hagglund, deferring to the life sciences paradigm, calls ‘compatibility’ is a constitutive relation after all, only one running from nature to thought, world to experience. Trace and differance, far from ‘explaining’ the ‘ultratranscendental’ possibility of ‘life,’ are themselves open/exposed to explanation in naturalistic terms. They are not magical.

Deconstruction can be naturalized.


So what then is synthesis? How does reflexivity arise from irreflexivity?

Before tackling this question we need to remind ourselves of the boggling complexity of the world as revealed by the natural sciences. Phusis kruptesthai philei, Heraclitus allegedly said, ‘nature loves hiding.’ What it hides ‘behind’ is nothing less than our myriad cognitive incapacities, our inability to fathom complexities that outrun our brain’s ability to sense and cognize. ‘Flicker fusion’ in psychophysics provides a rudimentary and pervasive example: when the frequency of a flickering light crosses various (condition-dependent) thresholds, our experience of it will ‘fuse.’ What was a series of intermittent flashes becomes continuous illumination. As pedestrian as this phenomenon seems, it has enormous practical and theoretical significance. This is the threshold that determines, for instance, the frame rate for the presentation of moving images in film or video. Such technologies, you could say, actively exploit our sensory and cognitive bottlenecks, hiding with nature beyond our ability to differentiate.
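To make the bottleneck concrete, here is a minimal sketch in Python. The 60 Hz default is my own nominal assumption for illustration; actual fusion thresholds are condition-dependent, varying with luminance, retinal location, and observer.

```python
# Minimal sketch of flicker fusion as an information bottleneck.
# The 60 Hz default is an assumed, condition-dependent nominal value,
# not a fixed psychophysical constant.

def appears_continuous(flicker_hz: float, fusion_threshold_hz: float = 60.0) -> bool:
    """Return True when a flickering light exceeds the observer's
    critical flicker-fusion threshold and so 'fuses' into apparently
    continuous illumination."""
    return flicker_hz >= fusion_threshold_hz

for hz in (10, 24, 48, 120):
    state = "continuous" if appears_continuous(hz) else "flickering"
    # The discontinuity never disappears; it simply stops making
    # a difference to a visual system that cannot sample this fast.
    print(f"{hz:>3} Hz light looks {state}")
```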

Differentiations that exceed our brain’s capacity to sense/cognize make no difference. Or put differently, information (understood in the basic sense of systematic differences making systematic differences) that exceeds the information processing capacities of our sensory and cognitive systems simply does not exist for those systems–not even as an absence. It simply never occurs to people that their incandescent lights are in fact discontinuous. Thus the profundity of the Heraclitean maxim: not only does nature conceal itself behind the informatic blind of complexity, it conceals this concealment. This is what makes science such a hard-won cultural achievement, why it took humanity so long (almost preposterously so, given hindsight) to see that it saw so little. Lacking information pertaining to our lack of information, we assumed we possessed all the information required. We congenitally assumed, in other words, the sufficiency of what little information we had available. Only now, after centuries of accumulating information via institutionalized scientific inquiry, can we see how radically insufficient that information was.
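By way of loose analogy (mine, not an example from the paper), signal processing makes the same point formally: two signals whose difference outruns a sampler’s capacity are, for that sampler, one and the same signal.

```python
import math

def sample(freq_hz: float, rate_hz: float, n: int) -> list:
    """Sample a unit-amplitude sinusoid of freq_hz at rate_hz, n points
    (rounded to absorb floating-point noise)."""
    return [round(math.sin(2 * math.pi * freq_hz * k / rate_hz), 6)
            for k in range(n)]

rate = 10.0                    # a system that can only sample at 10 Hz
slow = sample(2.0, rate, 20)   # a 2 Hz signal
fast = sample(12.0, rate, 20)  # a 12 Hz signal, which aliases to 2 Hz

# Identical sample streams: the 10 Hz difference between the signals
# does not exist for this system -- not even as an absence.
print(slow == fast)            # True
```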

Take geocentrism for instance. Lacking information regarding the celestial motion and relative location of the earth, our ancestors assumed it was both motionless and central, which is to say, positionally self-identical relative to itself and the cosmos. Geocentrism is the result of a basic perspectival illusion, a natural assumption to make given the information available and the cognitive capacities possessed. As strange as it may sound, it can be interpreted as a high-dimensional, cognitive manifestation of flicker fusion, the way the absence of information (differences making differences) results in the absence of differentiation, which is to say, identity.

Typically we identify ‘misidentifications’ with the misapplication of representations, as when, for example, children call whales fish. Believing whales are fish and believing the earth is the motionless centre of the universe would thus seem to be quite different kinds of mistakes. Both are ‘misrepresentations,’ mismatches between cognition and the world, but where the former mistake is categorical, the latter is empirical. The occult nature of this ‘matching’ makes it difficult to do much more than classify them together as mistakes, the one a false identification, the other a false theory.

Taking an explicitly informatic view, however, allows us to see both as versions of the mistake you’re making this very moment, presuming as you do the constancy of your illuminated computer screen (among other things). Plugging the brain into its informatic environment reveals the decisive role played by the availability of information, how thinking whales are fish and thinking the earth is the motionless centre of the universe both turn on the lack of information, the brain’s inability to access the systematic differences required to differentiate whales from fish or the earth’s position over time. Moreover, it demonstrates the extraordinarily granular nature of human cognition as traditionally conceived. It reveals, in effect, the possibility that our traditional, intentional understanding of cognition should itself be seen as an artifact of information privation.

Each of the above cases–flicker fusion, geocentrism, and misidentification–involves our brain’s ability to comprehend its environments given its cognitive resources and the information available. With respect to cognizing cognition, however, we need to consider the brain’s ability to cognize itself given, once again, its cognitive resources and the information available. Much of the philosophical tradition has attributed an exemplary status to self-knowledge, thereby assuming that the brain is in a far better position to cognize itself than its environments. But as we saw in the case of environmental cognition, the absence of information pertaining to the absence of information generates the illusion of sufficiency, the assumption that the information available is all the information there is. A number of factors, including the evolutionary youth of metacognition, the astronomical complexity of the brain, not to mention the growing mountain of scientific evidence indicating rampant metacognitive error, suggest that our traditional assumptions regarding the sufficiency of theoretical metacognition need to be set aside. It’s becoming increasingly likely that metacognitive intuitions, far from constituting some ‘plenum,’ are actually the product of severe informatic scarcity.

Nor should we be surprised: science is only just beginning to mine the informatic complexities of the human brain. Information pertaining to what we are as a matter of scientific fact is only now coming to light. Left to our own devices, we can only see so much of the sky. The idea of our ancient ancestors looking up and comprehending everything discovered by modern physics and cosmology is, well, nothing short of preposterous. They quite simply lacked the information. So why should we think peering at the sky within will prove any different from peering at the sky above? Taking the informatic perspective thus raises the spectre of noocentrism, the possibility that our conception of ourselves as intentional is a kind of perspectival illusion pertaining to metacognition not unlike geocentrism in the case of environmental cognition.

Thus the Blind Brain Theory, the attempt to naturalistically explain intentional phenomena in terms of the kinds and amounts of information missing. Where Hagglund claims ‘compatibility’ with Darwinian naturalism, BBT exhibits continuity: it takes the mechanistic paradigm of the life sciences as its basis. To the extent that it can explain trace and differance, then, it can claim to have naturalized deconstruction.

According to BBT, the intentional structure of first-person experience–the very thing phenomenology takes itself to be describing–is an artifact of informatic neglect, a kind of cognitive illusion. So, for instance, when Hagglund (explaining Husserl’s account of time-consciousness) writes “[t]he notes that run off and die away can appear as a melody only through an intentional act that apprehends them as an interconnected sequence” (56) he is literally describing the way that experience appears to a metacognition trussed in various forms of neglect. As we shall see, where Derrida, via the quasi-transcendentals of trace and differance, can only argue the insufficiencies plaguing such intentional acts, BBT possesses the resources to naturalistically explain, not only the insufficiencies, but why metacognition attributes intentionality to temporal cognition at all, why the apparent paradoxes of time-consciousness arise, and why it is that trace and differance make ‘sense’ the way they do. ‘Brain blindness’ or informational lack, in other words, can not only explain many of the perplexities afflicting consciousness and the first-person, it can also explain–if only in a preliminary and impressionistic way–much of the philosophy turning on what seem to be salient intentional intuitions.

Philosophy becoming transcendentally self-conscious as it did with Hume and Kant can be likened to a kid waking up to the fact that he lives in a peculiar kind of box, one not only walled by neglect (which is to say, the absence of information–or nothing at all), but unified by it as well. Kant’s defining metacognitive insight came with Hume: Realizing the wholesale proximal insufficiency of experience, he understood that philosophy must be ‘critical.’ Still believing in reason, he hoped to redress that insufficiency via his narrow form of transcendental interpretation. He saw the informatic box, in other words, and he saw how everything within it was conditioned, but assuming the sufficiency of metacognition, he assumed the validity of his metacognitive ‘deductions.’ Thus the structure of the empirical, the conditioned, and the transcendental, the condition: the attempt to rationally recuperate the sufficiency of experience.

But the condition is, as a matter of empirical fact, neural. The speculative presumption that something resembling what we think we metacognize as soul, mind, or being-in-the-world arises at some yet-to-be naturalized ‘level of description’–noocentrism–is merely that, a speculative presumption that in this one special case (predictably, our case) science will redeem our intentional intuitions. BBT offers the contrary speculative presumption, that something resembling what we think we metacognize as soul, mind, or being-in-the-world will not arise at some yet-to-be naturalized ‘level of description’ because nothing resembles what we think we metacognize at any level. Cognition is fractionate, heuristic, and captive to the information available. The more scant or mismatched the information, the more error prone cognition becomes. And no cognitive system faces the informatic challenges confronting metacognition. The problem, simply put, is that we lack any ‘meta-metacognition,’ and thus any intuition of the radical insufficiency of the information available relative to the cognitive resources possessed. The kinds of low-dimensional distortions revealed are therefore taken as apodictic.

There are reasons why first-person experience appears the way it does; they just happen to be empirical rather than transcendental. Transcendental explanation, you could say, is an attempt to structurally regiment first-person experience in terms that take the illusion to be real. The kinds of tail-chasing analyses one finds in Husserl literally represent an attempt to dredge some kind of formal science out of what are best understood as metacognitive illusions. The same can be said for Kant. Although he deserves credit for making the apparent asymptotic structure of conscious experience explicit, he inevitably confused the pioneering status of his subsequent interpretations–the fact that they were, if only for their sheer novelty, the ‘only game in town’–for a kind of synthetic deductive validity. Otherwise he was attempting to ‘explain’ what are largely metacognitive illusions.

According to BBT, ‘transcendental interpretation’ represents the attempt to rationalize what it is we think we see when we ‘reflect’ in terms (intentional) congenial to what it is we think we see. The problem isn’t simply that we see far too little, but that we are entirely blind to the very thing we need to see: the context of neurofunctional processes that explains the why and how of the information broadcast to or integrated within conscious experience. To say the neurofunctionality of conscious experience is occluded is to say metacognition accesses no information regarding the actual functions discharged by the information broadcast or integrated. Blind to what lies outside its informatic box, metacognition confuses what it sees for all there is (as Kahneman might say), and generates ‘transcendental interpretations’ accordingly. Reasoning backward with inadequate cognitive tools from inadequate information, it provides ever more interpretations to ‘hang in the air’ with the interpretations that have come before.

‘Transcendental,’ in other words, simply names those prescientific, medial interpretations that attempt to recuperate the apparent sufficiency of conscious experience as metacognized. BBT, on the other hand, is exclusively interested in medial interpretations of what is actually going on, regardless of speculative consequences. It is an attempt to systematically explain away conscious experience as metacognized–the first-person–in terms of informatic privation and heuristic misadventure.

This will inevitably strike some readers as ‘positivist,’ ‘scientistic,’ or ‘reductive,’ terms that have become scarcely more than dismissive pejoratives in certain philosophical circles, an excuse to avoid engaging what science has to say regarding their domain–the human. BBT, in other words, is bound to strike certain readers as chauvinistic, even imperial. But, if anything, BBT is bent upon dispelling views grounded in parochial sources of information–chauvinism. In fact, it is transcendental interpretation that restricts itself to nonscientific sources of information under the blanket assumption of metacognitive sufficiency, the faith that enough information of the right kind is available for actual cognition. Transcendental interpretation, in other words, remains wedded to what Kant called ‘tutelary natures.’ BBT, however, is under no such constraint; it considers both metacognitive and scientific information, understanding that the latter, on pain of supernaturalism, simply has to provide the baseline for reliable theoretical cognition (whatever that ultimately turns out to be). Thus the strange amalgam of scientific and philosophical concepts found here.

If reliable theoretical cognition requires information of the right kind and amount, then it behooves the philosopher, deconstructive or transcendental, to take account of the information their intentional rationales rely upon. If that information is primarily traditional and metacognitive–prescientific–then that philosopher needs some kind of sufficiency argument, some principled way of warranting the exclusion of scientific information. And this, I fear, has become all but impossible to do. If the sufficiency argument provided is speculative–that is, if it also relies on traditional claims and metacognitive intuitions–then it simply begs the question. If, on the other hand, it marshals information from the sciences, then it simply acknowledges the very insufficiency it is attempting to fend off.

The epoch of intentional philosophy is at an end. It will deny and declaim–it can do nothing else–but to little effect. Like all prescientific domains of discourse it can only linger and watch its credibility evaporate into New Age aether as the sciences of the brain accumulate ever more information and refine ever more instrumentally powerful interpretations of that information. It’s hard to argue against cures. Any explanatory paradigm that restores sight to the blind, returns mobility to the crippled, not to mention facilitates the compliance of the masses, will utterly dominate the commanding heights of cognition.

Far more than mere theoretical relevance is at stake here.

On BBT, all traditional and metacognitive accounts of the human are the product of extreme informatic poverty. Ironically enough, many have sought intentional asylum within that poverty in the form of a priori or pragmatic formalisms, confusing the lack of information for the lack of substantial commitment, and thus for immunity against whatever the sciences of the brain may have to say. But this just amounts to a different way of taking refuge in obscurity. What are ‘rules’? What are ‘inferences’? Unable to imagine how science could answer these questions, they presume either that science will never be able to answer them, or that it will answer them in a manner friendly to their metacognitive intuitions. Taking the history of science as its cue, BBT entertains no such hopes. It sees these arguments for what they happen to be: attempts to secure the sufficiency of low-dimensional, metacognitive information, to find gospel in a peephole glimpse.

The same might be said of deconstruction. Despite their purported radicality, trace and differance likewise belong to a low-dimensional conceptual apparatus stemming from a noocentric account of intentional sufficiency. ‘Mystic writing pad’ or no, Derrida remains a philosopher of experience as opposed to nature. As David Roden has noted, “while Derrida’s work deflates the epistemic primacy of the ‘first person,’ it exhibits a concern with the continuity of philosophical concepts that is quite foreign to the spirit of contemporary naturalism” (“The Subject”). The ‘advantage’ deconstruction enjoys, if it can be called such, lies in its relentless demonstration of the insufficiency plaguing all attempts to master meaning, including its own. But as we have seen above, it can only do so from the fringes of meaning, as a ‘quasi-transcendentally’ informed procedure of reading. Derrida is, strangely enough, like Hume in this regard, only one forewarned of the transcendental apologetics of Kant.

Careful readers will have already noted a number of striking parallels between the preceding account of BBT and the deconstructive paradigm. Cognition (or the collection of fractionate heuristic subsystems we confuse for such) only has recourse to whatever information is available, thus rendering sufficiency the perennial default. Even when cognition has recourse to supplementary information pertaining to the insufficiency of information, information is processed, which is to say, the resulting complex (which might be linguaformally expressed as, ‘Information x is insufficient for reliable cognition’) is taken as sufficient insofar as the system takes it up at all. Informatic insufficiency is parasitic on sufficiency, as it has to be, given the mechanistic nature of neural processing. For any circuit involving inputs and outputs, differences must be made. Sufficient or not, the system, if it is to function at all, must take the information it consumes as sufficient.
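
The point is mechanical enough to put in code. The following toy sketch is entirely my own construction (the names and the dictionary format are hypothetical illustrations, not anything belonging to BBT’s apparatus); it simply shows how even a flag marking information as insufficient is itself consumed as sufficient by whatever stage takes it up:

```python
# A minimal sketch of sufficiency-as-default. Even information flagging
# other information as insufficient is simply taken up and processed,
# which is to say, taken as sufficient in its turn.

def consume(signal: dict) -> str:
    """A downstream stage: it has no option but to act on what it receives."""
    if signal.get("insufficient"):
        # The 'knowledge' of insufficiency is just another difference made,
        # trusted implicitly by the very act of processing it.
        return f"discounting value {signal['value']}"
    return f"acting on value {signal['value']}"

print(consume({"value": 42}))                        # acting on value 42
print(consume({"value": 42, "insufficient": True}))  # discounting value 42
# In neither case does the stage step outside itself to audit the flag:
# for the circuit to function at all, each input must be taken as is.
```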

(I should pause to note a certain temptation at this juncture, one perhaps triggered by the use of the term ‘supplementary.’ One can very easily deconstruct the above set of claims the way one can deconstruct any set of theoretical claims, scientific or speculative. But where the deconstruction of speculative claims possesses or at least seems to possess clear speculative effects, the deconstruction of scientific claims does not, as a rule, possess any scientific effects. BBT, recall, is an empirical theory, and as such stands beyond the pale of decisive speculative judgment (if indeed there is such a thing).)

The cognition of informatic insufficiency always requires sufficiency. To ‘know’ that you are ‘wrong’ is to be right about being wrong. The positivity of conscious experience and cognition follows from the mechanical nature of brain function, the mundane fact that differences must be made. Now, whatever ‘consciousness’ happens to be as a natural phenomenon (apart from our hitherto fruitless metacognitive attempts to make sense of it), it pretty clearly involves the ‘broadcasting’ or ‘integration’ of information (systematic differences made) from across the brain. At any given instant, conscious experience and cognition access only an infinitesimal fraction of the information processed by the brain: conscious experience and cognition, in other words, possess any number of informatic limits. Conscious experience and cognition are informatically encapsulated at any given moment. It’s not just that huge amounts of information are simply not available to the conscious subsystems of the brain, it’s that information allowing the cognition of those subsystems for what they are isn’t available. The positivity of conscious experience and cognition turns on what might be called medial neglect, the structural inability to consciously experience or cognize the mechanisms behind conscious experience and cognition.

Medial neglect means the mechanics of the system are not available to the system. The importance of this observation cannot be overstated. The system cannot cognize itself the way it cognizes its environments, which is to say, causally, and so must cognize itself otherwise. What we call ‘intentionality’ is this otherwise. Most of the peculiarities of this ‘cognition otherwise’ stem from the structural inability of the system to track its own causal antecedents. The conscious subsystems of the brain cannot cognize the origins of any of their processes. Moreover, they cannot even cognize the fact that this information is missing. Medial neglect means conscious experience and cognition are constituted by mechanistic processes that structurally escape conscious experience and cognition. And this is tantamount to saying that consciousness is utterly blind to its own irreflexivity.

And as we saw above, in the absence of differences we experience/cognize identity.

On BBT, then, the ‘fundamental synthesis’ described by Hagglund is literally a kind of ‘flicker fusion,’ a metacognitive presumption of identity where there is none. It is a kind of mandatory illusion: illusory because it egregiously mistakes what is the case, and mandatory because, like the illusion of continuous motion in film, it involves basic structural capacities that cannot be circumvented and so ‘seen through.’ But where, with film, environmental cognition blurs the distinction between discrete frames into an irreflexive, sensible continuity, the ‘trick’ played upon metacognition is far more profound. The brain has evolved to survive and exploit environmental change, irreflexivity. First and foremost, human cognition is the evolutionary product of the need to track environmental irreflexivity with enough resolution and fidelity to identify and avoid threats and identify and exploit opportunities. You could say it is an ensemble of irreflexivities (mechanisms) parasitic upon the greater irreflexivity of its environment (or to extend Craver’s terms, the brain is a component of the ‘brain/environment’). Lacking the information required to cognize temporal difference, it perceives temporal continuity. Our every act of cognition is at once irrevocable and blind to itself as irrevocable. Because it is blind to itself, it cannot, temporally speaking, differentiate itself from itself. As a result, such acts seem to arise from some reflexive source. The absence of information, once again, means the absence of distinction, which means identity. The now, the hitherto perplexing and inexplicable fusion of distinct times, becomes the keel of subjectivity, something that appears (to metacognition at least) to be a solitary, reflexive exception in a universe entirely irreflexive otherwise.
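
The film analogy can be made concrete with a toy simulation (my own, with illustrative numbers, not anything drawn from the vision science literature): a stimulus that only ever alternates between on and off gets reported as a steady intermediate brightness by any observer whose integration window is coarser than the alternation.

```python
# A toy model of flicker fusion as information loss: a light alternates
# on/off every 8 ms, but an observer that can only integrate over 50 ms
# windows reports a near-constant half-brightness.

def stimulus(t_ms: float) -> float:
    """A light flickering with an 8 ms on / 8 ms off cycle."""
    return 1.0 if (t_ms // 8) % 2 == 0 else 0.0

def observer_report(t_ms: float, window_ms: float = 50.0, samples: int = 100) -> float:
    """Averages the signal over the integration window. Differences finer
    than the window are simply not available to the observer."""
    step = window_ms / samples
    return sum(stimulus(t_ms - i * step) for i in range(samples)) / samples

# The stimulus is never at 0.5, yet every report hovers there: absent the
# information required to distinguish the frames, the observer 'perceives'
# a steady identity where there is only alternation.
for t in (100.0, 200.0, 300.0):
    print(round(observer_report(t), 2))  # ~0.5 each time
```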

This is the cognitive illusion that both Kant and Husserl attempted to conceptually regiment, Kant by positing the transcendental unity of apperception, and Husserl via the transcendental ego. This is also the cognitive illusion that stands at the basis of our understanding of persons, both ourselves and others.

When combined with sufficiency, this account of reflexivity provides us with an elegant way to naturalize presence. Sufficiency means that the positivity of conscious experience and cognition ‘fills the existential screen’: there is nothing but what is experienced and cognized at any given moment. The illusion of reflexivity can be seen as a temporalization of the illusion of sufficiency: lacking the information required to relativize sufficiency to any given moment, metacognition blurs it across all times. The ‘only game in town effect’ becomes an ‘only game in time effect’ for the mere want of metacognitive information–medial neglect. The target of metacognition, conscious experience and cognition, appears to be something self-sustaining, something immediately, exhaustively self-present, something utterly distinct from the merely natural, and something somehow related to the eternal.

And with the naturalization of presence comes the naturalization of the aporetic figure of philosophy that so obsessed Derrida for the entirety of his career. Sufficiency, the fact that conscious experience and cognition ‘fill the screen,’ means that the limits of conscious experience and cognition always outrun conscious experience and cognition. Sufficiency means the boundaries of consciousness are asymptotic, ‘limits with only one side.’ The margins of your visual attention provide a great example of this. The limits of seeing can never be seen: the visual information integrated into conscious experience and cognition simply trails into ‘oblivion.’ The limits of seeing are thus visually asymptotic, though the integration of vision into a variety of other systems allows those limits to be continually, effortlessly cognized. Such, however, is not the case when it comes to the conscious subsystems of the brain as a whole. They are, once again, encapsulated. Conscious experience and cognition only exist ‘for’ conscious experience and cognition ‘within’ conscious experience and cognition. To resort to the language of representation favoured by Derrida, the limits of representation only become available via representation.

And all this, once again, simply follows from the mechanistic nature of the human brain, the brute fact that the individual mechanisms engaged in informatically comporting our organism to itself and its (social and natural) environments are engaged, and so incapable of systematically tracking their own activities, let alone the limitations besetting them. Sufficiency is asymptosis. Such tracking requires a subsequent reassignment of neurocomputational resources–it must always be deferred to a further moment that is likewise mechanically incapable of tracking its own activities. This post hoc tracking, meanwhile, literally has next to nothing that it can systematically comport itself to (or ‘track’). Thus each instant of functioning blots the instant previous, rendering medial neglect all but complete. Both the incalculably intricate, derived nature of each instant and the passage between instants are lost, save for what scant information is buffered or stored. And so are irreflexive repetitions whittled into anosognosiac originals.

Theoretical metacognition, or philosophical reflection, confronts the compelling intuition that it is originary, that it stands outside the irreflexive order of its environments, that it is in some sense undetermined or free. Precisely because it is mechanistic, it confuses itself for ‘spirit,’ for something other than nature. As it comes to appreciate (through the accumulation of questions, such as those posed by Hume) the medial insufficiency of conscious experience as metacognized, it begins to posit medial prosthetics that dwell in the asymptotic murk, ‘conditions of possibility,’ formal rationalizations of conscious experience as metacognized. Asymptosis is conceived as transcendence in the Kantian sense (as autoaffection, apperceptive unity, and so on), forms that appeal to philosophical intuition because of the way they seem to conserve the illusions compelled by informatic neglect. But since the assumption of metacognitive identity is an artifact of missing information, which is to say, cognitive incapacity, the accumulation of questions (which provide information regarding the absence of information) and the accumulation of information pertaining to irreflexivity (which, like external relationality, always requires more information to cognize) inevitably cast these transcendental rationalizations into doubt. Thus the strange inevitability of deconstruction (or negative dialectics, or the ‘philosophies of difference’ more generally), the convergence of philosophical imagination around the intuition of some obdurate, inescapable irreflexivity concealed at the very root of conscious experience and cognition.

Deconstruction can be seen as a ‘low resolution’ (strategic, provisional) recognition of the medial mechanicity that underwrites the metacognitive illusion of ‘meaning.’ Trace and differance are emissaries of irreflexivity, an expression of the neuromechanics of conscious experience and cognition given only the limited amount of information available to conscious experience and cognition. As mere glimmers of our mechanistic nature, however, they can only call attention to the insufficiencies that haunt the low-dimensional distortions of the soul. Rather than overthrow the illusions of meaning, they can at most call attention to the way meaning ‘wobbles,’ thus throwing a certain image of subjective semantic stability and centrality into question. Deconstruction, for all its claims to ‘radicalize,’ remains a profoundly noocentric philosophy, capable of conceiving the irreflexive only as the ‘hidden other’ of the reflexive. The claim to radicality, if anything, cements its status as a profoundly nooconservative mode of philosophical thought. Deconstruction becomes, as we can so clearly see in Hagglund, a form of intellectual hygiene. ‘Deconstructed’ intentional concepts begin to seem like immunized intentional concepts, ‘subjects’ and ‘norms’ and ‘meanings’ that are all the sturdier for referencing their ‘insufficiency’ in theoretical articulations that take them as sufficient all the same. Thus the oxymoronic doubling evinced by ‘deconstructive ethics’ or ‘deconstructive politics.’

The most pernicious hallucination, after all, is the hallucination that claims to have been seen through.

The present account, however, does not suffer happy endings, no matter how aleatory or conditional. According to BBT, nothing has been, nor ever will be, ‘represented.’ Certainly our brains mechanically recapitulate myriad structural features of their environments, but at no point do these recapitulations inherit the occult property of aboutness. With BBT, these phantasms that orthogonally double the world become mere mechanisms, environmentally continuous components that may or may not covary with their environments, just more ramshackle life, the product of over 3 billion years of blind guessing. We become lurching towers of coincidence, happenstance conserved in meat. Blind to neurofunctionality, the brain’s metacognitive systems have no choice but to characterize the relation between the environmental information accumulated and those environments in acausal, nonmechanical terms. Sufficiency assures that this metacognitive informatic poverty will seem a self-evident plenum. The swamp of causal complexity is drained. The fantastically complicated mechanistic interactions constituting the brain/environment vanish into the absolute oblivion of the unknown unknown, stranding metacognition with the binary cartoon of a ‘subject’ ‘intending’ some ‘object.’ Statistical gradations evaporate into the procrustean discipline of either/or.

This, if anything, is the image I want to leave you with, one where the traditional concepts of philosophy can be seen for the granular grotesqueries they are, the cartoonish products of a metacognition pinioned between informatic scarcity and heuristic incapacity. I want to leave you with, in effect, an entirely new way to conceive philosophy, one adequate to the new and far more terrifying ‘Enlightenment’ presently revolutionizing the world around us. Does anyone really think their particular, prescientific accounts of the soul will escape unscathed or emerge redeemed by what the sciences of the brain will reveal over the coming decades? Certainly one can argue points with BBT, a position whose conclusions are so dismal that I cannot myself fully embrace them. What one cannot argue against is the radical nature of our times, the fact that science has at long last colonized the soul, that it is, even now, doing what it always does when it breaches some traditional domain of discourse: replacing our always simplistic and typically flattering assumptions with portraits of bottomless intricacy and breathtaking indifference. We are just beginning, as a culture, to awaken to the fact that we are machines. Throw words against this prospect if you must. The engineers and the institutions that own them will find you a most convenient distraction.

*Originally posted 02/27/2013

The Unholy Consult

by rsbakker

The cover is out, as those of you who frequent Wertzone already know. The Unholy Consult, the final book of The Aspect-Emperor, is set to be released this July, ending a story arc that has been my obsession for some thirty years now. I’m far from done with the Three Seas, of course: The Second Apocalypse possesses one final chapter. But this arc was the animating vision, the feverish sequence of glimpses I used to paint the whole.

The B&N Sci-Fi & Fantasy Blog has it listed among their top twenty ‘can’t wait to read’ 2017 releases, but I find myself growing… not so much reluctant as coy, I think—you know that wariness you get when encountering circumstances you should know, but don’t for some reason. Ever since the catalogue with the cover arrived in the mail everything has felt marginally displaced, troubled by a mismatch between shadows and sources of light. It’ll be strange, for instance, being able to talk candidly about the story. What if I decide I want to remain entombed?

It would be nice if The Unholy Consult pushed the popularity of the series over some kind of threshold, but the entire project has been a slow fuse, so I’m not going to hold my breath. The Great Ordeal made the Fantasy Hotlist’s top ten of 2016, but I’ve found that the reviews take longer to trickle in the deeper I get into the series. Epic Fantasy has become an astonishingly crowded subgenre, blessing these crazy books, I hope, with the distinction belonging to landmarks, even while increasing the number of attractions in between. My guess is that it’ll take time.

By coincidence, I just sent out the final proofs of “On Alien Philosophy” to The Journal of Consciousness Studies, so in a sense this summer will see my two great artistic and theoretical aspirations simultaneously fulfilled. Cosmos and History, meanwhile, just accepted “From Scripture to Fantasy,” my critique of Continental philosophy a la Adrian Johnston, so you can even throw a little revenge fantasy into the mix! If I were in my early twenties, I would worry that I was developing schizophrenia, so many threads are twining together. Add Trump’s election to the mix, and the fear has to be that I’m actually a character in an L. Ron Hubbard novel…

Not Philip K. Dick.

Braced for Launch

by rsbakker

Happy New Year all. I greeted 2017 with the Norovirus, so I guess you could say I’m not liking the omens so far. Either I’ll be immune when the shit starts flying in Washington or I’ll fold like a napkin.

I did have occasion to reread my interview of David Roden for Figure/Ground a while back, and I thought it worth linking because I believe the points of contention between us could very well map the philosophy of the future. I think the heuristic dependence of intentional cognition on ancestral cognitive backgrounds means intentional cognition has no hope of surviving the ongoing (and accelerating) technological renovation of those backgrounds. The posthuman, whatever it amounts to, will crash our every attempt to ethically understand it. David thinks my pessimism is premature, that ethical cognition, at least, can be knapped/exapted (via minimal notions of agency and value) into something that can leap the breach between human and posthuman. You decide.

Parental Advisory: Contains Grammatical Violence, Excessive Jargon, and Scenes of Conceptual Nudity.

Snake? I heard he was dead…

by rsbakker

Happy Holidays, all. I came down with the flu a couple days before Christmas, turned me into a pile of lumber, then on Christmas Eve the plumbing backed up, first in the bathroom, then in the kitchen, then in both, meaning, I belatedly realized, in the sewer line, reminding me of the wash-drain in the ‘mudroom’ floor, which was spouting raw sewage like the White Whale, so I grabbed a potato and dove upon the spout the way a braver soldier than I might dive upon a grenade, then jammed the potato into the spout, ripping my thumbnail half off as I did so, then cleaned shivering for fever, before going out to the garage to grab my thirty foot snake, which I dragged into the basement, where I popped the sewer line access, and proceeded to snake, hauling out human feces, which of course whisked across me because the fucking thing was a minion of the Unholy Consult, after which I went upstairs, had a second mudroom holocaust realizing my snaking had been in vain, then discussed the possibility of having to cancel the big family dinner on Christmas Day with my wife, after which I resumed snaking, hauling more mire, even a washcloth, but to no avail, again and again into the wee of night, until finally, damp with effluent, I went to bed, carefully for some retarded reason, as if that would save the linens indignity or something, and I breathed, and I closed my eyes, and I heard my daughter scream, “Santa was here! Santa was here!” at which point, I got up and did not simply make joyous, but was joyous, glazed in my family’s excrement and overwhelmed with fucking gratitude, and my wife texted everyone asking who could host dinner in our stead, and my brother-in-law called back saying he just happened to have (it’s a complicated story) a hundred foot power snake in his basement, a miracle (because let’s face it) that I loaded into the back of my van mere minutes later, and used with his help to pierce the cocksucking Clog of Doom, and my wife drove him home, and I went upstairs and had a long hot shower, crawled into bed, felt like something clasped between palms held in prayer… my every surface thankful, and woke up to a glowing house booming with laughter.

Now that’s a fucking Christmas story, I think.  Of course, the flu just hammered me the next morning. Viruses eat happy endings the same as everything else.

It Is What It Is (Until Notified Otherwise)

by rsbakker

The thing to always remember when one finds oneself in the middle of some historically intractable philosophical debate is that path-dependency is somehow to blame. This is simply to say that the problem is historical in that squabbles regarding theoretical natures always arise from some background of relatively problem-free practical application. At some point, some turn is taken and things that seem trivially obvious suddenly seem stupendously mysterious. St. Augustine, in addition to giving us one of the most famous quotes in philosophy, gives us a wonderful example of this in The Confessions when he writes:

“What, then, is time? If no one asks of me, I know; if I wish to explain to him who asks, I know not.” XI, XIV, 17

But the rather sobering fact is that this is the case with a great number of the second-order questions we can pose. What is mathematics? What’s a rule? What’s meaning? What’s cause? And of course, what is phenomenal consciousness?

So what is it with second-order interrogations? Why is ‘time talk’ so easily and effortlessly used even though we find ourselves gobsmacked each and every time someone asks what time qua time is? It seems pretty clear that either we lack the information required or the capacity required or some nefarious combination of both. If framing the problem like this sounds like a no-brainer, that’s because it is a no-brainer. The remarkable thing lies in the way it recasts the issue at stake, because as it turns out, the question of the information and capacity we have available is a biological one, and this provides a cognitive ecological means of tackling the problem. Since practical solving for time (‘timing’) is obviously central to survival, it makes sense that we would possess the information access and cognitive capacity required to solve a wide variety of timing issues. Given that theoretical solving for time (qua time) isn’t central to survival (no species does it and only our species attempts it), it makes sense that we wouldn’t possess the information access and cognitive capacity required, that we would suffer time-qua-time blindness.

From a cognitive ecological perspective, in other words, St. Augustine’s perplexity should come as no surprise at all. Of course solving time-qua-time is mystifying: we evolved the access and capacity required for solving the practical problems of timing, and not the theoretical problem of time. Now I admit if the cognitive ecological approach ground to a halt here it wouldn’t be terribly illuminating, but there’s quite a bit more to be said: it turns out cognitive ecology is highly suggestive of the different ways we might expect our attempts to solve things like time-qua-time to break down.

What would it be like to reach the problem-solving limits of some practically oriented problem-solving mode? Well, we should expect our assumptions/intuitions to stop delivering answers. My daughter is presently going through a ‘cootie-catcher’ phase and is continually instructing me to ask questions, then upbraiding me when my queries don’t fit the matrix of possible ‘answers’ provided by the cootie-catcher (yes, no, and versions of maybe). Sometimes she catches these ill-posed questions immediately, and sometimes she doesn’t catch them until the cootie-catcher generates a nonsensical response.

Now imagine your child never revealed their cootie-catcher to you: you asked questions, then picked colours or numbers or animals, and it turned out some were intelligibly answered, and some were not. Very quickly you would suss out the kinds of questions that could be asked, and the kinds that could not. Now imagine unbeknownst to you that your child replaced their cootie-catcher with a computer running two separately tasked, distributed AlphaGo type programs, the first trained to provide well-formed (if not necessarily true) answers to basic questions regarding causality and nothing else, the second trained to provide well-formed (if not necessarily true) answers to basic questions regarding goals and intent. What kind of conclusions would you draw, or more importantly, assume? Over time you would come to suss out the questions generating ill-formed answers versus questions generating well-formed ones. But you would have no way of knowing that two functionally distinct systems were responsible for the well-formed answers: causal and purposive modes would seem the product of one cognitive system. In the absence of distinctions you would presume unity.
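
To see how little would betray the dual architecture, consider a crude sketch of the hidden setup (the routing cues and canned answers below are hypothetical stand-ins for the two programs; nothing hangs on the details):

```python
# Two functionally distinct answer systems hidden behind one interface.
# The questioner, seeing only the interface, has no information by which
# to distinguish them, and so presumes a single system: default identity.

CAUSAL_CUES = ("why did", "what caused", "how does")
INTENT_CUES = ("what does", "what is he trying", "why would she")

def causal_module(question: str) -> str:
    return "Because of a prior physical event."   # stand-in for program #1

def intent_module(question: str) -> str:
    return "In order to achieve a goal."          # stand-in for program #2

def answer(question: str) -> str:
    """The only surface the questioner ever sees."""
    q = question.lower()
    if any(cue in q for cue in CAUSAL_CUES):
        return causal_module(q)
    if any(cue in q for cue in INTENT_CUES):
        return intent_module(q)
    return "That question is ill-posed."          # the cootie-catcher shrug

print(answer("Why did the window break?"))   # routed to the causal module
print(answer("What is he trying to do?"))    # routed to the intent module
# Nothing in either output carries information about the dual architecture.
```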

Think of the difference between Plato likening memory to an aviary in the Theaetetus and the fractionate, generative memory we now know to be the case. The fact that Plato assumed as much, unity and retrieval, shouts something incredibly important once placed in a cognitive ecological context. What it suggests is that purely deliberative attempts to solve second-order problems, to ask questions like what is memory-qua-memory, will almost certainly run afoul of the problem of default identity, the identification that comes about for the want of distinctions. To return to our cootie-catcher example, it’s not simply that we would report unity regarding our child’s two AlphaGo type programs the way Plato did with memory, it’s that information involving their dual structure would play no role in our cognitive economy whatsoever. Unity, you could say, is the assumption built into the system. (And this applies as much to AI as it does to human beings. The first ‘driverless fatality’ died because his Tesla Model S failed to distinguish a truck trailer from the sky.)

Default identity, I think, can play havoc with even the most careful philosophical interrogations—such as the one Eric Schwitzgebel gives in the course of rebutting Keith Frankish, both on his blog and in his response in The Journal of Consciousness Studies, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage.”

According to Eric, “Illusionism as a Theory of Consciousness” presents the phenomenal realist with a dilemma: either they commit to puzzling ontological features such as simple, ineffable, intrinsic, or so on, or they commit to explaining those features away, which is to say, some variety of Illusionism. Since Eric believes both that phenomenal consciousness is real and that the extraordinary properties attributed to it are likely not real, he proposes a third way, a formulation of phenomenal experience that neither inflates it into something untenable, nor deflates it into something that is plainly not phenomenal experience. “The best way to meet Frankish’s challenge,” he writes, “is to provide something that the field of consciousness studies in any case needs: a clear definition of phenomenal consciousness, a definition that targets a phenomenon that is both substantively interesting in the way that phenomenal consciousness is widely thought to be interesting but also innocent of problematic metaphysical and epistemological assumptions” (2).

It’s worth noting the upshot of what Eric is saying here: the scientific study of phenomenal consciousness cannot, as yet, even formulate its primary explanandum. The trick, as he sees it, is to find some conceptual way to avoid the baggage, while holding onto some semblance of a wardrobe. And his solution, you might say, is to wear as many outfits as he possibly can. He proposes that definition by example is uniquely suited to anchor an ontologically and epistemologically innocent concept of phenomenal consciousness.

He has but one caveat: any adequate formulation of phenomenal consciousness has to account or allow for what Eric terms its ‘wonderfulness’:

If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition. 3

He concedes the traditional properties ascribed to phenomenal experience outrun naturalistic credulity, but the way it beggars belief remains to be explained. This is the part of Eric’s position to keep an eye on because it means his key defense against eliminativism is abductive. Whatever phenomenal consciousness is, it seems safe to say it is not something easily solved. Any account purporting to solve phenomenal consciousness that leaves the wonderfulness condition unsatisfied is likely missing phenomenal consciousness altogether.

And so Eric provides a list of positive examples including sensory and somatic experiences, conscious imagery, emotional experience, thinking and desiring, dreams, and even other people, insofar as we continually attribute these very same kinds of experiences to them. By way of negative examples, he mentions a variety of intimate, yet obviously not phenomenally conscious processes, such as fingernail growth, intestinal lipid absorption, and so on.

He writes:

Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness. 8

Intuition, the apparent obviousness of his examples, is what he stresses here. The beauty of definition by example is that offering instances of the phenomenon at issue allows you to remain agnostic regarding the properties possessed by that phenomenon. It actually seems to deliver the very metaphysical and epistemological innocence Eric needs to stave off the charge of inflation. It really does allow him to ditch the baggage and travel wearing all his clothes, or so it seems.

Meanwhile the wonderfulness condition, though determining the phenomenon, does so indirectly, via the obvious impact it has on human attempts to cognize experience-qua-experience. Whatever phenomenal consciousness is, contemplating it provokes wonder.

And so the argument is laid out, as spare and elegant as all of Eric’s arguments. It’s pretty clear these are examples of whatever it is we call phenomenal consciousness. Of course, there’s something about them that we find downright stupefying. Surely, he asks, we can be phenomenal realists in this austere respect?

For all its intuitive appeal, the problem with this approach is that it almost certainly presumes a simplicity that human cognition does not possess. Conceptually, we can bring this out with a single question: Is phenomenal consciousness the most folk psychologically obvious thing or feature the examples share, or is it obvious in some other respect? Eric’s claim amounts to saying the recognition of phenomenal consciousness as such belongs to everyday cognition. But is this the case? Typically, recognition of experience-qua-experience is thought to be an intellectual achievement of some kind, a first step toward the ‘philosophical’ or ‘reflective’ or ‘contemplative’ attitude. Shouldn’t we say, rather, that phenomenal consciousness is the most obvious thing or feature these examples share upon reflection, which is to say, philosophically?

This alternative need only be raised to drag Eric’s formulation back into the mire of conceptual definition, I think. But on a cognitive ecological picture, we can actually reframe this conceptual problematization in path-dependent terms, and so more forcefully insist on a distinction of modes and therefore a distinction in problem-solving ecologies. Recall Augustine, how we understand time without difficulty until we ask the question of time qua time. Our cognitive systems have no serious difficulty with timing, but then abruptly break down when we ask the question of time as such. Even though we had the information and capacity required to solve any number of practical issues involving time, as soon as we pose the question of time-qua-time that fluency evaporates and we find ourselves out-and-out mystified.

Eric’s definition by example, as an explicitly conceptual exercise, clearly involves something more than everyday applications of experience talk. The answer intuitively feels as natural as can be—there must be some property X these instances share or exclude, certainly!—but the question strikes most everyone as exceptional, at least until they grow accustomed to it. Raising the question, as Augustine shows us, is precisely where the problem begins, and as my daughter would be quick to remind Eric, cootie-catchers only work if we ask the right question. Human cognition is fractionate and heuristic, after all.

All organisms are immersed in potential information, difference making differences that could spell the difference between life and death. Given the difficulties involved in the isolation of causes, they often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information organisms have, evolved and learned sensitivities to effects systematically correlated to those environmental systems relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers adapted to deep information environments, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible.

We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems and the resources required to solve them are wildly disparate, not all access is equal.

Information access, I think, divides cognition into two distinct forms, two different families of ‘AlphaGo type’ programs. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the necessity, the efficacy, and the limitations of source insensitive (cue correlative) cognition. Andrei Cimpian’s lab and the work of Klaus Fiedler (as well as that of the Adaptive Behaviour and Cognition Research Group more generally) are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)
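
To fix the distinction with a toy case (entirely my own construction, not drawn from any of the research just cited), consider the same problem solved twice, once by tracking the actual constraints, once by exploiting a correlated cue:

```python
# Source sensitive versus source insensitive solving of the same problem:
# predicting whether a dropped object will break.

def source_sensitive(mass_kg: float, height_m: float, toughness_j: float) -> bool:
    """Tracks the actual constraints: impact energy versus material toughness."""
    g = 9.81
    impact_energy = mass_kg * g * height_m  # potential energy at release
    return impact_energy > toughness_j

def source_insensitive(looks_like_glass: bool) -> bool:
    """Exploits a cue correlated with fragility while neglecting the mechanics
    entirely. Cheap, fast, domain-specific, and blind to its own limits: hand
    it a plastic tumbler that merely looks like glass and it fails without any
    signal that it has failed."""
    return looks_like_glass

print(source_sensitive(0.3, 1.5, 2.0))  # True: ~4.4 J of impact energy > 2 J
print(source_insensitive(True))         # True, for reasons the system cannot access
```

The first function stays open to its world: every variable invites further questions about sources. The second answers instantly, but only within the ecology where the cue and the mechanism happen to covary.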

So what are we to make of Eric’s attempt to innocently (folk psychologically) pose the question of experience-qua-experience in light of this rudimentary distinction?

If one takes the brain’s ability to cognize its own cognitive functions as a condition of ‘experience talk,’ it becomes very clear very quickly that experience talk belongs to a source insensitive cognitive regime, a system adapted to exploit correlations between the information consumed (cues) and the vastly complicated systems (oneself and others) requiring solution. This suggests that Eric’s definition by example is anything but theoretically innocent, assuming, as it does, that our source insensitive, experience-talk systems pick out something in the domain of source sensitive cognition… something ‘real.’ Defining by example cues our experience-talk system, which produces indubitable instances of recognition. Phenomenal consciousness becomes, apparently, an indubitable something. Given our inability to distinguish between our own cognitive systems (given ‘cognition-qua-cognition blindness’), default identity prevails; suddenly it seems obvious that phenomenal experience somehow, minimally, belongs to the order of the real. And once again, we find ourselves attempting to square ‘posits’ belonging to sourceless modes of cognition with a world where everything has a source.

We can now see how the wonderfulness condition, which Eric sees working in concert with his definition by example, actually cuts against it. Experience-qua-experience provokes wonder precisely because it delivers us to crash space, the point where heuristic misapplication leads our intuitions astray. Simply by asking this question, we have taken a component from a source insensitive cognitive system, one relying (qua heuristic) on strategic correlations to the systems it solves, and asked a completely different, source sensitive system to make sense of it. Philosophical reflection is a ‘cultural achievement’ precisely because it involves using our brains in new ways, applying ancient tools to novel questions. Doing so, however, inevitably leaves us stumbling around in a darkness we cannot see, running afoul of confounds we have no way of intuiting, simply because they impacted our ancestors not at all. Small wonder ‘phenomenal consciousness’ provokes wonder. How could the most obvious thing possess so few degrees of cognitive freedom? How could light itself deliver us to darkness?

I appreciate the counterintuitive nature of the view I’m presenting here, the way it requires seeing conceptual moves in terms of physical path-dependencies, as belonging to a heuristic gearbox where our numbness to the grinding perpetually convinces us that this time, at long last, we have slipped from neutral into drive. But recall the case of memory, the way blindness to its neurocognitive intricacies led Plato to assume it simple. Only now can we run our (exceedingly dim) metacognitive impressions of memory through the gamut of what we know, and see it as a garden of forking paths. The suggestion here is that posing the question of experience-qua-experience poses a crucial fork in the consciousness studies road, the point where a component of source insensitive cognition, ‘experience,’ finds itself dragged into the court of source sensitivity, and productive inquiry grinds to a general halt.

When I employ experience talk in a practical, first-order way, I have a great deal of confidence in that talk. But when I employ experience talk in a theoretical, second-order way, I have next to no confidence in that talk. Why would I? Why would anyone, given the near-certainty of chronic underdetermination? Even more, I can see no way (short of magic) for our brain to have anything other than radically opportunistic and heuristic contact with its own functions. Either specialized, simple heuristics comprise deliberative metacognition or deliberative metacognition does not exist. In other words, I see no way of avoiding experience-qua-experience blindness.

This flat out means that on a high dimensional view (one open to as much relevant physical information as possible), there is just no such thing as ‘phenomenal consciousness.’ I am forced to rely on experience-related talk in theoretical contexts all the time, as do scientists in countless lines of research. There is no doubt whatsoever that experience-talk draws water from far more than just ‘folk psychological’ wells. But this just means that various forms of heuristic cognition can be adapted to various experimentally regimented cognitive ecologies—experience-talk can be operationalized. It would be strange if this weren’t the case, and it does nothing to alleviate the fact that solving for experience-qua-experience delivers us, time and again, to crash space.

One does not have to believe in the reality of phenomenal consciousness to believe in the reality of the systems employing experience-talk. As we are beginning to discover, the puzzle has never been one of figuring out what phenomenal experiences could possibly be, but rather figuring out the biological systems that employ them. The greater our understanding of this, the greater our understanding of the confounds characterizing that perennial crash space we call philosophy.

Breakneck: Review and Critical Commentary of Whiplash: How to Survive our Faster Future by Joi Ito and Jeff Howe

by rsbakker

The thesis I would like to explore here is that Whiplash by Joi Ito and Jeff Howe is at once a local survival guide and a global suicide manual. Their goal “is no less ambitious than to provide a user’s manual to the twenty-first century” (246), a “system of mythologies” (108) embodying the accumulated wisdom of the storied MIT Media Lab. Since this runs parallel to my own project, I applaud their attempt. Like them, I think understanding the consequences of the ongoing technological revolution demands “an entirely new mode of thinking—a cognitive evolution on the scale of a quadruped learning to stand on its hind feet” (247). I just think we need to recall the number of extinctions that particular evolutionary feat required.

Whiplash was a genuine delight for me to read, and not simply because I’m a sucker for technoscientific anecdotes. At so many points I identified with the collection of misfits and outsiders that populate their tales. So, as an individual who fairly embodies the values promulgated in this book, I offer my own amendments to Ito and Howe’s heuristic source code, what I think is a more elegant and scientifically consilient way to understand not only our present dilemma, but the kinds of heuristics we will need to survive it…

Insofar as that is possible.

 

Emergence over Authority

General Idea: The pace of change assures normative obsolescence, which in turn requires openness to ‘emergence.’

“Emergent systems presume that every individual within that system possesses unique intelligence that would benefit the group.” 47

“Unlike authoritarian systems, which enable only incremental change, emergent systems foster the kind of nonlinear innovation that can react quickly to the kind of rapid changes that characterize the network age.” 48

Problems: Insensitive to the complexities of the accelerating social and technical landscape. The moral here should be: Does this heuristic still apply?

The quote above also points to the larger problem, which becomes clear by simply rephrasing it to read, ‘emergent systems foster the kind of nonlinear transformation that can react quickly to the kind of nonlinear transformations that characterize the network age.’ The problem, in other words, is also the solution. Call this the Putting Out Fire with Gasoline Problem. I wish Ito and Howe had spent some more time considering it since it really is the heart of their strategy: How do we cope with accelerating innovation? We become as quick and innovative as we can.

 

Pull over Push

General Idea: Command and control over warehoused resources lacks the sensitivity to solve many modern problems, which are far better resolved by allowing the problems themselves to attract the solvers.

“In the upside-down, bizarre universe created by the Internet, the very assets on your balance sheet—from printing presses to lines of code—are now liabilities from the perspective of agility. Instead, we should try to use resources that can be utilized just in time, for just that time necessary, then relinquished.” 69

“As the cost of innovation continues to fall, entire communities that have been sidelined by those in power will be able to organize themselves and become active participants in society and government. The culture of emergent innovation will allow everyone to feel a sense of both ownership and responsibility to each other and to the rest of the world, which will empower them to create more lasting change than the authorities who write policy and law.” 71

Problems: In one sense, I think this chapter speaks to the narrow focus of the book, the degree it views the world through IT glasses. Trump examples the power of Pull. ISIS examples the power of Pull. ‘Empowerment’ is usually charged with positive connotations, until one applies it to criminals, authoritarian governments and so on. It’s important to realize that ‘pull’ runs any which way, rather than directly toward better.

 

Compasses over Maps

General Idea: Sensitivity to ongoing ‘facts on the ground’ generally trumps reliance on high-altitude appraisals of yesterday’s landscape.

“Of all the nine principles in the book, compasses over maps has the greatest potential for misunderstanding. It’s actually very straightforward: a map implies a detailed knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.” 89

Problems: I actually agree that this principle is the most apt to be misunderstood because I’m inclined to think Ito and Howe themselves might be misunderstanding it! Once again, we need to see the issue in terms of cognitive ecology: Our ancestors, you could say, suffered a shallow present and enjoyed a deep future. Because the mechanics of their world eluded them, they had no way of re-engineering them, and so they could trust the machinery to trundle along the way it always had. We find ourselves in the opposite predicament: As we master more and more of the mechanics of our world, we discover an ever-expanding array of ways to re-engineer them, meaning we can no longer rely on the established machinery the way our ancestors—and here’s the important bit—evolved to. We are shallow present, deep future creatures living in a deep present, shallow future world.

This, I think, is what Ito and Howe are driving at: just as the old rules (authorities) no longer apply, the old representations (maps) no longer apply either, forcing us to gerrymander (orienteer) our path.

 

Risk over Safety

General Idea: The cost of experimentation has plummeted to such an extent that being wrong no longer has the catastrophic market consequences it once had.

“The new rule, then, is to embrace risk. There may be nowhere else in this book that exemplifies how far our collective brains have fallen behind our technology.” 116

“Seventy million years ago it was great to be a dinosaur. You were a complete package; big, thick-skinned, sharp-toothed, cold-blooded, long-lived. And it was great for a long, long time. Then, suddenly… it wasn’t so great. Because of your size, you needed an awful lot of calories. And you needed an awful lot of room. So you died. You know who outlived you? The frog.” 120

Problems: Essentially the argument is that risky ventures in the old economy are now safe, and that safe ventures are now risky, which means the argument is actually a ‘safety over risk’ one. I find this particular maxim so interesting because I think it throws into relief their lack of any theory of the problem they take themselves to be solving/ameliorating. Really, the moral here is that experimentation pays.


 

Disobedience over Compliance

General Idea: Traditional forms of development stifle the very creativity institutions require to adapt to the accelerating pace of technological change.

“Since the 1970’s, social scientists have recognized the positive impact of “positive deviants,” people whose unorthodox behavior improves their lives and has the potential to improve their communities if it’s adopted more widely.” 141

“The people who will be the most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.” 141

Problems: Disobedience is not critique, and Ito and Howe are careful to point this out, but they fail to mention what role, if any, criticality plays in their list of principles. Another problem has to do with the obvious exception bias at work in their account. Sure, being positive deviants has served Ito and Howe and the generally successful people they count as their ingroup well, but what about the rest of us? This is why I cringe every time I hear Oscar acceptance speeches urging young wannabe thespians to ‘never give up on their dream,’ because winners—who are winners by virtue of being the exception—see themselves as proof positive that it can be done if you just try-try-try… This stuff is what powers the great dream-smashing factory called Hollywood—as well as Silicon Valley. All things being equal, I think being a ‘positive deviant’ is bound to generate far more grief than success.

And this, I think, underscores the fundamental problem with the book, which is the question of application. I like to think of myself as a ‘positive deviant,’ but I’m aware that I am often identified as a ‘contrarian flake’ in the various academic silos I piss in now and again. By opening research ingroups to the wider world, the web immediately requires members to vet communications in a manner they never had to before. The world, as it turns out, is filled with contrarian flakes, so the problem becomes one of sorting positive deviants (like myself (maybe)), extra-institutional individuals with positive contributions to make, from all those contrarian flakes (like myself (maybe)).

Likewise, given that every communal enterprise possesses wilful, impassioned, but unimaginative employees, how does a manager sort the ‘positive deviant’ out?

When does disobedience over compliance apply? This is where the rubber hits the road, I think. The whole point of the (generally fascinating) anecdotes is to address this very issue, but aside from some gut estimation of analogical sufficiency between cases, we really have nothing to go on.

 

Practice over Theory

General Idea: Traditional forms of education and production emphasize planning before, and learning outside, the relevant context of application, when humans are simply not wired for this, and when those contexts are transforming so quickly.

“Putting practice over theory means recognizing that in a faster future, in which change has become a new constant, there is often a higher cost to waiting and planning than there is to doing and improvising.” 159

“The Media Lab is focussed on interest-driven, passion-driven learning through doing. It is also trying to understand and deploy this form of creative learning into a society that will increasingly need more creative learners and fewer human beings who can solve problems better tackled by robots and computers.” 170

Problems: Humans are the gerrymandering species par excellence, leveraging technical skills into more and more forms of environmental mastery. In this respect it’s hard to argue against Ito and Howe’s point, given the caveats they are careful to provide.

The problem lies in the supercomplex environmental consequences of that environmental mastery: Whiplash is advertised as a manual for environmentally mastering the consequences of environmental mastery, so obviously, environmental mastery, technical innovation, ‘progress’—whatever you want to call it—has become a life and death matter, something to be ‘survived.’

The thing people really need to realize in these kinds of discussions is just how far we have sailed into uncharted waters, and just how fast the wind is about to grow.

 

Diversity over Ability

General Idea: Crowdsourcing, basically, the term Jeff Howe coined to refer to the way large numbers of people from a wide variety of backgrounds can generate solutions eluding experts.

“We’re inclined to believe the smartest, best trained people in a given discipline—the experts—are the best qualified to solve a problem in their specialty. And indeed, they often are. When they fail, as they will from time to time, our unquestioning faith in the principle of ‘ability’ leads us to imagine that we need to find a better solver: other experts with similarly high levels of training. But it is in the nature of high ability to reproduce itself—the new team of experts, it turns out, trained at the same amazing schools, institutes, and companies as the previous experts. Similarly brilliant, our two sets of experts can be relied on to apply the same methods to the problem, and share as well the same biases, blind spots, and unconscious tendencies.” 183

Problems: Again I find myself troubled not so much by the moral as by the articulation. If you switch the register from ‘ability’ to competence and consider the way ingroup adjudications of competence systematically perceive outgroup contributions to be incompetent, then you have a better model to work with here, I think. Each of us carries a supercomputer in our heads, and all cognition exhibits path-dependency and is therefore vulnerable to blind alleys, so the power of distributed problem solving should come as no surprise. The problem, here, rather, is one of seeing through our ingroup blinders, and coming to understand how the way we instinctively identify competence forecloses on distributed cognitive resources (which can take innumerable forms).

Institutionalizing diversity seems like a good first step. But what about overcoming ingroup biases more generally? And what about the blind-alley problem (which could be called the ‘double-blind alley problem,’ given the way reviewing the steps taken tends to confirm the necessity of the path taken)? Is there a way to suss out the more pernicious consequences of cognitive path-dependency?

 

Resilience over Strength

General Idea: The reed versus the tree.

Problems: It’s hard to bitch about a chapter beginning with a supercool Thulsa Doom quote.

Strike that—impossible.

 

Systems over Objects

General Idea: Unravelling contemporary problems means unravelling complex systems, which necessitates adopting the systems view.

“These new problems, whether we’re talking about curing Alzheimer’s or learning to predict volatile weather systems, seem to be fundamentally different, in that they seem to require the discovery of all the building blocks in a complex system.” 220

“Systems over objects recognizes that responsible innovation requires more than speed and efficiency. It also requires a constant focus on the overall impact of new technologies, and an understanding of the connections between people, their communities, and their environments.” 224

Problems: Since so much of Three Pound Brain is dedicated to understanding human experience and cognition in naturally continuous terms, I tend to think that ‘Systems over Subjects’ offers a more penetrating approach. The idea that things and events cannot be understood or appreciated in isolation is already firmly rooted in our institutional DNA, I think. The challenge, here, lies in squaring this way of thinking with everyday cognition, with our default ways of making sense of each other and ourselves. We are hardwired to see simple essences and sourceless causes everywhere we look. This means the cognitive ecology Ito and Howe are both describing and advocating is in some sense antithetical—and therefore alienating—to our ancestral ways of making sense of ourselves.


 

Conclusion

When I decided to post a review on this book, I opened an MSWord doc the way I usually do and began jotting down jumbled thoughts and impressions, including the reminder to “Bring up the problem of theorizing politics absent any account of human nature.” I had just finished reading the introduction by that point, so I read the bulk of Whiplash with this niggling thought in the back of my mind. Ito and Howe take care to avoid explicit political references, but as I’m sure they will admit, their project is political through and through. Politics has always involved science fiction; after all, how do you improve a future you can’t predict? Knowing human nature, our need to eat, to secure prestige, to mate, to procreate, and so on, is the only thing that allows us to predict human futures at all. Dystopias beg Utopias beg knowing what makes us tick.

In a time of radical, exponential social and environmental transformation, the primary question regarding human nature has to involve adaptability, our ability to cope with social and environmental transformation. The more we learn about human cognition, however, the more we discover that the human capacity to solve new problems is modular as opposed to monolithic, complex as opposed to simple. This in turn means that transforming different elements in our environments (the way technology does) can have surprising results.

So for example, given the ancestral stability of group sizes, it makes sense to suppose we would assess the risk of victimization against a fixed baseline whenever we encountered information regarding violence. Our ability to intuitively assess threats, in other words, depends upon a specific cognitive ecology, one where the information available is commensurate with the small communities of farmers and/or hunter-gatherers. This suggests the provision of ‘deep’ (ancestrally unavailable) threat information, such as that provided by the web or the evening news, would play havoc with our threat intuitions—as indeed seems to be the case.

Human cognition is heuristic, through and through, which is to say dependent on environmental invariances, the ancestral stability of different relevant backgrounds. The relation between group size and threat information is but one of countless default assumptions informing our daily lives. The more technology transforms our cognitive ecologies, the more we should expect our intuitions to misfire, to prompt ineffective problem-solving behaviour like voting for ‘tough-on-crime’ political candidates. The fact is technology makes things easy that were never ‘meant’ to be easy. Consider how humans depended on all the people they knew before the industrial concentration of production, and so were forced to compromise, to see themselves as requiring friends and neighbours. You could source your clothes, your food, even your stories and religion to some familiar face. You grew up in an atmosphere of ambient, ingroup gratitude that continually counterbalanced your selfish impulses. After the industrial concentration of production, the material dependencies enforcing cooperation evaporated, allowing humans to indulge egocentric intuitions, the sweet-tooth of themselves, and ‘individualism’ was born, and with it all the varieties of social isolation comprising the ‘modern malaise.’

This cognitive ecological lens is the reason why I’ve been warning that the web was likely to aggravate processes of group identification and counter-identification, why I’ve argued that the tactics of 20th century progressivism had actually become more pernicious than efficacious, and suggested that forms of political atavism, even the rise of demagoguery, would become bigger and bigger problems. Where most of the world saw the Arab Spring as a forceful example of the web’s capacity to emancipate, I saw it as an example of ‘flash civil unrest,’ the ability of populations to spontaneously organize and overthrow existing institutional orders period, and only incidentally ‘for the better.’

If you entertained extremist impulses before the internet, you had no choice but to air your views with your friends and neighbours, where, all things being equal, the preponderance of views would be more moderate. The network constraints imposed by geography, I surmised, had the effect of ameliorating extremist tendencies. Absent the difficulty of organizing about our darker instincts, rationalizing and advertising them, I think we have good reason to fear. Humans are tribal through and through, as prone to acts of outgroup violence as ingroup self-sacrifice. On the cognitive ecological picture, it just so happens that technological progress and moral/political progress have marched hand in hand thus far. The bulk of our prosocial, democratic institutions were developed—at horrendous cost, no less—to maximize the ‘better angels’ of our natures and to minimize the worst, to engineer the kind of cognitive ecologies we required to flourish in the new social and technical environments—such as the industrial concentration of material dependency—falling out of the Renaissance and Enlightenment.

I readily acknowledge that better accounts can be found for the social phenomena considered above: what I contend is that all of those accounts will involve some nuanced understanding of the heuristic nature of human cognition and the kinds of ecological invariance it takes for granted. My further contention is that any adequate understanding of that heuristic nature raises the likelihood, perhaps even the inevitability, that human social cognition will effectively break down altogether. The problem lies in the radically heuristic nature of the cognitive modes we use to understand each other and ourselves. Since the complexity of our biocomputational nature renders it intractable, we had to develop ways of predicting/explaining/manipulating behaviour that have nothing to do with the brains behind that behaviour, and everything to do with its impact on our reproductive fortunes. Social problem-solving, in other words, depends on the stability of a very specific cognitive ecology, one entirely innocent to the possibility of AI.

For me, the most significant revelation from the Ashley Madison scandal was the ease with which men were fooled into thinking they were attracting female interest. And this wasn’t just an artifact of the venue: Ito’s MIT colleague Sherry Turkle, in addition to systematically describing the impact of technology on interpersonal relationships, often warns of the ease with which “Darwinian buttons” can be pushed. What makes simple heuristics so powerful is precisely what renders them so vulnerable (and it’s no accident that AI is struggling to overcome this issue now): they turn on cues physically correlated to the systems they track. Break those correlations, and those cues are connected to nothing at all, and we enter Crash Space, the kind of catastrophic cognitive ecological failure that warns away everyone but philosophers.

Virtual and Augmented Reality, or even Vegas magic acts, provide excellent visual analogues. Whether one looks at stereoscopic 3-D systems like Oculus Rift, or the much-ballyhooed ‘biomimetics’ of Magic Leap, or the illusions of David Copperfield, the idea is to cue visual environments that do not exist as effectively and as economically as possible. Goertzel and Levesque and others can keep pounding at the gates of general cognition (which may exist, who knows), but research like that of the late Clifford Nass is laying bare the landscape of cues comprising human social cognition, and given the relative resources required, it seems all but inevitable that the ‘taking to be’ approach, designing AIs focused not so much on being a genuine agent (whatever that is) as on cuing the cognition of one, will sweep the field. Why build Disney World when you can project it? Developers will focus on the illusion, which they will refine and refine until the show becomes (Turing?) indistinguishable from the real thing—from the standpoint of consumers.

The differences being, 1) that the illusion will be perspectivally robust (we will have no easy way of seeing through it); and 2) the illusion will be a sociocognitive one. As AI colonizes more and more facets of our lives, our sociocognitive intuitions will become increasingly unreliable. This prediction, I think, is every bit as reliable as the prediction that the world’s ecosystems will be increasingly disrupted as human activity colonizes more and more of the world. Human social cognition turns access to cues into behaviour solving otherwise intractable biological brains—this is a fact. Algorithms are set to flood this space, to begin cuing social cognition to solve biological brains in the absence of any biological brains. Neil Lawrence likens the consequences to the creation of ‘System Zero,’ an artificial substratum for the System 1 (automatic, unconscious) and System 2 (deliberate, conscious) organization of human cognition. He writes:

“System Zero will come to understand us so fully because we expose to it our inner most thoughts and whims. System Zero will exploit massive interconnection. System Zero will be data rich. And just like an elephant, System Zero will never forget.”

And System Zero will never forget, even as we continue attempting to solve it with systems we evolved to solve one another—a task which is going to remain as difficult as it always has been, and which will likely grow less attractive as fantasy surrogates become increasingly available. Talk about Systems over Subjects! The ecology of human meaning, the shared background allowing us to resolve conflict and to trust, will be progressively exploited and degraded—like every other ancestral ecology on this planet. When I wax grandiloquent (I am a crazy fantasy writer after all), I call this the semantic apocalypse.

I see no way out. Everyone thinks otherwise, but only because the way that human cognition neglects cognitive ecology generates the illusion of unlimited, unconstrained cognitive capacity. And this, I think, is precisely the illusion informing Ito and Howe’s theory of human nature…

Speaking of which, as I said, I found myself wondering what this theory might be as I read the book. I understood I wasn’t the target audience of the book, so I didn’t see the theory’s absence as a failing so much as unfortunate for readers like me, always angling for the hard questions. And so it niggled and niggled, until finally, I reached the last paragraph of the last page and encountered this:

“Human beings are fundamentally adaptable. We created a society that was more focussed on our productivity than our adaptability. These principles will help you prepare to be flexible and able to learn the new roles and to discard them when they don’t work anymore. If society can survive the initial whiplash when we trade our running shoes for a supersonic jet, we may yet find that the view from the jet is just what we’ve been looking for.” 250

This first claim, uplifting as it sounds, is simply not true. Human beings, considered individually or collectively, are not capable of adapting to any circumstance. Intuitions systematically misfire all the time. I appreciate how believing as much balms the conscience of those in the innovation game, but it is simply not true. And how could it be, when it entails that humans somehow transcend ecology, which is a far different claim than saying humans, relative to other organisms, are capable of spanning a wide variety of ecologies? So long as human cognition is heuristic, it depends on environmental invariances, like everything else biological. Humans are not capable of transcending system, which is precisely why we need to think the human in systematic terms, and to look at the impact of AI ecologically.

What makes Whiplash such a valuable book (aside from the entertainment factor) is that it is ecologically savvy. Ito and Howe’s dominant metaphor is that of adaptation and ecology. The old business habitat, they argue, has collapsed, leaving old business animals in the ecological lurch. The solution they offer is heuristic, a set of maxims meant to transform (at a sub-ideological level no less!) old business animals into newer, more adaptable ones. The way to solve the problem of innovation uncertainty is to contribute to that problem in the right way—be more innovative. But they fail to consider the ecological dimensions of this imperative, to see how feeding acceleration amounts to the inevitable destruction of cognitive ecologies, how the old meaning habitat is already collapsing, leaving old meaning animals in the ecological lurch, grasping for lies because those, at least, they can recognize.

They fail to see how their local survival guide likely doubles as a global suicide manual.


 

PS: The Big Picture

“In the past twenty-five years,” Ito and Howe write, “we have moved from a world dominated by simple systems to a world beset and baffled by complex systems” (246). This claim caught my attention because it is both true and untrue, depending on how you look at it. We are pretty much the most complicated thing we know of in the universe, so it’s certainly not the case that we’ve ever dwelt in a world dominated by simple systems. What Ito and Howe are referring to, of course, is our tools. We are moving from a world dominated by simple tools to a world beset and baffled by complex ones. Since these tools facilitate tool-making, we find the great ratchet that lifted us out of the hominid fog clicking faster and faster and faster.

One of these ‘simple tools’ is what we call a ‘company’ or ‘business,’ an institution itself turning on the systematic application of simple tools, ones that intrinsically value authority over emergence, push over pull, maps over compasses, safety over risk, compliance over disobedience, theory over practice, ability over diversity, strength over resilience, and objects over systems. In the same way the simplicity of our physical implements limited the damage they could do to our physical ecologies, the simplicity of our cognitive tools limited the damage they could do to our cognitive ecology. It’s important to understand that the simplicity of these tools is what underwrites the stability of the underlying cognitive ecology. As the growing complexity and power of our physical tools intensified the damage done to our physical ecologies, the growing complexity and power of our cognitive tools is intensifying the damage done to our cognitive ecologies.

Now, two things. First, this analogy suggests that not all is hopeless, that the same way we can use the complexity and power of our physical tools to manage and prevent the destruction of our physical environment, we should be able to use the complexity and power of our cognitive tools to do the same. I concede the possibility, but I think the illusion of noocentrism (the cognitive version of geocentrism) is simply too profound. I think people will endlessly insist on the freedom to concede their autonomy. System Zero will succeed because it will pander ever so much better than a cranky old philosopher could ever hope to.

Second, notice how this analogy transforms the nature of the problem confronting that old animal, business, in the light of radical ecological change. Ancestral human cognitive ecology possessed a shallow present and a deep future. For all his ignorance, a yeoman chewing his calluses in the field five hundred years ago could predict that his son would possess a life very much resembling his own. All the obsolete items that Ito and Howe consider are artifacts of a shallow present. When the world is a black box, when you have no institutions like science bent on the systematic exploration of solution space, the solutions happened upon are generally lucky ones. You hold onto the tools you trust, because it’s all guesswork otherwise and the consequences are terminal. Authority, Push, Compliance, and so on are all heuristics in their own right, all ways of dealing with supercomplicated systems (bunches of humans), but selected for cognitive ecologies where solutions were both precious and abiding.

Oh, how things have changed. Ambient information sensitivity, the ability to draw on everything from internet search engines, to Big Data, to scientific knowledge more generally, means that businesses have what I referred to earlier as a deep present, a vast amount of information and capacity to utilize in problem solving. This allows them to solve systems as systems (the way science does) and abandon the limitations of not only object thinking, but (and this is the creepy part) subject thinking as well. It allows them to correct for faulty path-dependencies by distributing problem-solving among a diverse array of individuals. It allows them to rationalize other resources as well, to pull what they need when they need it rather than pushing warehoused resources.

Growing ambient information sensitivity means growing problem-solving economy—the problem is that this economy means accelerating cognitive ecological transformation. The cheaper optimization becomes, the more transient it becomes, simply because each and every new optimization transforms, in ways large or small but generally unpredictable, the ecology (the network of correlations) prior heuristic optimizations require to be effective. Call this the Optimization Spiral.
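
For the algorithmically inclined, the Spiral can be sketched as a toy simulation. Everything below (the drift rate, the fitness measure, the number of rounds) is invented purely for illustration: each ‘optimization’ is tuned to the ecology as it stands at adoption, and each subsequent optimization perturbs that very ecology, eroding the fit of everything adopted before it.

```python
import random

random.seed(1)
DIMS = 8  # the environmental invariants a heuristic can be tuned to (invented)

def fitness(heuristic, env):
    """Fit = 1 minus the mean mismatch between tuning and environment."""
    return 1 - sum(abs(h - e) for h, e in zip(heuristic, env)) / DIMS

env = [random.random() for _ in range(DIMS)]
adopted = []

for rnd in range(1, 31):
    adopted.append((rnd, list(env)))   # new optimization, tuned to the current ecology
    for i in range(DIMS):              # ...which itself transforms that ecology
        env[i] = min(1.0, max(0.0, env[i] + random.uniform(-0.1, 0.1)))

for born, h in adopted[::10]:          # sample rounds 1, 11, and 21
    print(f"optimization from round {born:2d}: fitness today = {fitness(h, env):.2f}")
# On average, the older the optimization, the worse its fit to the drifted
# ecology: transience falls out of the process itself, and the faster the
# rounds come, the shorter each optimization's useful life.
```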

This is the process Ito and Howe are urging the business world to climb aboard, to become what might be called meta-ecological institutions, entities designed in the first instance, not to build cars or to mediate social relations or to find information on the web, but to evolve. As an institutionalized bundle of heuristics, a business’s ability to climb the Optimization Spiral, to survive accelerating ecological change, turns on its ability to relinquish the old while continually mimicking, tinkering, and birthing with the new. Thus the value of disobedience and resilience and practical learning: what Ito and Howe are advocating is more akin to the Cambrian Explosion or the rise of angiosperms than simply surviving extinction. The meta-heuristics they offer, the new guiding mythologies, are meant to encapsulate the practical bases of evolvability itself… They’re teaching ferns how to grow flowers.

And stepping back to take the systems view they advocate, one cannot but feel an admixture of awe and terror, and wonder if they aren’t sketching the blueprint for an entirely unfathomable order of life, something simultaneously corporate and corporeal.

Real Systems

by rsbakker

THE ORDER WHICH IS THERE

Now I’ve never had any mentors; my path has been too idiosyncratic, for the better, since I think it’s the lack of institutional constraints that has allowed me to experiment the way I have. But if I were pressed to name any spiritual mentor, Daniel Dennett would be the first name to cross my lips—without the least hesitation. Nevertheless, I see the theoretical jewel of his project, the intentional stance, as the last gasp of what will one day, I think, count as one of humanity’s great confusions… and perhaps the final one to succumb to science.

A great many disagree, of course, and because I’ve been told so many times to go back to “Real Patterns” to discover the error of my ways, I’ve decided I would use it to make my critical case.

Defenders of Dennett (including Dennett himself) are so quick to cite “Real Patterns,” I think, because it represents his most sustained attempt to situate his position relative to his fellow philosophical travelers. At issue is the reality of ‘intentional states,’ and how the traditional insistence on some clear-cut binary answer to this question—real/unreal—radically underestimates the ontological complexity characterizing both everyday life and the sciences. What he proposes is “an intermediate doctrine” (29), a way of understanding intentional states as real patterns.

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. My aim on this occasion is not so much to prove that my intermediate doctrine about the reality of psychological states is right, but just that it is quite possibly right, because a parallel doctrine is demonstrably right about some simpler cases. 29

So what does he mean by ‘real patterns’? Dennett begins by considering a diagram with six rows of five black boxes each, characterized by varying degrees of noise, so extreme in some cases as to completely obscure the boxes. He then, following the grain of his characteristic genius, provides a battery of different ways these series might find themselves used.

This crass way of putting things—in terms of betting and getting rich—is simply a vivid way of drawing attention to a real, and far from crass, trade-off that is ubiquitous in nature, and hence in folk psychology. Would we prefer an extremely compact pattern description with a high noise ratio or a less compact pattern description with a lower noise ratio? Our decision may depend on how swiftly and reliably we can discern the simple pattern, how dangerous errors are, how much of our resources we can afford to allocate to detection and calculation. These “design decisions” are typically not left to us to make by individual and deliberate choices; they are incorporated into the design of our sense organs by genetic evolution, and into our culture by cultural evolution. The product of this design evolution process is what Wilfrid Sellars calls our manifest image, and it is composed of folk physics, folk psychology, and the other pattern-making perspectives we have on the buzzing blooming confusion that bombards us with data. The ontology generated by the manifest image has thus a deeply pragmatic source. 36

The moral is straightforward: the kinds of patterns that data sets yield are both perspectival and pragmatic. In each case, the pattern recognized is quite real, but bound upon some potentially idiosyncratic perspective possessing some potentially idiosyncratic needs.
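
Dennett cashes the reality of a pattern out, roughly, in terms of compression: a pattern exists in some data if there is a description of the data more efficient than the bit map. A minimal sketch of his trade-off, using zlib as a crude stand-in for ‘efficiency of description’ (the bar-code string and the noise levels are my inventions, not Dennett’s):

```python
import random
import zlib

random.seed(0)

def barcode(n, p_noise, pattern="111110000011111000001111100000"):
    """Repeat a Dennett-style 'bar code' row, flipping each bit with
    probability p_noise."""
    s = (pattern * (n // len(pattern) + 1))[:n]
    return "".join(b if random.random() > p_noise else str(1 - int(b))
                   for b in s)

def ratio(s):
    """Compressed size over raw size: well below 1 means some description
    of the data beats the bit map."""
    return len(zlib.compress(s.encode(), 9)) / len(s)

for name, data in [("clean", barcode(3000, 0.0)),
                   ("noisy", barcode(3000, 0.05)),
                   ("random", "".join(random.choice("01") for _ in range(3000)))]:
    print(f"{name:6s} compression ratio: {ratio(data):.2f}")
# The cleaner the pattern, the more compressible the data; as noise grows,
# the compact description buys less and less, shading into mere bit map.
```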

He then takes this moral to Conway’s Game of Life, a computer program where cells in a grid are switched on or off in successive turns depending on the number of adjacent cells switched on. The marvelous thing about this program lies in the kinds of dynamic complexities arising from this simple template and single rule, subsystems persisting from turn to turn, encountering other subsystems with predictable results. Despite the determinism of this system, patterns emerge that only the design stance seems to adequately capture, a level possessing “its own language, a transparent foreshortening of the tedious descriptions one could give at the physical level” (39).
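
For readers who have never watched the Game run, here is a minimal implementation, a sketch assuming the standard birth/survival rule; the glider is the stock example of a persisting subsystem, not one of Dennett’s own figures:

```python
from collections import Counter

def step(live):
    """One generation of Conway's rule: a cell is live next turn iff it has
    exactly 3 live neighbours, or exactly 2 while already live."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
for gen in range(5):
    print(f"gen {gen}: {sorted(cells)}")
    cells = step(cells)
# After four generations the same five-cell shape reappears, displaced one
# cell diagonally: a higher-level pattern riding a fixed micro-physics.
```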

For Dennett, the fact that one can successfully predict via the design stance clearly demonstrates that it’s picking out real patterns somehow. He asks us to imagine transforming the Game into a supersystem played out on a screen miles wide and using the patterns picked out to design a Turing Machine playing chess against itself. Here, Dennett argues, the determinacy of the microphysical picture is either intractable or impracticable, yet we need only take up a chess stance or a computational stance to make, from a naive perspective, stunning predictions as to what will happen next.

And this is of course as true of real life as it is of the Game of Life: “Predicting that someone will duck if you throw a brick at him is easy from the folk-psychological stance; it is and will always be intractable if you have to trace the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42). His supersized Game of Life, in other words, makes plain the power and the limitations of heuristic cognition.

This brings him to his stated aim of clarifying his position vis-à-vis his confreres and Fodor. As he points out, everyone agrees there’s some kind of underlying “order which is there,” as Anscombe puts it in Intention. The million-dollar question, of course, is what this order amounts to:

Fodor and others have claimed that an interior language of thought is the best explanation of the hard edges visible in “propositional attitude psychology.” Churchland and I have offered an alternative explanation of these edges… The process that produces the data of folk psychology, we claim, is one in which the multidimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision, thanks to the discreteness of words. 44-45

So for traditional realists, like Fodor, the structure beliefs evince in reflection and discourse expresses the structure beliefs must possess in the head. For Dennett, on the other hand, the structure beliefs evince in reflection and discourse expresses, among other things, the structure of reflection and discourse. How could it be otherwise, he asks, given the ‘stupendous scale of compression’ (42) involved?

As Haugeland points out in “Pattern and Being,” this saddles Dennett’s account of patterns with a pretty significant ambiguity: if the patterns characteristic of intentional states express the structure of reflection and discourse, then the ‘order which is there’ must be here as well. Of course, this much is implicit in Dennett’s preamble: the salience of certain patterns depends on the perspective we possess on them. But even though this implicit ‘here-there holism’ becomes all but explicit when Dennett turns to Radical Translation and the distinction between his and Davidson’s views, his emphasis nevertheless remains on the order out there. As he writes:

Davidson and I both like Churchland’s alternative idea of propositional-attitude statements as indirect “measurements” of a reality diffused in the behavioral dispositions of the brain (and body). We think beliefs are quite real enough to call real just so long as belief talk measures these complex behavior-disposing organs as predictively as it does. 45-46

Rhetorically (even diagrammatically if one takes Dennett’s illustrations into account), the emphasis is on the order there, while here is merely implied as a kind of enabling condition. Call this the ‘epistemic-ontological ambiguity’ (EOA). On the one hand, it seems to make eminent sense to speak of patterns visible only from certain perspectives and to construe them as something there, independent of any perspective we might take on them. But on the other hand, it seems to make jolly good sense to speak of patterns visible only from certain perspectives and to construe them as around here, as something entirely dependent on the perspective we find ourselves taking. Because of this, it seems pretty fair to ask Dennett which kind of pattern he has in mind here. To speak of beliefs as dispositions diffused in the brain seems to pretty clearly imply the first. To speak of beliefs as low dimensional, communicative projections, on the other hand, seems to clearly imply the latter.

Why this ambiguity? Do the patterns underwriting belief obtain in individual believers, dispositionally diffused as he says, or do they obtain in the communicative conjunction of witnesses and believers? Dennett promised to give us ‘parallel examples’ warranting his ‘intermediate realism,’ but by simply asking the whereabouts of the patterns, whether we will find them primarily out there as opposed to around here, we quickly realize his examples merely recapitulate the issue they were supposed to resolve.

 

THE ORDER AROUND HERE

Welcome to crash space. If I’m right then you presently find yourself strolling through a cognitive illusion generated by the application of heuristic capacities outside their effective problem ecology.

Think of how curious the EOA is. The familiarity of it should be nothing short of gobsmacking: here, once again we find ourselves stymied by the same old dichotomies: here versus there, inside versus outside, knowing versus known. Here, once again we find ourselves trapped in the orbit of the great blindspot that still, after thousands of years, stumps the wise of the world.

What the hell could be going on?

Think of the challenge facing our ancestors attempting to cognize their environmental relationships for the purposes of communication and deliberate problem-solving. The industrial scale of our ongoing attempt to understand as much demonstrates the intractability of that relationship. Apart from our brute causal interactions, our ability to cognize our cognitive relationships is source insensitive through and through. When a brick is thrown at us, “the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42) all go without saying. In other words, the whole system enabling cognition of the brick throwing is neglected, and only information relevant to ancestral problem-solving—in this case, brick throwing—finds its way to conscious broadcast.

In ancestral cognitive ecologies, our high-dimensional (physical) continuity with nature mattered as much as it matters now, but it quite simply did not exist for them. They belonged to any number of natural circuits across any number of scales, and all they had to go on was the information that mattered (disposed them to repeat and optimize behaviours) given the resources they possessed. Just as Dennett argues, human cognition is heuristic through and through. We have no way of cognizing our position within any number of the superordinate systems science has revealed in nature, so we have to make do with hacks, subsystems allowing us to communicate and troubleshoot our relation to the environment while remaining almost entirely blind to it. About talk belongs to just such a subsystem, a kluge communicating and troubleshooting our relation to our environments absent cognition of our position in larger systems. As I like to say, we’re natural in such a way as to be incapable of cognizing ourselves as natural.

About talk facilitates cognition and communication of our worldly relation absent any access to the physical details of that relation. And as it turns out, we are that occluded relation’s most complicated component—we are the primary thing neglected in applications of about talk. As the thing most neglected, we are the thing most presumed, the invariant background guaranteeing the reliability of about talk (this is why homuncular arguments are so empty). This combination of cognitive insensitivity to and functional dependence upon the machinations of cognition (what I sometimes refer to as medial neglect) suggests that about talk would be ideally suited to communicating and troubleshooting functionally independent systems, processes generally insensitive to our attempts to cognize them. This is because the details of cognition make no difference to the details cognized: the automatic distinction about talk draws between cognizing system and system cognized poses no impediment to understanding functionally independent systems. As a result, we should expect about talk to be relatively unproblematic when it comes to communicating and troubleshooting things ‘out there.’

Conversely, we should expect about talk to generate problems when it comes to communicating and troubleshooting functionally dependent systems, processes somehow sensitive to our attempts to cognize them. Consider ‘observer effects,’ the problem researchers themselves pose when their presence or their tools/techniques interfere with the process they are attempting to study. Given medial neglect, the researchers themselves always constitute a black box. In the case of systems functionally sensitive to the activity of cognition, as is often the case in psychology and particle physics, understanding the system requires we somehow obviate our impact on the system. As the interactive, behavioural components of cognition show, we are in fact quite good (though far from perfect) at inserting and subtracting our interventions in processes. But since we remain a black box, since our position in the superordinate systems formed by our investigations remains occluded, our inability to extricate ourselves, to gerrymander functional independence, say, undermines cognition.

Even if we necessarily neglect our positions in superordinate systems, we need some way of managing the resulting vulnerabilities, to appreciate that patterns may be artifacts of our position. This suggests one reason, at least, for the affinity of mechanical cognition and ‘reality.’ The more our black box functions impact the system to be cognized, the less cognizable that system becomes in source sensitive terms. We become an inescapable source of noise. Thus our intuitive appreciation of the need for ‘perspective,’ to ‘rise above the fray’: The degree to which a cognitive mode preserves (via gerrymandering if not outright passivity) the functional independence of a system is the degree to which that cognitive mode enables reliable source sensitive cognition is the degree to which about talk can be effectively applied.

The deeper our entanglements, on the other hand, the more we need to rely on source insensitive modes of cognition to cognize target systems. Even if our impact renders the isolation of source signals impossible, our entanglement remains nonetheless systematic, meaning that any number of cues correlated in any number of ways to the target system can be isolated (which is really all ‘radical translation’ amounts to). Given that metacognition is functionally entangled by definition, it becomes easy to see why the theoretical question of cognition causes about talk to crash in the spectacular ways it does: our ability to neglect the machinations of cognition (the ‘order which is here’) is a boundary condition for the effective application of ‘orders which are there’—or seeing things as real. Systems adapted to work around the intractability of our cognitive nature find themselves compulsively applied to the problem of our cognitive nature. We end up creating a bestiary of sourceless things, things that, thanks to the misapplication of the aboutness heuristic, have to belong to some ‘order out there,’ and yet cannot be sourced like anything else out there… as if they were unreal.

The question of reality cues the application of about talk, our source insensitive means of communicating and troubleshooting our cognitive relation to the world. For our ancient ancestors, who lacked the means to distinguish between source sensitive and source insensitive modes of cognition, asking, ‘Are beliefs real?’ would have sounded insane. Heuristic Neglect Theory (HNT), in fact, provides a straightforward explanation for what might be called our ‘default dogmatism,’ our reflex for naive realism: not only do we lack any sensitivity to the mechanics of cognition, we lack any sensitivity to this insensitivity. This generates the persistent illusion of sufficiency, the assumption (regularly observed in different psychological phenomena) that the information provided is all the information there is.

Cognition of cognitive insufficiency always requires more resources, more information. Sufficiency is the default. This is what makes the novel application of some potentially ‘good trick,’ as Dennett would say, such tricky business. Consider philosophy. At some point, human culture acquired the trick of recruiting existing metacognitive capacities to explain the visible in terms of the invisible in unprecedented (theoretical) ways. Since those metacognitive capacities are radically heuristic, specialized consumers of select information, we can suppose retasking those capacities to solve novel problems—as philosophers do when they, for instance, ‘ponder the nature of knowledge’—would run afoul of some pretty profound problems. Even if those specialized metacognitive consumers possessed the capacity to signal cognitive insufficiency, we can be certain the insufficiency flagged would be relative to some adaptive problem-ecology. Blind to the heuristic structure of cognition, the first philosophers took the sufficiency of their applications for granted, much as very many do now, despite the millennia of prior failure.

Philosophy inherited our cognitive innocence and transformed it, I would argue, into a morass of competing cognitive fantasies. But if it failed to grasp the heuristic nature of much cognition, it did allow, as if by delayed exposure, a wide variety of distinctions to blacken the photographic plate of philosophical reflection—those between is and ought, fact and value, among them. The question, ‘Are beliefs real?’ became more a bona fide challenge than a declaration of insanity. Given insensitivity to the source insensitive nature of belief talk, however, the nature of the problem entirely escaped them. Since the question of reality cues the application of about talk, source insensitive modes of cognition struck them as the only game in town. Merely posing the question springs the trap (for as Dennett says, selecting cues is “typically not left to us to make by individual and deliberate choices” (36)). And so they found themselves attempting to solve the hidden nature of cognition via the application of devices adapted to ignore hidden natures.

Dennett runs into the epistemic-ontological ambiguity because the question of the reality of intentional states cues the about heuristic out of school, cedes the debate to systems dedicated to gerrymandering solutions absent high-dimensional information regarding our cognitive predicament—our position within superordinate systems. Either beliefs are out there, real, or they’re in here, merely, an enabling figment of some kind. And as it turns out, IST is entirely amenable to this misapplication, in that ‘taking the intentional stance’ involves cuing the about heuristic, thus neglecting our high-dimensional cognitive predicament. On Dennett’s view, recall, an intentional system is any system that can be predicted/explained/manipulated via the intentional stance. Though the hidden patterns can only be recognized from the proper perspective, they are there nonetheless, enough, Dennett thinks, to concede them reality as intentional systems.

HNT allows us to see how this amounts to mistaking a CPU for a PC. On HNT, the trick is to never let the superordinate systems enabling and necessitating intentional cognition out of view. Recall the example of the gaze heuristic from my prior post, how fielders essentially insert—functionally entangle—themselves into the pop fly system to let the ball itself guide them in. The same applies to beliefs. When your tech repairs your computer, you have no access to her personal history, the way thousands of hours have knapped her trouble-shooting capacities, and even less access to her evolutionary history, the way continual exposure to problematic environments has sculpted her biological problem-solving capacities. You have no access, in other words, to the vast systems of quite natural relata enabling her repair. The source sensitive story is unavailable, so you call her ‘knowledgeable’ instead; you presume she possesses something—a fetish, in effect—with the sourceless efficacy to explain her almost miraculous ability to make your PC run: a mass of true beliefs (representations) regarding personal computer repair. You opt for a source insensitive means that correlates with her capacities well enough to neglect the high-dimensional facts—the natural and personal histories—underwriting her ability.
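
Since the gaze heuristic does so much work here, it merits a concrete illustration. The following is a toy simulation, and everything in it (the projectile numbers, the sprint cap, the proportional controller) is my own confabulation rather than anything from the fielding literature: the point is only that the fielder never computes the trajectory; once the ball begins to fall she locks her gaze angle and runs so as to hold it, letting the ball itself steer her in.

```python
import math

DT, G, TOP_SPEED = 0.01, 9.8, 9.0       # time step, gravity, sprint cap (invented)
bx, bz, vx, vz = 0.0, 1.5, 14.0, 17.0   # ball position (m) and velocity (m/s)
fx, target = 40.0, None                 # fielder position; locked gaze angle

while True:
    bx, vz = bx + vx * DT, vz - G * DT  # ball: simple projectile motion
    bz += vz * DT
    if bz <= 0:
        break                           # the ball has landed
    angle = math.atan2(bz, fx - bx)     # elevation of the fielder's gaze
    if vz < 0:                          # the ball has started to fall...
        if target is None:
            target = angle              # ...so lock in the current gaze angle
        # Servo on the locked angle: ball riding too high, back up;
        # sinking too low, close in. No trajectory is ever computed.
        speed = max(-TOP_SPEED, min(TOP_SPEED, 40.0 * (angle - target)))
        fx += speed * DT

print(f"ball lands near x = {bx:.1f} m; the fielder stands at x = {fx:.1f} m")
```

The point is ecological: the fielder’s ‘competence’ is half in her head and half in the invariances of ballistics and the visual scene. Delete those invariances and the competence evaporates.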

So then where does the ‘real pattern’ gainsaying the reality of belief lie? The realist would say in the tech herself. This is certainly what our (heuristic) intuitions tell us in the first instance. But as we saw above, squaring sourceless entities in a world where most everything has a source is no easy task. The instrumentalist would say in your practices. This certainly lets us explain away some of the peculiarities crashing our realist intuitions, but at the cost of other, equally perplexing problems (this is crash space, after all). As one might expect, substituting the use heuristic for the about heuristic merely passes the hot potato of source insensitivity. ‘Pragmatic functions’ are no less difficult to square with the high-dimensional than beliefs.

But it should be clear by now that the simple act of pairing beliefs with patterns amounts to jumping the same ancient shark. The question, ‘Are beliefs real?’ was a no-brainer for our preliterate ancestors simply because they lived in a seamless shallow information cognitive ecology. Outside their local physics, the sources of things eluded them altogether. ‘Of course beliefs are real!’ The question was a challenge for our philosophical ancestors because they lived in a fractured shallow information ecology. They could see enough between the cracks to appreciate the potential extent and troubling implications of mechanical cognition, its penchant to crash our shallow (ancestral) intuitions. ‘It has to be real!’

With Dennett, entire expanses of our shallow information ecology have been laid low and we get, ‘It’s as real as it needs to be.’ He understands the power of the about heuristic, how ‘order out there’ thinking effects any number of communicative solutions—thus his rebuttal of Rorty. He understands, likewise, the power of the use heuristic, how ‘order around here’ thinking effects any number of communicative solutions—thus his rebuttal of Fodor. And most importantly, he understands the error of assuming the universal applicability of either. And so he concludes:

Now, once again, is the view I am defending here a sort of instrumentalism or a sort of realism? I think that the view itself is clearer than either of the labels, so I shall leave that question to anyone who stills find [sic] illumination in them. 51

What he doesn’t understand is how it all fits together—and how could he, when IST strands him with an intentional theorization of intentional cognition, a homuncular or black box understanding of our contemporary cognitive predicament? This is why “Real Patterns” both begins and ends with EOA, why we are no closer to understanding why such ambiguity obtains at all. How are we supposed to understand how his position falls between the ‘ontological dichotomy’ of realism and instrumentalism when we have no account of this dichotomy in the first place? Why the peculiar ‘bi-stable’ structure? Why the incompatibility between them? How can the same subject matter evince both? Why does each seem to inferentially beg the other?

 

THE ORDER

The fact is, Dennett was entirely right to eschew outright realism or outright instrumentalism. This hunch of his, like so many others, was downright prescient. But the intentional stance only allows him to swap between perspectives. As a one-time adherent I know first-hand the theoretical versatility IST provides, but the problem is that explanation is what is required here.

HNT argues that simply interrogating the high-dimensional reality of belief, the degree to which it exists out there, covers over the very real system—the cognitive ecology—explaining the nature of belief talk. Once again, our ancestors needed some way of communicating their cognitive relations absent source-sensitive information regarding those relations. The homunculus is a black box precisely because it cannot source its own functions, merely track their consequences. The peculiar ‘here dim’ versus ‘there bright’ character of naive ontological or dogmatic cognition is a function of medial neglect, our gross insensitivity to the structure and dynamics of our cognitive capacities. Epistemic or instrumental cognition comes with learning from the untoward consequences of naive ontological cognition—the inevitable breakdowns. Emerging from our ancestral, shallow information ecologies, the world was an ‘order there’ world simply because humanity lacked the ability to discriminate the impact of ‘around here.’ The discrimination of cognitive complexity begets intuitions of cognitive activity, undermines our default ‘out there’ intuitions. But since ‘order there’ is the default and ‘around here’ the cognitive achievement, we find ourselves in the peculiar position of apparently presuming ‘order there’ when making ‘around here’ claims. Since ‘order there’ intuitions remain effective when applied in their adaptive problem-ecologies, we find speculation splitting along ‘realist’ versus ‘anti-realist’ lines. Because no one has any inkling of any of this, we find ourselves flipping back and forth between these poles, taking versions of the same obvious steps to trod the same ancient circles. Every application is occluded, and so ‘transparent,’ as well as an activity possessing consequences.

Thus EOA… as well as an endless parade of philosophical chimeras.

Isn’t this the real mystery of “Real Patterns,” the question of how and why philosophers find themselves trapped on this rickety old teeter-totter? “It is amusing to note,” Dennett writes, “that my analogizing beliefs to centers of gravity has been attacked from both sides of the ontological dichotomy, by philosophers who think it is simply obvious that centers of gravity are useful fictions, and by philosophers who think it is simply obvious that centers of gravity are perfectly real” (27). Well, perhaps not so amusing: short of solving this mystery, Dennett has no way of finding the magic middle he seeks in this article—the middle of what? IST merely provides him with the means to recapitulate EOA and gesture to the possibility of some middle, some way to conceive all these issues that doesn’t deliver us to more of the same. His instincts, I think, were on the money, but his theoretical resources could not take him where he wanted to go, which is why, from the standpoint of his critics, he just seems to want to have it both ways.

On HNT we can see, quite clearly, I think, the problem with the question, ‘Are beliefs real?’ absent an adequate account of the relevant cognitive ecology. The bitter pill lies in understanding that the application conditions of ‘real’ have real limits. Dennett provides examples where those application conditions pretty clearly seem to obtain, then suggests more than argues that these examples are ‘parallel’ in all the structurally relevant respects to the situation with belief. But to distinguish his brand from Fodor’s ‘industrial strength’ realism, he has no choice but to ‘go instrumental’ in some respect, thus exposing the ambiguity falling out of IST.

It’s safe to say belief talk is real. It seems safe to say that beliefs are ‘real enough’ for the purposes of practical problem-solving—that is, for shallow (or source insensitive) cognitive ecologies. But it also seems safe to say that beliefs are not real at all when it comes to solving problems in high-dimensional cognitive ecologies. The degree to which scientific inquiry is committed to finding the deepest (as opposed to the most expedient) account should be the degree to which it views belief talk as a component of real systems and views ‘belief’ as a source insensitive posit, a way to communicate and troubleshoot both oneself and one’s fellows.

This is crash space, so I appreciate the kinds of counter-intuitiveness involved in the view I’m advancing. But since tramping intuitive tracks has hitherto only served to entrench our controversies and confusions, we have good reason to choose explanatory power over intuitive appeal. We should expect synthesis in the cognitive sciences will prove every bit as alienating to traditional presumption as it proved in biology. There’s more than a little conceit involved in thinking we had any special inside track on our own nature. In fact, it would be a miracle if humanity had not found itself in some version of this very dilemma. Given only source insensitive means to troubleshoot cognition, to understand ourselves and each other, we were all but doomed to be stumped by the flood of source sensitive cognition unleashed by science. (In fact, given some degree of interstellar evolutionary convergence, I think one can wager that extraterrestrial intelligences will have suffered their own source insensitive versus source sensitive cognitive crash spaces. See my “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming).)

IST brings us to the deflationary limit of intentional philosophy. HNT offers a way to ratchet ourselves beyond, a form of critical eliminativism that can actually explain, as opposed to simply dispute, the traditional claims of intentionality. Dennett, of course, reserves his final criticism for eliminativism, perhaps because so many critics see it as the upshot of his interpretivism. He acknowledges the possibility that “neuroscience will eventually—perhaps even soon—discover a pattern that is so clearly superior to the noisy pattern of folk psychology that everyone will readily abandon the former for the latter” (50), but he thinks it unlikely:

For it is not enough for Churchland to suppose that in principle, neuroscientific levels of description will explain more of the variance, predict more of the “noise” that bedevils higher levels. This is, of course, bound to be true in the limit—if we descend all the way to the neurophysiological “bit map.” But as we have seen, the trade-off between ease of use and immunity from error for such a cumbersome system may make it profoundly unattractive. If the “pattern” is scarcely an improvement over the bit map, talk of eliminative materialism will fall on deaf ears—just as it does when radical eliminativists urge us to abandon our ontological commitments to tables and chairs. A truly general-purpose, robust system of pattern description more valuable than the intentional stance is not an impossibility, but anyone who wants to bet on it might care to talk to me about the odds they will take. (51)

The elimination of theoretical intentional idiom requires, Dennett correctly points out, some other kind of idiom. Given the operationalization of intentional idioms across a wide variety of research contexts, they are not about to be abandoned anytime soon, and not at all if the eliminativist has nothing to offer in their stead. The challenge faced by the eliminativist, Dennett recognizes, is primarily abductive. If you want to race at psychological tracks, you either enter intentional horses or something that can run as fast or faster. He thinks this unlikely because he thinks no causally consilient (source sensitive) theory can hope to rival the combination of power and generality provided by the intentional stance. Why might this be? Here he alludes to ‘levels,’ suggesting that any causally consilient account would remain trapped at the microphysical level, and so remain hopelessly cumbersome. But elsewhere, as in his discussion of ‘creeping depersonalization’ in “Mechanism and Responsibility,” he readily acknowledges our ability to treat one another as machines.

And again, we see how the limited resources of IST have backed him into a philosophical corner—and a traditional one at that. On HNT, his claim amounts to saying that no source sensitive theory can hope to supplant the bundle of source insensitive modes comprising intentional cognition. On HNT, in other words, we already find ourselves on the ‘level’ of intentional explanation, already find ourselves with a theory possessing the combination of power and generality required to eliminate a particle of intentional theorization: namely, the intentional stance. A way to depersonalize cognitive science.

Because IST primarily provides a versatile way to deploy and manage intentionality in theoretical contexts rather than any understanding of its nature, the disanalogy between ‘center of gravity’ and ‘belief’ remains invisible. In each case you seem to have an entity that resists any clear relation to the order which is there, and yet finds itself regularly and usefully employed in legitimate scientific contexts. Our brains are basically short-cut machines, so it should come as no surprise that we find heuristics everywhere, in perception as much as cognition (insofar as they are distinct). It should come as no surprise, either, that they comprise a bestiary, as with almost all things biological. Dennett is comparing heuristic apples and oranges here. Centers of gravity are easily anchored to the order which is there because they economize otherwise available information. They can be sourced. Such is not the case with beliefs, belonging as they do to a system gerrymandering for want of information.
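To see what ‘can be sourced’ means here, consider a minimal sketch (my illustration, nothing Dennett or “Real Patterns” provides): a center of gravity is a lossy summary statistic that can always be recomputed, on demand, from the higher-dimensional description it economizes.

```python
# A minimal sketch (my illustration, not Dennett's): a center of gravity is a
# lossy summary that can always be recomputed ("sourced") from the
# higher-dimensional mass distribution it economizes.

def center_of_gravity(masses, positions):
    """Mass-weighted average of positions: a compression of the 'order there'."""
    total = sum(masses)
    dims = len(positions[0])
    return tuple(
        sum(m * p[i] for m, p in zip(masses, positions)) / total
        for i in range(dims)
    )

# Three point masses in a plane; the summary discards nearly everything
# about the distribution, yet remains fully derivable from it whenever asked.
masses = [2.0, 1.0, 1.0]
positions = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(center_of_gravity(masses, positions))  # (1.0, 1.0)
```

The toy makes the contrast plain: the posit can be regenerated from the underlying information at will, so anchoring it to the order which is there is trivial. Belief attributions admit no analogous derivation; the posit earns its keep precisely where the sourcing information is missing.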

So what is the ultimate picture offered here? What could reality amount to outside our heuristic regimes? Hard to think, as it damn well should be. Our species’ history posed no evolutionary challenges requiring the ability to intuitively grasp the facts of our cognitive predicament. It gave us a lot of idiosyncratic tools to solve high impact practical problems, and as a result, Homo sapiens fell through the sieve in such a way as to be dumbfounded when it began experimenting in earnest with its interrogative capacities. We stumbled across a good number of tools along the way, to be certain, but we remain just as profoundly stumped about ourselves. On HNT, the ‘big picture view’ is crash space, in ways perhaps similar to the subatomic, a domain where our biologically parochial capacities actually interfere with our ability to understand. But HNT offers a way of understanding the structure and dynamics of intentional cognition in source sensitive terms, and in so doing, explains why crashing our ancestral cognitive modes was inevitable. Just consider the way ‘outside heuristic regimes’ suggests something ‘noumenal,’ some uber-reality lost at the instant of transcendental application. The degree to which this answer strikes you as natural or ‘obvious’ is the degree to which you have been conditioned to apply that very regime out of school. With HNT we can demand that those who want to stuff us into this or that intellectual Klein bottle define their application conditions and convince us this isn’t just more crash space mischief.

It’s trivial to say some information isn’t available, so why not leave well enough alone? Perhaps the time has come to abandon the old, granular dichotomies and speak in terms of the dimensions of information available and the cognitive capacities possessed. Imagine that.

Moving on.