First time laying eyes on this thanks to BF… I just had to share.
In fact, I feel like making a frickin T-shirt or something!
No bells, just whistling in the dark…
I rented a bin and got some buddies together to clean out the barn and lo! I stumbled across the original map that would morph into the world of The Second Apocalypse. I’m generally averse to discussing the world and plot of the series because I think a certain amount of ‘genesis neglect’ actually serves to make the series more believable, and therefore more immersive, but this! this comes straight from the Coffers. It’s gotta be thirty years old at least!
The magic I felt inking this, the sense of swooping grandeur, intricate melancholy. I remember feeling all of it boom in my heart and my bones: the mangled histories flung across mountains and plains, the vying nations and dying races, and the obscene evil that would consume it. The world was a new lover back then, charged with passions I can scarce recognize anymore.
That kid would be proud, I think, of what I’ve made of it so far. But what I wouldn’t give to relive a moment of that naïve intensity.
What a slog it’s been. A Slog of Slogs!
Peter Hankins of Conscious Entities fame has posted his thoughts on my Scientia Salon piece here. As always, I think Conscious Entities is the best site on the web for those seeking clear and impartial op-editorial updates on the world of cognitive science and consciousness research–far more so than Three Pound Brain! Which is okay. Here, the idea is to push a certain boundary, whereas there, the idea is to assess many of the different boundaries being pushed.
Can be found at the esteemed Scientia Salon. Spread the link far and wide. For those who follow the blog, the arguments will be familiar: what should be interesting is watching what a far different, and far less charitable, group of philosophers make of them.
He sits back on his haunches, looking at the bills and coins in his hand. He looks from the bag to Clayton and back again, suddenly shaken and terribly shocked. –Barre Lyndon, The War of the Worlds, Scene 268.
The 1953 version of The War of the Worlds has a wonderful scene where a well-dressed man offers a bag of money to board a Pacific-Tech truck fleeing Los Angeles, only to be violently rebuffed by more rugged souls. And so he’s left, perplexed and dismayed, to await his doom wondering how money, the long-time source of his power over others, suddenly possesses no power at all.
Money offers a paradigmatic example of the confusion of differential or relational properties with intrinsic properties. Given the reliability of a system, information pertaining to the system need not be known to master the capacities belonging to some element within the system. An individual need not know anything about political economy to know, locally at least, what money can do. Given ignorance of the system, attributing special powers to the available element becomes the default, the only way to understand how the element, in this case money, does what it does. We literally fetishize money. The attribution of ‘special powers’ actually allows us to solve a wide variety of practical problems. How did your brother-in-law get that mansion? Well, he won a million dollars in the lottery. Since the enabling background is a ubiquitous feature of all such explanations, it need not figure in them—it ‘goes without saying.’ Given the system, money makes things happen. Why did that stranger at the till give me the cigarettes? Because I gave him ten bucks.
Intrinsic efficacy, in other words, is a useful heuristic, a way to solve problems belonging to a certain ecology. No one needs to know how money works to know that money does work. Even though money only possesses power as a component of a far larger system, we can solve a number of problems within that system simply by assuming that money possesses that power intrinsically.
Out of sight, out of mind. This is why financial crises regularly shock the assumptions of so many. Heuristic cognition is largely an unconscious, habitual affair: everyone assumes the stranger is going to run the same routines for the same gold. Instabilities in the system make plain the complex, differential nature of the properties assumed intrinsic. Though the notion of intrinsic value would die a hard death in economic theory more generally, the differential nature of ‘fiat money’ is apparent to anyone bearing currency that others refuse to recognize.
Some systems, however, never give us a heuristic reality check. Since we humans are embedded in a wide variety of systems that (until recently) we had no hope of understanding, yet filled with entities that required some kind of understanding, it makes sense to suppose that attributions of intrinsic efficacy provide humans with a general problem-solving strategy. As a cultural artifact, money is actually a good example of that generality, of the way intrinsic efficacy can be used to make sense of items in novel, yet otherwise occluded, systems.
Think about how many things, phenomenally speaking, just happen; we have no inkling whatsoever of the underwriting systems. By dint of what we are, we perpetually suffer the Inverse Problem, the problem of cognizing environmental systems given only the effects of those systems. Somehow our brain conjures a world from a thin stream of visual, auditory, olfactory, and haptic effects. This is why my daughter perpetually hounds me with origin questions: she’s trying to figure out what’s relational and what’s intrinsic, what’s part of the great Rube Goldberg machine and what stands alone. It’s almost as if she’s identifying all the little Big Bangs scattered across her environment, all points where effects, for all practical purposes, arise ab initio.
The Inverse Problem illustrates the extremity of our cognitive straits, and so explains the practical necessity of intrinsic efficacy. When consistently confronted by effects absent any cause—viz., a system that outruns our on-the-fly capacity to cognize—we assume such efficacy to be intrinsic to the entity occasioning it. Given the sheer ubiquity of such effects, then, we should expect attributions of intrinsic efficacy to be a ubiquitous feature of human cognition.
As indeed they are. Magical thinking, for instance, clearly involves the application of intrinsic efficacy, only to problem-ecologies it plainly cannot solve. A fetish understood in the anthropological sense provides what might seem a paradigmatic example, where occult powers are attributed to some object. In fact, the bulk of what science has labelled ‘superstition’ consists in the erroneous attribution of intrinsic efficacy to objects, actions, and events.
Of course, what makes magical thinking magical is the fact that the intrinsic efficacies posited simply do not exist. Where money does in fact mediate the functions attributed to it, fetishes do not. They may very well mediate ulterior functions—leveraging prestige, reinforcing social cohesion, and the like—but they do not do what the practitioners themselves suppose. A million dollars will buy you a house, but a fetish won’t make a rich relative sicken and die! Where systematic understanding demystifies money, clarifies the nature of the actual functions involved, it simply debunks fetishes.
All applications of intrinsic efficacy, in other words, are not equal. Some function in their domain of explicit application, while others do not. Since science has shown us that larger systems are always responsible, however, we should presume that all applications involve neglect of those systems. We should assume, in other words, that no such thing as intrinsic efficacy exists, and that if, for any reason, it seems that such a thing does (or worse yet, has to), it only does so for neglect.
And yet the vast majority of us continue to believe in it. Rules constrain. Representations reveal. Decisions resolve. Goals guide. Desires drive. Reasons clarify. According to some, the bloody apriori organizes the whole of bloody existence!
All these abstract or mental entities possess efficacies that we simply cannot square with our understanding of the various natural systems of which they should be part. We refer to these various loci of efficacy all the time; they help us predict, explain, and manipulate, given certain problem ecologies. Nevertheless, our every attempt to find them in nature has come up empty-handed.
In other words, they exhibit all the characteristics of what we’ve been referring to as intrinsic efficacy heuristics. As extreme as our cognitive straits are relative to our environments, they are even more so relative to ourselves. Given their complexity, brains simply cannot cognize brains in ‘plug and play’ terms. Intrinsic efficacies are not simply useful; they are mandatory when it comes to our intuitive understanding of ourselves and others. When our mechanic repairs our car, we have no access to his personal history, the way continual exposure to mechanical issues has honed his problem-solving capacities, and even less access to his evolutionary history, the way continual exposure to problematic environments has sculpted his biological problem-solving capacities. We have no access, in other words, to the vast systems of quite natural relata that make his repairs possible. So we call him ‘knowledgeable’ instead; we presume he possesses something—a fetish, in effect—whose efficacy explains his almost miraculous ability to make our Ford Pinto run: a mass of true beliefs, representations, regarding automotive mechanics.
Since the point of the ‘representation fetish’ is to solve problems while neglecting the systems actually responsible, our every attempt to explain representations in terms of these systems fails. Representation, like all intentional phenomena, is heuristic through and through. But for some reason, we simply cannot relinquish the notion that it has to be more. Even though intrinsic efficacy is obviously a ‘cognitive conceit’ everywhere else, the majority of cognitive science researchers insist on the reality of these particular loci, or at least the reality of some of them (because everybody thinks something has to be eliminated). The illusion—so easily overcome vis-à-vis money—remains the single most contentious issue confronting cognitive science today.
One reason is simply that the past never crashes. Where monetary systems possess limits and instabilities that regularly indicate the relational nature of money’s efficacy, individual and evolutionary history are fixed. The complex relationality of meaning, or ‘externalism,’ can only be demonstrated indirectly, via a number of different philosophical tactics. In lieu of crashing markets, Wittgenstein challenges us to source the efficacy of the rules governing our representations, showing how citing further rules simply defers the issue, and how no recollection of prior use can serve to warrant present uses, because any number of recollections can be made to accord with any given use. In lieu of crashing markets, Quine uses the problem of starting a meaning market from scratch, or ‘radical translation,’ to demonstrate how meanings are perpetual hostages of contexts. In lieu of crashing markets, Putnam poses a systematically attenuated world, a Twin Earth, demonstrating the relationality of meaning via the equivocity of ‘water.’ In lieu of crashing markets, Derrida devises a market-crashing methodology, deconstruction, where the myth of the ‘transcendental signified’ is revealed through the incremental, interpretative deformation of meaning in texts. In lieu of crashing markets, Dennett provides an alternate evolutionary history of a meaning system, the ‘two-bitser,’ showing how successively complicating a mere mechanism can generate the complicated behaviours we associate with meaning.
In each case, the theorist relies on some imaginative way of removing meaning from our present market to show its dependence on the greater system. But alternate worlds are not quite as convincing as actual ones, and the power of the ‘representational intuition’ seems to be commensurate with its local problem-solving power, so these arguments, as immanently decisive as they are, have failed to carry the field. Even worse, those they have convinced generally assume that representation alone is the problem, and thus that these arguments motivate some form of pragmatic normativism—which is to say, a different form of intrinsic efficacy! They miss the whole moral.
And this speaks to the second great difficulty obscuring the heuristic nature of meaning: the fact that it constitutes a component of a larger system of such heuristics. Representation begs reference begs truth begs rationality begs normativity, and so on. Overcoming one instance of intrinsic efficacy, therefore, simply results in becoming snarled in another, and the gain in understanding is minimal at best. One set of conundrums is exchanged for another, as we should expect. Since this heuristic system has remained invisible for the whole of human history, erroneous attributions of intrinsic efficacy characterize the sum of our traditional self-understanding, what Sellars famously called the ‘Manifest Image.’ Seeing this heuristic system for what it is, therefore, represents as radical a conceptual break with our past as one can imagine. And this radicality, accordingly, means that epistemic conservatism itself counts against the possibility of seeing intrinsic efficacy for what it is.
We find ourselves stranded with a variety of special purpose ‘meaning fetishes,’ floating efficacies that motivate and constrain our activities, bind us to our environments, solve our disputes, and so on. And like the well-dressed man in The War of the Worlds, we quite simply do not know how to go on.
So I’ve been struggling with politics the way I always struggle with politics.
Here’s what I think is very likely a waste of intellectual resources:
1) Philosophical redefinitions of ‘freedom.’ So you’ve added to the sum of what there is to disagree about, induced more educated souls to opine as opposed to act, and contributed to the cultural alienation that makes anti-intellectualism cool. Who do you work for again?
2) Conceptual delimitations of what David Roden calls ‘Posthuman Possibility Space.’ Humans are not exempt from the order of nature. Science has had no redemptive tales to tell so far, so why should we think it will in the future?
3) The fetishization of art. A classic example of the ‘man with a hammer’ disease. Transgressing outgroup aesthetic expectations for ingroup consumption amounts to nothing more than confirming outgroup social expectations regarding your ingroup. Unless the ‘art’ in question genuinely reaches out, then it is simply part of the problem. Of course, this amounts to abandoning art and embracing dreck, where, as the right has always known, the true transformative power of art has always lain.
4) Critiques and defenses of subjectivity. Even if there is such a thing, I think it’s safe to say that discoursing about it amounts to little more than an ingroup philosophical parlour game.
Here’s what I think is not as likely to be a waste of intellectual resources (but very well could be):
1) Cultural triage. WE NO LONGER HAVE TIME TO FUCK AROUND. The Theory Industry (and yes I smell the reek of hypocrisy) is a self-regarding institutional enterprise, bent not so much on genuine transformation as breath mints and citations–which is to say, the accumulation of ingroup prestige. The only lines worth pursuing are lines leading out, away from the Theory Industry, and toward all those people who keep our lazy asses alive. If content is your thing, then invade the commons, recognize that writing for the likeminded amounts to not writing at all.
2) Theoretical honesty. NO ONE HAS ANY DEFINITIVE THEORETICAL ANSWERS. This is an enormous problem because moral certainty is generally required to motivate meaningful, collective political action. Such moral certainty in the modern age is the product of ignorance, stupidity, or both. The challenge facing us now, let alone in the future, is one of picking guesses worth dying for without the luxury of delusion. Pick them. Run with them.
3) The naturalization of morality and meaning. EMBRACE THOSE DEFINITIVE ANSWERS WE DO HAVE. Science tells us what things are, how they function, and how they can be manipulated. Science is power, which is why all the most powerful institutions invest so heavily in science. The degree to which science and scientific methodologies are eschewed is the degree to which power is eschewed. Only discourses possessing a vested interest in their own impotence would view ‘scientism’ as a problem admitting a speculative or attitudinal solution, rather than the expression of their own crisis of theoretical legitimacy. The thinking that characterizes the Theory Industry is almost certainly magical, in this respect, insofar as it believes that words and moral sentiment can determine what science can and cannot cognize.
Any others anyone can think of?
We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.
–Husserl, The Phenomenology of Internal Time-Consciousness
So recall this list, marvel how it continues to grow, and remember, the catalogue is just getting started. The real tsunami of information is rumbling just beyond the horizon. And lest you think your training or education render you exempt, pause and consider the latest in Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want to put you in a skeptical frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.
Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.
So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apples-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.
As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago now. To redress this equivocation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:
To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.
By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.
What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:
Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.
The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.
Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of this paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians). Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: therefore any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?
Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first-person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is, phenomenology suffers the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.
The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up to just another hothouse excess of higher education? Because you stack your guesswork up on the basis of your guesswork in a way you’re guessing is right?
As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Arguments draw much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced in by questions that were far too easy to ask—and far easier to avoid than answer.
So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.
Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. Apriori arguments for phenomenological aprioricity simply have no traction unless you already buy into some heavily theorized account of the apriori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?
One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the spicier the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?
The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.
Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues that ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.
In contrast to this general, Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of conflating phenomenology and introspectionism because of a faulty understanding of the phenomenological attitude. As a result he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.
Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’
Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue, no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.
To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy-in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.
The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring over the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?
This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.
The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.
But this is simply not true.
So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?
In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate them sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like this:

1) There is more than meets the phenomenological eye (dark phenomenology).

2) This ‘more’ is constitutive of what meets the phenomenological eye.

3) This ‘more’ is ontic.

4) Therefore, the deliverances of the phenomenological eye cannot be ontological.

The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!
My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into the now-present ‘reverberations,’ as Husserl might call them, fading on the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology as to just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning–everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit, they belong to the phenomena themselves no less. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.
Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence,’ how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or, worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive, horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”
As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is its purview. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.
Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”
Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.
The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.
The question now becomes whether the phenomenological concept of horizons can actually allay this worry. The answer has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Put differently: if neglect were true, if Zahavi’s phenomenology, like Dennett’s heterophenomenology, only made sense given a certain kind of neglect, then we should still expect ‘horizons’ to continue playing a conceptually constitutive role, to contribute to phenomenology the way it always has.
Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.
So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect in just the way he accuses Dennett’s heterophenomenology of doing.
Fair is fair.
The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic inkling.
But if the phenomenological attitude can be neurobiologically taken, it can also be given back, and here’s the thing: given back in attenuated forms, tweaked in innumerable different ways, fuzzier here, more precise there, truncated, snipped, or twisted.
This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.
So bearing this all in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.
On the Global Neuronal Workspace Theory, consciousness can be seen as a serial broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; this competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims, on the basis of what is now extensive experimentation, that consciousness not only broadcasts information but also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.
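For readers who like their bottlenecks concrete, here is a minimal toy sketch of the selection-and-broadcast dynamic described above. It is my own illustration, not Dehaene’s actual model: the function name, the threshold, and the winner-take-all rule are all invented for the example. The point is only to show how a workspace architecture guarantees that almost everything the parallel ‘producers’ do never becomes available for report.

```python
import random

def global_workspace_step(producers, threshold=0.7):
    """Toy Global Workspace step (illustrative only).

    Many parallel 'producers' emit (label, salience) signals
    nonconsciously; only the strongest signal above threshold
    'ignites' and is broadcast, becoming reportable. Everything
    else remains nonconscious, i.e., neglected.
    """
    winner = max(producers, key=lambda p: p[1])
    if winner[1] >= threshold:
        return winner[0]   # selected, stabilized, broadcast: reportable
    return None            # no ignition: nothing reaches verbal report

random.seed(0)
# A thousand nonconscious processes competing for the workspace.
producers = [(f"signal_{i}", random.random()) for i in range(1000)]
broadcast = global_workspace_step(producers)
# Whatever introspection 'sees' is drawn from the single broadcast
# winner; the other 999 signals never enter the workspace at all.
```

On this sketch, any deliberative metacognition, phenomenological or otherwise, consumes only what survives the bottleneck, which is precisely the structural point the argument that follows turns on.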
Now the question becomes one of whether that information is all the information the phenomenologist, given his or her years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.
Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)
The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.
On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:
What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)
We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.
On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.
For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.
For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears, presenting as an ‘object without edges,’ if anything coherent at all.
We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—it explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.
As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:
This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.
Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.
My point here is to simply provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.
This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse exclusively evidence the reality of their transcendental posits. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.
All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.
When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the aporias and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.
Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”
Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.
There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.