Zahavi, Dennett, and the End of Being*
We are led back to these perceptions in all questions regarding origins, but they themselves exclude any further question as to origin. It is clear that the much-talked-of certainty of internal perception, the evidence of the cogito, would lose all meaning and significance if we excluded temporal extension from the sphere of self-evidence and true givenness.
–Husserl, The Phenomenology of Internal Time-Consciousness
So recall this list, marvel how it continues to grow, and remember, the catalogue is just getting started. The real tsunami of information is rumbling just over the horizon. And lest you think your training or education render you exempt, pause and consider the latest of Eric Schwitzgebel’s empirical investigations of how susceptible professional philosophers are to various biases and effects on that list. I ask you to consider what we know regarding human cognitive shortcomings to put you in a skeptical frame of mind. I want you in that skeptical frame of mind because of a paper by Dan Zahavi, the Director of the Center for Subjectivity Research at the University of Copenhagen, that came up on my academia.edu feed the other day.
Zahavi has always struck me as unusual as far as ‘continental’ philosophers go, at once a Husserlian ‘purist’ and determined to reach out, to “make phenomenology a powerful and systematically convincing voice in contemporary philosophical discussion” (“Husserl, self, and others: an interview with Dan Zahavi”). I applaud him for this, for braving genuine criticism, genuine scientific research, rather than allowing narrow ingroup interpretative squabbles to swallow him whole. In “Killing the straw man: Dennett and phenomenology,” he undertakes a survey of Dennett’s many comments regarding phenomenology, and a critical evaluation of his alternative to phenomenology, heterophenomenology. Since I happen to be a former phenomenologist, I’ve had occasion to argue both sides of the fence. I spent a good portion of my late twenties and early thirties defending my phenomenological commitments from my skeptical, analytically inclined friends using precisely the arguments and assumptions that Zahavi deploys against Dennett. And I’ve spent the decade following arguing a position even more radically eliminativistic than Dennett’s. I’ve walked a mile in both shoes, I suppose. I’ve gone from agreeing with pretty much everything Zahavi argues in this piece (with a handful of deconstructive caveats) to agreeing with almost nothing.
So what I would like to do is use Zahavi’s position and critique as a foil to explain how and why I’ve abandoned the continental alliance and joined the scientific empire. I gave up on what I call the Apples-and-Oranges Argument because I realized there was no reliable, a priori way to discursively circumscribe domains, to say science can only go so far and no further. I gave up on what I call the Ontological Pre-emption Argument because I realized arguing ‘conditions of possibility,’ far from rationally securing my discourse, simply multiplied my epistemic liabilities. Ultimately, I found myself stranded with what I call the Abductive Argument, an argument based on the putative reality of the consensual structures that seem to genuinely anchor phenomenological disputation. Phenomenology not only offered the best way to describe that structure, it offered the only way, or so I thought. Since Zahavi provides us with examples of all three arguments in the course of castigating Dennett, and since Dennett occupies a position similar to my own, “Killing the straw man” affords an excellent opportunity to demonstrate how phenomenology fares when considered in terms of brain science and heuristic neglect.
As the title of the paper suggests, Zahavi thinks Dennett never moves past critiquing a caricature of phenomenology. For Dennett, Zahavi claims, phenomenology is merely a variant of Introspectionism, and thus suffers all the liabilities that caused Introspectionism to die as a branch of empirical psychology almost a century ago. To redress this conflation, Zahavi turns to that old stalwart of continental cognitive self-respect, the ‘Apples-and-Oranges Argument’:
To start with, it is important to realize that classical phenomenology is not just another name for a kind of psychological self-observation; rather it must be appreciated as a special form of transcendental philosophy that seeks to reflect on the conditions of possibility of experience and cognition. Phenomenology is a philosophical enterprise; it is not an empirical discipline. This doesn’t rule out, of course, that its analyses might have ramifications for and be of pertinence to an empirical study of consciousness, but this is not its primary aim.
By conflating phenomenology and introspective psychology, Dennett is conflating introspection with the phenomenological attitude, the theoretically attuned orientation to experience that allows the transcendental structure of experience to be interpreted. Titchener’s psychological structuralism, for instance, was invested in empirical investigations into the structure and dynamics of the conscious mind. As descriptive psychology, it could not, by definition, disclose what Zahavi terms the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.
What makes phenomenology different, in other words, is also what makes phenomenology better. And so we find the grounds for the Ontological Pre-emption Argument in the Apples-and-Oranges Argument:
Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes). Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible, and its analyses are consequently open for corrections and control by any (phenomenologically tuned) subject.
The strategy is as old as phenomenology itself. First you extricate phenomenology from the bailiwick of the sciences, then you position phenomenology prior to the sciences as the discipline responsible for cognizing the conditions of possibility of science. First you argue that it is fundamentally different, and then you argue that this difference is fundamental.
Of course, Zahavi omits any consideration of the ways Dennett could respond to either of these claims. (This is one among several clues to the institutionally defensive nature of this paper, the fact that it is pitched more to those seeking theoretical reaffirmation than to institutional outsiders—let alone lapsarians). Dennett need only ask Zahavi why anyone should believe that his domain possesses ontological priority over the myriad domains of science. The fact that Zahavi can pluck certain concepts from Dennett’s discourse, drop them in his interpretative machinery, and derive results friendly to that machinery should come as no surprise. The question pertains to the cognitive legitimacy of the machinery: therefore any answer presuming that legitimacy simply begs the question. Does Zahavi not see this?
Even if we granted the possible existence of ‘conditions of possibility,’ the most Zahavi or anyone else could do is intuit them from the conditioned, which just happen to be first-person phenomena. So if generalizing from first-person phenomena proved impossible because of third-person inaccessibility—because genuine first-person data were simply too difficult to come by—why should we think those phenomena can nevertheless anchor a priori claims once phenomenologically construed? The fact is that phenomenology suffers all the same problems of conceptual controversy and theoretical underdetermination as structuralist psychology. Zahavi is actually quite right: phenomenology is most certainly not a science! There’s no need for him to stamp his feet and declare, “Oranges!” Everybody already knows.
The question is why anyone should take his Oranges seriously as a cognitive enterprise. Why should anyone believe his domain comes first? What makes phenomenologically disclosed structures ontologically prior or constitutive of conscious experience? Blood flow, neural function—the life or death priority of these things can be handily demonstrated with a coat-hanger! Claims like Zahavi’s regarding the nature of some ontologically constitutive beyond, on the other hand, abound in philosophy. Certainly powerful assurances are needed to take them seriously, especially when we reject them outright for good reason elsewhere. Why shouldn’t we just side with the folk, chalk phenomenology up to just another hothouse excess of higher education? Because you stack your guesswork up on the basis of your guesswork in a way you’re guessing is right?
As I learned, neither the Apples-and-Oranges nor the Ontological Pre-emption Arguments draw much water outside the company of the likeminded. I felt their force, felt reaffirmed the way many phenomenologists, I’m sure, feel reaffirmed reading Zahavi’s exposition now. But every time I laid them on nonphenomenologists I found myself fenced by questions that were far too easy to ask—and far easier to avoid than answer.
So I switched up my tactics. When my old grad school poker buddies started hacking on Heidegger, making fun of the neologisms, bitching about the lack of consensus, I would say something very similar to what Zahavi claims above—even more powerful, I think, since it concretizes his claims regarding structure and intersubjectivity. Look, I would tell them, once you comport yourself properly (with a tremendous amount of specialized training, bear in mind), you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely! And if that ‘something more’ isn’t the transcendental structure of experience, what could it be? Little did I know how critical this shift in the way I saw the dialectical landscape would prove.
Basically I had retreated to the Abductive Argument—the only real argument, I now think, that Zahavi or any phenomenologist ultimately has outside the company of their confreres. A priori arguments for phenomenological aprioricity simply have no traction unless you already buy into some heavily theorized account of the a priori. No one’s going to find the distinction between introspectionism and phenomenology convincing so long as first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?
One can actually see Zahavi shift to abductive warrant in the last block quote above, in the way he appeals to the intersubjectively accessible nature of the ‘structures’ comprising the domain of the phenomenological attitude. I suspect this is why Zahavi is so keen on the eliminativist Dennett (whom I generally agree with) at the expense of the intentionalist Dennett (whom I generally disagree with)—so keen on setting up his own straw man, in effect. The more he can accuse Dennett of eliminating various verities of experience, the spicier the abductive stew becomes. If phenomenology is bunk, then why does it exhibit the systematicity that it does? How else could we make sense of the genuine discursivity that (despite all the divergent interpretations) unquestionably animates the field? If phenomenological reflection is so puny, so weak, then how has any kind of consensus arisen at all?
The easy reply, of course, is to argue that the systematicity evinced by phenomenology is no different than the systematicity evinced by intelligent design, psychoanalysis, climate-change skepticism, or what have you. One might claim that rational systematicity, the kind of ‘intersubjectivity’ that Zahavi evokes several times in “Killing the straw man,” is actually cheap as dirt. Why else would we find ourselves so convincing, no matter what we happen to believe? Thus the importance of genuine first-person data: ‘structure’ or no ‘structure,’ short of empirical evidence, we quite simply have no way of arbitrating between theories, and thus no way of moving forward. Think of the list of our cognitive shortcomings! We humans have an ingrown genius for duping both ourselves and one another given the mere appearance of systematicity.
Now abductive arguments for intentionalism more generally have the advantage of taking intentional phenomena broadly construed as their domain. So in his Sources of Intentionality, for instance, Uriah Kriegel argues ‘observational contact with the intentional structure of experience’ best explains our understanding of intentionality. Given the general consensus that intentional phenomena are real, this argument has real dialectical traction. You can disagree with Kriegel, but until you provide a better explanation, his remains the only game in town.
In contrast to this general, Intentional Abductive Argument, the Phenomenological Abductive Argument takes intentional phenomena peculiar to the phenomenological attitude as its anchoring explananda. Zahavi, recall, accuses Dennett of conflating phenomenology and introspectionism because of a faulty understanding of the phenomenological attitude. As a result he confuses the ontic with the ontological, ‘a mere sector of being’ with the problem of Being as such. And you know what? From the phenomenological attitude, his criticism is entirely on the mark. Zahavi accuses Dennett of a number of ontological sins that he simply does not commit, even given the phenomenological attitude, but this accusation, that Dennett has run afoul of the ‘metaphysics of presence,’ is entirely correct—once again, from the phenomenological attitude.
Zahavi’s whole case hangs on the deliverances of the phenomenological attitude. Refuse him this, and he quite simply has no case at all. This was why, back in my grad school days, I would always urge my buddies to read phenomenology with an open mind, to understand it on its own terms. ‘I’m not hallucinating! The structures are there! You just have to look with the right eyes!’
Of course, no one was convinced. I quickly came to realize that phenomenologists occupied a position analogous to that of born-again Christians, party to a kind of undeniable, self-validating experience. Once you grasp the ontological difference, it truly seems like there’s no going back. The problem is that no matter how much you argue, no one who has yet to grasp the phenomenological attitude can possibly credit your claims. You’re talking Jesus, son of God, and they think you’re referring to Heyzoos down at the 7-11.
To be clear, I’m not suggesting that phenomenology is religious, only that it shares this dialectical feature with religious discourses. The phenomenological attitude, like the evangelical attitude, requires what might be called a ‘buy-in moment.’ The only way to truly ‘get it’ is to believe. The only way to believe is to open your heart to Husserl, or Heidegger, or in this case, Zahavi. “Killing the straw man” is jam-packed with such inducements, elegant thumbnail recapitulations of various phenomenological interpretations made by various phenomenological giants over the years. All of these recapitulations beg the question against Dennett, obviously so, but they’re not dialectically toothless or merely rhetorical for it. By giving us examples of phenomenological understanding, Zahavi is demonstrating possibilities belonging to a different way of looking at the world, laying bare the very structure that organizes phenomenology into genuinely critical, consensus-driven discourse.
The structure that phenomenology best explains. For anyone who has spent long rainy afternoons poring over the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance. If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?
This is what I propose to show: a radically different way of conceiving the ‘structures’ that motivate phenomenology. I happen to be the global eliminativist that Zahavi mistakenly accuses Dennett of being, and I also happen to have a fairly intimate understanding of the phenomenological attitude. I came by my eliminativism in the course of discovering an entirely new way to describe the structures revealed by the phenomenological attitude. The Transcendental Interpretation is no longer the only game in town.
The thing is, every phenomenologist, whether they know it or not, is actually part of a vast, informal heterophenomenological experiment. The very systematicity of conscious access reports made regarding phenomenality via the phenomenological attitude is what makes them so interesting. Why do they orbit around the same sets of structures the way they do? Why do they lend themselves to reasoned argumentation? Zahavi wants you to think that his answer—because they track some kind of transcendental reality—is the only game in town, and thus the clear inference to the best explanation.
But this is simply not true.
So what alternatives are there? What kind of alternate interpretation could we give to what phenomenology contends is a transcendental structure?
In his excellent Posthuman Life, David Roden critiques transcendental phenomenology in terms of what he calls ‘dark phenomenology.’ We now know as a matter of empirical fact that our capacity to discriminate colours presented simultaneously outruns our capacity to discriminate sequentially, and that our memory severely constrains the determinacy of our concepts. This gap between the capacity to conceptualize and the capacity to discriminate means that a good deal of phenomenology is conceptually dark. The argument, as I see it, runs something like: 1) There is more than meets the phenomenological eye (dark phenomenology). 2) This ‘more’ is constitutive of what meets the phenomenological eye. 3) This ‘more’ is ontic. 4) Therefore the deliverances of the phenomenological eye cannot be ontological. The phenomenologist, he is arguing, has only a blinkered view. The very act of conceptualizing experience, no matter how angelic your attitude, covers experience over. We know this for a fact!
My guess is that Zahavi would concede (1) and (2) while vigorously denying (3), the claim that the content of dark phenomenology is ontic. He can do this simply by arguing that ‘dark phenomenology’ provides, at best, another way of delimiting horizons. After all, the drastic difference in our simultaneous and sequential discriminatory powers actually makes phenomenological sense: the once-present source impression evaporates into now-present ‘reverberations,’ as Husserl might call them, fading on the dim gradient of retentional consciousness. It is a question entirely internal to phenomenology just where phenomenological interpretation lies on this ‘continuum of reverberations,’ and as it turns out, the problem of theoretically incorporating the absent-yet-constitutive backgrounds of phenomena is as old as phenomenology itself. In fact, the concept of horizons, the subjectively variable limits that circumscribe all phenomena, is an essential component of the phenomenological attitude. The world has meaning: everything we encounter resounds with the significance of past encounters, not to mention future plans. ‘Horizon talk’ simply allows us to make these constitutive backgrounds theoretically explicit. Even while implicit, they belong no less to the phenomena themselves. Consciousness is as much non-thematic consciousness as it is thematic consciousness. Zahavi could say the discovery that we cannot discriminate nearly as well sequentially as we can simultaneously simply recapitulates this old phenomenological insight.
Horizons, as it turns out, also provide a way to understand Zahavi’s criticism of the heterophenomenology Dennett proposes we use in place of phenomenology. The ontological difference is itself the keystone of a larger horizon argument involving what Heidegger called the ‘metaphysics of presence’: how forgetting the horizon of Being, the fundamental background allowing beings to appear as beings, leads to investigations of Being under the auspices of beings, or as something ‘objectively present.’ More basic horizons of use, horizons of care, are all covered over as a result. And when horizons are overlooked—when they are ignored or, worse yet, entirely neglected—we run afoul of conceptual confusions. In this sense, it is the natural attitude of science that is most obviously culpable, considering beings, not against their horizons of use or care, but against the artificially contrived, parochial, metaphysically naive horizon of natural knowledge. As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”
As an ontic discourse, natural science can only examine beings from within the parochial horizon of objective presence. Any attempt to drag phenomenology into the natural scientific purview, therefore, will necessarily cover over the very horizon that is its purview. This is what I always considered a ‘basic truth’ of the phenomenological attitude. It certainly seems to be the primary dialectical defence mechanism: to entertain the phenomenological attitude is to recognize the axiomatic priority of the phenomenological attitude. If the intuitive obviousness of this escapes you, then the phenomenological attitude quite simply escapes you.
Dennett, in other words, is guilty of a colossal oversight. He is quite simply forgetting that lived life is the condition of possibility of science. “Dennett’s heterophenomenology,” Zahavi writes, “must be criticized not only for simply presupposing the availability of the third-person perspective without reflecting on and articulating its conditions of possibility, but also for failing to realize to what extent its own endeavour tacitly presupposes an intact first-person perspective.”
Dennett’s discursive sin, in other words, is the sin of neglect. He is quite literally blind to the ontological assumptions—the deep first person facts—that underwrite his empirical claims, his third person observations. As a result, none of these facts condition his discourse the way they should: in Heidegger’s idiom, he is doomed to interpret Being in terms of beings, to repeat the metaphysics of presence.
The interesting thing to note here, however, is that Roden is likewise accusing Zahavi of neglect. Unless phenomenologists accord themselves supernatural powers, it seems hard to believe that they are not every bit as conceptually blind to the full content of phenomenal experience as the rest of us are. The phenomenologist, in other words, must acknowledge the bare fact that they suffer neglect. And if they acknowledge the bare fact of neglect, then, given the role neglect plays in their own critique of scientism, they have to acknowledge the bare possibility that they, like Dennett and heterophenomenology, find themselves occupying a view whose coherence requires ignorance—or to use Zahavi’s preferred term, naivete—in a likewise theoretically pernicious way.
The question now becomes one of whether the phenomenological concept of horizons can actually allay this worry. The answer here has to be no. Why? Simply because the phenomenologist cannot deploy horizons to rationally immunize phenomenology against neglect without assuming that phenomenology is already so immunized. Or put differently: if Zahavi’s phenomenology, like Dennett’s heterophenomenology, only made sense given a certain kind of neglect, then we should still expect ‘horizons’ to continue playing a conceptually constitutive role—to contribute to phenomenology the way they always have.
Horizons cannot address the problem of neglect. The phenomenologist, then, is stranded with the bare possibility that their practice only appears to be coherent or cognitive. If neglect can cause such problems for Dennett, then it’s at least possible that it can do so for Zahavi. And how else could it be, given that phenomenology was not handed down to Moses by God, but rather elaborated by humans suffering all the cognitive foibles on the list linked above? In all our endeavours, it is always possible that our blindspots get the better of us. We can’t say anything about specific ‘unknown unknowns’ period, let alone anything regarding their relevance! Arguing that phenomenology constitutes a solitary exception to this amounts to withdrawing from the possibility of rational discourse altogether—becoming a secular religion, in effect.
So it has to be possible that Zahavi’s phenomenology runs afoul of theoretically pernicious neglect in just the way he accuses Dennett’s heterophenomenology of doing.
Fair is fair.
The question now becomes one of whether phenomenology is suffering from theoretically pernicious neglect. Given that magic mushrooms fuck up phenomenologists as much as the rest of us, it seems assured that the capacities involved in cognizing their transcendental domain pertain to the biological in some fundamental respect. Phenomenologists suffer strokes, just like the rest of us. Their neurobiological capacity to take the ‘phenomenological attitude’ can be stripped from them in a tragic instant.
But if the phenomenological attitude can be neurobiologically taken, it can also be given back—and here’s the thing—given back in attenuated forms, tweaked in innumerable different ways, fuzzier here, more precise there, truncated, snipped, or twisted.
This means there are myriad levels of phenomenological penetration, which is to say, varying degrees of phenomenological neglect. Insofar as we find ourselves on a biological continuum with other species, this should come as no surprise. Biologically speaking, we do not stand on the roof of the world, so it makes sense to suppose that the same is true of our phenomenology.
So bearing this all in mind, here’s an empirical alternative to what I termed the Transcendental Interpretation above.
On the Global Neuronal Workspace Theory, consciousness can be seen as a serial broadcast conduit between a vast array of nonconscious parallel systems. Networks continually compete at the threshold of conscious ‘ignition,’ as it’s called; this competition between nonconscious processes results in the selection of some information for broadcast. Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.
Now the question becomes one of whether that information is all the information the phenomenologist, given his or her years of specialized training, needs to draw the conclusions they do regarding the ontological structure of experience. And the more one looks at the situation through a natural lens, the more difficult it becomes to see how this possibly could be the case. The GNW model sketched above actually maps quite well onto the dual-process cognitive models that now dominate the field in cognitive science. System 1 cognition applies to the nonconscious, massively parallel processing that both feeds, and feeds from, the information selected for stabilization and broadcast. System 2 cognition applies to the deliberative, conscious problem-solving that stabilization and broadcast somehow makes possible.
Now the phenomenological attitude, Zahavi claims, somehow enables deliberative cognition of the transcendental structure of experience. The phenomenological attitude, then, somehow involves a System 2 attempt to solve for consciousness in a particular way. It constitutes a paradigmatic example of deliberative, theoretical metacognition, something we are also learning more and more about on a daily basis. (The temptation here will be to beg the question and ‘go ontological,’ and then accuse me of begging the question against phenomenology, but insofar as neuropathologies have any kind of bearing on the ‘phenomenological attitude,’ insofar as phenomenologists are human, giving in to this temptation would be tendentious, more a dialectical dodge than an honest attempt to confront a real problem.)
The question of whether Zahavi has access to what he needs, then, calves into two related issues: the issue of what kind of information is available, and the issue of what kind of metacognitive resources are available.
On the metacognitive capacity front, the picture arising out of cognitive psychology and neuroscience is anything but flattering. As Fletcher and Carruthers have recently noted:
What the data show is that a disposition to reflect on one’s reasoning is highly contingent on features of individual personality, and that the control of reflective reasoning is heavily dependent on learning, and especially on explicit training in norms and procedures for reasoning. In addition, people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions. (“Metacognition and Reasoning”)
We need to keep in mind that the transcendental deliverances of the phenomenological attitude are somehow the product of numerous exaptations of radically heuristic systems. As the most complicated system in its environment, and as the one pocket of its environment that it cannot physically explore, the brain can only cognize its own processes in disparate and radically heuristic ways. In terms of metacognitive capacity, then, we have reason to doubt the reliability of any form of reflection.
On the information front, we’ve already seen how much information slips between the conceptual cracks with Roden’s account of dark phenomenology. Now with the GNW model, we can actually see why this has to be the case. Consciousness provides a ‘workspace’ where a little information is plucked from many producers and made available to many consumers. The very process of selection, stabilization, and broadcasting, in other words, constitutes a radical bottleneck on the information available for deliberative metacognition. This actually allows us to make some rather striking predictions regarding the kinds of difficulties such a system might face attempting to cognize itself.
For one, we should expect such a system to suffer profound source neglect. Since all the neurobiological machinery preceding selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the origins of consciousness to end in dismal failure. In fact, given that the larger cognitive system cognizes environments via predictive error minimization (I heartily recommend Hohwy’s The Predictive Mind), which is to say, via the ability to anticipate what follows from what, we could suppose it would need some radically different means of cognizing itself, one somehow compensating for, or otherwise accommodating, source neglect.
For another, we should expect such a system to suffer profound scope neglect. Once again, since all the neurobiological machinery bracketing the selection, stabilization, and broadcast is nonconscious, we should expect any metacognitive attempt to solve for the limits of consciousness to end in failure. Since the larger cognitive system functions via active environmental demarcations, consciousness would jam the gears; it would be an ‘object without edges,’ if anything coherent at all.
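The workspace bottleneck described above can be sketched as a toy model. This is purely illustrative (not Dehaene’s actual GNW model, and every name in it is my own invention): many nonconscious “producers” compete, one item wins selection, and only its content is broadcast downstream. Any “metacognitive” consumer can then inspect only the broadcast, never the machinery of selection, which is the structural sense of source and scope neglect at issue.

```python
# Toy sketch of a global-workspace-style bottleneck (illustrative only).
# Producers, salience scores, and function names are all hypothetical.

import random

def produce(n_producers):
    """Nonconscious producers each offer an item with a salience score."""
    return [{"source": f"producer_{i}",
             "content": f"signal_{i}",
             "salience": random.random()}
            for i in range(n_producers)]

def broadcast(candidates):
    """Select the most salient item and strip everything but its content.

    The stripping models 'source neglect': the winner's origin and all
    the losing candidates are invisible to anything downstream.
    """
    winner = max(candidates, key=lambda c: c["salience"])
    return {"content": winner["content"]}  # no 'source', no runner-up info

def metacognize(workspace_item):
    """A consumer reporting on the system can only inspect what was
    broadcast -- the bottleneck on self-cognition is total."""
    return sorted(workspace_item.keys())

candidates = produce(n_producers=100)
item = broadcast(candidates)
print(metacognize(item))  # ['content'] -- origin and scope inaccessible
```

The design point of the sketch: `metacognize` is not blocked by any rule from seeing the producers; the information simply never reaches it, which is why, on this picture, the deficit would not show up *as* a deficit from the inside.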
We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits on our brain’s ability to cognize its own structure and dynamics. Thus Blind Brain Theory, the empirical thesis that we’re natural in such a way that we cannot cognize ourselves as natural, and so cognize ourselves otherwise. We’re a standalone solution-monger, one so astronomically complicated that we at best enjoy an ad hoc, heuristic relation to ourselves. The self-same fundamental first-person structure that phenomenology interprets transcendentally—as ontologically positive, naturalistically inscrutable, and inexplicably efficacious—it explains in terms of neglect, explains away, in effect. It provides a radical alternative to the Transcendental Interpretation discussed above—a Blind Brain interpretation. Insofar as Zahavi’s ‘phenomenological attitude’ amounts to anything at all, it can be seen as a radically blinkered, ‘inside view’ of source and scope neglect. Phenomenology, accordingly, can be diagnosed as the systematic adumbration of a wide variety of metacognitive illusions, all turning in predictable ways on neglect.
As a onetime phenomenologist I can appreciate how preposterous this must all sound, but I ask you to consider, as honestly as that list I linked above allows, the following passage:
This flow is something we speak of in conformity with what is constituted, but it is not ‘something in objective time.’ It is absolute subjectivity and has the absolute properties of something to be designated metaphorically as ‘flow’; of something that originates in a point of actuality, in a primal source-point and a continuity of moments of reverberation. For all this, we lack names. Husserl, Phenomenology of Internal Time-Consciousness, 79.
Now I think this sounds like a verbal report generated by a metacognitive system suffering source and scope neglect yet grappling with questions of source and scope all the same. Blind to our source blindness, our source appears to stand outside the order of the conditioned, to be ‘absolute’ or ‘transcendental.’ Blind to our scope blindness, this source seems to be a kind of ‘object without edges,’ more boundless container than content. And so a concatenation of absolute ignorances drives a powerful intuition of absolute or transcendental subjectivity at the very limit of what can be reported. Thus domesticated, further intuitive inferences abound, and the sourceless, scopeless arena of the phenomenological attitude is born, and with it, the famed ontological difference, the principled distinction of the problem of being from the problems of beings, or the priority of the sourceless and scopeless over the sourced and the scoped.
My point here is to simply provide a dramatic example of the way the transcendental structure revealed by the phenomenological attitude can be naturalistically turned inside out, how its most profound posits are more parsimoniously explained as artifacts of metacognitive neglect. Examples of how this approach can be extended in ways relevant to phenomenology can be found here, here, and here.
This is a blog post, so I can genuinely reach out. Everyone who practices phenomenology needs to consider the very live possibility that they’re actually trading in metacognitive illusions, that the first person they claim to be interpreting in the most fundamental terms possible is actually a figment of neglect. At the very least they need to recognize that the Abductive Argument is no longer open to them. They can no longer assume, the way Zahavi does, that the intersubjective features of their discourse evidence the reality of their transcendental posits exclusively. If anything, Blind Brain Theory offers a far better explanation for the discourse-organizing structure at issue, insofar as it lacks any supernatural posits, renders perspicuous a hitherto occult connection between brain and consciousness (as phenomenologically construed), and is empirically testable.
All of the phenomenological tradition is open to reinterpretation in its terms. I agree that this is disastrous… the very kind of disaster we should have expected science would deliver. Science is to be feared precisely because it monopolizes effective theoretical cognition, not because it seeks to, and philosophies so absurd as to play its ontological master manage only to anaesthetize themselves.
When asked what problems remain outstanding in his AVANT interview, Zahavi acknowledges that phenomenology, despite revealing the dialectical priority of the first person over the third person perspective on consciousness, has yet to elucidate the nature of the relationship between them. “What is still missing is a real theoretical integration of these different perspectives,” he admits. “Such integration is essential, if we are to do justice to the complexity of consciousness, but it is in no way obvious how natural science all by itself will be able to do so” (118). Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the aporias and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.
Regardless, I think it’s safe to say that phenomenology is at a crossroads. The days when the traditional phenomenologist could go on the attack, actually force their interlocutors to revisit their assumptions, are quickly coming to a close. As the scientific picture of the human accumulates ever more detail—ever more data—the claim that these discoveries have no bearing whatsoever on phenomenological practice and doctrine becomes ever more difficult to credit. “Science is a specific theoretical stance towards the world,” Zahavi claims. “Science is performed by embodied and embedded subjects, and if we wish to comprehend the performance and limits of science, we have to investigate the forms of intentionality that are employed by cognizing subjects.”
Perhaps… But only if it turns out that ‘cognizing subjects’ possess the ‘intentionality’ phenomenology supposes. What if science is performed by natural beings who, quite naturally, cannot intuit themselves in natural terms? Phenomenology has no way of answering this question. So it waits the way all prescientific discourses have waited for the judgment of science on their respective domains. I have given but one possible example of a judgment that will inevitably come.
There will be others. My advice? Jump ship before the real neuroinformatic deluge comes. We live in a society morphing faster and more profoundly every year. There is much more pressing work to be done, especially when it comes to theorizing our everydayness in a more epistemically humble and empirically responsive manner. We lack names for what we are, in part because we have been wasting breath on terms that merely name our confusion.
*[Originally posted 2014/10/22]
“For anyone who has spent long rainy afternoons poring through the phenomenological canon, alternately amused and amazed by this or that interpretation of lived life, the notion that phenomenology is ‘mere bunk’ can only sound like ignorance.” *RSB drops the mike.*
I’m running out toes over here! 😉
At this point, I think many will be offended if RSB doesn’t take the time to step on their favorite idea-toes. 🙂
For real though about phenomenology–when I was first introduced, there definitely seemed to be a sort of initiation, some kind of rite of passage that took you to the cool kids’ table–IF you agreed with the basic project.
On this topic (God save me elsewhere), I was never able to let go of a niggling, nascent credulity. Oh well. But goddamnit, if you grant them the basic assumptions, they do have some cool shit to write about. IF!
Can we get a phenomenological description of a day in the life of a neuroscientist researching philosophers’ brains???
I know my young self would be horrified by my hoary self for this very reason! It mingles conceptual rigour and poetic insight in narcotic ways.
Long time reader. First time talker. I must confess my difficulty in parsing through this and grasping the finer nuances and thoughts of philosophy; I was trained to think more rigidly and mash square shapes into circular slots until one of them gives, but I am slowly trying to retrain…reframe that. I have been marking your book recommendations in interviews over the years and hope to eventually mash them against my own dull matter. This is what my brain shows me as a rough summary:
In any case, would love to see you make Bible’s Bible in a complete, published form so I can struggle with BBT and its implications head on, instead of neglectfully piecing it together over the years. Keep up the good work; you have caused me no small amount of existential terror and drunken rants in pubs and clubs 🙂
Putting them together piecewise over the years is probably the most honest way to come by the theory. I’m a systematic thinker who suffers organizational dyslexia–not a happy combination!
I have been working my way toward a rewrite of “The Last Magic Show”… This gives me a little more incentive. Have you checked out the Alien Philosophy piece? Or the Scientia Salon piece?
Whiteboards helped me in that regard. Drawing things as objects make them more tangible in my head. Probably something with engaging multiple parts of the brain that helps organization and recollection.
I did read the Alien Philosophy one, but after some quick googling it appears I somehow overlooked “The Last Magic Show” and the Scientia Salon piece. I’ve added them to my to-read folder. Thanks!
where did To Be and Not To Be go? did you decide to can it because you posted an updated article involving mary’s room? i particularly liked the older one.
Hmm. Good question. I have no recollection of taking it down. Lemme rummage around.
Good pic! As I understand it, probably what you want is to first draw a circle. This represents how much we as individuals know we don’t know about ourselves. Like you know your visual cone ends at a certain point. Or that you can forget stuff. Things like that.
The next circle is drawn around the first circle. This larger circle is what information is in the mind that the mind feeds back into itself – ie, how much the mind is aware of itself. The discrepancy between the first circle and second is why you can ‘just know something’. You’re just unaware of where your access ends and so information (even the term itself) seems to pop out of nowhere. On a side note, a lot of people take this effect not as a sign of not knowing where their introspective limit is, but instead that this information that pops out is an absolute certitude. Somehow incapacity inverts to becoming some capacity to ‘just know’. Their incapacity to self access includes an incapacity to know it’s just self access rather than being in touch with god or some ontology.
The final circle is a much larger one, engulfing the second and obviously the first with it, which is how much information the brain has. Some of this info will sometimes migrate to inside the second circle and sometimes even to within the first circle. But this third circle is basically the same as your ‘all the information in the organic unit.’
Much the same as your pic, I guess. But for myself it’s that first circle and the interplay between it and the second circle that is ‘the hard problem’.
Welcome to the blog, John! 🙂
Well, the treadmill (CPU) part I drew in as part of my understanding of its active processor. My knowledge of English, for example: it’s a second language for me. I didn’t actively learn it, not really; I picked it up along the way, so I really can’t reason why I form one sentence this way or that way. I do what feels…right, and that was somehow enough to get me top grades in it. However, I was always struck when someone asked me for a translation of something into my native one.
You know what it means, but you can’t explain it. Not right away anyway. The CPU has to interrupt the brain at large, fetch the meaning and feed it into the heavily edited ‘now’ we perceive. This time between the query, process and final answer/reply is called ‘dead time’ in certain automated processes I learned in HS. The disconnect between knowledge and where it actively resides really began to disturb me once I started thinking about it. Memories too, especially of youth and early days. Unless someone reminded me of some specific event with some detail, it’s like they didn’t exist. And yet I was I…more or less.
The CPU has to interrupt the brain at large, fetch the meaning and feed it into the heavily edited ‘now’ we perceive. This time between the query, process and final answer/reply is called ‘dead time’ in certain automated processes I learned in HS.
Yeah, it’s ironic that it takes some time to translate internal perceptions into human readable format. Into words. Why aren’t they human readable instantly, if they are so very human?
Reblogged this on The Ratliff Notepad.
Thanks Joseph! Any thoughts on the argument?
Yes, but still formulating, you covered a lot of ground here (and also inspired another read of Husserl).
Reblogged this on alien ecologies and commented:
Scott on the wholesale naturalization of phenomenology: “Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the aporias and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.”
One of your better essays, Scott! “Blind Brain Theory possesses the conceptual resources required to achieve this integration. Via neglect and heuristics, it allows us to see the first-person in terms entirely continuous with the third, while allowing us to understand all the aporias and conundrums that have prevented such integration until now. It provides the basis, in other words, for a wholesale naturalization of phenomenology.” I thought it interesting that you talk about the slowing down of processes so that their verbalization can be stabilized and transmitted by consciousness, at the expense and neglect of everything else. “Stanislas Dehaene—using heterophenomenology exactly as Dennett advocates—claims on the basis of what is now extensive experimentation that consciousness, in addition to broadcasting information, also stabilizes it, slows it down (Consciousness and the Brain). Only information that is so broadcast can be accessed for verbal report. From this it follows that the ‘phenomenological attitude’ can only access information broadcast for verbal report, or conversely, that it neglects all information not selected for stabilization and broadcast.” Deleuze on Spinoza speaks of the “differential velocities, between acceleration and deceleration of particles” between brain and body in continuous process: “the composition of speeds and slowness on the plane of immanence” (p. 122, Spinoza: Practical Philosophy). This sense of perception, language, and consciousness being tuned by the brain itself to receive information at the time-velocity consciousness is capable of… in the sense that the brain receives much more, but filters out and excludes most of the data, and transposes only what we need for system/environmental pressures, etc. Your neglect thesis… etc.
Thanks Craig! The fascinating thing, of course, is that so damn many Continental theorists, particularly from the late 20th century, found themselves grappling for ways to make sense of neglect (especially efficacious neglect). BBT is their child, insofar as I cut my philosophical teeth on them. I like to think it’ll eventually be seen as the naturalistic basis of their semantic inklings, the tiger they were chasing all along, but then I remind myself of that frickin list!
Yea, I think the postmoderns have gotten stuck in a dark place, and all these johnny-come-lately SR folk’s turn to substance and formalism has left the buggery of Deleuze and others maligned, when in fact as you suggest it was the pomo’s – or at least the empirical naturalist ones who were giving birth to the neglect thesis under other tropes… of course, the point is they were reading the sciences, while from Latour on it has been a circus of those trying to malign the sciences and the empirical turn. Funny that!
I keep thinking the anachronistic, precritical SR is just going to dry up and blow away. But if you check out the count on Zahavi’s other papers, you’ll notice that his SR piece is far and away the most viewed.
No, I think, sadly, there is a whole new generation of Idealists arising… I get lambasted for my own naturalism all the time as if my conceptuality were part of some dinosaur impoverished world of antiquated notions… a radicalized naturalism to me seems the only path, the continuity between brain and what remains of consciousness is the rock bottom of this base materialist perspective. Yet, against such ideas as yours or mine a world of religion is being pitted against us, a world of Idealists seek once again age old Transcendence, duality, and the idiocy of non-scientific irreduction… etc. Even those like Brassier or Negarestani with their hyped up Spinozistic neo-rationalist Prometheanism are mere beggars at the gate of utilitarianism, marshaling fragments of its faith in a computational functionalism that seeks to transcend the labors of the sciences in some mish-mash world of optimized intelligence, etc. One more path away from the immanent and mundane world of the brain… It seems in our time everyone has a religion, even the Secularists. Yet, as you and David have shown we cannot even fight our way out of a paper bag, blind to our own ignorance and neglect; foolish to think we can transcend the brain’s own mechanisms, we follow a strange course toward fake immortalities… like reading a bad J.G. Ballard novel.
It’s worthy of a diagnostic piece. Short a subject matter, they have nothing to be subject matter experts (‘Smees’ they call them at my wife’s work) about, so there’s some very basic institutional stuff, no doubt. To me, though, so much of it reads like fundamentalism, railing against the ritualistic nonsense the tradition has imposed between us and the literal Object of God.
Yea, it seems par for the course, most of our culture and civilization is based on irrational denialism at the moment, from philosophy to climate change. The natural sciences have been under attack by the irrational quadrant of our philosophical and religious deniers for years now, and with all this so called ‘religious turn’ toward irrational and transcendence based Speculative philosophies in the past few years the gamut of New Age bullshit is gaining the upper hand with the folk psychology image ideologues. Ultimately I agree with you that the ‘crash space’ is the next move down the pipe… I was even reading Bernard Stiegler to that effect of late (if you can stomach such Derridean derivative crapology) that due to our externalization and dependency on off-loading memory in the past few thousand years we are in process of losing both our internal memory systems as well as our knowledge. Strangely he agrees with you that we’re about to enter a ‘crash space’ and moment of memory depletion and forgetfulness in which we will lose our accumulated knowledge. A sort of civilizational wide mind-wipe… haha… On the other front with the present Sixth Extinction in process, and the possible future of climate change releasing methane, the oceanic conveyor belt collapsing and other wonders of natural science modeling our irrational deniers and religious/philosophical breeders of mindlessness are blindly following leaders into planetary extinction. Par for the course… 🙂
All Can Be Lost: The Risk of Putting Knowledge in the Hands of Machines. – The Atlantic Magazine.
We are forgetting how to fly.
Intuit hunter GPS users.
Actors versus observers.
Who needs humans anyway?
“As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?”
Yea, Bernard Stiegler tells us we’re losing all our knowledge anyway, that we’ve become so dependent on our external memory devices from Big Data to Google, to our iPhones, to Facebook, Twitter, all the little trace systems, that our brains are losing the ability to remember … losing our ability to think an actual thought, create a concept, become original… we’re becoming mere clones of our external systems, ruled and regulated by machinic life that is absorbing every last aspect of human intelligence into its own matrix to the point that only the husk of the human will remain, the body. And, somewhere down the road that too will become obsolesced as robots (like the recent Chinese replacement of 60,000 workers) replace us and our need to remain. Some even venture that our elite will grow a new eugenic breed of superchildren, or that our transhuman robotics brain uploads will come about… a sort of immortalization of the age old narcissistic dream of ego living on forever. But me… I think the species will soon obliterate itself in stupidity and forget what its goal used to be, and becoming once again primates will wander back into its natural world oblivious and unconscious of this age of insanity. (Just another hyperbolic gesture… yes, just another promissory note to annihilation!)
I’ll have to check this out–the absence of any workable theory of meaning always leaves me scratching my head with these approaches. Cognition has always been distributed cognition, so he would be on the hook for explaining why its continued distribution is ‘dehumanizing.’ Be it lore or supercomputers, minimizing cognitive loads via distribution is about as human an enterprise as one could imagine.
From my standpoint, everyone sees it happening, knows that something profound is afoot, but they have no way of understanding the process simply because all they have are folk reifications to work with. Everything is fuzzy, terminologically overdetermined, and theoretically underdetermined–perpetually so. Biomechanically, it all seems pretty clear to me. The big problem, of course, is the breakdown in modularity, the way systems become ever more prone to global, catastrophic collapse the less modularity their components possess. Otherwise, what everyone is talking about is best understood in terms of crash spaces, cognitive habitat destruction. The problem isn’t so much distribution of knowledge or forgetting–this is the Platonic instinct, right? “Writing is destroying our ability to think!” Well, no, Plato, it’s enabling new ways of thinking. Such clearly seems to be the case now. The problem isn’t knowledge, it’s cognition, and the way it evolved to take any number of now endangered environmental invariants for granted. We have built this system adapted to exploit every advantage, which means that with the proliferation of crash spaces comes the proliferation of cheat spaces. The problem isn’t simply that modularity is breaking down, it’s the way it’s breaking down, the growing maladaptiveness/obsolescence of ancestral forms of cognition.
Exactly: “The problem isn’t simply that modularity is breaking down, it’s the way it’s breaking down, the growing maladaptiveness/obsolescence of ancestral forms of cognition.” That’s Stiegler’s point that because we’ve become more and more reliant on these external memory devices we’re becoming lazy, losing some of our ancestral cognitive abilities – and, because of this we’re in a hyper-Sophist age of manipulation, easily manipulated by artificial thought and the habituation of thought, rather than being able to think for ourselves anymore. Instead we’re becoming more and more reliant on our external systems of computational functionalism to take over through modulated algorithms and adaptive intelligence, from deep learning to simple agent apps (smart apps) etc. to do most of the background thinking and habitual matching tasks that we used to do… everything that is calculable will be performed by machinic intelligence in the near future, leaving the human out of the equation. Of course he’s reliant on nuance and the statistical anomalies outside of such hypotheses, too. But a lot of his data comes from studies by neuroscientists rather than philosophy… although he’s such a damn Derridean polyglot etymologist that it’s lost in translation unless you spend time… My feeling is a neuroscientist would be a helluva lot better reading than Stiegler… another reason to give up philosophy for the sciences… 🙂
This is Plato’s point exactly. Reading does make us lazy–we cease to memorize any of the great words underwriting our wisdom!
These arguments just seem unconvincing, ways to flatter one form of lazy thinking by crowing about one that appears lazier. I hear it from my academic buddies all the time. I can’t help but see it as just the kind of ingroup rallying cry the anachronistic are prone to make.
All adaptation involves ‘de-skilling.’ The kids are better adapted, plain and simple, and here we are crowing about how poorly they would make out in our near-extinct cognitive habitats, one where, apparently, we were able to ‘think for ourselves’ more reliably.
The problem can’t simply be that knowledge is becoming more transactional, or that more and more forms of problem-solving are being offloaded. There’s nothing intrinsically bad about the redistribution of cognitive loads–quite the contrary! In many cases, it frees people up to problem solve things they lacked the time to previously tackle. Look at us internet para-academics!
Technical dependency is the human lot. We engineer our environments to facilitate environmental engineering. The question has to be why this is a problem now, why the processes that have so empowered humanity have suddenly flipped on their head and started disempowering us. That’s what his theory has to explain. (What I think I can explain). Book good, computer bad: why? De-skilling is simply the corollary of re-skilling.
Funny… that’s his question too… 🙂 He agrees with everything you just said… but, of course I haven’t read his later works that supposedly provide his solution. Not exactly sure if I want to work through all his verbiage to find out not at this point.. 🙂
Reblogged this on synthetic zero and commented:
Click to access Phenomenology_vs._speculative_realism.pdf
Awesome, Dirk. Thanks man. I guess the SR paper connection was pretty damn transparent!
sure, invited the Dans to chime in but doesn’t seem to be their thing, interesting as academic science goes more and more OA and online the philo folks are bunkered down.
When the institutional walls assuring academic insularity come crumbling down, you have to decide not to hear outgroup voices. The best way to silence an outgroup critic is to refuse to give them a voice in the first place–I think humans instinctively understand as much. The content of critical claims means nothing so long as nobody is listening. I would love to have a bestseller for a number of reasons, but a big one is to see how it would change the pattern.
sure I think in part the scientists are more secure in their relationships with peers/reviews (helps that their results are largely testable) and have long been upset at how waiting for publication often limits the usefulness of their results, they have for years had some amazing (in the sense of works in progress being co-operative ventures) online conversation threads but still most require invitations whereas their journals are ever more available to the online public, a big part of the flow of info has to do with who controls the platforms.
check yer in-mail when ya get a chance.
And all that darned hyperbole! 😉
I think yer BBTish assessment of the state of (the limits of) our knowledge is right on but as I’ve said before I think you overestimate the powers of engineering to rework/reshape our society/life-worlds, things aren’t all that different really on the tech and daily living front, tragically it doesn’t take much for the unintended consequences of our tech to outstrip our grasps be it pollution, nukes, gengineering, econ-bots, etc. We are crashing the biosphere but it will just be (as it is unfolding now) more of the same and not some alien invasion.
Is this what the Dunyain are doing? Are they phenomenologists interpreting “the transcendental structure of experience?” Are they trying to “grasp the absolute” that comes before existence and provides the conditions under which it is possible for the physical universe to exist and be known? Is the Dunyain project phenomenological in nature and therefore doomed (as I thought when I first read about it because one can’t “come before” logically if one does not come before chronologically)?
It’s rational as opposed to phenomenological, but the thing is, the dilemma is very nearly the same. I actually have some fun with this in TGO, so I’ll spare any details. Remind me, though, after you’ve had a chance to read it, Michael.
Tickled to read this again – as you say, the account of dimensionality reduction vis a vis modal, amodal and memory systems suggests that the poor old phenomenologist is sucking on scrag end. The burden of explanation should be on the phenomenologist to explain why we should expect her putative subject to be anything other than dark.
This should be a journal article. Was the re-post prompted by Zahavi’s take down of Speculative Realism?
Thanks, David. This is something I’ve been arguing all along: the more the science uncovers, the more magical the phenomenologist’s ability becomes. Empirically the possibility (probability) of systematic deception can no longer be denied (it’s becoming more and more clear that scrag is all they could hope to ever have). For an outsider, the discourse betrays all the symptoms of systematic deception. For the position to convince any outsider, some surety against systematic deception is required to claim credibility. What is their model of metacognition? When I pressed Dan on this over at Brains, the best he could do was punt, again and again. Like Evan, actually.
I noticed Div had posted a link to the SR paper, and I still haven’t finished any one of the dozen or so pieces I still have hanging in the hoist (I’m mad for Golgotterath right now) so I thought, What the hay…
Is a Blog-to-Movie-Adaptation a possibility?
Just play Oedipus in a staging of Sophocles’ play, and have someone in the audience film you gouging out your eyes for real, and you’ll have the blog-to-movie adaptation in the bag.
Another option would be to bring back the Broadway play “Leased”, where everybody has AIDS. Note the last 20 words in the song carefully.
Here’s more stuff. The creator of the game The Witness links to a study titled “Could a neuroscientist understand a microprocessor?”:
“There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.”
Though, in the comments:
Low-level architectural differences make the tests they perform near-meaningless for electronic systems.
They try to justify the approach based on superficial similarities, but it’s all built on a bad metaphor.
Conclusion shouldn’t be that such techniques are wrong for neuro. Just CPUs. Which we knew prima facie.
Hey, galaxies also have some superficial organizational similarities to brains…
…but we would never apply radio telescopes to neurons and try to conclude that they’re a bad method.
I read this over the weekend. It looks very solid to me. Stephen Fleming has a great roundup of the stakes over at his Elusive Self blog. My suspicion is that the situation is even more difficult than they think, given neural reuse and the explananda problem more generally.
Could an electrical engineer understand a brain? I think that understanding how a microprocessor is ‘wired together’ goes a long way toward understanding how it works. How far are we from having the same understanding of a brain that the electrical engineers who design microprocessors have of the microprocessors they design? What level of understanding of information processing in the human brain should we expect to have, given the state of our knowledge regarding how a brain is ‘wired together’? If a brain has 100 billion neurons and each neuron makes roughly 1,000 connections to other neurons, that’s 100 trillion connections. How far are we from being able to map those 100 trillion connections? We might be as far from really understanding the brain as we are from being able to map those connections. And if a synapse has about a hundred possible states, that gets up to ten quadrillion or so synaptic state values for the whole brain per whatever length of time it takes for a synapse to change state. The gap between what we know about how brains are wired together and all there is to know about how brains are wired together might be so big that neuroscientists are not that far beyond phenomenologists, compared to how far there is to go.
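Checking the arithmetic in the comment above (using its round figures, which are rough textbook-style estimates, not measurements): 100 billion neurons at roughly 1,000 connections each gives 100 trillion connections, and a hundred distinguishable states per synapse gives about ten quadrillion synaptic state values per update interval.

```python
# Back-of-the-envelope scale of the brain's "wiring diagram".
# All numbers are the comment's rough round estimates, not data.
neurons = 100e9              # ~100 billion neurons
synapses_per_neuron = 1_000  # ~1,000 connections per neuron
connections = neurons * synapses_per_neuron          # 1e14: 100 trillion
states_per_synapse = 100     # hypothetical distinguishable synaptic states
state_values = connections * states_per_synapse      # 1e16: 10 quadrillion
print(f"connections: {connections:.0e}, state values: {state_values:.0e}")
# prints "connections: 1e+14, state values: 1e+16"
```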
Hi Scott and company, long time since I’ve been here.
If each logic gate stores 6.24 × 10^6 electrons, how many states do we add? None, because the charge quantity does not affect the possible logic states. Likewise, the number of possible neuronal interconnections seems daunting, but this is just an example of the computational fallacy. More important: what is the neuronal equivalent of digital charge storage? Or, what is the true function of neuronal interactions? A microprocessor that imitates a Donkey Kong environment says something, but not much else.
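The charge-versus-state point can be sketched in a few lines: a logic level is a thresholded abstraction over stored charge, so millions of distinguishable electron counts collapse onto just two logic states. The threshold figure here is purely illustrative, not a real device parameter.

```python
# Sketch: counting physical states (electrons) overstates the logical
# state space, because a gate's logic level is defined by a threshold.
def logic_state(electrons: float, threshold: float = 3.12e6) -> int:
    """Map a stored charge (electron count) to a binary logic level."""
    return 1 if electrons >= threshold else 0

# Adding or removing a few thousand electrons changes nothing logically.
print(logic_state(6.24e6), logic_state(6.24e6 - 1e3), logic_state(0.0))
# prints "1 1 0"
```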
Hi VicP! Welcome back, my brother. We’re talking about functional analyses here, so there’s a sense in which the specifics of the implementation don’t matter. The moral as I see it is that even full knowledge of the implementation does not seem to facilitate functional inferences.
Scott: Agree, because brains themselves are physically embedded in a physical environment and socially embedded in a social environment. I make double reference because, on the phenomenologist’s point of view, brains are also embedded in themselves. Sort of indicative of in-group and out-group social behavior: they’re very good at holding secrets and only passing them to select members.
Question to anyone:
I’ve been reading Bakker’s essay The Last Magic Show and I’m struggling a bit, in large part because I don’t know what the working definitions of ‘conscious’ and ‘unconscious’ are.
I’m confused because I always assumed these were metaphors, so Bakker’s assertion that scientific evidence proves they are only metaphors makes no sense to me.
For Bakker it is something like ‘accessibility to verbal report’. Information that is inaccessible to verbal report is information that is not conscious. IIRC in that paper he uses Tononi’s information integration condition, and associates this with the thalamocortical structures.
I guess I’m not sure what the source of the confusion is, Newb. As Div mentioned, ‘accessible for verbal report’ is a big dividing line for conscious versus unconscious information processing. Otherwise, science has yet to determine what consciousness actually amounts to. BBT explains why the kinds of things a great many people think need to be explained in substantive terms (the consciousness they think they can see) are far more parsimoniously explained away.
My browser keeps kicking me out on this one after a minute or so. Is there anything of particular interest at a particular point?
[…] R. Scott Bakker argues that this enveloping darkness is what we might expect given what he has christened “Blind Brain Theory”. Roughly BBT claims that the processes through which brains and bodies interpret their mental lives cannot model their own causal complexity – hence their aura of phenomenal immediacy. We seem supernatural, Bakker writes, “because we cannot cognize ourselves as natural, and so cognize ourselves otherwise” (Bakker 2014). […]
[…] the information available—‘experience’—warrants the kinds of claims phenomenologists are prone to make about the truth of experience. Does the so-called ‘phenomenological attitude’ possess […]
[…] “Zahavi, Dennett, and the End of Being” https://rsbakker.wordpress.com/2016/05/28/zahavi-dennett-and-the-end-of-being/, Accessed 22 June […]
[…] This nicely complements an argument I’ve made elsewhere to the effect that the rational subject – in analytical pragmatism, for example – presupposes an untheorized subject who invests the world with normative clothing. Since this dark extra subject (hors-sujet) remains outside theory, the concept of agency cannot be reduced to the idea of normative compliance or assent (forthcoming). We seem supernatural, as Scott Bakker writes, “because we cannot cognize ourselves as natural, and so cognize ourselves otherwise” (Bakker 2014). […]