Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis
by rsbakker
In Thinking, Fast and Slow, Daniel Kahneman cites the difficulty we have distinguishing experience from memory as the reason why we retrospectively underrate our suffering in a variety of contexts. Given the same painful medical procedure, one would expect an individual suffering for twenty minutes to report far greater pain than an individual suffering for half that time or less. Such is not the case. As it turns out, duration has “no effect whatsoever on the ratings of total pain” (380). Retrospective assessments, rather, seem determined by the average of the pain’s peak and its coda.
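The pattern is easy to make concrete with a toy calculation. The sketch below is mine, not Kahneman’s; the intensity samples and function names are invented for illustration. It models remembered pain as the average of the episode’s peak and its final moment, and experienced pain as the sum over the whole episode, which is all duration neglect requires.

```python
# Toy model of Kahneman's "peak-end rule" (illustrative only): retrospective
# ratings track the average of peak and final intensity, neglecting duration.

def remembered_pain(intensities):
    """Peak-end estimate: mean of the maximum sample and the final sample."""
    return (max(intensities) + intensities[-1]) / 2

def experienced_pain(intensities):
    """Total pain actually undergone: the sum over the whole episode."""
    return sum(intensities)

short_procedure = [2, 4, 8, 7]           # ends near its peak
long_procedure = [2, 4, 8, 7, 5, 3, 1]   # same peak, longer, gentler coda

# The longer procedure involves strictly more total suffering...
assert experienced_pain(long_procedure) > experienced_pain(short_procedure)
# ...yet is remembered as *less* painful, because its coda is milder.
assert remembered_pain(long_procedure) < remembered_pain(short_procedure)
```

On this model, tapering an episode off gently lowers the memory of it even while adding to the suffering itself, which is why the band-aid gets pulled slowly.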
Absent intellectual effort, the default is to remove the band-aid slowly.
Far from being academic, this ‘duration neglect,’ as Kahneman calls it, places the therapist in something of a bind. What should the physician’s goal be? The reduction of the pain actually experienced, or the reduction of the pain remembered? Kahneman provocatively frames the problem as a question of choosing between selves, the ‘experiencing self’ that actually suffers the pain and the ‘remembering self’ that walks out of the clinic. Which ‘self’ should the therapist serve? Kahneman sides with the latter. “Memories,” he writes, “are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self” (381). As he continues:
“Confusing experience with the memory of it is a compelling cognitive illusion—and it is the substitution that makes us believe a past experience can be ruined. The experiencing self does not have a voice. The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions. What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self.” 381
There are many, many ways to parse this fascinating passage, but what I’m most interested in is the brand of tyranny Kahneman invokes here. The use is metaphoric, of course, referring to some kind of ‘power’ that remembering possesses over experience. But this ‘power over’ isn’t positive: the ‘remembering self’ is no ‘tyrant’ in the interpersonal or political sense. We aren’t talking about a power that one agent holds over another, but rather the way facts belonging to one capacity, experiencing, regularly find themselves at the mercy of another, remembering.
Insofar as the metaphor obtains at all, you could say the power involved is the power of selection. Consider the sum of your own sensorium this very moment—the nearly sub-audible thrum of walled-away urban environs, the crisp white of the screen, the clamour of meandering worry on your margins, the smell of winter drafts creeping through lived-in spaces—and think of how wan and empty it will have become when you lie in bed this evening. With every passing heartbeat, the vast bulk of experience is consigned to oblivion, stranding us with memories as insubstantial as coffee-rings on a glossy magazine.
It has to be this way, of course, for both brute biomechanical and evolutionary developmental reasons. The high-dimensionality of experience speaks to the evolutionary importance of managing ongoing environmental events. The biomechanical complexity required to generate this dimensionality, however, creates what might be called the Problem of Indisposition. Since any given moment of experience exhausts our capacity to experience, each subsequent moment of experience all but utterly occludes the moment prior. The astronomical amounts of information constitutive of momentary experience are all but lost, ‘implicit’ in the systematic skeleton of ensuing effects to be sure, but inaccessible to cognition all the same.
Remembering, in other words, is radically privative. As a form of subsequent experiencing, the machinery involved in generating the experience remembered has been retasked. Accordingly, the question of just what gets selected becomes all important. The phenomenon of duration neglect noted above merely highlights one of very many kinds of information neglected. In this instance, it seems, evolution skimped on the metacognitive machinery required to reliably track and rationally assess certain durations of pain. Remembering the peak and coda apparently packed a bigger reproductive punch.
Kahneman likens remembering to a tyrant because selectivity, understood at the level of agency, connotes power. The automaticity of this selectivity, however, suggests that abjection is actually the better metaphor, that far from being a tyrant, remembering is more a captive to the information available, more a prisoner in Plato’s Cave, than any kind of executive authority.
If any culprit deserves the moniker of ‘tyrant’ here, it has to be neglect. Why do so many individuals choose to remove the band-aid slowly? Because information regarding duration plays far less a role than information regarding intensity. Since the mechanisms responsible for remembering systematically neglect such information, that information possesses no downstream consequences for the machinery of decision-making. What we have traditionally called memory consists of a fractionate system of automata scattered throughout the brain. What little they cull from experiencing is both automatic and radically heuristic. Insofar as the metaphor of ‘tyrant’ applies at all, it applies to the various forms of neglect suffered by conscious cognition, the myriad scotomas constraining the possibilities of ‘remembering experience’—or metacognition more generally.
Kahneman’s distinction wonderfully illustrates the way the lack of information can have positive cognitive effects. Band-aids get pulled slowly because only a spare, evolutionarily strategic fraction of experiencing can be remembered. We only recall enough of experience, it seems safe to assume, to solve the kinds of problems impacting our paleolithic ancestors’ capacity to reproduce. This raises the general question of just what kinds of problems we should expect metacognition—given the limitations of its access and resources—to be able to solve.
Or put more provocatively, the question that philosophy has spent millennia attempting to evade in the form of skepticism: If we don’t possess the metacognitive capacity to track the duration of suffering, why should we expect theoretical reflection to possess the access and capacity to theoretically cognize the truth of experience otherwise? Given the sheer complexity of the brain, the information consciously accessed is almost certainly adapted to various, narrow heuristic functions. It’s easy to imagine specialized metacognitive access and processing adapting to solve specialized problems possessing reproductive benefits. But it seems hard to imagine why evolution would select for the ability to theoretically intuit experience for what it is. Even worse, theoretical reflection is an exaptation, a cultural achievement. As such, we should expect it to be a naive metacognitive consumer, taking all information absent any secondary information regarding that information’s sufficiency.
In other words, not only should we expect theoretical reflection to be blind, we should also expect it to be blind to its own blindness.
It is this question of neurobiological capacity and evolutionary problem-solving that I want to bring to Adrian Johnston’s project to materially square the circle of subjectivity—or as he puts it, to secure “the possibility of a gap between, on the one hand, a detotalized, disunified plethora of material substances riddled with contingencies and conflicts and, on the other hand, the bottom-up surfacing out of these substances of the recursive, self-relating structural dynamics of cognitive, affective, and motivational subjectivity—a subjectivity fully within but nonetheless free at certain levels from material nature” (209).
I’ve considered several attempts by different Continental philosophers to deal with the challenges posed by the sciences of the mind over the past three years: Quentin Meillassoux in CAUSA SUIcide, Levi Bryant in The Ptolemaic Restoration, Martin Hagglund in Reactionary Atheism, and Slavoj Zizek in Zizek Hollywood, each of which has received thousands of views. With Meillassoux I focussed on his isolation of ‘correlation’ as a problematic ontological assumption, and the way he seemed to think he need only name it as such, and all the problems of subjectivity raised by Hume and normativity raised by Wittgenstein could just be swept under the philosophical rug. With Bryant I focussed on the problem of dogmatic ontologism, the notion that naming correlation as a problem somehow warranted a return to the good old pre-Kantian days, where we could make ontological assertions without worrying about our capacity to make such claims. With Hagglund I raised issues with his interpretation of Derrida as an early thinker of ‘ultratranscendental materialism,’ showing how the concepts at issue were intentional through and through, and thus thoroughly incompatible with the natural scientific project. With Zizek I focussed on the way his deflationary ontology of negative subjectivity arising from some ‘gap’ in the real, aside from simply begging all the questions it purported to answer, amounted to an ontologization of what is far more parsimoniously explained as a cognitive illusion.
And, of course, I took the opportunity to demonstrate the explanatory power of the Blind Brain Theory in each case, the way each of these approaches actually exploit various metacognitive illusions to make their case.
Now, having recently completed Johnston’s Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy, I’ve come to realize that these thinkers* are afflicted with the same set of recurring problems, problems which must be overcome if anything approaching a compelling account of the kind Johnston sets as his goal is to be had. These might be enumerated as follows:
Naivete Problem: With the qualified exception of Zizek, these authors seem largely (and in some cases entirely) ignorant of the enormous philosophical literature dealing with the problems intentionality poses for materialism/physicalism. They also seem to have scant knowledge of the very sciences they claim to be ‘grounding.’
No Cognitive Guarantee Problem: These authors take it as given that radical self-deception is simply not a possible outcome of a mature neuroscience–that something resembling subjectivity as remembered is ‘axiomatic.’ In all fairness, this is a common presumption of those critical of the eliminativist implications of the sciences of the brain. Rose and Abi-Rached, for instance, make it the centrepiece of their attempt to defang the neuroscientific threat to social science in their Neuro: The New Brain Sciences and the Management of the Mind. (Their strategy is twofold: on the one hand, they (like some of the authors considered here) give a conveniently narrow characterization of the threat in terms of subjectivity, arguing that the findings of neuroscience in this regard are simply confirming the subject-decentering theoretical insights already motivating much of the social sciences. Then they essentially cherry-pick researchers and commentators in the field who confirm their thesis without giving dissenters a hearing.) The unsettling truth is that wholesale, radical deception regarding who and what we are is entirely possible (evolution only cares about accuracy insofar as it pays reproductive dividends), and actually already a matter of empirical fact regarding a handful of cognitive capacities.
Talk Is Cheap Problem: There is a decided tendency among these authors to presume the effectiveness of metaphysical argumentation, to think not only that ontological claims merit serious attention in the sciences, but also that the threat posed is merely ideological and not material. Rehearsing old arguments against determinism (especially when it’s the Second Law of Thermodynamics that needs to be refuted) will make no difference whatsoever once the brain ceases to be a ‘grey box’ and becomes continuous with our technology.
Implausible Continuity Problem: All of these authors ignore what I call the Big Fat Pessimistic Induction: the fact that, all things being equal, we should expect science to revolutionize the human as radically as it has revolutionized every other natural domain now that the brain has become empirically tractable. They assume, rather, that the immunity the opacity of the brain had granted their tradition historically will somehow continue.
Metacognitive Reliability Problem: All of these authors overlook the potentially crippling issue of metacognitive deception, despite the mounting evidence of metacognitive unreliability. I should note that this tendency is common in Analytic Philosophy of Mind as well (but less and less so as the years pass).
Intentional Dissociation Problem: All of these authors characterize the cognitive scientific threat in the narrow terms of subjectivity rather than intentionality broadly construed, the far more encompassing rubric common to Analytic philosophy. Given the long Continental tradition of critiquing commonly held conceptions of subjectivity, the attractiveness of this approach is understandable, but no less myopic.
I think Prolegomena to Any Future Materialism: The Outcome of Contemporary French Philosophy suffers from all these problems—clearly so. What follows is not so much a review—I’ll await the final book of his trilogy for that (for a far more balanced consideration see Stephan Craig Hickman’s serial review here, here, here, and here)—as a commentary on the general approach one finds in many Continental materialisms as exemplified by Johnston. What all these authors want is some way of securing—or salvaging—some portion of the bounty of spirit absent spirit. They want intentionality absent theological fantasy, and materialism absent nihilistic horror. What I propose is a discussion of the difficulties any such project must overcome—a kind of prolegomena to Johnston’s Prolegomena—and a demonstration why he cannot hope to succeed short of embracing the very magical thinking he is so quick to deride.
Insofar as this is a blog post, part of a living, real time debate, I heartily encourage partisans of his approach to sound off. I am by no means a scholar of any of these authors, so I welcome corrections of misinterpretations. Strawmen teach few lessons, and learn none whatsoever. But I also admit to a certain curiosity given the optimistic stridency of so much of Johnston’s rhetoric. “From my perspective,” he writes in a recent interview, “these naturalists are overconfident aggressors not nearly as well-armed as they believe themselves to be. And, the anti-naturalists react to them with unwarranted fear, buying into the delusions of their foes that these enemies really do wield scientifically-solid, subject-slaying weapons.” I’m sure everyone reading this would love to see what kind of walk accompanies this talk! From my quite contrary perspective, the only way a book like this could be written is for the lack of any sustained interaction with those holding contrary views. Write for your friends long enough, and your writing becomes friendly.
In my own terms, Johnston is an explicit proponent of what might be called noocentrism, the last bastion, now that geocentrism and biocentrism have been debunked, of the intuition that we are something special. Freud, of course, famously claimed to have accomplished this overthrow, to have inflicted the third great ‘narcissistic wound,’ when he had only camouflaged the breastworks by carving intentionality along different mortices. Noocentrism represents an umbrella commitment to our metacognitive intuitions regarding the various efficacies of experience, and these are the intuitions that Johnston explicitly seeks to vindicate. He is ‘preoccupied,’ as he puts it, “with constructing an ontology of freedom” (204). Since any such ontology contradicts the prevailing understanding of the natural arising out of the sciences–how can freedom arise in a nature where everything is in-between, a cog for indifferent forces?–the challenge confronting any materialism is one of explaining subjectivity in a materially consistent manner. As he puts it in his recent Society and Space interview:
“For me, the true ultimate test of any and every materialism is whether it can account in a strictly materialist (yet non-reductive) fashion for those phenomena seemingly most resistant to such an account. Merely dismissing these phenomena (first and foremost, those associated with subjectivity) as epiphenomenal relative to a sole ontological foundation (whether as Substance, Being, Otherness, Flesh, Structure, System, Virtuality, Difference, or whatever else) fails this test and creates many more problems than it supposedly solves.”
Naturalizing consciousness and intentionality—or in Johnston’s somewhat antiquated jargon, explaining the material basis of subjectivity—is without a doubt the holy grail, not only of contemporary philosophy of mind, but of several sciences as well. And he is quite right to insist, I think, that any such naturalization that simply eliminates intentional phenomena (along the lines of Alex Rosenberg’s position, say) hasn’t actually naturalized anything at all. If consciousness and intentionality don’t exist as we intuit them, then we need some account of why we intuit them as such. Elimination, in other words, has to explain why elimination is required in the first place.
But global eliminativist materialist approaches (such as Rosenberg’s and my own) are actually very rare. In contemporary debates, philosophers and researchers tend to be eliminativists or antirealists about specific intentional phenomena, qualia, content, norms, or so on, rather than all intentional phenomena. This underscores two problems that loom large over Johnston’s account, at least as it stands in this first volume. The first has to do with what I called the Intentional Dissociation Problem above, the fact that the problem of subjectivity is simply a subset of the larger problem of intentionality. It falls far short of capturing the ‘problem space’ that Johnston purports to tackle. Some philosophers (Pete Mandik comes to mind) are eliminativists about subjectivity, yet realists about other semantic phenomena.
The second has to do with the fact that throughout the course of the book he repeatedly references reductive and eliminative materialisms as his primary rhetorical foil without actually engaging any of the positions in any meaningful way. Instead he references Catherine Malabou’s perplexing work on neuroplasticity, stating that “one need not fear that bringing biology into the picture of a materialist theory of the subject leads inexorably to a reductive materialism of a mechanistic and/or eliminative sort; such worries are utterly unwarranted, based exclusively on an unpardonable ignorance of several decades of paradigm-shifting discoveries in the life sciences” (Prolegomena, 29). Why? Apparently because epigenetics and neural plasticity “ensure the openness of vectors and logics not anticipated or dictated by the bump-and-grind efficient causality of physical particles alone” (29).
Comments like these—and one finds them scattered throughout the text—demonstrate a problematic naivete regarding his subject matter. One could point out that quantum indeterminacy actually governs the ‘determinism’ he attributes to physical particles. But the bigger problem—the truly ‘unpardonable ignorance’—is that it shows how little he seems to understand the very problem he has set out to solve. His mindset seems to be as antiquated as the sources he cites. He seems to think, for instance, that ‘mechanism’ in the brain sciences refers to something nonstochastic, ‘clockwork,’ that the spectre of Laplace is what drives the unwarranted claims of reductive/eliminative materialists. ‘Decades of research revealing indeterminacy, and still they speak of mechanisms?’
As hard as it is to believe, Johnston pretty clearly thinks the primary problem materialism poses for subjectivity is the problem of determinism. But the problem, simply put, is nothing other than the Second Law of Thermodynamics, the exceptionless irreflexivity of the natural. Ontological freedom is every bit as incompatible with the probabilistic as it is with the determined. The freedom of noise is no freedom at all.
This, without a doubt, is his single biggest argumentative oversight, the one that probably explains his wholesale dismissal of any would-be detractor such as myself. His foe here is entropy, not some anachronistic conception of clockwork determinism. Only an appreciation of this allows an appreciation of the difficulty of the task Johnston has set himself. Forget the thousands of years of tradition, the lifetime of familiarity, the system of concepts anchored, forget that Johnston is arguing for the most beloved thing—your exceptionality—set aside all this, and what remains, make no mistake, is a perpetual motion machine, something belonging to reality but obeying laws of its own.
So how does one theoretically rationalize a perpetual motion machine?
The metaphor is preposterous, of course, even though it remains analogous in the most important respect. Johnston literally believes it’s possible to “be a partisan of a really and indissolubly free subject while simultaneously and without incoherence or self-contradiction remaining entirely faithful to the uncompromising atheism and immanentism of the combative materialist tradition” (176). He thinks that certain real, physical systems (you and me, as luck would have it) do not obey physical law, at least not the way every single system effectively explained through the history of natural science obeys physical law.
What makes the metaphor preposterous, however, is the apparent immediacy of subjectivity, the way it strikes us as a source of some kind upon reflection, hemmed not by astronomical neural complexities, but by rules, goals, rationality. In a basic sense, what could be more obvious? This is what we experience!
Or… is it just what we remember?
And here’s the rub. The problem that Johnston has set himself to solve is a dastardly one indeed, far, far more difficult than he seems to imagine. Even with the dazzling assurance of experience, a perpetual motion machine is a pretty damn hard thing to explain. The fact that most everyone is dazzled by subjectivity in its myriad guises doesn’t change the fact that they are, quite explicitly, betting on a perpetual motion machine. There’s a reason, after all, why everyone but everyone who’s attempted what Johnston has set out to achieve has failed. “Empty-handed adversaries,” as Johnston claims in the same interview, “do not deserve to be feared.” But if they’re empty-handed, then they must know kung-fu, or something lethal, because so far they’ve managed to kill every single theory such as his!
But when you start interrogating that ‘dazzling assurance,’ when you consider just how much we remember, things become even more difficult for Johnston. Because the fact is, we really don’t remember all that much. Certain things escape memory simply because they escape experience altogether. Our brains, for instance, have no more access to the causal complexities of their own function than they do to those of others, so they rely on powerful, yet imperfect systems, ‘fast and frugal heuristics,’ to solve (explain, predict, and manipulate) themselves and others. When abnormalities occur in these systems, such as those belonging, say, to autism spectrum disorder, our capacity to solve is impaired.
As the history of philosophy attests, we seem to experience next to nothing regarding the actual function of these systems, or at least nothing we can remember in the course of pondering our various forms of intentional problem solving. All we seem to intuit are a series of problem-solving modes that we simply cannot square with the problem-solving modes we use to engineer and understand mechanical systems. And, most importantly, we seem to experience (or remember) nothing of just how little we experience (or remember). And so the armchair perpetually remains a live option.
I say ‘most importantly’ because this means remembering doesn’t simply overlook its incapacities, it neglects them. When it comes to experience, we remember everything there is to be remembered, always. We rarely have any inkling of what’s bent, bleached, or lost. What is lost to the system, does not exist for the system, even as something lost.
Add neglect and suddenly a good number of intentional peculiarities begin to make frightening sense. Why, for instance, should we be surprised that problem solving modes adapted to solve complex causal systems absent causal information cannot themselves make sense of causal information? We are mechanically embedded in our environments in such a way that we cannot cognize ourselves as so embedded, and so are forced to cognize ourselves otherwise, acausally, relying on heuristics that theoretical reflection transforms into rules, goals, and reasons, hazy obscurities at the limits of discrimination.
We are astronomically complicated causal systems that cannot remember themselves as such, amnesiac machines that take themselves for perpetual motion machines for the profundity of their forgetting. At any given moment, what we remember is all there is; there is nothing else to blame, no neuromechanistic background we might use to place our thoughts and experiences in their actual functional context, namely, the machinery that bullets and spirochetes and beta-amyloid plaques can destroy. We do not simply lack the access and the resources to intuit ourselves for what we are (something), we lack the resources to intuit this lack of resources. Thus the myth of perpetual motion, our conviction in what Johnston calls the “self-determining spontaneity of transcendental subjects.”
The limits of remembering, in other words, provide an elegant, entirely naturalistic, explanation for our metacognitive intuitions of spontaneity, the almost inescapable sense that thought has to represent some kind of fundamental discontinuity in being. Since we cannot cognize the actual activity of cognition, that activity—the function of flesh and blood neural circuits that would seize were you to suffer a midcerebral arterial stroke this instant—does not exist for metacognition. All the informational dimensions of this medial functionality, the dimensions of the material, vanish into oblivion, stranding us with a now that always seems to be the same now, despite its manifest difference, a life that is always in the mysterious process of just beginning.
But Johnston doesn’t buy this story. For him, we actually do remember everything we need to remember to theoretically fathom experience. For him, the fact of subjectivity is nothing less than an “axiomatic intuition” (204), as dazzling as dazzling can be. He never explains how this magic might be possible, how any brain could possibly possess the access and resources to fathom its structure and dynamics in anything but radically privative ways, but then he’s not even aware this is a problem (or more likely, he assumes Freud and Lacan have already solved this problem for him). For him, self-determining spontaneity—perpetual motion—is simply a positive fact of what we are. Everything is remembered that needs to be remembered.
The problem, he’s convinced, doesn’t lie with us. So in order to pass his own test, to craft a materialism absent cryptotheological elements that nevertheless explains (as opposed to explains away) all the perplexing phenomena of intentionality, he needs some different account of nature.
He’s not alone in this regard. The vast majority of theorists who tackle the many angles of this problem are intentional realists of some description. But for many, if not most of them, the tactic is to posit empirical ignorance: though we presently cannot puzzle through the conundrums of intentional phenomena, proponents of so-called ‘spooky emergence’ contend, advances in cognitive neuroscience (and/or physics) will somehow vindicate our remembering. Consciousness and intentionality, they believe, are emergent phenomena, novel physical properties pertaining to as yet unknown natural mechanisms.
Johnston also appropriates the term ‘emergentism’ to describe his project, but it’s hard to see it as much more than a ‘cool by association’ ploy. Emergentism provides a way for physicalists (materialists) to redeem something ‘perpetual enough’ short of committing to ontological pluralism. Emergentists, in other words, are naturalists, convinced that “philosophy can and should limit itself to a deontologized epistemology with nothing more than, at best, a complex conception of the cognizing mental apparatus” (204).
This ‘article of faith,’ however, is one that Johnston explicitly rejects, claiming that “thought cannot indefinitely defer fulfilling its duty to build a realist and materialist ontology” (204). So be warned, no matter how much he helps himself to the term, Johnston is no ‘emergentist’ in the standard sense. He’s an avowed ontologist, as he has to be, given the Zizekian frame he uses to mount his theoretical chassis. “[A] theory of the autonomous negativity of self-relating subjectivity always is accompanied, at a minimum implicitly, by the shadow of a picture of being (as the ground of such subjectivity) that must be made explicit sooner or later” (204). Elsewhere, he writes, “I am tempted to characterize my transcendental materialism as an emergent dual-aspect monism, albeit with the significant qualification that these ‘aspects’ and their eradicable divisions (such as mind and matter, the asubjective and subjectivity, and the natural and the more-than-natural) enjoy the heft of actual existence” (180), that is, he’s a kind of dual-aspect monist so long as the dualities are not aspectual!
Insofar as perpetual motion machines (like autonomous subjects) pretty clearly violate nature as science presently conceives it, one might say that Johnston’s ontological emergentism is honest in a manner that naturalistic emergentism is not. As an eliminative naturalist who finds the notion of systems that violate the laws of physics arising as a consequence of those laws ‘spooky,’ I’m inclined to think so. But in avoiding one credibility conundrum he has simply inherited another, namely, our manifest inability to arbitrate ontological claim-making.
Johnston himself recognizes this problem of ontological credibility, insofar as he makes it the basis of his critiques of Badiou and Meillassoux, who suffer, he argues, “from a Heideggerean hangover, specifically, an acceptance unacceptable for (dialectical) materialism of the veracity of ontological difference, or a clear-cut distinction between the ontological and the ontic” (170). ‘Genuine materialism,’ as he continues, “does not grant anyone the low-effort luxury of fleeing into the uncluttered, fact-free ether of ‘fundamental ontology’ serenely separated from the historically shifting stakes of ontic disciplines” (171). And how could it, now that the machinery of human cognition itself lies on the examination table? He continues, “Although a materialist philosophy cannot be literally falsifiable as are Popperian sciences, it should be contestable as receptive, responsive, and responsible vis-a-vis the sciences” (171).
This, for me, is the penultimate line of the book, the thread from which the credibility of Johnston’s whole project hangs. As Johnston poses the dilemma:
“… the quarrels among the prior rationalist philosophers about being an sich are no more worth taking philosophically seriously than silly squabbles between sci-fi writers about whose concocted fantasy-world is truer or somehow more ‘superior’ than the others; such quarrels are nothing more than fruitless comparisons between equally hallucinatory apples and oranges, again resembling the sad spectacle of a bunch of pulp fiction novelists bickering over the correctness-without-criteria of each others’ fabricated imaginings and illusions.” 170
And yet nowhere could I find any explanation of how his own ontology manages to avoid this ‘fantasy world trap,’ to be ‘receptive’ or ‘responsive’ or ‘responsible’ to any of the sciences—to be anything other than another fundamental ontology, albeit one that rhetorically approves of the natural scientific project. The painful, perhaps even hilarious fact of the matter is that Johnston’s picture of intentionally rising from the cracks and gaps of an intrinsically contradictory reality happens to be the very ontological trope I use to structure the fantasy world of The Second Apocalypse!
There can be little doubt that he believes his picture somehow is receptive, responsive, and responsible, thinking, as he does, that his account
“… will not amount merely to compelling philosophy and psychoanalysis, in a lopsided, one-way movement, to adapt and conform to the current state of the empirical, experimental sciences, with the latter and their images of nature left unchanged in the bargain. Merging philosophy and psychoanalysis with the sciences promises to force profound changes, in a two-way movement, within the latter at least as much as within the former.” 179
Given the way science has ideologically and materially overrun every single domain it has managed to colonize historically, this amounts to a promise to force a conditional surrender with words—unless, that is, he has some gobsmacking way to empirically motivate (as opposed to verify) his peculiar brand of ontological emergentism.
But the closest he comes to genuinely explaining the difference between his ‘good’ ontologism and the ‘bad’ ontologism of those he critiques comes near the end of the text, where he espouses what might be called a qualified Darwinianism, one where “the chasm dividing unnatural humanity from natural animality is … not a top-down imposition inexplicably descending from the enigmatic heights of an always-already there ‘Holy Spirit’ … but, instead a ‘gap’ signalling a transcendence-in-immanence” (178). To advert to Dennettian terms, one might suggest that Johnston sees the bad ontologism of Badiou and Meillassoux as offering ‘skyhooks,’ unexplained explainers set entirely outside the blind irreflexivity of nature. His own good ontologism, on the other hand, he conceives phylogenetically, which is to say more in terms of what Dennett would call ‘cranes,’ a complicating continuity of natural processes and mechanisms culminating in ‘virtual machines’ that we then mistake for skyhooks.
Or perhaps we should label them ‘crane-hooks,’ insofar as Johnston envisions a ‘gap’ or ‘contradiction’ written into the very fundamental structure of existence, a wedge that bootstraps subjectivity as remembered…
A perpetual motion machine.
The charitable assumption to make at this point is that he’s saving this bombshell for the ensuing text. But given the egregious way he mischaracterizes the difficulties of his project at the beginning of the text, it’s hard to believe he has much in the way of combustible material. As we saw, he flat-out conflates the concrete mechanistic threat—the way the complexities of technology are transforming the complexities of life into more technology—with the abstract philosophical problem of determinism. Creeping depersonalization–be it the medicalization of individuals in numerous institutional (especially educational) contexts, or the ‘nudge’ tactics ubiquitously employed throughout commercial society, or institutional reorganization based on data mining techniques–is nothing if not an obvious social phenomenon. When does it stop? Is there really some essential ‘gap’ between you and all the buzzing, rumbling systems about you, the negentropic machinery of life, the endless lotteries that comprise evolution, the countless matter conversion engines that are stars? Does mechanism, engineered or described, eventually bump into the edge of mere nature, bounce from some redemptive contradiction in the fabric of being? One that just happens to be us?
Are we the perpetual motion machine we’ve sought in vain for millennia?
The fact is, one doesn’t have to look far to conclude that Johnston’s ontologism is just more bad ontology, the same old empty cans strung in a different configuration. After all, he takes the dialectical nature of his materialism quite seriously. As he writes:
“… naturalizing human being (i.e., not allowing humans to stand above-and-beyond the natural world in some immaterial, metaphysical zone) correlatively entails envisioning nature as, at least in certain instances, being divided against itself. An unreserved naturalization of humanity must result in a defamiliarization and reworking of those most foundational and rudimentary proto-philosophical images contributing to any picture of material nature. The new, fully secularized materialism (inspired in part by Freudian-Lacanian psychoanalysis) to be developed and defended in Prolegomena to Any Future Materialism is directly linked to this notion of nature as the self-shattering, internally conflicted existence of a detotalized material immanence.” 19-20
What all this means is that nature, for Johnston, is intrinsically contradictory. Now contradictions are at least three things: first, they logically entail everything; second, they’re analytically difficult to think; and third, they’re conceptually semantic, which is to say, intentional through and through. Setting aside the way the first two considerations raise the spectres of obscurantism and sophistry (where better hide something stolen?), the third should set the klaxons wailing for even those possessing paraconsistent sympathies. Why? Simply because saying that reality is fundamentally contradictory amounts to saying that reality is fundamentally intentional. And this means that what we have here, in effect, is pretty clearly a kind of anthropomorphism, the primary difference being, jargon aside, that it’s a different kind of anthropos that is being externalized, namely, the fragmented, decentred, and oh-so-dreary ‘postmodern subject.’
I don’t care how inured to a discourse’s foibles you become, this has to be a tremendous problem. Johnston writes, “a materialist theory of the subject, in order to adhere to one of the principal tenets of any truly materialist materialism (i.e., the ontological axiom according to which matter is the sole ground), must be able to explain how subjectivity emerges out of materiality—and, correlative to this, how materiality must be configured in and of itself so that such an emergence is a real possibility” (27). Now empirically speaking, we have no clue ‘how materiality must be configured’ because we do not, as yet, understand the mechanisms underwriting consciousness and intentionality. Johnston, of course, rhetorically dismisses this ongoing, ever advancing empirical project as an obvious nonstarter. He has determined, rather, that the only way subjectivity can be naturally understood is if we come to see that nature itself is profoundly subjective…
I can almost hear Spinoza groaning from his grave on the Spui.
If the contradiction of the human can only be ‘explained’ by recourse to some contradiction intrinsic to the entire universe, then why not simply admit that the contradiction of the human cannot be explained? Just declare yourself a mysterian of some kind–I dunno. Johnston devotes considerable space to critiquing Meillassoux for using ‘hyperchaos’ as an empty metaphysical gimmick, a post hoc way to rationalize the nonmechanistic efficacy of intentional phenomena. And yet it’s hard to see how Johnston gives his reader even this much, insofar as he’s simply taken the enigma of intentionality and painted it across the cosmos—literally so!
Johnston references the ‘sad spectacle of a bunch of pulp fiction novelists’ arguing their worlds (170), but as someone who’s actually participated in that (actually quite hilarious) spectacle, I can assure everyone that we, unlike the sad spectacle of Continental materialists arguing their worlds, know we’re arguing fictions. What makes such spectacles sad is the presumption to a cognitive authority that simply does not exist. Arguing the intrinsically dialectical nature of materiality is on a par with arguing intelligent design, save that the intuitions motivating intelligent design are more immediate (they require nowhere near as much specialized training to appreciate), and that its proponents have done a tremendous amount of work to make their position appear receptive, responsive, and responsible to the sciences they would, in the spirit of share-and-share alike, ‘complement with a deeper understanding.’
A contradictory materiality is an anthropomorphic materiality. It provides redemption rather than understanding, the decentred-me-friendly world that science has been unable to find. In his attempt to materially square the circle of subjectivity, Johnston invents a stripped down, intellectualized fantasy world, and then embarks on a series of ‘fruitless comparisons between equally hallucinatory apples and oranges’ (170). And how could it be any other way when all of these pulp philosophy thinkers are trapped arguing memories?
Vivid ones to be sure, but memories all the same.
The vividness, in fact, is a large part of the whole bloody problem. It means that no matter how empty our metacognitive intuitions regarding experience are, they generally strike us as sufficient: What, for instance, could be more obvious than our normative understanding of rules? But there’s powerful evidence suggesting our feeling of willing is only contingently connected to our actions (a matter of interpretation). There’s irrefutable evidence that our episodic memory is not veridical. Likewise, there is powerful evidence suggesting our explanations of our behaviour are only contingently related to our actions (a matter of interpretation). Even if you dispute the findings (with laboratory results, one would hope), or think that psychoanalysis is somehow vindicated by these findings (rather than rendered empirically irrelevant), the fact remains that none of the old assumptions can be trusted.
Do you have any metacognitive sense of the symphony of subpersonal heuristic systems operating inside your skull this very instant, the kinds of problems they’ve adapted to solve versus the kinds of problems that can only generate impasse and confusion? Of course not. The titanic investment in time and resources required to isolate what little we have isolated wouldn’t have been required otherwise. We are almost entirely blind to what we are and what we do. But because we are blind to that blindness, we confuse what little we do see with everything to be seen. We therefore become the ‘object’ that cannot be an ‘object,’ the thing that cannot be intuitively cognized in time and space, that strikes us with the immediacy of this very moment, that appears to somehow stand outside a nature that is all-encompassing otherwise.
The system outside the picture, somehow belonging and not belonging…
Or as I once called it, the ‘occluded frame.’
And this just follows from our mechanical nature. For a myriad of reasons, any system originally adapted to systematically engage environmental systems will be structurally incapable of systematically engaging itself in the same manner. So when it develops the capacity to ask, as we have developed the capacity to ask, ‘What am I?’ it will have grounds to answer, ‘Of this world, and not of this world.’
To say, precisely because it is a mechanism, ‘I am contradiction.’
As with the crude thumbnail given above, the Blind Brain Theory attempts to naturalistically explain away the peculiarities of intentionality and phenomenality in terms of neglect. Since we cannot intuit our profound continuity with our environments, we intuit ourselves otherwise, as profoundly discontinuous with our environments. This discontinuity, of course, is the cornerstone of the problem of understanding what we are. Before, when the brain remained a black box, we could take it for granted, we could leverage our ignorance in ways that catered to our conceits, especially our perennial desire to be the great exception to the natural. So long as the box remained sealed, we could speak of beetles without fear of contradiction.
Now that the box has been cracked open with nary a beetle to be found, all those speculative discourses reliant upon our historical ignorance find themselves scrambling. They know the pattern, even if they are loath to speak of it or, like Johnston, prone to denial. Nevertheless, science is nothing if not imperial and industrial. It displaces aboriginal discourses, delegitimizes them in the course of revolutionizing any given domain. Humans, meanwhile, are hardwired to rationalize their interests. When their claims to status and authority are threatened, the moral and intellectual deficiencies of their adversary simply seem obvious. So it should come as no surprise that specialists in those discourses are finally rousing themselves from their ingroup slumber to defend what they must consider manifest authority and hard-earned privileges.
But they face a profound dilemma when it comes to prosecuting their case against science—a dilemma not one of these Continentalists has yet acknowledged. Before, in the good old black box days, they could rely on simple pejoratives like ‘positivism’ and ‘scientism’ to do all the heavy lifting, simply because science reliably fell silent when it came to issues within their domain. The bind they find themselves in now, however, could scarce be more devious. The most obvious problem lies in the revolutionary revision of their subject matter—the thinking human. But the subject matter of the human is also the subject of the matter, the activity that makes the understanding of any subject matter possible. Continentalists, of course, know this, because it provides the basis for their ontological priority claims. They are describing, so they think, what makes science possible. This is what grants them diplomatic transcendental immunity when they take up residence in scientific domains. But Johnston isolates the dilemma—his dilemma—himself when he points out the empty nature of the Ontological Difference.
Foucault actually provides the most striking image of this that I know of with his analysis of the ‘empirico-transcendental doublet called man’ in The Order of Things. What is transpiring today can be seen as a battle for the soul of the darkness that comes before thought. Is it ontological as so much of philosophy insists? Or is it ontic as science seems to be in the process of discovering? So long as our ontic conditions remained informatically impoverished, so long as the brain remained a black box, then the dazzling vividness of our remembering could easily overcome our abstract, mechanistic qualms. We could rely on the apparent semantic density of ‘lived life’ or ‘conditions of possibility’ or ‘language games’ or ‘epistemes’ or so on (and so on) to silence the rumble of an omnivorous science. We could dwell in the false peace of trench warfare, a stalemate between two general, apparently antithetical claims to one truth. As Foucault writes:
“… either this true discourse finds its foundation and model in the empirical truth whose genesis in nature and in history it retraces, so that one has an analysis of the positivist type (the truth of the object determines the truth of the discourse that describes its foundation); or the true discourse anticipates the truth whose nature and history it defines; it sketches it out in advance and foments it from a distance, so that one has a discourse of the eschatological type (the truth of the philosophical discourse constitutes the truth in formation).” 320
Foucault, of course, has stacked the deck in this characterization of epistemological modes—simply posing the (historically contingent) problem of the human in terms of an ‘empirico-transcendental doublet’ is to concede authority to the transcendental—but he was nevertheless astute–or at least evocative–in his assessment of the form of the problem (as seen from within the subject/object heuristic). Again, as he writes:
“The true contestation of positivism and eschatology does not lie, therefore, in a return to actual experience (which rather, in fact, provides them with confirmation by giving them roots); but if such a contestation could be made, it would be from the starting-point of a question which may well seem aberrant, so opposed is it to what has rendered the whole of our thought historically possible. This question would be: Does man really exist?” 322
A question that was both prescient in his day and premature, given that the empirical remained, for most purposes, locked out of the black box of the human. For all his historicism, Foucault failed to look at this dilemma historically, to realize (as Adorno arguably did) that short of some form of reason capable of contesting scientific claims on the human, the domain of the human was doomed to be overrun by scientific reason, and that discourses such as his would eventually be reduced to the status of alchemy or astrology or religion.
And herein lies the rub for Johnston. He thinks the key to a viable Continental materialism turns on getting the ontological nature of the what right, when the problem resides in the how. He says as much himself: anybody can cook up and argue a fantasy world. In my own lectures on fantasy, the most fictional of fictions, I always stress how the anthropomorphic ‘secondary worlds’ depicted could only be counted as ‘fantastic’ given the cognitive dominion of science. This, I think, is the real anxiety lurking beneath his work (despite all his embarrassing claims about ‘empty handed foes’). The only thing preventing the obvious identification of his secondary worlds as fantastic was the scientific inscrutability of the human. Now that the human is becoming empirically scrutable across myriad dimensions, now that the informatic floodgates have been cranked open—now that his claims have a baseline of comparison—the inexorable processes that rendered the anthropomorphic fantastic across external nature are beginning to render internal meaning fantastic as well.
Why do pharmaceuticals impact us? Man is a machine. Why do cochlear implants function? Man is a machine. Why do head injuries so profoundly reorganize experience? Man is a machine. The Problem of Mechanism is material first and only secondarily philosophical. Given what I know about the human capacity for self-deception (having followed the science for years now), I have no doubt that the vast majority of people will find refuge in ‘mere words,’ philosophical or theological rationalization of this or that redeeming ‘axiomatic posit.’ This is what makes the Singularity so bloody crucial to these kinds of debates (and what puts thinkers like David Roden so tragically far ahead of his peers). When we become indistinguishable from our machinery, or when our machines make kindergarten scribbles of our greatest works of genius, will we persist in insisting on our ontological exceptionality then?
Or will the ‘human’ merely refer to some eyeless, larval stage? Will noocentrism be seen as the last of the three great Centripetal Conceits?
Short of discovering some Messianic form of reason—a form of cognition capable of overpowering a scientific cognition that can cure blindness and vaporize cities—attempts to argue Messianic realities a la Continental materialism are doomed to fail before they even begin. Both the how and the what of the traditional humanities are under siege. As it stands, the profundity of this attack can still be partially hidden, so long as one’s audience wants to be reassured and has no real grasp of the process. A good number of high profile researchers are themselves apologists for the humanistic status quo, so one can, as defenders of various religious beliefs are accustomed, pluck many heartening quotes from the enemy’s own mouth. But since it is the rising tide of black-box information that has generated this legitimacy crisis, it seems more than a little plausible to presume that it will deepen and deepen, until finally it yawns abyssal, no matter how many well-heeled words are mustered to do battle against it.
No matter how many Johnstons pawn their cryptotheological perpetual motion machines.
Our only way to cognize our experiencing is via our remembering. The thinner this remembering turns out to be—and it seems to be very thin—the more we should expect to be dismayed and confounded by the sciences of the brain. At the same time we should expect a burgeoning market for apologia, for rationalizations that allow for the dismissal and domestication of the threats posed. Careers will be made, celebrated ones, for those able to concoct the most appealing and slippery brands of theoretical snake-oil. And meanwhile the science will trundle on, the incompatible findings will accumulate, and those of us too suspicious to believe in happy endings will be reduced to arguing against our hopes, and for the honest appraisal of the horror that confronts us all.
Because the bandage of our traditional self-conception will be torn away quicker than you think.
* POSTSCRIPT (17/01/2014): Levi Bryant, it should be noted, is an exception in several respects, and it was remiss of me to include him without qualification. A concise overview of his position can be found here.
So many interesting things said here, but one stuck to me like quicksand:
“Elimination, in other words, has to explain why elimination is required in the first place.”
To me that is the dark ponderable at the end of the tunnel. For Eliminative materialism will need to provide if not an answer to that then at least a thorough set of problems to which its investment in the neurosciences will ultimately provide a solution. That for me is the struggle I face in my understanding of the history of this eliminative materialist perspective, both in the skeptic and naturalist traditions, as well as in the current work of practicing neuroscientists in the so far 21 disciplines of that framework. That in itself could take a lifetime of study…
There’s so many things in cognitive science that depend on what intentionality is, not the least of which is the definition of ‘cognition.’ BBT would make most of these problems go away, provide a means to handle intentional terms where their shorthand power is warranted. As it stands now, when someone from any of the fields mentions ‘cognition’ I literally have no idea what they mean, short of a number of follow-up questions. If we could isolate our suite of heuristic capacities, roughly outline the kinds of problem ecologies that suit them, it would save plenty o’ problem-solving grief. Psychology could finally shrug off the last of its functionalist confusions, see that they’re trading in mechanism sketches, and come work in direct concert with neuroscience. It would also allow philosophy to move on, abandon the great phantom tail chase, the endless empirico-transcendental doubling.
And it would break most every knowledgeable heart in the world, profoundly rewrite the role science plays in social policy (in no way I find appealing), license the medicalization of everything, and on and on.
Post-intentionality (whether in the form described by BBT or some other more nuanced form) would represent the most decisive scientifically driven break with the past in the history of the human race.
Yep, if one were actually going to clear the path toward a neoeliminative materialist perspective, one would need to start by recapping the short dubious history of intentionalism itself. Maybe that is the place to begin… because one cannot eliminate what one has not thoroughly understood, and as that old historian of man’s first myths Vico once remarked: “One understands only what one has made.” So like any good engineer we must first reverse engineer the machine of intentional thought through its strange history from Plato to now.
In fact I’ve been pondering of late Reza Negarestani’s form of eliminative materialism again. Of course Reza has been around for quite a while and I had at one time written extensively on his old hyperstition work, along with his newer theory-fictions Cyclonopedia and Culinary Materialism, etc., from Collapse volumes to published fiction, etc.
In his concept of the Blank Sheet of Assertion he asks: “If the Universal is foreclosed to the thought of its particular instances and if the free sign is absolutely devoid of any significance and meaning and if the eternal qua zero of nature (0) is concomitantly inclined to posit difference and indifferently remain within itself for no reason whatsoever, then how is it possible to navigate the universal, to systematically approach the empty and free sign and think the eternal? In other words, how can we think or imagine a system of knowledge proper to a meaningless, contingent, free and bottomless Universe?”
He starts with Lorenz Oken’s notion of Zero (0) as the first act, the nothing of nature. Yet, this act has no ground, no substratum.
In this regard I think of the brain’s processes foreclosed to thought per se, without any preconceived significance or meaning, in which the question is: how do we map that which is zero, a blank assertion, a ground that is no ground, with our outmoded tools of thought and reason? What must we do to break free of the old modes and into a new framework of explanatory power? That, to me, is the central problem for both science and philosophy in our time. Almost like bootstrapping everything that has preceded us into some new system of thought that has as yet only an inkling of traces on our horizon.
And, if my guess is right, and hearing you… we won’t be able to, but our progeny, the new AIs or thinking machines we make in some near future, will do just that and surpass us by revealing the underpinnings of our own nothingness and Zero point foreclosures.
I’ve actually flirted with similar ideas, and I know Reza was very enthusiastic about BBT when he was here in baby London last year, but I had no idea he had struck out on his own eliminativism tangent. Sounds fascinating to be sure! But now I think this amounts to the same old metacognitive instinct to universalize our heuristics, that we’re in roughly the same position with respect to BBT as we are the Standard Model: we’ve run aground on our own native heuristic limits, and can only proceed with the project of cognition via all the myriad prostheses of science. Our knowledge is officially greater than we are, which is to say, we can continue with the work of prediction and manipulation, but the realm of ‘intuitively satisfying explanations’ is now behind us. The big reason for this has to do with the way the subject/object heuristic ropes thought back into its image, time and again, when the vistas suggested by BBT, anyways, are quite beyond subjectivity and objectivity – and almost unimaginable for it.
I’ve been cooking up a post on this for some time now, but it refuses to come together. The upshot has quite a bit to do with what you term ‘wisdom,’ Stephen, an ability to accept that our relation to our knowledge is so componential that the possibility of adequately internalizing its ‘image’ is forever beyond us.
But, but… the complete realization that we’re not perpetual motion machines is super depressing.
And hilarious!
Thank goodness that for most of us there is a huge gap between merely knowing a thing is true and believing it.
That’s why it makes me feel like a hypocrite, calling out hypocrisy all the time!
I’ve never understood how Zizek and his crew could continue peddling their warmed up Kantian idealism as any sort of “materialism”. As Keith Frankish once remarked to me in a park in Heraklion: the bottom line for any materialism is that there is nothing ontologically privileged about the mental. The materialist has to assume that the universe is non-mental and that humans are unspecial parts of it.
This is the most brilliant dissection of this philosophical corpse that I’ve yet read – it cries out to be submitted somewhere. Scott, have you considered Speculations, Angelaki or Continental Philosophy Review . . .
I should fess up to some points in Posthuman Life which could be construed as deviations from this materialist ordinance. Ch3 set up a kind of phenomenologically-tinged pragmatism as an aprioristic hedge against radical post-human weirdness, only to knock the whole thing down from a position that is very similar in content to BBT: i.e. that our phenomenology is striated with darkness, so phenomenology cannot tell us what it is. Since we do not know what phenomenology is, we do not know what kind of “phenomenologies” populate the posthuman possibility space ex ante.
But this argument is adapted from, of all places, Derrida’s claim in “Genesis and Structure” that there’s a kind of disconnect between phenomenology’s doctrine of evidence and its transcendental presuppositions (there’s also an argument against Davidson’s position on the constitutive role of language in thought, but it’s less prominent in the book as a whole). So there’s a strategic use of Derridean deconstruction, as well as of Deleuzean assemblage ontology. I need these to a) rip away all anthropological integuments from posthumanism and b) to talk about posthumanisms at an adequate level of generality.
All goes to show that the only thing harder than ontology is not doing it. We’re not just incompetent, we’re incontinent.
I thought you would like it. I don’t know what my problem is when it comes to rewriting for publication. Time is always a big factor (this was simply a night-time diversion to the overhaul of The Unholy Consult). In the meantime, Blog-Pharaoh must be fed!
Like you, a part of me is amazed at the tenaciousness of these views, and even more that their popularity seems to continually grow. It really just seems to be a matter of vocabulary – I always come to a point reading these books where I realize the author has no clear sense of the opposing conceptualities they’re attempting to harmonize, and is all but oblivious to the way attempts to source intentionality turn out to be exercises in smoothing bubbles out of wallpaper. Since the analytics have spent so long trying to square the same circle, it has to be worth taking a long hard look at their failures before embarking on a project like this. Register blindness, maybe?
Regarding Derrida, if you get a chance, David, check out the Hagglund critique I link at the beginning of this piece, which tries to show how Derrida can be translated out of his semantic register. I’m curious as to what you think. I’ve discovered the questions of metacognitive access that underwrite BBT and Dark Phenomenology are show-stoppers in analytic contexts as well.
Incontinence is the key, which is why we gird ourselves with due diligence Depends: stake our ontological commitments, abductively pin their ‘veracity’ to their consilience and the explanatory work they seem to do, and stop there. Incontinental materialism, on the other hand… 😉
BTW can anyone advise me about where to begin reading Reza’s work on eliminativism. I’ve tried and failed, I must admit.
I’d like to know myself!
The sad thing is that a lot of Reza’s works on the web were removed about a year or so back because of threats by the Iranian government against members of his family. I know I was asked to remove all of my essays on him at that time, so about the only mention I have is of his fictional work, not his philosophy. One used to find all of his old works on the old hyperstition blog site. But alas that was removed too. I know his theory-fictions in his proposed tetralogy on dark materialism, Cyclonopedia and Culinary Materialism, and a few essays. His wiki entry pretty much sums up what’s left on the web in its reference section…
http://en.wikipedia.org/wiki/Reza_Negarestani
There’s something bizarrely inspiring about the effort put into writing essays of this sort, since it contains within it all that is necessary to know that it is doomed to fail. (And yes, I did catch the short paragraph that allows me to forgive Bakker for not being exclusively Unholy Consulting 24/7 at this point – it’s all of a piece in ways I am beginning to grasp more clearly).
Philosophers will never be dissuaded from going on about these issues, nor from thinking they have something meaningful to add. It’s like Amazonian tribes believing that if they were only sufficiently devout, their prayers and ritual sacrifices could prevent the modern world from encroaching on and destroying their ancestral homelands.
I am an oddity among professional scientists, having some (undergraduate) philosophical training, and even so, this blog is my only point of contact with contemporary philosophy of mind. It’s a colossal waste of time (and not only for people in the process of completing much-anticipated fantasy trilogies, natch).
Perhaps one day we will be able to clinically diagnose the “philosophical” interest in these topics and the imperviousness to their demonstrable disutility. We have made some progress on related issues:
J Neurosci. 2011 Apr 20;31(16):6188-98. doi: 10.1523/JNEUROSCI.6486-10.2011.
Dopaminergic genes predict individual differences in susceptibility to confirmation bias.
Doll BB, Hutchison KE, Frank MJ.
Abstract
The striatum is critical for the incremental learning of values associated with behavioral actions. The prefrontal cortex (PFC) represents abstract rules and explicit contingencies to support rapid behavioral adaptation in the absence of cumulative experience. Here we test two alternative models of the interaction between these systems, and individual differences thereof, when human subjects are instructed with prior information about reward contingencies that may or may not be accurate. Behaviorally, subjects are overly influenced by prior instructions, at the expense of learning true reinforcement statistics. Computational analysis found that this pattern of data is best accounted for by a confirmation bias mechanism in which prior beliefs–putatively represented in PFC–influence the learning that occurs in the striatum such that reinforcement statistics are distorted. We assessed genetic variants affecting prefrontal and striatal dopaminergic neurotransmission. A polymorphism in the COMT gene (rs4680), associated with prefrontal dopaminergic function, was predictive of the degree to which participants persisted in responding in accordance with prior instructions even as evidence against their veracity accumulated. Polymorphisms in genes associated with striatal dopamine function (DARPP-32, rs907094, and DRD2, rs6277) were predictive of learning from positive and negative outcomes. Notably, these same variants were predictive of the degree to which such learning was overly inflated or neglected when outcomes are consistent or inconsistent with prior instructions. These findings indicate dissociable neurocomputational and genetic mechanisms by which initial biases are strengthened by experience.
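As an illustrative aside, the confirmation-bias mechanism the abstract describes – prior beliefs amplifying learning from outcomes that agree with instruction and dampening learning from outcomes that disagree – can be sketched as a toy reinforcement-learning agent. This is emphatically not the authors’ actual model; every parameter name and value below is an assumption chosen purely for illustration.

```python
import random

def run_agent(n_trials=200, bias=0.75, alpha=0.2,
              true_p=(0.3, 0.7), instructed_best=0):
    """Toy Q-learning agent with an instruction-driven confirmation bias.

    Outcomes that agree with the prior instruction ("option
    `instructed_best` is best") are amplified during learning;
    outcomes that disagree are discounted. Illustrative only.
    """
    q = [0.5, 0.5]  # initial value estimates for the two options
    for _ in range(n_trials):
        # epsilon-greedy choice between the two options
        if random.random() < 0.1:
            choice = random.randrange(2)
        else:
            choice = max(range(2), key=lambda a: q[a])
        reward = 1.0 if random.random() < true_p[choice] else 0.0
        # prediction error, with learning rate scaled by whether
        # the outcome confirms the instructed belief
        delta = reward - q[choice]
        confirms = (choice == instructed_best) == (delta > 0)
        gain = alpha * (1 + bias) if confirms else alpha * (1 - bias)
        q[choice] += gain * delta
    return q

random.seed(0)
q = run_agent()
```

With the instruction wrongly favoring option 0 (true reward probabilities 0.3 vs 0.7), the asymmetric update tends to keep the agent’s estimate for option 0 inflated relative to the true reinforcement statistics even as disconfirming evidence accumulates – qualitatively the pattern of instructed persistence the paper reports.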
Still, very nice essay. It can be worth it to beat a dead horse if only to demonstrate to someone else why they should stop trying to ride it.
I was hoping you would swing by ochlo, primarily because I think it serves partisans of this stuff to hear just how out of touch they (we!) are with what’s actually going on from a neuroscientist’s POV, but also because I wanted to ask you what your take on the localization problem is, because this is where it seems you have no choice but to rub elbows with philosophy/psychology folk, if only to neurally pin down what functions you’re attempting to isolate. Is this a fair assessment? What’s your attitude regarding psychology more generally? I know you don’t think much of Craver, but I’m guessing you would think (as I do) that insofar as they’re part of a progressive explanatory paradigm at all, they’re in the business of mechanism sketches.
I agree with what you say regarding the medicalization of philosophy, of course. After all, I’m essentially accusing all of traditional philosophy of suffering from anosognosia! I just can’t see why this doesn’t horrify you the way it horrifies me…
Hey.
I don’t know what “the localization problem” refers to, nor do I understand what is intended by “neurally pin down what functions you are trying to isolate”. I guess I don’t think about what I’m doing in terms of “isolating functions”. I’m happy to try to answer if you clarify the question a bit.
“Psychology” is a broad field, encompassing psychophysics (a close cousin to sensory neurophysiology, my own field), to social psychology, developmental psychology, etc. I suppose in the broadest terms, I am generally sympathetic to people who attempt to go out and measure something about the world, report what they did, and what they found, even when the methodological limits are relatively severe (i.e., you may have to rely on the self-assessments of bored undergraduates satisfying course credit requirements, etc.).
I am not sure that everyone is in the “business of mechanism sketches”, but I am also not entirely sure what is intended by the phrase. A certain process must eventually be implemented in a physical system, but there are levels of analysis that can also be useful in explaining how a system is organized functionally. For example:
Click to access Carandini-NRN2012.pdf
Now, you may be more familiar with this:
Click to access chirimuuta-minimal-models-and-cncs-penult.pdf
I haven’t read the latter but I suspect a comparison of the two may illustrate the difference between a scientific review and… whatever the latter is.
Sorry if this reply is less than useful.
Are you kidding me? These are fantastic (I would be very grateful, in fact, if you could shoot me a link any time you bump into anything like these!). I haven’t given them a full read, but I’m curious as to why the authors of the latter even think their examples count as a problem for the mechanist account, aside from a semantic turf war over who gets to use the word ‘explanation’ (this is where I think Craver overreaches, trying to monopolize the term). The fact that myriad mechanisms can give rise to what appears to be roughly the same function is no big whup at all for the mechanist, as is the fact that mathematical models provide far and away the most effective way to understand that function. The point is that it’s mechanisms we’re talking about, and if we want to, say, enhance the capacity of a given mathematically modelled function in any individual brain, we’re going to have to goose the implementing mechanism to do so. The overarching thrust of Craver’s argument is basically, What the hell are we talking about otherwise, if we’re not talking about the brain? In his famous paper with Piccinini he even makes an information argument similar to ones I make: psychology had no choice but to go it alone positing functions before the black box was cracked open. Is the argument that they should continue that way, that all the information provided by neuroscience is useless on the basis of philosophical ‘autonomy arguments’?
It’s when the functions described are intentional that the problems arise (just think of the failure of the semantic externalist project (Fodor, Dretske, etc.)), that the emergence becomes ‘spooky,’ that the implementing mechanisms cannot be invoked without contradicting the apparent function to be localized.
The problem of localizing psychological functions (take ‘belief,’ say) in the brain has two main poles: the problem of defining the function for which a neurobiological explanation is sought, and the problem of where to begin looking for that function in the madhouse complexity of the brain. So my friend, Eric Schwitzgebel, for instance, is something of a ‘localization skeptic,’ critical especially of the kind of ‘boxology’ you find some theorists indulging in. It’s the problem of bringing together two mysteries, only one of which is empirically tractable. Given the conceptual antipathy between intentional functions (judging true, etc.) and causal explanations, philosophers of mind haven’t been able to decide how one could naturalize (as opposed to operationalize) the former in principle. This motivates Eric’s dispositionalism: all we have are a welter of behavioural tendencies to go on, some of which our psychological terms can readily handle, some forcing us to look elsewhere (as in cases of head trauma). This is too phenomenalist for me, but that’s another story.
I figure I’ll just post here rather than respond in three different places. Frank, if you read this, Orion’s Arm might have ruined me at a different time in life.
I just wanted to write that though I’ve read all these posts (and bent myself to other blogs because of TPB’s inception), one thing I’ve learned from pursuing philosophy is that I’m going to have to go back and reread them all with their connotations in mind to truly understand them.
This community of philosophers you’ve all fostered together is impressive.
But to this ochlo/Bakker dyad specifically (as I always enjoy what manifests here):
I’m basically of the same mind as ochlo’s first post – excepting that I think I understand Bakker’s engaged in an attempt at social engineering: save as many philosophers as possible from their sinking ship. The mind/brain of a philosopher as computational engine is a precious commodity, and it’s only for the moment that the dominant philosophic problems are entangled in fictions.
I’m of the opinion after all that philosophers make the best scientists. Science is subject to philosophic fallacy; scientists do philosophy all the time. Philosophy, on the other hand, is an exercise in rigour, no matter the content of focus – though, it would do philosophy some good now and then to reference the sheer numbers involved in bias and heuristic, if it is truly interested in rigour (if not, as Bakker’s found, the possibility of weighty philosophy).
ochlocrat: I don’t know what “the localization problem” refers to, nor do I understand what is intended by “neurally pin down what functions you are trying to isolate”. I guess I don’t think about what I’m doing in terms of “isolating functions”. I’m happy to try to answer if you clarify the question a bit.
rsbakker: I wanted to ask you what your take on the localization problem is, because this is where it seems you have no choice but to rub elbows with philosophy/psychology folk, if only to neurally pin down what functions you’re attempting to isolate.
To throw in my unasked-for communicative two cents: I would hazard that Bakker is wondering at neuroscience’s digestion of the ‘hard problem,’ or more contextually relevant, perhaps, the ‘binding problem.’ From what I find, the theoretical instances of psychology aren’t really concerned with either of these things, insofar as the sciences simply seem to assume that these will be either/or (in)validated (as ochlo highlights, you don’t necessarily need to concern yourself with these questions when you are doing specific research, which ultimately works by culminating trickle).
I think you’re right to think the sciences don’t give a damn generally, and that the division of labour really is one of them pressing their research onward as effectively as possible while wankers like me whine and opine about ‘what it all means’ post facto. Psychology, though, is in a real pickle, I think. There really are huge conceptual problems, where their inability to precisely define their concepts makes it possible to deny any cogency to many of their findings (check out Elizabeth Irvine’s Consciousness as a Scientific Concept, for instance). Then there’s the issue of representation, which has been operationalized without difficulty in many empirical contexts, but itself remains (given the ‘hard problem of content’) opaque to naturalistic modes of description. These are the kinds of issues I think BBT can resolve.
There really are huge conceptual problems, where their inability to precisely define their concepts makes it possible to deny any cogency to many of their findings (check out Elizabeth Irvine’s Consciousness as a Scientific Concept, for instance).
Agreed. Distinguishing constructs is key – for instance, BBH might actually be the unifying framework – but that there is a need for some kind of unified theory is undisputed… I think. Most of my professors, or practicing academics that I read, still tend to wax philosophical as much as they are committed to “pressing their research onward,” and so obviously I agree that psychology as a discipline has to address the same issues you’ve chosen to tackle philosophically.
I’m having issues accessing Irvine’s chapter but I’ll figure it out. I did stumble across Neuroscience and the correct level of explanation for understanding mind: An extraterrestrial roams through some neuroscience laboratories and concludes earthlings are not grasping how best to understand the mind–brain interface – I can’t remember if this is specifically what you or Thomson were riffing off of in Thinker as Tinker.
Then there’s the issue of representation, which has been operationalized without difficulty in many empirical contexts, but itself remains (given the ‘hard problem of content’) opaque to naturalistic modes of description. These are the kinds of issues I think BBT can resolve.
It is possible that BBT can (dissolve) resolve these distinctions (and if so, probable that in resolving these distinctions a unifying theory will operationalize these disciplines within novel – more relevant – contexts). It’s not what I want to focus on when I manage to break into research but I’ve definitely tried to think of how BBH can be researched. And to what end…
To what end… It would be a career maker if confirmed, because it would resolve a boatload of confusions. Possible avenues of research: Some kind of exhaustive accumulation of all the small ‘neglect’ findings in the existing literature to see how they might fit with the theory. I think the kinds of neglect driven cognitive bias effects (Only-game-in-town Effect, Origination Effect) that BBT predicts could be quite easily mapped into different research paradigms. I even wonder if it couldn’t be extended into psychophysics, provide a broader definition of ‘fusion,’ and open the possibility of all kinds of ‘cognitive fusion’ effects. I think you could use BBT to actually give diverse experimental phenomena like change blindness, attentional blindness, duration neglect, and flicker fusion a common theoretical paradigm for interpretation, and I’m sure a number of experimental possibilities would leap out of a project such as that.
The one thing it would do is put to bed arguments like Gazzaniga’s from that awesome little piece you mention – which is to say, do away with the ‘Accomplishment Fallacy’ – the notion that our experiences as we metacognitively intuit them are somehow complete, sufficient, reliable – something our brain accomplishes – as opposed to what our brain merely thinks it accomplishes when its metacognitive resources are exapted to the purposes of theoretical reflection. The ‘whole is more than the sum of the parts,’ or mechanical emergence, is part and parcel of all natural scientific investigation, and has never been particularly mysterious. To invoke it as a way to explain away our inability to explain intentionality and consciousness is simply an attractive dodge. Why should emergence of the first kind be straightforward, and emergence of the latter kind be baffling, spooky? BBT answers this question too.
Lol – I hope someone with a little more current research agency is paying attention; serious gold here.
One important point you hit upon here is that there is a distinct lack of homogeneity of terms – for instance, flicker fusion and perceptual threshold are essentially aspects of a similar construct and refer to almost the same constituent criteria, yet one is used almost exclusively in psychophysical discourse, the other in psychological.
Possible avenues of research: Some kind of exhaustive accumulation of all the small ‘neglect’ findings in the existing literature to see how they might fit with the theory.
In my BBT-specific musings, this is where I tend to tread mostly. There seems the descriptive capacity to rearrange the heuristics and biases by ‘families’ of neglect, by describing cognitive ecologies (this, I think, is the breadwinner for psychological theory, which naturally seems to fall out of a BBT paradigm).
That Gazzaniga piece drives me up the wall. It’s great noting that some feature of folk psychology produces excellent results in certain contexts, but that doesn’t explain at all how any such results are formed. Leaving it at “emergence” sounds very much like faith in some kind of transcendental magic.
Take his own examples of democracy and computers. If a component individual’s elimination just happens to coincide with the collapse of a social democracy, how would we analyze such an event or remedy the outcome? If I wanted to improve the performance of not just software but also the hardware of a computer in a way that no one else has currently achieved, what kind of knowledge would I need to attempt such a feat? If properties just “emerge” for mysterious and unfathomable reasons, then it seems like I really have no recourse should those properties ever fail to emerge the way I want them to.
Exactly. It’s the big difference between the two kinds of emergences, spooky and mechanical. There are multiple ways to carve nature, sure, but ultimately we’re after the joints, what allows us to manipulate (or kick back, listen to some Marley, whatever makes your time bomb tick).
“Computational analysis found that this pattern of data is best accounted for by a confirmation bias mechanism in which prior beliefs–putatively represented in PFC–influence the learning that occurs in the striatum such that reinforcement statistics are distorted.”
This sounds really interesting, ochlocrat. I suppose the usual riposte to accusations of philosophical irrelevance is to point out that we (I plead guilty) have a professional interest in explicating concepts like mental or neural representation. If we don’t understand what it could mean to say that beliefs are represented in the PFC, or in anything else, then the content of the hypothesis under test is unclear. There’s a canonical body of work in this area by the likes of Fodor, Churchland, Dennett, etc. Maybe none of this has cracked it and (maybe) Scott is right that we end up with some kind of semantic eliminativism, but, if that’s so, then this claim is literally false.
“Maybe none of this has cracked it and (maybe) Scott is right that we end up with some kind of semantic eliminativism, but, if that’s so, then this claim is literally false.”
Depending on how you interpret the terms, of course! A good number of people in cognitive science now use ‘representation’ and even ‘content’ in ‘metaphoric’ senses, the idea being they’re great little intentionalisms that, because of their heuristic power, allow us to effectively carry on with our research and understanding, but shouldn’t be taken literally. ‘Recapitulations’ is what I like to use to get away from the neglect driven image of heads filled with little pictures (radically distributed, of course!) somehow possessing instantaneous logical relationships with ‘objects’ across the vacuum of space.
I feel sorry for enactivists because of this: on the one hand, of course we’re fundamentally continuous with our environments. On the other hand, of course we represent things across the vacuum. Short something like BBT, which is to say, a way to get rid of the second ‘of course,’ they have a hard circle to square.
And this is just where the semantic confusion begins! Everyone uses these operative concepts in their own private way, it sometimes seems.
Why?
Might be a side point, but I’m left wondering about the context of ‘why’ – in that, for one person, what might seem a discrepancy brought to light by the question is no discrepancy, or even notable, at all. I ran into this with Eric Schwitzgebel the other day, where I thought raising the question of who is the author of a certain measuring method was significant, yet it didn’t seem to leap out for my interlocutor. ‘Why’ can seem personally compelling, yet, crossing the threshold of conversation, actually be subject to quite a number of limitations.
because so far they’ve managed to kill every single theory such as yours!
Perhaps in regard to the ‘why’ issue, they kill it in ways that just don’t seem relevant?
“Why” what exactly? Without further context your question makes no sense.
All whys, Frank. The meta of whys!
But seriously, I am talking about the overall structure of communication when asking questions of these philosophers. If you ask why because to you it provokes the recognition of incongruities A, B and C, but to the listener it provokes only incongruity C, or worse, no incongruities at all, then asking why in regard to A, B and C will not cut it in itself, communication-wise.
I say this having often said ‘Why?’ myself – and now considering how little it might trigger in a listener.
As a side topic on a side topic, it’s interesting how ‘why’ would not seem a topic in itself – it seems pivotal – one doesn’t ask about why, one asks “‘why’ what?”. As if one only bases further conversation on it, rather than inverting and making ‘why’ the actual thing to investigate via conversation.
Why?
All time classic. I saw him do that live once and I swear to God I farted for laughing. People were sucking air and whooping at the same time, scowling for disgust and laughter.
One of the few honest ways of being a philosopher left, comedy.
To respond straight-faced – the thing is, that is an example of being able to freeform a response. He might respond in regard to A, or D, or K, or Z. The person asking why does not have an expectation of the response provoking B or something like that. There isn’t going to be a communicative issue of someone asking why and expecting (or stuck on expecting) a B response, but getting some other letter instead.
“But teh funny video…”
Hey man, I’m working up my own material! How can you be a comedian if you only ever laugh?
*dun dun DUN!*
Many thought-provoking formulations here, as always.
Here are a few worries, though. It strikes me that Johnston could reply to your point about the anthropomorphic implications of his talk of contradictory nature, by saying that your talk of mechanisms and human machines has the same implications—at least on the surface. We’ve been over this many times, so I think that were Johnston to reply in this way (by saying that universal mechanisms and machines are implicitly theistic or deistic, due to the words’ connotations, in which case the universe takes on a function as an artifact), you’d have to say that scientists use these words with no such implications, that nature is full of only causal relations that are stripped of any purpose or design.
Now, I say this because I think that once we accept that more neutral picture of nature, we’ve got another problem. Naturalists must be open-minded about so-called nature’s ability to transcend itself, to create new things within its domain, to add emergent levels to the older levels, and so forth. In fact, the multiverse interpretation in physics makes the word “natural” perfectly vacuous, as far as I can tell, because according to that interpretation everything that’s possible is also actual. Thus, the fantasy worlds you talked about are not merely fantastic. They’re real in some other universe. If this is the direction physics is going in, I don’t see how cognitive scientists can afford to be so monistic or stingy.
Take your point about cognitive neglect, for example. If you’re right and we can’t understand ourselves in the relatively direct way we understand the rest of the world, maybe that very necessary ignorance serves as a platform for us to reinvent ourselves. Maybe the delusions, fantasies, and other self-deceptions are as crucial to our identity as is our brain. Maybe nature has added onto itself in this farcical fashion, because *mere* causal relations, as opposed to mechanisms in any thicker, anthropocentric sense, can do literally anything that’s possible, according to a leading interpretation of quantum mechanics.
At any rate, I like the way you end on an ominous reference to horror. Science should indeed horrify us for lots of reasons.
Darwin’s Nihilistic Idea: Evolution and the Meaninglessness of Life:
Click to access dditamler.pdf
This is probably the best overview to Alex’s position. I recommend checking out his Atheist’s Guide to Reality as well.
yeah that’s a good overview, here he is getting a bit more into the weeds:
Thanks, Ben, I was beginning to worry that no one was going to raise criticisms. I get tremendous amounts of traffic from these posts, and I’ve learned that most pro-Continental materialists prefer burning straw effigies of me in Facebook chat rooms rather than actually face their fleshly critics in real-time. They should be uneasy with that, because I think the lack of any real critical push-back is the big reason why these kinds of books can be written at all.
The intentional performative contradiction charge is the one I was especially hoping to encounter, because it is the most commonly cited. Since scientists use intentional terms like ‘use,’ and since I use intentional terms like ‘use,’ I must be contradicting myself, helping myself to the very intentionality I claim to be putting on notice. How obviously stupid I must be! (I know you’re not saying this, Ben).
And yet, my argument is quite plainly that ‘use’ doesn’t mean what the philosophical tradition has taken it to mean! So accusing me of using it the traditional way simply – clearly – begs the question. I’m saying that intentionality is a heuristic, mechanistic way of understanding very complex systems, one that we have been compulsively ontologizing due to the scant information available when we attempt to ‘theoretically remember’ what it is we were doing when we used the term ‘use.’ Now we know some brain circuits are responsible somehow at some level, that the mechanical picture is in fact true. The question is one of where this other picture comes in. So my question to the Intentionalist is, How do we know your account is true? This forces them back on metacognition (where else is there to turn?), which forces them to explain how any such system could possibly provide them with the information they require to make their case stick. I’m still waiting for this argument…
I’m not sure how the “nature’s ability to transcend itself” argument is supposed to work. The multiverse, a controversial speculative posit presently winning many over in the physics community, ain’t going to help Johnston in this universe. Is there a universe where a perpetual motion machine is possible? Who knows… Physics relies on universality: local violations of its laws would cast the entire edifice in doubt. If that’s what’s required to redeem our intuitive sense of autonomy and purposiveness, then our intuitive senses are pretty much fucked!
The notion that metacognitive illusions might nevertheless actually possess redeeming functions is one that I’m sympathetic to, as you know. I just don’t know how, aside from some kind of ‘broken clock effect’ (where philosophers misapply heuristics, and misapply, and misapply, until one shouts ‘Democracy!’ and provides something genuinely effective), it might work. Something like this ‘orthogonal redemption’ has to be the case, I think, because we evolved the capacity to theorize in the absence of any constraints on accuracy: stories about anthropomorphic worlds served our ancestors well in some ways, despite being entirely full of shit.
slight aside to your point, but still central to the original post, I believe: when you say “I think the lack of any real critical push-back is the big reason why these kinds of books can be written at all,” I think you are overvaluing the power/role of arguments/critical-reasoning here and vastly undervaluing the institutionalized roles of cliques and the “gossip” world as meister Heidegger diagnosed it.
An important distinction, to be sure. Wasn’t this the stuff the web was going to put an end to?
heh, sure just like wider representation of peoples/views, greater transparency, and more direct access/feedback, was going to make democratic politics more effective…
Scott, we’ve been over the logic of the argument that I’ve developed in dialogue with you. Remember that it’s a dilemma. So there’s a fork in the road. One road takes you to the performative contradiction. But we’ve established that this isn’t what you’re doing, so that road is closed. I only raised it in the first paragraph of my last comment, because it struck me that the exact same superficial analysis of Johnston’s use of “contradiction” could be applied to “mechanism” or to “machine.” The question is whether these words are meant to include the folk connotations or not. If they’re used in some theoretical way, they could be redefined to mean anything.
Anyway, the second road takes us to open-mindedness. You see, by my way of thinking, the mechanistic view of nature gets at what I call nature’s undeadness, but leaves out its pantheistic divinity. Nature’s causal relations are impersonal, yes, but they’re also creative. So while you’re right that it’s not the case that anything goes in this universe, this universe has developed new phases and levels. Early in its history there were no stars or heavy elements. Millions of years later, those things were produced and then came life and then social and technological patterns, among many other things.
My worry is that the loose talk of everything being mechanical distracts us from the fact that nature–as scientists understand it–can produce new things within it. Maybe people are such new things. Maybe people developed when primates accidentally rewired their brain by means of the brain’s ignorance of itself. Maybe people are made not just out of heuristics running on neurochemical reactions, but out of the very self-deceptions that fill in the blank left by cognitive neglect.
The problem is that if we’re using “mechanism” in the non-technical sense, we’re talking about an intelligently designed artifact, in which case the artifact’s function limits what the artifact can do, because unintended uses are made improbable by all the work gone into specializing the device. So if nature is mechanical in that theistic sense, we might indeed say that nature won’t likely create anything that’s not intended by the designer. If we had God’s plan in the form of some revelation, we might know exactly what to expect to come out of nature.
Obviously, you don’t want to say anything like that. But then we’re left with that second road and that road seems to me to offer more possible surprises than you can afford. We’re left with a domain in which novelty (emergent complexity) is possible. Do emergent properties become supernatural? Well, now the eliminative materialist should be forced to define “natural” in such a way that personhood is ruled out. I think quantum mechanics has made this latter move hard to carry out. Again, if the multiverse is all natural, then as long as people are physically possible, they’re both actual (somewhere, maybe in our universe after sufficient evolution and complexification) and natural.
Reblogged this on lazyrealism and commented:
Bakker with less fire (cause he knows he drove the point home already) as usual, but good arguments as always.
What is to come is yet another repetition of the reason vs. faith debate, and rightly so. Analytic philosophers may take science seriously, but they still stick too much to the empirical, not exploring its abysses for our all too blind brains that, luckily or unluckily, can see what they want to see. I don’t understand why “continental materialists,” or cont. philosophers for short, don’t explicitly say what they try to do: to save something holy in the world. All they do is postulate “ontological gaps” that are at best only “empirical gaps” in our universe full of malignant useless stuff. Why not openly declare themselves as RELIGIOUS authors and not philosophical ones (I still honor that word too much)? Why hide as materialists, when officially everybody is, why not dare to say what they want to say, why don’t they call their books “Fuck Materialism – I believe in a benevolent God who makes himself visible in explanatory wholes in the whole of being”?
In the end it might be a justified choice to deliberately ignore the results of science, or to be blind to BBT. One can argue for such a stance towards the world. It will sell. Scientific realists, who must become pessimists and nihilists at some point if they take science metaphysically seriously, might be closer to the essence of nature, cognition, our role in the universe, etc., but the blind might win, because they are nature – spreading phylogenetically induced belief and optimism in order to go on living.
In short: IT IS LEGITIMATE TO ARGUE WITH VALUES AGAINST FACTS – BUT THEN SAY SO – INSTEAD OF POSITING THE NORMATIVE AS THE REAL.
Living blind or dying seeing – that is the question. Do we have a choice?
In a nutshell, yes. The other option is to be a mysterian, to replace ‘God’ with ‘?’ In that case, you would have 3 options, Delusional Living, Blind Living, and Sighted Dying, representing commitments to Transcendence, Confusion/resignation, and Nature, respectively. Now I don’t think the ancient skeptics would be all that troubled by what I’m arguing, insofar as they are the archetypal eliminativists. There is a profound peace to be had living within your cognitive limitations, living the Unexamined Life in an Examined World. First-order equanimity amid second-order madness.
The point I wish I would have made in this piece, in retrospect, anyway, is that this isn’t where thinking ends, only where a certain kind of delusional thinking ends, and that thought can move on, so long as it remains honest to its confusion, and resigns itself to the determinations of a Nature that pulls all plugs in the end. Continental Materialism may be a ‘sad spectacle,’ but given enough epistemic humility, the impulse that moves it need not be. What will become of it, no one can say…
This is what I take to be Ben's guiding ethos over on RWTUG, as well as Stephen's at NR.
I agree that I’m open to working out a religion that actually makes sense to a rigorous naturalistic and postmodern atheist. And I agree that these religionists would have to resign themselves to a tragic worldview. But I wonder what you’re getting at when you say that nature “pulls all plugs.” If you mean that nature, through science, undermines all delusions, this is hardly necessary or even overwhelmingly probable. If we’re talking about what will likely happen, as opposed to what science would accomplish in some hypothetical world in which scientists are free to learn as much as they can, your pessimistic induction is on less sure ground. This is because the unenlightened masses would be free to revolt, deprive science of funding, or demand that scientists tilt their research to some more practical ends, in which case scientists might serve certain Western secular delusions.
What I’m saying is that maybe nature can create as many delusions as it can undermine. I agree that we’ve seen some centuries of scientific progress, but surely you don’t think there’s anything metaphysically guaranteed about that historical development.
Anyway, suppose science is allowed to finish its impersonal picture of the universe. In that case, we’d understand our lack of choice in the matter and we’d see ourselves as part of some natural process of enlightenment. But who is to say how that process ends? We talk about apocalypse, but maybe there can be a turning away from accursed knowledge towards self-imposed ignorance, like in the movie Pi. If posthumans won’t have any choice, because there’s no such thing as a free person, it’s just a matter of what the available conditions will likely produce: a species that goes insane, blows itself up, retreats to intellectual darkness, or finds some way of living nobly with the humiliating knowledge. I like to explore that last scenario.
By the nature comment, I meant that it kills us.
All post-intentional scenarios are interesting simply because no one has had the conceptual chutzpah to explore them. BBT provides all kinds of crazy resources, I think. But I’m not convinced it’s the only way to think our way through. So much depends on the science.
At the moment, in my break from thinking after my studies, I deliberately try to opt for error/confusion/self-destruction… but it doesn't really work – that thing doesn't wanna stop thinking and loving truth… damn.
Thanks for such a nice aphorism: I put it on my wall.
“First-order equanimity amid second-order madness.”
At the moment I tend to disagree with your last point. I think thinking may have ended long ago already; maybe the beginning of philosophy was also its end – a kind of singularity – that stays alive only by occurring in some individuals. I know I may be speaking in riddles. Anyway, this line of thought, or intuition, doesn't stem from speculative realism so much as from rationalism and necessitarianism.
To your last point I would say: there is no end to gaining knowledge, but to thinking, maybe yes. Sorry for being too lazy to develop it in detail at the moment… if it should somehow catch someone's interest.
but maybe there can be a turning away from accursed knowledge towards self-imposed ignorance
That's one of my pet theories on the state of Earwa in Scott's books. High-tech dudes perhaps wipe everyone's memories so they can all live in a badass simulation without knowing it's a sim. Then it crashes – hilarity ensues. Well, hideous eternal torture ensues. And no one knows how to stop the crashed machine. Well, no one even knows it's a machine.
Don't have to spoiler-tag this because it's raw speculation on my part!
Is a pessimistic religion possible?
Schopenhauer…”Religion is the masterpiece of the art of animal training, for it trains people as to how they shall think”.
Of course there's Buddha, who brought us "All life is pain…", or, as some would have it, the Four Noble Truths:
1. Life is suffering (dukkha)
2. Craving is the cause of suffering (samudaya)
3. There is an end to suffering (nirodha)
4. The path frees us from suffering (magga)
Much like Wittgenstein, Buddha saw our minds caught in the trap of linguistic illusions, so he created the Eightfold Path to alleviate them:
Right View
Right Intention
Right Speech
Right Action
Right Livelihood
Right Effort
Right Mindfulness
Right Concentration
Pragmatic practices for a physical being with cultural blinkers to be wiped free of their encrustations.
But being neither a Buddhist nor a pessimist I would assume that you might find someone like the agnosiac Thomas Ligotti or H.P. Lovecraft closer to a cosmic pessimism in seeing science as the pattern of that dark path.
sure
LoL, now Scott’s gotta sue! 😉 Gods be hatin’ all over the place…wait, some other book did that already…
[…] R.Scott Bakker’s review of Adrian Johnston’s PROLEGOMENA TO ANY FUTURE […]
[…] are nonetheless problems with this model. First, as Scott Bakker has recently noted in his review of Adrian Johnston’s Prolegomena to Any Future Materialism, there seems to be something […]
[…] Materialism the subject is seen as something “magical” that defies the laws of physics (see Bakker). The central issue at stake is as he tells us “how to reconcile subject with a genuine […]
Hi. Just a few links, in light of the exchange above.
First, I should restate my contention that the best thing to do vis a vis the “hard problem” is to ignore it, since philosophical discussions of the problem have been pretty goofy. As I’ve argued on other threads, “modal imagining” is conspicuously stupid in this regard, and exasperating for those of us who take the mechanisms of brain function seriously.
Dennett agrees:
http://www.edge.org/response-detail/25289
Second, in terms of issues Bakker raised regarding “isolating functions” and the like, I would highly recommend this:
http://www.sciencemag.org/content/342/6158/580.abstract
I think it gives a better impression of how scientists are tackling the “madhouse complexity” of the brain (i.e., with faster computers, and better algorithms, MVPA, etc.). I would encourage people who find that article interesting to read some of the references, since the treatment of each in the article is rather cursory.
I think readers here might especially like this one:
Click to access 2012.Huth.etal.pdf
Some of the descriptions of the finer points of the methods for that paper are here, and the paper is interesting in its own right. (I also suspect that fMRI is a better fit for this audience than invasive neurophysiology, generally speaking).
Click to access 2011a.Nishimoto.etal.pdf
A brief note on “representation” – I myself have written things like “we were interested in how information about X is represented in the cortex…” In most cases, what is intended is the idea of quantifying the mutual information present in the neural responses and some physical stimulus, or some analogous metric derived from established techniques (linear discriminant analysis, support vector machines, etc.). I would recommend not trying to map philosophical jargon onto ours, in such cases. Our jargon tends to bottom out in math, eventually. I think that’s a big part of why it works when it does.
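The idea that this jargon "bottoms out in math" can be illustrated with a toy sketch (all numbers here are hypothetical, not drawn from any real dataset): estimate the mutual information, in bits, between a binary stimulus and a simulated noisy spike count, using plug-in probabilities from the empirical joint distribution. This is the kind of metric the comment above alludes to, reduced to its simplest possible form.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Estimate I(S; R) in bits from (stimulus, response) pairs,
    using empirical plug-in probabilities from the joint histogram."""
    n = len(pairs)
    joint = Counter(pairs)                     # joint counts over (s, r)
    ps = Counter(s for s, _ in pairs)          # marginal counts over stimuli
    pr = Counter(r for _, r in pairs)          # marginal counts over responses
    mi = 0.0
    for (s, r), c in joint.items():
        p_sr = c / n
        mi += p_sr * math.log2(p_sr / ((ps[s] / n) * (pr[r] / n)))
    return mi

random.seed(0)
# Toy experiment: a binary stimulus; the simulated "neural response" is a
# noisy spike count that tends to be higher when the stimulus is present.
# The firing rates (4 vs. 1 expected spikes per trial) are made up.
data = []
for _ in range(5000):
    s = random.randint(0, 1)
    rate = 4 if s else 1
    r = sum(random.random() < rate / 10 for _ in range(10))  # crude binomial spike count
    data.append((s, r))

mi = mutual_information(data)
print(f"Estimated I(stimulus; response) = {mi:.3f} bits")
```

A perfectly informative response (every stimulus paired with a unique response) would yield exactly 1 bit for a binary stimulus; the noisy simulation above lands somewhere between 0 and 1. Real analyses add bias corrections and cross-validation, but the underlying quantity is the same.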
I am not sure what Mike meant about “philosophers making the best scientists” – I’m sure there’s some semantic game that makes such a sentence trivially true (“I mean, like, don’t scientists ‘love’ ‘knowledge’ and, thus…”), but that’s the only sense in which it is true. Undergraduates interested in systems or cognitive neuroscience should major in physics, math, or engineering if they wish to be competitive at the best graduate school programs, generally.
It really is fascinating to watch philosophers “collaborate” on trying to unravel the mind. It’s a bit like watching a scrum of people, none of whom can swim, dropped in a swimming pool together. Climbing on and dunking someone else might be enough for a single gasping breath from time to time, for one of them, but all the yelling and splashing doesn’t change the fact that they’re still all drowning.
Cheers.
Very cool. I’m all over it, especially re localization, since it’s one of the things that might convince me to dial back some on BBT.
I actually have a peculiar take on the hard problem. You see naturalists like Dennett and Flanagan poo-pooing it as a non-problem all the time, but I think it's actually more trenchant than they admit. I agree entirely with them (and you) that it's a non-problem in the first-order sense, that it's nothing scientists need worry about. But I think it's a real and likely intractable problem in the second-order sense: most will succumb to their metacognitive anosognosia and refuse to believe any explanation you guys eventually come up with – it will be the new Creationism. This is what makes me a 'dual theory theorist': we need an explanation of consciousness, and an explanation of why we're so prone to mischaracterize it.
If you’re okay with it, I’d love to use your metaphor as an epigraph for a post, ochlo!
Hi.
@Bakker: Feel free to use the metaphor, if you like.
I take your point on the “second order” acceptance of materialist explanations of consciousness, but given how many Americans don’t accept evolution, I would expect it, frankly. To be conscious is to feel “souled”, in a sense, so it won’t be an accident that the religious will likely resist any forthcoming explanation of conscious experience most strongly.
I’d be curious to hear what you and others think about the references I linked in my previous post. “Localization of function” has been given a significant new upgrade in the context of contemporary fMRI methodology, for example.
Incredible stuff. The semantic space paper especially – probably because it comprises the kind of ‘context-mapping’ research I mentioned in Neuropath as a means of overcoming localization qualms! But more than anything it attests to the experimental creativity – even artistry – that drives so much research. I was especially impressed by their sensitivity to the variety of interpretative confounds they faced, and the way using more, rather than fewer categories allowed them to make crossword-puzzlesque inferences to disambiguate. Five souls mapped, 7 billion consumers to go!
I wonder what Uttal makes of all this…
Interesting. I’d assumed that most researchers assume distributed, complex localization of function at this point.
I’m not sure what kind of response you are looking for, ochlo. I like the research you linked – it’s research I’ve always assumed would eventually be borne out as evident and I’m impressed with their intellectual pursuits. Turk-Browne especially highlights a seemingly inevitable shift in paradigm.
If you have more content-specific questions you want to explore, let me know, ochlo. I'm always enthused to discuss the zeitgeist.
I intended it more practically, ochlo. I think “philosophy training,” as it were, can prime one to be far less likely to succumb to experimental design biases, for instance, if only “philosophers” would make reference to actual scientific data (rather than the shorthand linguistic proofs that justified philosophic theories become).
In one way, philosophy is like a smithy for common sense metal; it brings a sense of mental rigour to the table that I find lacking in scientific practice. I’ve just encountered too many academics (or people in general) who make great researchers only because of their adherence to scientific methodology (which is great for analogizing the method’s validity) and so make horrifying mistakes or assumptive fallacies in their paraphrasing, introductions, discussions, general thought, etc.
Again, just clarifying. I really enjoy reading your posts and I think I mostly overlap with your thinking.
Fingers crossed that I can navigate the gap from the humanities to the sciences when I apply to grad school ;).
Och,
The hard problem is essentially everywhere amongst regular folk – it’s just named the hard problem that by some regular folk who gave themselves fancy names. And yes, along with that made up name comes a ton of redundant, made up material. But it’s still regular folk trying to grasp at something, same as all other regular folk. Including you. I think you’ve got some kind of bridge between your own regular folk experience of life vs the research you know of and it’s implications – a bridge you likely use every day. Ignoring the hard problem (most particularly the regular folk version, rather than highly elaborated philosopher version) will mean no one is taught about such a bridging. And them not being taught about such is – well, the brain research is becoming clearly so important that this is like leaving people to not be taught about maths or reading and writing. As much as math and reading and writing became important.
If you think you're not on a bridge between the two, and yet you'd agree you're juggling the first-person experience of life alongside the ramifications of brain research – then I dunno. Such juggling is the bridge between the two. Granted, maybe not a formalised bridge at the moment.
Now I’ll wait on my ‘that didn’t make any sense!’ rejection slip, for either having overextended with the bridge analogy, or I’ve just been dumb.
Someone famous whose name escapes me said that if God did not exist we would have to invent him. If God really does not exist, then we ought to create him. I think Ben Cain has a pretty good idea. A good way for people who have the religious impulse but are too honest with themselves to believe in the old gods would be to devote themselves to enabling our descendants to achieve, through technology, the kind of control over the physical universe that God is supposed to have. Who knows, it might even become possible for our descendants to create universes of their own and truly become gods. In the however-long meantime between now and our apotheosis, making technology creation holy might encourage our kids to take more math classes, and a religion that explicitly asserts that God does not exist is likely to make for fewer religious bigots than one that claims he does.
Curious if anyone here is familiar with Terrence Deacon's work Incomplete Nature: How Mind Emerged from Matter. I received the book as a Christmas gift last year but haven't read it yet.
The part of the blurb that intrigues me is:
Have it on order actually. Dennett’s review was very intriguing, and of course, any reference to absences producing positive effects is bound to prick my ears!
I read Dennett's review last night and was intrigued by the controversy mentioned in the final part of it (who doesn't love a good controversy). I ended up spending (wasting?) a great deal of time on the Terry Deacon Affair website.
Speaking of life and perpetual motion machines:
https://www.simonsfoundation.org/quanta/20140122-a-new-physics-theory-of-life/
[…] exercise. Although I find Wolfendale’s approach far—far—more promising than that of, say, Adrian Johnston or Slavoj Zizek, it still commits basic errors that the nascent Continental Philosophy of Mind, […]
[…] “Zizek, Hollywood, and the Disenchantment of Continental Philosophy,” or “Life as Perpetual Motion Machine: Adrian Johnston and the Continental Credibility Crisis“). This is particularly true of Catherine Malabou, who, as far as I can tell, is primarily […]