Three Pound Brain

No bells, just whistling in the dark…


Exploding the Manifest and Scientific Images of Man

by rsbakker


This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History


What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive, to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.


Cognitive Images

Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (one that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).

The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which, given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so to the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.

For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).

The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).

Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.


Cognitive Ecologies

The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.

A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only a select fraction of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.

Given the complexity of biology, the structure of our cognitive ecology lies outside the capacity of our cognitive ecology. Human cognitive ecology cannot but neglect the high dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.

Human cognition possesses two basic modes, one that is source-insensitive, or heuristic, relying on cues to predict behaviour, and one that is source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.

This provides a parsimonious alternative for understanding the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves as far as they needed to resolve practical issues. They understood argument as far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.

Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.

Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.

The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.

Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology ensured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.


Ecology versus Image

As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to explain away our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’

Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here, we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream equivocation of his own manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.

Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars acknowledges that the difficulty persists “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…

Which is to say, lost to crash space.

Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general, and the problem ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates them, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.

But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’

He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining our cognitive complexity. Thus, his confounding reliance on what might be called manifest postulation, deep environmental explanation via shallow ecological (intentional) posits.

And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow. Information consists of differences selected (neurally or evolutionarily) by the production of prior behaviours. Information consists in those differences prone to make select systematic differences, which is to say, feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.

The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to the deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.

The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their ability cue their application) all but doomed us to perpetual, discursive disarray.

Images. Games. Conceptual frameworks. None of these shallow ecological posits are required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them, if we hope to finally turn and face our future, gaze upon the universe with the universe’s own eyes.


Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generates (via, Pinker suggests, the availability heuristic (42)) exaggerated estimations of threat.

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered that cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind spot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.


Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by providing guesses regarding ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility’; the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.


Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree to which moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.


The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.


Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked criticism, a world where praise and condemnation alike had to brave condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298) but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure as much as need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices lately? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.


Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Enlightenment How? Pinker’s Tutelary Natures

by rsbakker


The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short of understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) (446)

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker likens the process of Enlightenment, the movement away from our tutelary natures, to a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. (28)

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.


The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short of grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on revolutionizing our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed, nevertheless humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability in twin and family studies (Ebstein et al 2010; Scourfield et al 1999). Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources, absent any access to the astronomical complexities responsible—which is to say, given very little information. This delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, sacrificing not so much accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another, it relies on available cues, granular information made available via behaviour, while utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits its domain of applicability.
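The ‘fast and frugal’ character of such tools can be made concrete with a sketch of Gigerenzer-style ‘take-the-best’ inference, one of the heuristics the Todd and Gigerenzer programme studies: decide between two options by consulting cues one at a time, in order of validity, and stop at the first cue that discriminates. The cue names and validity figures below are invented for illustration only.

```python
# A minimal sketch of a 'take-the-best' fast-and-frugal heuristic.
# Cue names and validities are invented for illustration; real
# applications estimate cue validities from data.

def take_the_best(option_a, option_b, cues):
    """Decide between two options using binary cues ordered by validity.

    `cues` is a list of (cue_name, validity) pairs; each option is a
    dict mapping cue names to 0/1 values. The heuristic stops at the
    first cue that discriminates, ignoring all remaining information,
    however relevant it might be.
    """
    for name, _validity in sorted(cues, key=lambda c: -c[1]):
        a, b = option_a.get(name, 0), option_b.get(name, 0)
        if a != b:
            return "A" if a > b else "B"
    return "tie"  # no cue discriminates: guess

# Which of two systems gets treated as an agent? Consult shallow
# behavioural cues, neglecting biomechanical detail entirely.
cues = [("self_propelled", 0.9), ("contingent_motion", 0.8), ("has_face", 0.7)]
robot = {"self_propelled": 1, "contingent_motion": 1, "has_face": 0}
rock = {"self_propelled": 0, "contingent_motion": 0, "has_face": 0}
print(take_the_best(robot, rock, cues))  # -> A
```

Note how little the procedure needs: a handful of cheap cues stands in for the astronomically complex mechanism, which is exactly why such heuristics are frugal—and exactly why they only work within the ecology they are tuned to.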

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow, the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive, it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as anyone suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease knows first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it to crash, generating more problems than it solves.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deirdre Barrett’s (2010) concerns with ‘supernormal stimuli,’ touch on this issue.) So one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound rather than solve problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, in circumstances cheating or crashing it.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011) that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition, short of remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? (277)

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.



Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012

Barrett, Deirdre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behaviour,’ in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–


Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)


Floridi’s Plea for Intentionalism

by rsbakker


Questioning Questions

Intentionalism presumes that intentional modes of cognition can solve for intentional modes of cognition, that intentional vocabularies, and intentional vocabularies alone, can fund bona fide theoretical understanding of intentional phenomena. But can they? What evidences their theoretical efficacy? What, if anything, does biology have to say?

No one denies the enormous practical power of those vocabularies. And yet, the fact remains that, as a theoretical explanatory tool, they invariably deliver us to disputation—philosophy. To rehearse my favourite William Uttal quote: “There is probably nothing that divides psychologists of all stripes more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (The New Phrenology, p.90).

In his “A Plea for Non-naturalism as Constructionism,” Luciano Floridi undertakes a comprehensive revaluation of this philosophical and cognitive scientific inability to decisively formulate, let alone explain, intentional phenomena. He begins with a quote from Quine’s seminal “Epistemology Naturalized,” the claim that “[n]aturalism does not repudiate epistemology, but assimilates it to empirical psychology.” Although Floridi entirely agrees that the sciences have relieved philosophy of a great number of questions over the centuries, he disagrees with Quine’s ‘assimilation,’ the notion of naturalism as “another way of talking about the death of philosophy.” Acknowledging that philosophy needs to remain scientifically engaged—naturalistic—does not entail discursive suicide. “Philosophy deals with ultimate questions that are intrinsically open to reasonable and informed disagreement,” Floridi declares. “And these are not “assimilable” to scientific enquiries.”

Ultimate? Reading this, one might assume that Floridi, like so many other thinkers, has some kind of transcendental argument operating in the background. But Floridi is such an exciting philosopher to read precisely because he isn’t ‘like so many other thinkers.’ He hews to intentionalism, true, but he does so in a manner that is uniquely his own.

To understand what he means by ‘ultimate’ in this paper we need to visit another, equally original essay of his, “What is a Philosophical Question?” where he takes a ‘resource-oriented’ approach to the issue of philosophical questions, “the simple yet very powerful insight that the nature of problems may be fruitfully studied by focusing on the kind of resources required in principle to solve them, rather than on their form, meaning, reference, scope, and relevance.” He focuses on the three kinds of questions revealed by this perspective: questions requiring empirical resources, questions requiring logico-mathematical resources, and questions requiring something else—what he calls ‘open questions.’ Philosophical questions, he thinks, belong to this latter category.

But if open questions admit no exhaustive empirical or formal determination, then why think them meaningful? Why not, as Hume famously advises, consign them to the flames? Because, Floridi argues, they are inescapable. Open questions possess no regress enders: they are ‘closed’ in the set-theoretic sense, which is to say, they are questions whose answers always beget more questions. To declare answers to open questions meaningless or trivial is itself to answer an open question.

But since not all open questions are philosophical questions, Floridi needs to restrict the scope of his definition. The difference, he thinks, is that philosophical questions “tend to concentrate on more significant and consequential problems.” Philosophical questions, in addition to being open questions, are also ultimate questions, not in any foundational or transcendental sense, but in the sense of casting the most inferential shade across less ultimate matters.

Ultimate questions may be inescapable, as Floridi suggests, but this in no way allays the problem of the resources used to answer them. Why not simply answer them pragmatically, or with a skeptical shrug? Floridi insists that the resources are found in “the world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings,” or what he calls ‘noetic resources,’ the non-empirical, non-formal fund of things that we know. Philosophical questions, in addition to being ultimate, open questions, require noetic resources to be answered.

But all questions, of course, are not equal. Some philosophical problems, after all, are mere pseudo-problems, the product of the right question being asked in the wrong circumstances. Though the ways in which philosophical questions misfire seem manifold, Floridi focusses on a single culprit to distinguish ‘bad’ from ‘good’ philosophical questions: the former, he thinks, overstep their corresponding ‘level of abstraction,’ aspiring to be absolute or unconditioned. Philosophical questions, in addition to being noetic, ultimate, open questions, are also contextually appropriate questions.

Philosophy, then, pertains to questions involving basic matters, lacking decisive empirical or formal resources and so lacking institutional regress enders. Good philosophy, as opposed to bad, is always conditional, which is to say, sensitive to the context of inquiry. It is philosophy in this sense that Floridi thinks lies beyond the pale of Quinean assimilation in “A Plea for Non-naturalism as Constructionism.”

But resistance to assimilation isn’t his only concern. Science, Floridi thinks, is caught in a predicament: as ever more of the universe is dragged from the realm of open, philosophical interrogation into the realm of closed, scientific investigation, the technology enabled by and enabling this creeping closure is progressively artificializing our once natural environments. Floridi writes:

“the increasing and profound technologisation of science is creating a tension between what we try to explain, namely all sorts of realities, and how we explain it, through the highly artificial constructs and devices that frame and support our investigations. Naturalistic explanations are increasingly dependent on non-natural means to reach such explanations.”

This, of course, is the very question at issue between the meaning skeptic and the meaning realist. To make his case, Floridi has to demonstrate how and why the artefactual isn’t simply more nature, every bit as bound by the laws of thermodynamics as everything else in nature. Why think the ‘artificial’ is anything more than (to turn a Hegelian line on its head) ‘nature reborn’? To presume as much would be to beg the question—to run afoul of the very ‘scholasticism’ Floridi criticizes.

Again, he quotes Quine from “Epistemology Naturalized,” this time the famous line reminding us that the question of “how irritations of our sensory surfaces” result in knowledge is itself a scientific question. The absurdity of the assertion, Floridi thinks, is easily assayed by considering the complexity of cognitive and aesthetic artifacts: “by the same reasoning, one should then try to answer the question how Beethoven managed to arrive at his Ode to Joy from the seven-note diatonic musical scale, Leonardo to his Mona Lisa from the three colours in the RGB model, Orson Welles to his Citizen Kane from just black and white, and today any computer multimedia from just zeros and ones.”

The egregious nature of the disanalogies here is indicative of the problem Floridi faces. Quine’s point isn’t that knowledge reduces to sensory irritations, merely that knowledge consists of scientifically tractable physical processes. For all his originality, Floridi finds himself resorting to a standard ‘you-can’t-get-there-from-here’ argument against eliminativism. He even cites the constructive consensus in neuroscience, thinking it evidences the intrinsically artefactual nature of knowledge. But he never explains why the artefactual nature of knowledge—unlike the artefactual nature of, say, a bird’s nest—rules out the empirical assimilation of knowledge. Biology isn’t any less empirical for being productive, so what’s the crucial difference here? At what point does the artefactual qua biological become the artefactual qua intentional?

Epistemological questions, he asserts, “are not descriptive or scientific, but rather semantic and normative.” But Quine is asking a question about epistemology and whether what we now call cognitive science can exhaustively answer it. As it so happens, the question of epistemology as a natural phenomenon is itself an epistemological question, and as such involves the application of intentional (semantic and normative) cognitive modes. But why think these cognitive modes themselves cannot be empirically described and explained the way, for example, neuroscience has described and explained the artefactual nature of cognition? If artefacts like termite mounds and bird’s nests admit natural explanations, then why not knowledge? Given that he hopes to revive “a classic, foundationalist role for philosophy itself,” this is a question he has got to answer. Philosophers have a long history of attempting to secure the epistemological primacy of their speculation on the back of more speculation. Unless Floridi is content with “an internal ‘discourse’ among equally minded philosophers,” he needs to explain what makes the artifactuality of knowledge intrinsically intentional.

In a sense, one can see his seminal 2010 work, The Philosophy of Information, as an attempt to answer this question, but he punts on the issue here, providing only a reference to his larger theory. Perhaps this is why he characterizes this paper as “a plea for non-naturalism, not an argument for it, let alone a proof or demonstration of it.” Even though the entirety of the paper is given over to arguments inveighing against unrestricted naturalism à la Quine, they all turn on a shared faith in the intrinsic intentionality of cognition.


Reasonably Reiterable Queries

Floridi defines ‘strong naturalism’ as the thesis that all nonnatural phenomena can be reduced to natural phenomena. A strong naturalist believes that all phenomena can be exhaustively explained using only natural vocabularies. The key term, for him, is ‘exhaustively.’ Although some answers to our questions put the matter to bed, others simply leave us scratching our heads. The same applies to naturalistic explanations. Where some reductions are the end of the matter, ‘lossless,’ others are so ‘lossy’ as to explain nothing at all. The latter, he suggests, make it reasonable to reiterate the original query. This, he thinks, provides a way to test any given naturalization of some phenomenon: a ‘reasonably reiterable query’ (RRQ) test. If a reduction warrants repeating the very question it was intended to answer, then we have reason to assume the reduction to be ‘reductive,’ or lossy.

The focus of his test, not surprisingly, is the naturalistic inscrutability of intentional phenomena:

“According to normative (also known as moral or ethical) and semantic non-naturalism, normative and semantic phenomena are not naturalisable because their explanation cannot be provided in a way that appeals exhaustively and non-reductively only to natural phenomena. In both cases, any naturalistic explanation is lossy, in the sense that it is perfectly reasonable to ask again for an explanation, correctly and informatively.”

This failure, he asserts, demonstrates the category mistake of insisting that intentional phenomena be naturalistically explained. In lieu of an argument, he gives us examples. No matter how thorough our natural explanations of immoral photographs might be, one can always ask, Yes, but what makes them immoral (as opposed to socially sanctioned, repulsive, etc.)? Facts simply do not stack into value—Floridi takes himself to be expounding a version of Hume’s and Moore’s point here. The explanation remains ‘lossy’ no matter what our naturalistic explanation. Floridi writes:

“The recalcitrant, residual element that remains unexplained is precisely the all-important element that requires an explanation in the first place. In the end, it is the contribution that the mind makes to the world, and it is up to the mind to explain it, not the world.”

I’ve always admired, even envied, Floridi for the grace and lucidity of his prose. But no matter how artful, a god of the gaps argument is a god of the gaps argument. Failing the RRQ does not entail that only intentional cognition can solve for intentional phenomena.

He acknowledges the problem here: “Admittedly, as one of the anonymous reviewers rightly reminded me, one may object that the recalcitrant, residual elements still in need of explanation may be just the result of our own insipience (understood as the presence of a question without the corresponding relevant and correct answer), perhaps as just a (maybe even only temporary) failure to see that there is merely a false impression of an information deficit (by analogy with a scandal of deduction).” His answer here is to simply apply his test, suggesting the debate, as interminable, merely underscores “an openness to the questioning that the questioning itself keeps open.” I can’t help but think he feels the thorn, at this point. Short of reading “What is a Philosophical Question?” this turn in the article would be very difficult to parse. Philosophical questioning, Floridi would say, is ‘closed under questioning,’ which is to say, a process that continually generates more questions. The result is quite ingenious. As with Derridean deconstruction, philosophical problematizations of Floridi’s account of philosophy end up evidencing his account of philosophy by virtue of exhibiting the vulnerability of all guesswork: the lack of regress enders. Rather than committing to any foundation, you commit to a dialectical strategy allowing you to pick yourself up by your own hair.

The problem is that RRQ is far from the domesticated discursive tool that Floridi would have you believe it is. If anything, it provides a novel and useful way to understand the limits of theoretical cognition, not the limits of this or that definition of ‘naturalism.’ RRQ is a great way to determine where theoretical guesswork in general begins. Nonnaturalism is the province of philosophy for a reason: every single nonnatural answer ever adduced to answer the question of this or that intentional phenomenon has failed to close the door on RRQ. Intentional philosophy, such as Floridi’s, possesses no explanatory regress enders—not a one. It is always rational to reiterate the question wherever theoretical applications of intentional cognition are concerned. This is not the case with natural cognition. If RRQ takes a bite out of natural theoretical explanation of apparent intentional phenomena, then it swallows nonnatural cognition whole.

Raising the question, Why bother with theoretical applications of nonnatural cognition at all? Think about it: if every signal received via a given cognitive mode is lossy, why not presume that cognitive mode defective? The successes of natural theoretical cognition—the process of Quinean ‘assimilation’—show us that lossiness typically dwindles with the accumulation of information. No matter how spectacularly our natural accounts of intentional phenomena fail, we need only point out the youth of cognitive science and the astronomical complexities of the systems involved. The failures of natural cognition belong to the process of natural cognition, the rondo of hypothesis and observation. Theoretical applications of intentional cognition, on the other hand, promise only perpetual lossiness, the endless reiteration of questions and uninformative answers.

One can rhetorically embellish endless disputation as discursive plenitude, explanatory stasis as ontological profundity. One can persuasively accuse skeptics of getting things upside down. Or one can speculate on What-Philosophy-Is, insist that philosophy, instead of mapping where our knowledge breaks down (as it does in fact), shows us where this-or-that ‘ultimate’ lies. In “What is a Philosophical Question?” Floridi writes:

“Still, in the long run, evolution in philosophy is measured in terms of accumulation of answers to open questions, answers that remain, by the very nature of the questions they address, open to reasonable disagreement. So those jesting that philosophy has never “solved” any problem but remains for ever stuck in endless debates, that there is no real progress in philosophy, clearly have no idea what philosophy is about. They may as well complain that their favourite restaurant is constantly refining and expanding its menu.”

RRQ says otherwise. According to Floridi’s own test, the problem isn’t that the restaurant is constantly refining and expanding its menu, the problem is that nothing ever makes it out of the kitchen! It’s always sent back by rational questions. Certainly countless breakdowns have found countless sociocognitive uses: philosophy is nothing if not a recombinant mutation machine. But these powerful adaptations of intentional cognition are simply that: powerful adaptations of natural systems originally evolved to solve complex systems on the metabolic cheap. All attempts to use intentional cognition to theorize their (entirely natural) nature end in disputation. Philosophy has yet to theoretically solve any aspect of intentional cognition. And this merely follows from Floridi’s own definition of philosophy—it just cuts against his rhetorical register. In fact, when one takes a closer, empirical look at the resources available, the traditional conceit at the heart of his nonnaturalism quickly becomes clear.


Follow the Money

So, what is it? Why spin a limit, a profound cognitive horizon, into a plenum? Floridi is nothing if not an erudite and subtle thinker, and yet his argument in this paper entirely depends on neglecting to see RRQ for the limit that it is. He does this because he fails to follow through on the question of resources.

For my part, I look at naturalism as a reliance on a particular set of ‘hacks,’ not as any dogma requiring multiple toes scratching multiple lines in the sand.  Reverse-engineering—taking things apart, seeing how they work—just happens to be an extraordinarily powerful approach, at least as far as our high-dimensional (‘physical’) environments are concerned. If we can reverse-engineer intentional phenomena—assimilate epistemology, say, to neuroscience—then so much the better for theoretical cognition (if not humanity). We still rely on unexplained explainers, of course, RRQ still pertains, but the boundaries will have been pushed outward.

Now the astronomical complexity of biology doesn’t simply suggest, it entails that we would find ourselves extraordinarily difficult to reverse-engineer, at least at first. Humans suffer medial neglect, fundamental blindness to the high-dimensional structure and dynamics of cognition. (As Floridi acknowledges in his own consideration of Dretske’s “How Do You Know You are Not a Zombie?” the proximal conditions of experience do not appear within experience (see The Philosophy of Information, chapter 13)). The obvious reason for this turns on the limitations of our tools, both onboard and prosthetic. Our ancestors, for instance, had no choice but to ignore biology altogether, to correlate what ‘sensory irritants’ they had available with this or that reproductively decisive outcome. Everything in the middle, the systems and ecology that enabled this cognitive feat, is consigned to neglect (and doomed to be reified as ‘transparency’). Just consider the boggling resources commanded by the cognitive sciences: until very recently reverse-engineering simply wasn’t a viable cognitive mode, at least when it came to living things.

This is what ‘intentional cognition’ amounts to: the collection of ancestral devices, ‘hacks,’ we use to solve, not only one another, but all supercomplicated systems. Since these hacks are themselves supercomplicated, our ancestors had to rely on them to solve for them. Problems involving intentional cognition, in other words, cue intentional problem-solving systems, not because (cue drumroll) intentional cognition inexplicably outruns the very possibility of reverse-engineering, but because our ancestors had no other means.

Recall Floridi’s ‘noetic resources,’ the “world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings” that underwrites philosophical, as opposed to empirical or formal, answers. It’s no accident that the ‘noetic dimension’ also happens to be the supercomplicated enabling or performative dimension of cognition—the dimension of medial neglect. Whatever ancestral resources we possessed, they comprised heuristic capacities geared to information strategically correlated to the otherwise intractable systems. Ancestrally, noetic resources consisted of the information and metacognitive capacity available to troubleshoot applications of intentional cognitive systems. When our cognitive hacks went wrong, we had only metacognitive hacks to rely on. ‘Noetic resources’ refers to our heuristic capacities to troubleshoot the enabling dimension of cognition while neglecting its astronomical complexity.

So, take Floridi’s example of immoral photographs. The problem he faced, recall, was that “the question why they are immoral can be asked again and again, reasonably” not simply of natural explanations of morality, but nonnatural explanations as well. The RRQ razor cuts both ways.

The reason natural cognition fails to decisively answer moral questions should be pretty clear: moral cognition is radically heuristic, enabling the solution of certain sociocognitive problems absent the high-dimensional information required by natural cognition. Far from expressing the ‘mind’s contribution’ (whatever that means), the ‘unexplained residuum’ warranting RRQ evidences the interdependence between cues and circumstance in heuristic cognition, the way the one always requires the other to function. Nothing so incredibly lossy as ‘mind’ is required. This inability to duplicate heuristic cognition, however, has nothing to do with the ability to theorize the nature of moral cognition, which is biological through and through. In fact, an outline of such an answer has just been provided here.

Moral cognition, of course, decisively solves practical moral problems all the time (despite often being fantastically uninformative): our ancestors wouldn’t have evolved the capacity otherwise. Moral cognition fails to decisively answer the theoretical question of morality, on the other hand, because it turns on ancestrally available information geared to the solution of practical problems. Like all the other devices comprising our sociocognitive toolbox, it evolved to derive as much practical problem-solving capacity from as little information as possible. ‘Noetic resources’ are heuristic resources, which is to say, ecological through and through. The deliverances of reflection are deliverances originally adapted to the practical solution of ancestral social and natural environments. Small wonder our semantic and normative theories of semantic and normative phenomena are chronically underdetermined! Imagine trying to smell skeletal structure absent all knowledge of bone.

But then why do we persist? Cognitive reflex. Raising the theoretical question of semantic and normative cognition automatically (unconsciously) cues the application of intentional cognition. Since the supercomplicated structure and dynamics of sociocognition belong to the information it systematically neglects, we intuit only this applicability, and nothing of the specialization. We suffer a ‘soda straw effect,’ a discursive version of Kahneman’s What-you-see-is-all-there-is effect. Intuition tells us it has to be this way, while the deliverances of reflection betray nothing of their parochialism. We quite simply did not evolve the capacity either to intuit our nature or to intuit our inability to intuit our nature, and so we hallucinate something inexplicable as a result. We find ourselves trapped in a kind of discursive anosognosia, continually applying problem-parochial access and capacity to general, theoretical questions regarding the nature of inexplicable-yet-(allegedly)-undeniable semantic and normative phenomena.

This picture is itself open to RRQ, of course, the difference being that the positions taken are all natural, and so open to noise reduction as well. As per Quine’s process of assimilation, the above story provides a cognitive scientific explanation for a very curious kind of philosophical behaviour. Savvy to the ecological limits of noetic resources, it patiently awaits the accumulation of empirical resources to explain them, and so actually has a chance of ending the ancient regress.

The image Floridi chases is a mirage, what happens when our immediate intuitions are so impoverished as to arise without qualification, and so smack of the ‘ultimate.’ Much as the absence of astronomical information duped our ancestors into thinking our world stood outside the order of planets, celestial as opposed to terrestrial, the absence of metacognitive information dupes us into thinking our minds stand outside the order of the world, intentional as opposed to natural. Nothing, it seems, could be more obvious than noocentrism, despite our millennial inability to silence any—any—question regarding the nature of the intentional.

Reading From Bacteria to Bach and Back III: Beyond Stances

by rsbakker


The problem with his user-illusion model of consciousness, Dennett realizes, lies in its Cartesian theatricalization, the reflex to assume the reality of the illusion, and to thence argue that it is in fact this… the dumbfounding fact, the inexplicable explanandum. We acknowledge that consciousness is a ‘user-illusion,’ then insist this ‘manifest image’ is the very thing requiring explanation. Dennett’s de-theatricalization, in other words, immediately invites re-theatricalization, intuitions so powerful he feels compelled to devote an entire chapter to resisting the invitation, only to have otherwise generally sympathetic readers, like Tom Clark, re-theatricalize everything once again. To deceive us at all, the illusion itself has to be something possessing, minimally it seems, the capacity to deceive. Faced with the question of what the illusion amounts to, he writes, “It is a representation of a red stripe in some neural system of representation” (358), allowing Clark and others to reply, ‘and so possesses content called qualia.’

One of the striking features of From Bacteria to Bach and Back is the degree to which his trademark Intentional Systems Theory (IST) fades into the background. Rather than speak of the physical stance, design stance, and intentional stance, he continually references Sellars’ tripartite nomenclature from “Philosophy and the Scientific Image of Man,” the ‘original image’ (which he only parenthetically mentions), the ‘manifest image,’ and the ‘scientific image.’ The manifest image in particular, far more than the intentional stance, becomes his primary theoretical term.

Why might this be?

Dennett has always seen himself threading a kind of theoretical needle, fending off the scientifically preposterous claims of intentionalism on the one hand, and the psychologically bankrupt claims of eliminativism on the other. Where intentionalism strands us with impossible explanatory vocabularies, tools that cause more problems than they solve, eliminativism strands us with impoverished explanatory vocabularies, purging tools that do real work from our theoretical kits without replacing them. It’s not simply that Dennett wants, as so many of his critics accuse him, ‘to have it both ways’; it’s that he recognizes that having it both ways is itself the only way, theoretically speaking. What we want is to square the circle of intentionality and consciousness without running afoul of either squircles or blank screens, which is to say, inexplicable intentionalisms or deaf-mute eliminativisms.

Seen in this light, Dennett’s apparent theoretical opportunism, rapping philosophical knuckles for some applications of intentional terms, shaking scientific hands for others, begins to look well motivated—at least from a distance. The global theoretical devil, of course, lies in the local details. Intentional Systems Theory constitutes Dennett’s attempt to render his ‘middle way’ (and so his entire project) a principled one. In From Bacteria to Bach and Back he explains it thus:

There are three different but closely related strategies or stances we can adopt when trying to understand, explain, and predict phenomena: the physical stance, the design stance, and the intentional stance. The physical stance is the least risky but also the most difficult; you treat the phenomenon in question as a physical phenomenon, obeying the laws of physics, and use your hard-won understanding of physics to predict what will happen next. The design stance works only for things that are designed, either artifacts or living things or their parts, and have functions or purposes. The intentional stance works primarily for things that are designed to use information to accomplish their functions. It works by treating the thing as a rational agent, attributing “beliefs” and “desires” and “rationality” to the thing, and predicting that it will act rationally. 37

The strategy is straightforward enough. There’s little doubt that the physical stance, design stance, and intentional stance assist solving certain classes of phenomena in certain circumstances, so when confronted by those kinds of phenomena in those kinds of circumstances, taking the requisite stance is a good bet. If we have the tools, then why not use them?

But as I’ve been arguing for years here at Three Pound Brain, the problems stack up pretty quick, problems which, I think, find glaring apotheosis in From Bacteria to Bach and Back. The first problem lies in the granularity of stances, the sense in which they don’t so much explain cognition as merely divvy it up into three families. This first problem arises from the second, their homuncularity, the fact that ‘stances’ amount to black-box cognitive comportments, ways to manipulate/explain/predict things that themselves resist understanding. The third, and (from the standpoint of his thesis) most devastating problem, also turns on the second: the fact that stances are the very thing requiring explanation.

The reason the intentional stance, Dennett’s most famed explanatory tool, so rarely surfaces in From Bacteria to Bach and Back is actually quite simple: it’s his primary explanandum. The intentional stance cannot explain comprehension simply because it is, ultimately, what comprehension amounts to…

Well, almost. And it’s this ‘almost,’ the ways in which the intentional stance defects from our traditional (cognitivist) understanding of comprehension, which has ensnared Dennett’s imagination—or so I hope to show.

What does this defection consist in? As we saw, the retasking of metacognition to solve theoretical questions was doomed to run afoul of sufficiency-effects secondary to frame and medial neglect. The easiest way to redress these illusions lies in interrogating the conditions and the constitution of cognition. What the intentional stance provides Dennett is a granular appreciation of the performative, and therefore the social, fractionate, constructive, and circumstantial nature of comprehension. Like Wittgenstein’s ‘language games,’ or Kuhn’s ‘paradigms,’ or Davidson’s ‘charity,’ Dennett’s stances allow him to capture something of the occluded external and internal complexities that have for so long worried the ‘clear and distinct’ intuition of the ambiguous human cylinder.

The intentional stance thus plays a supporting role, popping up here and there in From Bacteria to Bach and Back insofar as it complicates comprehension. At every turn, however, we’re left with the question of just what it amounts to. Intentional phenomena such as representations, beliefs, rules, and so on are perspectival artifacts, gears in what (according to Dennett) is the manifest ontology we use to predict/explain/manipulate one another using only the most superficial facts. Given the appropriate perspective, he assures us, they’re every bit as ‘real’ as you and I need. But what is a perspective, let alone a perspectival artifact? How does it—or they—function? What are the limits of application? What constitutes the ‘order’ it tracks, and why is it ‘there’ as opposed to, say, here?

Dennett—and he’s entirely aware of this—really doesn’t have much more than suggestions and directions when it comes to these and other questions. As recently as Intuition Pumps, he explicitly described his toolset as “good at nibbling, at roughly locating a few ‘fixed’ points that will help us see the general shape of the problem” (79). He knows the intentional stance cannot explain comprehension, but he also knows it can inflect it, nudge it closer to a biological register, even as it logically prevents the very kind of biological understanding Dennett—and naturalists more generally—take as the primary desideratum. As he writes (once again in 2013):

I propose we simply postpone the worrisome question of what really has a mind, about what the proper domain of the intentional stance is. Whatever the right answer to that question is—if it has a right answer—this will not jeopardize the plain fact that the intentional stance works remarkably well as a prediction method in these and other areas, almost as well as it works in our daily lives as folk-psychologists dealing with other people. This move of mine annoys and frustrates some philosophers, who want to blow the whistle and insist on properly settling the issue of what a mind, a belief, a desire is before taking another step. Define your terms, sir! No, I won’t. That would be premature. I want to explore first the power and the extent of application of this good trick, the intentional stance. Intuition Pumps, 79

But that was then and this is now. From Bacteria to Bach and Back explicitly attempts to make good on this promissory note—to naturalize comprehension, which is to say, to cease merely exploring the scope and power of the intentional stance, and to provide us with a genuine naturalistic explanation. To explain, in the high-dimensional terms of nature, what the hell it is. And the only way to do this is to move beyond the intentional stance, to cease wielding it as a tool, to hoist it on the work-bench, and to adduce the tools that will allow us to take it apart.

By Dennett’s own lights, then, he needs to reverse-engineer the intentional stance. Given his newfound appreciation for heuristic neglect, I understand why he feels the potential for doing this. A great deal of his argument for Cartesian gravity, as we’ve seen, turns on our implicit appreciation of the impact of ‘no information otherwise.’ But sensing the possibility of those tools, unfortunately, does not amount to grasping them. Short of explicit thematizations of neglect and sufficiency, he was doomed to remain trapped on the wrong side of the Cartesian event horizon.

On Dennett’s view, intentional stances are homuncular penlights more than homuncular projectors. What they see, ‘reasons,’ lies in the ‘eye of the beholder’ only so far as natural and neural selection provisions the beholder with the specialized competencies required to light them up.

The reasons tracked by evolution I have called ‘free-floating rationales,’ a term that has apparently jangled the nerves of some few thinkers, who suspect I am conjuring up ghosts of some sort. Not at all. Free-floating rationales are no more ghostly or problematic than numbers or centers of gravity. Cubes had eight corners before people invented ways of articulating arithmetic, and asteroids had centers of gravity before there were physicists to dream up the idea and calculate with it. Reasons existed long before there were reasoners. 50

To be more precise, the patterns revealed by the intentional stance exist independent of the intentional stance. For Dennett, the problematic philosophical step—his version of the original philosophical sin of intentionalism—is to think the cognitive bi-stability of these patterns, the fact they appear to be radically different when spied with a first-person penlight versus scientific floodlights, turns on some fundamental ontological difference.

And so, Dennett holds that a wide variety of intentional phenomena are real, just not in the way we have traditionally understood them to be real. This includes reasons, beliefs, functions, desires, rules, choices, purposes, and—pivotally, given critiques like Tom Clark’s—representations. So far as this bestiary solves real world problems, they have to grab hold of the world somehow, don’t they? The suggestion that intentional posits are no more problematic than formal or empirical posits (like numbers and centers of gravity) is something of a Dennettian refrain—as we shall see, it presumes the heuristics involved in intentional cognition possess the same structure as heuristics in other domains, which is simply not the case. Otherwise, so long as intentional phenomena actually facilitate cognition, it seems hard to deny that they broker some kind of high-dimensional relationship with the high-dimensional facts of our environment.

So what kind of relationship? Well, Dennett argues that it will be—has to be, given evolution—heuristic. So far as that relationship is heuristic, we can presume that it solves by taking the high-dimensional facts of the matter—what we might call the deep information environment—for granted. We can presume, in other words, that it will ignore the machinery, and focus on cues, available information systematically related to that machinery in ways that enable the prediction/explanation/manipulation of that machinery. In other words, rather than pick out the deep causal patterns responsible, it will exploit those available patterns possessing some exploitable correlation to those patterns.

So then where, one might ask, do the real patterns pertaining to ‘representation’ lie in this? What part or parts of this machine-solving machinery gainsays the ‘reality’ of representations? Just where do we find the ‘real patterns’ underwriting the content responsible for individuating our reports? It can’t be the cue, the available information happily correlated to the system or systems requiring solution, simply because the cue is often little more than a special purpose trigger. The Heider-Simmel Illusion, for instance, provides a breathtaking example of just how little information it takes. So perhaps we need to look beyond the cue, to the adventitious correlations binding it to the neglected system or systems requiring solution. But if these are the ‘real patterns’ illuminated by the intentional stance, it’s hard to understand what makes them representational—more than hard in fact, since these relationships consist in regularities, which, as whole philosophical traditions have discovered, are thoroughly incompatible with the distinctively cognitive properties of representation. Well, then, how about the high-dimensional machinery indirectly targeted for solution? After all, representations provide us a heuristic way to understand otherwise complex cognitive relationships. This is where Dennett (and most everyone else, for that matter) seems to think the real patterns lie, the ‘order which is there,’ in the very machinery that heuristic systems are adapted—to avoid! Suddenly, we find ourselves stranded with regularities only indirectly correlated to the cues triggering different heuristic cognitive systems. How could the real patterns gainsaying the reality of representations be the very patterns our heuristic systems are adapted to ignore?

But if we give up on the high-dimensional systems targeted for solution, perhaps we should be looking at the heuristic systems cognizing—perhaps this is where the real patterns gainsaying the reality of representations lie, here, in our heads. But this is absurd, of course, since the whole point of saying representations are real (enough) is to say they’re out there (enough), independent of our determinations one way or another.

No matter how we play this discursive shell game, the structure of heuristic cognition guarantees that we’ll never discover the ‘real pattern pea,’ even with intentional phenomena so apparently manifest (because so useful in both everyday and scientific contexts) as representations. There are real systems, to be sure, systems that make ‘identifying representations’ as easy as directing attention to the television screen. But those systems are as much here as they are there, making that television screen simply another component in a greater whole. Without the here, there is no there, which is to say, no ‘representation.’ Medial neglect assures the astronomical dimensionality of the here is flattened into near oblivion, stranding cognition with a powerful intuition of a representational there. Thanks to our ancestors, who discovered myriad ways to manipulate information to cue visual cognition out of school, to drape optical illusions across their cave walls, or to press them into lumps of clay, we’ve become so accustomed to imagery as to entirely forget the miraculousness of seeing absent things in things present. Those cues are more or less isomorphic to the actual systems comprising the ancestral problem ecologies visual cognition originally evolved to manage. This is why they work. They recapitulate certain real patterns of information in certain ways—as does your retina, your optic nerve, and every stage of visual cognition culminating in visual experience. The only thing ‘special’ about the recapitulations belonging to your television screen is their availability, not simply to visual cognition, but to our attempts to cognize/troubleshoot such instances of visual cognition. The recapitulations on the screen, unlike, say, the recapitulations captured by our retinas, are the one thing we can readily troubleshoot should they begin miscuing visual cognition.
Neglect ensures the intuition of sufficiency, the conviction that the screen is the basis, as opposed to simply another component in a superordinate whole. So, we fetishize it, attribute efficacies belonging to the system to what is in fact just another component. All its enabling entanglements vanish into the apparent miracle of unmediated semantic relationships to whatever else happens to be available. Look! we cry. Representation!

Figure 1: This image of the Martian surface taken by Viking 1 in 1976 caused a furor on earth, for obvious reasons.

Figure 2: Images such as this one taken by the Mars Reconnaissance Orbiter reveal the former to be an example of facial pareidolia, an instance where information cues facial recognition where no faces are to be found. The “Face on Mars” seems to be an obvious instance of projection—mere illusion—as opposed to discovery. Until, that is, one realizes that both of these images consist of pixels cuing your visual systems ‘out of school’! Both, in other words, constitute instances of pareidolia: the difference lies in what they enable.

Some apparent squircles, it turns out, are dreadfully useful. So long as the deception is systematic, it can be instrumentalized any which way. Environmental interaction is the basis of neural selection (learning), and neural selection is the basis of environmental domination. What artificial visual cuing—‘representation’—provides is environmental interaction on the cheap, ways to learn from experience without having to risk or endure experience. A ‘good trick’ indeed!

This brings us to a great fault-line running through the entirety of Dennett’s corpus. The more instrumental a posit, the more inclined he is to say it’s ‘real.’ But when critics accuse him of instrumentalism, he adverts to the realities underwriting the instrumentalities, what enables them to work, to claim a certain (ambiguous, he admits) brand of realism. But as should now be clear, what he elides when he does this is nothing less than the structure of heuristic cognition, which blindly exploits the systematic correlations between information available and the systems involved to solve those systems as far as constraints on availability and capacity allow.

The reason he can elide the structure of heuristic cognition (and so find his real patterns argument convincing) lies, pretty clearly, I think, in the conflation of human intentional cognition (which is radically heuristic) with the intentional stance. In other words, he confuses what’s actually happening in instances of intentional cognition with what seems to be happening in instances of intentional cognition, given neglect. He runs afoul of Cartesian gravity. “We tend to underestimate the strength of the forces that distort our imaginations,” he writes, “especially when confronted by irreconcilable insights that are ‘undeniable’” (22). Given medial neglect, the inability to cognize our contemporaneous cognizing, we are bound to intuit the order as ‘there’ (as ‘lateral’) even when we, like Dennett, should know better. Environmentalization is, as Hume observed, the persistent reflex, the sufficiency effect explaining our default tendency to report medial artifacts, features belonging to the signal, as genuine environmental phenomena, features belonging to the source.

As a heuristic device, an assumption circumventing the brute fact of medial neglect, the environmentalization heuristic possesses an adaptive problem ecology—or as Dennett would put it, ‘normal’ and ‘abnormal’ applications. The environmentalization heuristic, in other words, possesses adaptive application conditions. What Dennett would want to argue, I’m sure, is that ‘representations’ are no more or less heuristic than ‘centres of gravity,’ and that we are no more justified in impugning the reality of the one than the reality of the other. “I don’t see why my critics think their understanding about what really exists is superior to mine,” he complains at one point in From Bacteria to Bach and Back, “so I demur” (224). And he’s entirely right on this score: no one has a clue as to what attributing reality amounts to. As he writes regarding the reality of beliefs in “Real Patterns”:

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. 29

Heuristic Neglect Theory, however, actually puts us in a position to make a great deal of sense of ‘reality.’ We can see, rather plainly, I think, the disanalogy between ‘centres of gravity’ and ‘beliefs,’ the disanalogy that leaps out as soon as we consider how only the latter patterns require the intentional stance (or more accurately, intentional cognition) to become salient. Both are heuristic, certainly, but in quite different ways.

We can also see the environmentalization heuristic at work in the debate over whether ‘centres of gravity’ are real or merely instrumental, and in Dennett’s claim that they lie somewhere in-between. Do ‘centres of gravity’ belong to the order which is there, or do we simply project them in useful ways? Are they discoveries, or impositions? Why do we find it so natural to assume either the one or the other, and so difficult to imagine Dennett’s in-between or ‘intermediate’ realism? Why is it so hard conceiving of something half-real, half-instrumental?

The fundamental answer lies in the combination of frame and medial neglect. Our blindness to the enabling dimension of cognition renders cognition, from the standpoint of metacognition, an all but ethereal exercise. ‘Transparency’ is but one way of thematizing the rank incapacity generally rendering environmentalization such a good trick. “Of course, centres of gravity lie out there!” We are more realists than instrumentalists. The more we focus on the machinery of cognition, however, the more dimensional the medial becomes, the more efficacious, and the more artifactual whatever we’re focusing on begins to seem. Given frame neglect, however, we fail to plug this higher-dimensional artifactuality into the superordinate systems encompassing all instances of cognition, thus transforming gears into tools, fetishizing those instances, in effect. “Of course, centres of gravity organize out there!” We become instrumentalists.

If these incompatible intuitions are all that the theoretician has to go on, then Dennett’s middle way can only seem tendentious, an attempt to have it both ways. What makes Dennett’s ‘mild or intermediate’ realism so difficult to imagine is nothing less than Cartesian gravity, which is to say, the compelling nature of the cognitive illusions driving our metacognitive intuitions either way. Squares viewed on this angle become circles viewed on that. There’s no in-between! This is why Dennett, like so many revolutionary philosophical thinkers before him, is always quick to reference the importance of imagination, of envisioning how things might be otherwise. He’s always bumping against the limits of our shackles, calling attention to the rattle in the dark. Implicitly, he understands the peril that neglect, by way of sufficiency, poses to our attempts to puzzle through these problems.

But only implicitly, and as it turns out (given tools so blunt and so complicit as the intentional stance), imperfectly. On Heuristic Neglect Theory, the practical question of what’s real versus what’s not is simply one of where and when the environmentalization heuristic applies, and the theoretical question of what’s ‘really real’ and what’s ‘merely instrumental’ is simply an invitation to trip into what is obviously (given the millennial accumulation of linguistic wreckage) metacognitive crash space. When it comes to ‘centres of gravity,’ environmentalization—or the modifier ‘real’—applies because of the way the posit economizes otherwise available, as opposed to unavailable, information. Heuristic posits centres of gravity might be, but ones entirely compatible with the scientific examination of deep information environments.

Such is famously not the case with posits like ‘belief’ or ‘representation’—or for that matter, ‘real’! The heuristic mechanisms underwriting environmentalization are entirely real, as is the fact that these heuristics do not simply economize otherwise available information, but rather compensate for structurally unavailable information. To this extent, saying something is ‘real’—acknowledging the applicability of the environmentalization heuristic—involves the order here as much as the order there, so far as it compensates for structural neglect, rather than mere ignorance or contingent unavailability. ‘Reality’ (like ‘truth’) communicates our way of selecting and so sorting environmental interactions while remaining almost entirely blind to the nature of those environmental interactions, which is to say, neglecting our profound continuity with those environments.

At least as traditionally (intentionally) conceived, reality does not belong to the real, though reality-talk is quite real, and very useful. It pays to communicate the applicability of environmentalization, if only to avoid the dizzying cognitive challenges posed by the medial, enabling dimensions of cognition. Given the human circuit, truth-talk can save lives. The apparent paradox of such declarations—such as saying, for instance, that it’s true that truth does not exist—can be seen as a direct consequence of frame and medial neglect, one that, when thought carefully through step by empirically tractable step, was pretty much inevitable. We find ourselves dumbfounded for good reason!

The unremarkable fact is that the heuristic systems we resort to when communicating and trouble-shooting cognition are just that: heuristic systems we resort to when communicating and trouble-shooting cognition. And what’s more, they possess no real theoretical power. Intentional idioms are all adapted to shallow information ecologies. They comprise the communicative fraction of compensatory heuristic systems adapted not simply to solving astronomically complicated systems on the cheap, but to solving them absent otherwise instrumental information belonging to our deep information environments. Applying those idioms to theoretical problems amounts to using shallow resources to solve the natural deeps. The history of philosophy screams underdetermination for good reason! There’s no ‘fundamental ontology’ beneath, no ‘transcendental functions’ above, and no ‘language-games’ or ‘intentional stances’ between, just the machinations of meat, which is why strokes and head injuries and drugs produce the boggling cognitive effects they do.

The point to always keep in mind is that every act of cognition amounts to a systematic meeting of at least two functionally distinct systems, the one cognized, the other cognizing. The cognitive facts of life entail that all cognition remains, in some fundamental respect, insensitive to the superordinate system explaining the whole, let alone to the structure and activity of cognition. This inability to cognize our position within superordinate systems (frame neglect) or to cognize our contemporaneous cognizing (medial neglect) is what renders the so-called first-person (intentional stance) homuncular, blind to its own structure and dynamics, which is to say, oblivious to the role ‘here’ plays in ordering ‘there.’ This is what cognitive science needs to internalize, the way our intentional and phenomenal idioms steer us blindly, absent any high-dimensional input, toward solutions that, when finally mapped, will bear scant resemblance to the metacognitive shadows parading across our cave walls. And this is what philosophers need to internalize as well, the way their endless descriptions and explanations, all the impossible figures—squircles—comprising the great bestiary of traditional reflection upon the nature of the soul, are little more than illusory artifacts of their inability to see their inability to see. To say something is ‘real’ or ‘true’ or ‘factual’ or ‘represents,’ or what have you, is to blindly cue blind orientations in your fellows, to lock them into real but otherwise occluded systems, practically and even experimentally efficacious circuits, not to invoke otherworldly functions or pick out obscure-but-real patterns like ‘qualia’ or ‘representations.’

The question of ‘reality’ is itself a heuristic question. As horribly counter-intuitive as all this must sound, we really have no way of cognizing the high-dimensional facts of our environmental orientation, and so no choice but to problem-solve those facts absent any inkling of them. The issue of ‘reality,’ for us, is a radically heuristic one. As with all heuristic matters, the question of application becomes paramount: where does environmentalization optimize, and where does it crash? It optimizes where the cues relied upon generalize, provide behavioural handles that can be reverse-engineered—‘reduced’—absent reverse-engineering us. It optimizes, in other words, wherever frame and medial neglect do not matter. It crashes, however, where the cues relied upon compensate, provide behavioural handles that can only be reverse-engineered by reverse-engineering ourselves.

And this explains the ‘gobsmacking fact’ with which we began, how we can source the universe all the way back to the first second, and yet remain utterly confounded by our ability to do so. Short of cognitive science, compensatory heuristics were all that we possessed when it came to the question of ourselves. Only now do we find ourselves in a position to unravel the nature of the soul.

The crazy thing to understand, here, the point Dennett continually throws himself toward in From Bacteria to Bach and Back only to be drawn back out on the Cartesian tide, is that there is no first-person. There is no original or manifest or even scientific ‘image’: these all court ‘imaginative distortion’ because they, like the intentional stance, are shallow ecological artifacts posturing as deep information truths. It is not the case that, “[w]e won’t have a complete science of consciousness until we can align our manifest-image identifications of mental states by their contents with scientific-image identifications of the subpersonal information structures and events that are causally responsible for generating the details of the user-illusion we take ourselves to operate in” (367)—and how could it be, given our abject inability to even formulate ‘our manifest-image identifications,’ to agree on the merest ‘detail of our user-illusion’? There’s a reason Tom Clark emphasizes this particular passage in his defense of qualia! If it’s the case that Dennett believes a ‘complete science of consciousness’ requires the ‘alignment’ of metacognitive reports with subpersonal mechanisms, then he is as much a closet mysterian as any other intentionalist. There are simply too many ways to get lost in the metacognitive labyrinth, as the history of intentional philosophy amply shows.

Dennett needs only continue following the heuristic tracks he’s started down in From Bacteria to Bach and Back—and perhaps recall his own exhortation to imagine—to see as much. Imagine how it was as a child, living blissfully unaware of philosophers and scientists and their countless confounding theoretical distinctions and determinations. Imagine the naïveté, not of dwelling within this or that ‘image,’ but within an ancestral shallow information ecology, culturally conditioned to be sure, but absent the metacognitive capacity required to run afoul of sufficiency effects. Imagine thinking without ‘having thoughts,’ knowing without ‘possessing knowledge,’ choosing without ‘exercising freedom.’ Imagine this orientation and how much blinkered metacognitive speculation and rationalization is required to transform it into something resembling our apparent ‘first-person perspective’—the one that commands scarcely any consensus beyond exceptionalist conceit.

Imagine how much blinkered metacognitive speculation and rationalization is required to transform it into the intentional stance.

So, what, then, is the intentional stance? An illusory artifact of intentional cognition, understood in the high-dimensional sense of actual biological mechanisms (both naturally and neurally selected), not the low-dimensional, contentious sense of an ‘attitude’ or ‘perspective.’ The intentional stance represents an attempt to use intentional cognition to fundamentally explain intentional cognition, and in this way, it is entirely consonant with the history of philosophy as a whole. It differs—perhaps radically so—in the manner it circumvents the metacognitive tendency to report intentional phenomena as intrinsic (self-sufficient), but it nevertheless remains a way to theorize cognition and experience via, as Dennett himself admits, resources adapted to their practical troubleshooting.

The ‘Cartesian wound’ is no more than theatrical paint, stage make-up, and so something to be wiped away, not healed. There is no explanatory gap because there is no first-person—there never has been, apart from the misapplication of radically heuristic, practical problem-solving systems to the theoretical question of the soul. Stripped of the first-person, consciousness becomes a natural phenomenon like any other, baffling only for its proximity, for overwriting the very page it attempts to read. Heuristic Neglect Theory, in other words, provides a way for us to grasp what we are, what we always have been: a high-dimensional physical system possessing selective sensitivities and capacities embedded in other high-dimensional physical systems. This is what you’re experiencing now, only so far as your sensitivities and capacities allow. This, in other words, is this… You are fundamentally inscrutable unto yourself outside practical problem-solving contexts. Everything else, everything apparently ‘intentional’ or ‘phenomenal’ is simply ‘seems upon reflection.’ There is no ‘manifest image,’ only a gallery of competing cognitive illusions, reflexes to report leading to the crash space we call intentional philosophy. The only ‘alignment’ required is that between our shallow information ecology and our deep information environments: the ways we do much with little, both with reference to each other and with ourselves. This is what you reference when describing a concert to your buddies. This is what you draw on when you confess your secrets, your feelings, your fears and aspirations. Not a ‘mind,’ not a ‘self-model,’ nor even a ‘user illusion,’ but the shallow cognitive ecology underwriting your brain’s capacity to solve and report itself and others.

There’s a positively vast research project buried in this outlook, and as much would become plain, I think, if enough souls could bring themselves to see past the fact that it took an institutional outsider to discover it. The resolutely post-intentional empirical investigation of the human has scarcely begun.

Reading From Bacteria to Bach and Back II: The Human Squircle

by rsbakker

The entry placing second (!!) in the 2016 Illusion of the Year competition, the Ambiguous Cylinder Illusion, blew up on Reddit for good reason. What you’re seeing below is an instance where visual guesswork arising from natural environmental frequencies has been cued ‘out of school.’ In this illusion, convex and concave curves trick the visual system into interpreting a ‘squircle’ as either a square or a circle—thus the dazzling images. Ambiguous cylinders provide dramatic illustration of a point Dennett makes many times in From Bacteria to Bach and Back: “One of the hallmarks of design by natural selection,” he writes, “is that it is full of bugs, in the computer programmer’s sense: design flaws that show up only under highly improbable conditions, conditions never encountered in the finite course of R&D that led to the design to date, and hence not yet patched or worked around by generations of tinkering” (83). The ‘bug’ exploited in this instance could be as much a matter of neural as natural selection, of course—perhaps, as with the Müller-Lyer illusion, individuals raised in certain environments are immune to this effect. But the upshot remains the same. By discovering ways to cue heuristic visual subsystems outside their adaptive problem ecologies, optical illusionists have developed a bona fide science bent on exploring what might be called ‘visual crash space.’

One of the ideas behind Three Pound Brain is to see traditional intentional philosophy as the unwitting exploration of metacognitive crash space. Philosophical reflection amounts to the application, to theoretical problems, of metacognitive capacities adapted to trouble-shooting practical cognitive and communicative issues. What Dennett calls ‘Cartesian gravity,’ in other words, has been my obsession for quite some time, and I think I have a fair amount of wisdom to share, especially when it comes to philosophical squircles, things that seem undeniable, yet nevertheless contradict our natural scientific understanding. Free will is perhaps the most famous of these squircles, but there’s really no end to them. The most pernicious squircle of all, I’m convinced, is the notion of intentionality, be it ‘derived’ or ‘original.’

On Heuristic Neglect Theory, Cartesian gravity boils down to metacognitive reflexes, the application of heuristic systems to questions they have no hope of answering absent any inkling of as much. The root of the difficulty lies in neglect, the way insensitivity to the limits of felicitous application results in various kinds of systematic errors (what might be seen as generalized versions of the WYSIATI effects discovered by Daniel Kahneman).

The centrality of neglect (understood as an insensitivity that escapes our sensitivity) underwrites my reference to the ‘Grand Inversion’ in the previous installment. As an ecological artifact, human cognition trivially possesses what might be called a neglect structure: we are blind to the vast bulk of the electromagnetic spectrum, for instance, because sensing things like gamma radiation, infrared, or radio waves paid no ancestral dividends. In fact, one can look at the sum of scientific instrumentation as mapping out human ‘insensitivity space,’ providing ingress into all those places our ancestral sensitivities simply could not take us. Neglect, in other words, allows us to quite literally invert our reflexive ways of comprehending comprehension, not only in a wholesale manner, but in a way entirely compatible with what Dennett calls, following Sellars, the scientific image.

Simply flipping our orientation in this way allows us to radically recharacterize Dennett’s project in From Bacteria to Bach and Back as a matter of implicitly mapping our human neglect structure by filling in all the naturalistic blanks. I say ‘implicit’ because his approach remains primarily focused on what is neglected, rather than neglect considered in its own right. Despite this, Dennett is quite cognizant of the fact that he’s discussing a single phenomenon, albeit one he characterizes (thanks to Cartesian gravity!) in positive terms:

Darwin’s “strange inversion of reasoning” and Turing’s equally revolutionary inversion form aspects of a single discovery: competence without comprehension. Comprehension, far from being a god-like talent from which all design must flow, is an emergent effect of systems of uncomprehending competence… (75)

The problem with this approach is one that Dennett knows well: no matter how high you build your tower of natural processes, all you’ve managed to do, in an important sense, is recapitulate the mystery you’ve set out to solve. No matter how long you build your ramp, talk of indefinite thresholds and ‘emergent effects’ very quickly reveals you’re jumping the same old explanatory shark. In a sense, everyone in the know knows at least the moral of the story Dennett tells: competences stack into comprehension on any Darwinian account. The million-dollar question is how ‘all that’ manages to culminate in this…

Personally speaking, I’ve never had an experience quite like the one I had reading this book. Elation, realizing that one of the most celebrated minds in philosophy had (finally!) picked up on the same trail. Urgency, knowing I had to write a commentary, like, now. And then, at a certain point, wonder at the sense of knowing, quite precisely, what it was that tantalized his intuitions: the profound connection between his Darwinian commitments and his metaphilosophical hunches regarding Cartesian gravitation.

Heuristic Neglect Theory not only allows us to economize Dennett’s bottom-up saga of stacking competences, it also provides a way to theorize his top-down diagnosis of comprehension. It provides, in other words, the common explanatory framework required to understand this… in terms of ‘all that.’ No jumps. No sharks. Just one continuous natural story folding comprehension into competence (or better, behaviour).

What applies to human cognition applies to human metacognition—understood as the deliberative derivation of endogenous or exogenous behaviour via secondary (functionally distinct) access to one’s own endogenous or exogenous behaviour. As an ecological artifact, human metacognition is fractionate and heuristic, and radically so, given the complexity of the systems it solves. As such, it possesses its own neglect structure. Understanding this allows us to ‘reverse-engineer’ far more than Dennett suspects, insofar as it lets us hypothesize the kinds of blind spots we should expect to plague our attempts to theorize ourselves given the deliverances of philosophical reflection. It provides the theoretical basis, I think, for understanding philosophy as the cognitive psychological phenomenon that it is.

It’s a truism to say that the ability to cognize any system crucially depends on a cognitive system’s position relative to that system. But things get very interesting once we begin picking at the how and why. The rationality of geocentrism, for instance, is generally attributed to the fact that from our terrestrial perspective, the sky does all the moving. We remain, as far as we can tell, motionless. Why is motionlessness the default? Why not assume ignorance? Why not assume that the absence of information warranted ‘orbital agnosticism’? Basically, because we lacked the information to determine our lack of information.

Figure 1: It is a truism to state that where we find ourselves within a system determines our ability to cognize that system. ‘Frame neglect’ refers to our cognitive insensitivity, not only to our position within unknown systems, but to this insensitivity.

Figure 2: Thus, the problem posed by sufficiency, the automatic presumption that what we see is all there is. The ancients saw the stars comprising Orion as equidistant simply because they lacked the information and theory required to understand their actual position—because they had no way of knowing otherwise.

Figure 3: It is also a truism to state that the constitution of our cognitive capacities determines our ability to cognize systems. ‘Medial neglect’ refers to our cognitive insensitivity, not only to the constitution of our cognitive capacities, but to this insensitivity. We see, but absent any sensitivity to the machinery enabling sight.

Figure 4: Thus, once again, the problem posed by sufficiency. Our brain interprets ambiguous cylinders as magical squircles because it possesses no sensitivity to the kinds of heuristic mechanisms involved in processing visual information.

Generally speaking, we find these ‘no information otherwise’ justifications so intuitive that we just move on. We never ask how or why the absence of sensible movement cues reports of motionlessness. Plato need only tell us that his prisoners have been chained before shadows their whole lives and we get it, we understand that for them, shadows are everything. By merely conjuring an image, Plato secures our acknowledgment that we suffer a congenital form of frame neglect, a cognitive insensitivity to the limits of cognition that can strand us with fantastic (and so destructive) worldviews—and without our permission, no less. Despite the risk entailed, we neglect this form of neglect. Though industry and science are becoming ever more sensitive to the problems posed by the ‘unknown unknown,’ it remains the case that each of us at once understands the peril and presumes we’re the exception, the system apart from the systems about us. The motionless one.

Frame neglect, our insensitivity to the superordinate systems encompassing us, blinds us to our position within those systems. As a result, we have no choice but to take those positions for granted. This renders our cognitive orientations implicit, immune to deliberative revision and so persistent (as well as vulnerable to manipulation). Frame neglect, in other words, explains why bent orientations stay bent, why we suffer the cognitive inertia we do. More importantly, it highlights what might be called default sufficiency, the congenital presumption of implicit cognitive adequacy. We were in no position to cognize our position relative to the heavens, and yet we nevertheless assumed that we were simply because we were in no position to cognize the inadequacy of our position.

Why is sufficiency the presumptive default? The stacking of ‘competences’ so brilliantly described by Dennett requires that every process ‘do its part’: sufficiency, you could say, is the default presumption of any biological system, so far as its component systems turn upon the iterative behaviour of other component systems. Dennett broaches the notion, albeit implicitly, via the example of asking someone to report on a nearby house via cell phone:

Seeing is believing, or something like that. We tacitly take the unknown pathways between his open eyes and speaking lips to be secure, just like the requisite activity in the pathways in the cell towers between his phone and ours. We’re not curious on the occasion about how telephones work; we take them for granted. We also don’t scratch our heads in bafflement over how he can just open his eyes and then answer questions with high reliability about what is positioned in front of him in the light, because we can all do it (those of us who are not blind). 348-349

Sufficiency is the default. We inherit our position, our basic cognitive orientation, because it sufficed to solve the kinds of high-frequency and/or high impact problems faced by our ancestors. This explains why unprecedented circumstances generate the kinds of problems they do: it’s always an open question whether our basic cognitive orientation will suffice when confronted with a novel problem.

When it comes to vision, for instance, we possess a wide range of ways to estimate sufficiency and so can adapt our behaviour to a variety of lighting conditions, waving our hand in fog, peering against glares, and so on. Darkness in particular demonstrates how the lack of information requires information, lest it ‘fall off the radar’ in the profound sense entailed by neglect. So even though we possess myriad ways to vet visual information, squircles possess no precedent and so sound no warning: the sufficiency of the information available is taken for granted, and we suffer the ambiguous cylinder illusion. Our cognitive ecology plays a functional role in the efficacy of our heuristic applications—all of them.

From this a great deal follows. Retasking some system of competences always runs the risk of systematic deception on the one hand, where unprecedented circumstances strand us with false solutions (as with the millennia-long ontological dualism of the terrestrial and the celestial), and dumbfounding on the other, where unprecedented circumstances crash some apparently sufficient application in subsequently detectable ways, such as ambiguous cylinders for human visual systems, or the problem of determinism for undergraduate students.

To the extent that ‘philosophical reflection’ turns on the novel application of preexisting metacognitive resources, it almost certainly runs afoul of instances of systematic deception and dumbfounding. Retasked metacognitive channels and resources, we can be assured, would report as sufficient, simply because our capacity to intuit insufficiency would be the product of ancestral, which is to say, practical, applications. How could information and capacity geared to catching our tongue in social situations, assessing what we think we saw, rehearsing how to explain some disaster, and so on hope to leverage theoretical insights into the fundamental nature of cognition and experience? It can’t, no more than auditory cognition, say, could hope to solve the origin of the universe. But even more problematically, it has no hope of intuiting this fundamental inability. Once removed from the vacuum of ecological ignorance, the unreliability of ‘philosophical reflection,’ its capacity to both dumbfound and to systematically deceive, becomes exactly what we should expect.

This follows, I think, on any plausible empirical account of human metacognition. I’ve been asking interlocutors to provide me a more plausible account for years now, but they always manage to lose sight of the question somehow.

On the availability side, we should expect the confusion of task-insufficient information with task-sufficient information. On the capacity side, we should expect the confusion of task-insufficient applications with task-sufficient applications. And this is basically what Dennett’s ‘Cartesian gravity’ amounts to, the reflexive deliberative metacognitive tendency to confuse scraps with banquets and hammers with Swiss Army knives.

But the subtleties secondary to these reflexes can be difficult to grasp, at least at first. Sufficiency means that decreases in dimensionality, the absence of kinds and quantities of information, simply cannot be cognized as such. Just over two years ago I suffered a retinal tear, which, although successfully repaired, left me with a fair amount of debris in my right eye (‘floaters,’ as they call them, which can be quite distracting if you spend as much time staring at white screens as I do). Last autumn I noticed I had developed a ‘crimp’ in my right eye’s field of vision: apparently some debris had become attached to my fovea, a mass that accumulated as I was passed from doctor to doctor and thence to the surgeon. I found myself with my own, entirely private visual illusion: the occluded retinal cells were snipped out of my visual field altogether, mangling everything I tried to focus on with my right eye. The centre of every word I looked at would be pinched into oblivion, leaving only the beginning and ending characters mashed together. Faces became positively demonic—to the point where I began developing a Popeye squint for equanimity’s sake. The world had become a grand bi-stable image: things were fine when my left eye predominated, but then for whatever reason, click, my friends and family would be eyeless heads of hair. Human squircles.

My visual centres simply neglected the missing information, and muddled along assuming the sufficiency of the information that was available. I understood the insufficiency of what I was seeing. I knew the prisoners were there, chained in their particular neural cave with their own particular shadows, but I had no way of passing that information upstream—the best I could do was manage the downstream consequences.

But what happens when we have no way of intuiting information loss? What happens when our capacity to deliberate and report finds itself chained ‘with no information otherwise’? Well, given sufficiency, it stands to reason that what metacognition cannot distinguish we will report as same, that what it cannot vet we will report as accurate, that what it cannot swap we will report as inescapable, and that what it cannot source we will report as sourceless, and so on. The dimensions of information occluded, in other words, depend entirely on what we happen to be reporting. If we ponder the proximate sources of our experiences, they will strike us as sourceless. If we ponder the composition of our experiences, they will strike us as simple. Why? Because human metacognition not only failed to evolve the extraordinary ability to theoretically source or analyze human experience, it failed to evolve the ability to intuit this deficit. And so, we find ourselves stranded with squircles, our own personal paradox (illusion) of ourselves, of what it is fundamentally like to be ‘me.’

Dialectically, it’s important to note how this consequence of the Grand Inversion overturns the traditional explanatory burden when it comes to conscious experience. Since it takes more metacognitive access and capacity, not less, to discern things like disunity and provenance, the question Heuristic Neglect Theory asks of the phenomenologist is, “Yes, but how could you report otherwise?” Why think the intuition of apperceptive unity (just for instance) is anything more than a metacognitive cousin of the flicker-fusion you’re experiencing staring at the screen this very instant?

Given the wildly heuristic nature of our metacognitive capacities, we should expect to possess the capacity to discriminate only what our ancestors needed to discriminate, and precious little else. So, then, how could we intuit anything but apperceptive unity? Left with a choice between affirming a low-dimensional exception to nature on the basis of an empirically implausible metacognitive capacity, and a low-dimensional artifact of the very kind we might expect given an empirically plausible metacognitive account, there really is no contest.

And the list goes on and on. Why think intuitions of ‘self-identity’ possess anything more than the information required to resolve practical, ancestral issues involving identification?

One can think of countless philosophical accounts of the ‘first-person’ as the product of metacognitive ‘neglect origami,’ the way sufficiency precludes intuiting the radical insufficiency of the typically scant dimensions of information available. If geocentrism is the default simply for the way our peripheral position in the solar system precludes intuiting our position as peripheral, then ‘noocentrism’ is the default for the way our peripheral position vis-à-vis ourselves precludes intuiting our position as peripheral. The same way astrophysical ignorance renders the terrestrial the apparently immovable anchor of celestial motion, metacognitive neglect renders the first-person the apparently transcendent anchor of third-person nature. In this sense, I think, ‘gravity’ is a well-chosen metaphor to express the impact of metacognitive neglect upon the philosophical imagination: metacognitive neglect, like gravity, isn’t so much a discrete force as a structural feature, something internal to the architecture of philosophical reflection. Given it, humanity was all but doomed to wallow in self-congratulatory cartoons once literacy enabled regimented inquiry into its own nature. If we’re not the centres of the universe, then surely we’re the centre of our knowledge, our projects, our communities—ourselves.

Figure 5: The retasking of deliberative metacognition is not unlike discovering something practical—such as ‘self’ (or in this case, Brian’s sandal)—in apparently exceptional, because informationally impoverished, circumstances.

Figure 6: We attempt to interpret this practical deliverance in light of these exceptional circumstances.

Figure 7: Given neglect, we presume the practical deliverance theoretically sufficient, and so ascribe it singular significance.

Figure 8: We transform ‘self’ into a fetish, something both self-sustaining and exceptional. A squircle.

Of all the metacognitive misapplications confounding traditional interpretations of cognition and experience, Dennett homes in on the one responsible for perhaps the most theoretical mischief in the form of Hume’s ‘strange inversion of reasoning’ (354-358), where the problem, as we saw in the previous post, lies in mistaking the ‘intentional object’ of the red stripe illusion for the cause of the illusion. Hume, recall, notes our curious propensity to confuse mental determinations for environmental determinations, to impute something belonging to this… to ‘all that.’ Dennett notes that the problem lies in the application: normally, this ‘confusion’ works remarkably well; it’s only in abnormal circumstances, like those belonging to the red stripe illusion, where this otherwise efficacious cognitive reflex leads us astray.

The first thing to note about this cognitive reflex is the obvious way it allows us to neglect the actual machinery of our environmental relations. Hume’s inversion, in other words, calls attention to the radically heuristic nature of so-called intentional thinking. Given the general sufficiency of all the processes mediating our environmental relationships, we need not cognize them to cognize those relationships, we can take them for granted, which is a good thing, because their complexity (the complexity cognitive science is just now surmounting) necessitates they remain opaque. ‘Opaque,’ in this instance, means heuristically neglected, the fact that all the mad dimensionalities belonging to our actual cognitive relationships appear nowhere in cognition, not even as something missing. What does appear? Well, as Dennett himself would say, only what’s needed to resolve practical ancestral problems.

Reporting environments economically entails taking as much for granted as possible. So long as the machinery you and I use to supervise and revise our environmental orientations is similar enough, we can ignore each other’s actual relationships in communication, focusing instead on discrepancies and how to optimize them. This is why we narrate only those things most prone to vary—environmentally and neurally sourced information prone to facilitate reproduction—and remain utterly oblivious to all the things that go without saying, the deep information environment plumbed by cognitive science. The commonality of our communicative and cognitive apparatuses, not to mention their astronomical complexity, assures that we will suffer what might be called medial neglect, congenital blindness to the high-dimensional systems enabling communication and cognition. “All the subpersonal, neural-level activity is where the actual causal interactions happen that provide your cognitive powers, but all “you” have access to is the results” (348).

From Bacteria to Bach and Back is filled with implicit references to medial neglect. “Our access to our own thinking, and especially to the causation and dynamics of its subpersonal parts, is really no better than our access to our digestive processes,” Dennett writes; “we have to rely on the rather narrow and heavily edited channel that responds to our incessant curiosity with user-friendly deliverances, only one step closer to the real me than the access to the real me that is enjoyed by my family and friends” (346).

Given sufficiency, “[t]he relative accessibility and familiarity of the outer part of the process of telling people what we can see—we know our eyes have to be open, and focused, and we have to attend, and there has to be light—conceals from us the utter blankness, from the perspective of introspection or simple self-examination, of the rest of the process” (349). The ‘outer part of the process,’ in other words, is all that we need.

Medial neglect may be both necessary and economical, but it remains an incredibly risky bet to make given the perversity of circumstance and the radical interdependency characterizing human communities. The most frequent and important discrepancies will be environmental discrepancies, those which, given otherwise convergent orientations (the same physiology, location, and training), can be communicated absent medial information, difference making differences geared to the enabling axis of communication and cognition. Such discrepancies can be resolved while remaining almost entirely ‘performance blind.’ All I need do is ‘trust’ your communication and cognition, build upon it the same blind way I build upon my own. You cry, ‘Wolf!’ and I run for the shotgun: our orientations converge.

But as my example implies, things are not always so simple. Say you and I report seeing two different birds, a vulture versus an albatross, in circumstances where such a determination potentially matters—looking for a lost hunting party, say. An endless number of medial confounds could possibly explain our sudden disagreement. Perhaps I have bad eyesight, or I think albatrosses are black, or I’m blinded by the glare of the sun, or I’m suffering schizophrenia, or I’m drunk, or I’m just sick and tired of you being right all the time, or I’m teasing you out of boredom, or more insidiously, I’m responsible for the loss of the hunting party, and want to prevent you from finding the scene of my crime.

There’s no question that, despite medial neglect, certain forms of access and capacity regarding the enabling dimension of cognition and communication could provide much in the way of problem resolution. Given the stupendous complexity of the systems involved, however, it follows that any capacity to accommodate medial factors will be heuristic in the extreme. This means that our cognitive capacity to flag/troubleshoot issues of sufficiency will be retail, fractionate, geared to different kinds of high-impact, high-frequency problems. And the simplest solution, the highest priority reflex, will be to ignore the medial altogether. If our search party includes a third soul who also reports seeing a vulture, for instance, I’ll just be ‘wrong’ for ‘reasons’ that may or may not be determined afterward.

The fact of medial neglect, in other words, underwrites what might be called an environmentalization heuristic, the reflexive tendency to ‘blame’ the environment first.

When you attempt to tell us about what is happening in your experience, you ineluctably slide into a metaphorical idiom simply because you have no deeper, truer, more accurate knowledge of what was going on inside you. You cushion your ignorance with a false—but deeply tempting—model: you simply reproduce, with some hand waving and apologies, your everyday model of how you know about what is going on outside you. 348

Because that’s typically all that you need. Dennett’s hierarchical mountain of competences is welded together by default sufficiency, the blind mechanical reliance of one system upon other systems. Communicative competences not only exploit this mechanical reliance, they extend it, opening entirely novel ecosystems leveraging convergent orientation, brute environmental parallels and physiological isomorphisms, to resolve discrepancies. So long as those discrepancies are resolved, medial factors potentially impinging on sufficiency can be entirely ignored, and so will be ignored. Communications will be ‘right’ or ‘wrong,’ ‘true’ or ‘false.’ We remain as blind to the sources of our cognitive capacities as circumstances allow us to be. And we remain blind to this blindness as well.

When I say from the peak of my particular competence mountain, “Albatross…” and you turn to me in perplexity, and say from the peak of your competence mountain, “What the hell are you talking about?” your instance of ‘about-talk’ is geared to the resolution of a discrepancy between our otherwise implicitly convergent systems. This is what it’s doing. The idea that it reveals an exceptional kind of relationship, ‘aboutness,’ spanning the void between ‘albatross’ here and albatrosses out there is a metacognitive artifact, a kind of squircle. For one, the apparent void is jam-packed with enabling competences—vast networks of competences welded together by sufficiency. Medial neglect merely dupes metacognition into presuming otherwise, into thinking the apparently miraculous covariance (the product of vast histories of natural and neural selection) between ‘sign’ (here) and ‘signified’ (out there) is indeed some kind of miracle.

Philosophers dwell among general descriptions and explanations: this is why they have difficulty appreciating that naïveté generally consists in having no ‘image,’ no ‘view,’ regarding this or that domain. They habitually overlook the oxymoronic implication of attaching any ‘ism’ to the term ‘naïve.’ Instances of ‘about-talk’ do not implicitly presume ‘intentionality’ even in some naïve, mistaken sense. We are not born ‘naïve intentionalists’ (any more than we’re ‘naïve realists’). We just use meaning talk to solve what problems we can, where we can. Granted, our shared metacognitive shortcomings lead us, given different canons of interrogation, into asserting this or that interpretation of ‘intentionality,’ popular or scholastic. We’re all prone to see squircles when prompted to peer into our souls.

So, when someone asks, “Where does causality lie?” we just point to where we can see it, out there on the billiard table. After all, where the hell else would it be (given medial neglect)? This is why dogmatism comes first in the order of philosophical complication, why Kant comes after Descartes. It takes time and no little ingenuity to frame plausible alternatives of this ‘elsewhere.’ And this is the significance of Hume’s inversion to Cartesian gravity: the reflexive sufficiency of whatever happens to be available, a sufficiency that may or may not obtain given the kinds of problem posed. The issue has nothing to do with confusing normal versus abnormal attributions of causal efficacy to intentional objects, because, for one, there’s just no such thing as ‘intentional objects,’ and for another, ‘intentional object-talk’ generates far more problems than it solves.

Of course, it doesn’t seem that way to Dennett whilst attempting to solve for Cartesian gravity, but only because, short of theoretical thematizations of neglect and sufficiency, he lacks any real purchase on the problem of explaining the tendency to insist (as Tom Clark does) on the reality of the illusion. As a result, he finds himself in the strange position of embracing the sufficiency of intentionality in certain circumstances to counter the reflexive tendency to assume the sufficiency of phenomenality in other circumstances—of using one squircle, in effect, to overcome another. And this is what renders him eminently vulnerable to readings like Clark’s, which turns on Dennett’s avowal of intentional squircles to leverage, on pain of inconsistency, his commitment to phenomenal squircles. This problem vanishes once we recognize ourselves for the ambiguous cylinders we have always been. Showing as much, however, will require one final installment.

Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

Scripture become Philosophy become Fantasy

by rsbakker


Cosmos and History has published “From Scripture to Fantasy: Adrian Johnston and the Problem of Continental Fundamentalism” in their most recent edition, which can be found here. This is a virus that needs to infect as many continental philosophy graduate students as possible, lest the whole tradition be lost to irrelevance. The last millennium’s radicals have become this millennium’s Pharisees with frightening speed, and now only the breathless have any hope of keeping pace.

ABSTRACT: Only the rise of science allowed us to identify scriptural ontologies as fantastic conceits, as anthropomorphizations of an indifferent universe. Now that science is beginning to genuinely disenchant the human soul, history suggests that traditional humanistic discourses are about to be rendered fantastic as well. Via a critical reading of Adrian Johnston’s ‘transcendental materialism,’ I attempt to show both the shape and the dimensions of the sociocognitive dilemma presently facing Continental philosophers as they appear to their outgroup detractors. Trusting speculative a priori claims regarding the nature of processes and entities under scientific investigation already excludes Continental philosophers from serious discussion. Using such claims, as Johnston does, to assert the fundamentally intentional nature of the universe amounts to anthropomorphism. Continental philosophy needs to honestly appraise the nature of its relation to the scientific civilization it purports to decode and guide, lest it become mere fantasy, or worse yet, conceptual religion.

KEYWORDS: Intentionalism; Eliminativism; Humanities; Heuristics; Speculative Materialism

All transcendental indignation welcome! I was a believer once.

Dennett’s Black Boxes (Or, Meaning Naturalized)

by rsbakker

“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations.”

—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”

So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis a vis one another.

Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are apparent only.

Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from the one supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
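For concreteness, the fable’s setup can be sketched in a few lines of code. This is only a toy rendering, not Dennett’s own formulation: the propositions, function names, and the shared `FACTS` table are all my illustrative inventions.

```python
# A toy stand-in for each supercomputer's "database of true facts":
# propositions mapped to their truth values. That both boxes share this
# table is precisely what the researchers don't know.
FACTS = {
    "snow is white": True,
    "grass is green": True,
    "snow is green": False,
    "grass is white": False,
}

def button_box(button: str) -> str:
    """Transmit a true statement for 'alpha', a false one for 'beta'."""
    wanted = (button == "alpha")
    for statement, truth in FACTS.items():
        if truth == wanted:
            return statement
    raise ValueError("unknown button")

def bulb_box(signal: str) -> str:
    """Light red for a recognized truth, green for a falsehood, and
    yellow for anything unparseable -- e.g. the researchers' own
    substituted signals."""
    if signal not in FACTS:
        return "yellow"
    return "red" if FACTS[signal] else "green"

assert bulb_box(button_box("alpha")) == "red"    # alpha -> true claim -> red
assert bulb_box(button_box("beta")) == "green"   # beta -> false claim -> green
assert bulb_box("researcher noise") == "yellow"  # substituted signal -> yellow
```

Note that each function is mechanically transparent here (‘glass,’ as below), yet the alpha-red regularity only makes sense given the shared convention encoded in `FACTS`—which is exactly what the researchers lack.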

The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account nevertheless fails to explain the significance of what is going on.

Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.

Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings are behind the communication the whole nature of the research enterprise is radically transformed, much as it is when they show up to explain their ‘philosophical toy.’

This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.

The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders catch fly balls by running so as to keep the ball’s image fixed in their visual field. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
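The gaze heuristic can be sketched as a one-line control rule. This is a minimal toy, not a model of actual fielders: the function name, the gain constant, and the target angle are all illustrative assumptions. The point is just that no ballistics are computed—the fielder steers off a single cue, the ball’s apparent elevation.

```python
import math

def fielder_step(fielder_x: float, ball_x: float, ball_h: float,
                 target_angle: float, gain: float = 5.0) -> float:
    """One step of the gaze heuristic: move so the ball's apparent
    elevation angle stays near target_angle. No trajectory is modelled;
    the angle alone is the cue."""
    dist = max(abs(ball_x - fielder_x), 1e-6)
    angle = math.atan2(ball_h, dist)  # current apparent elevation of the ball
    error = angle - target_angle
    # If the ball looks too high (error > 0), back away from it;
    # if it looks too low (error < 0), run in toward it.
    direction = 1.0 if ball_x > fielder_x else -1.0
    return fielder_x - direction * gain * error

# Ball at x=10: looming high overhead -> fielder (at 0) backs up.
assert fielder_step(0.0, 10.0, 20.0, 0.5) < 0.0
# Ball low on the horizon -> fielder runs in toward it.
assert fielder_step(0.0, 10.0, 2.0, 0.5) > 0.0
```

Iterating this rule as the ball moves brings fielder and ball together without the fielder ever representing the ball’s actual causal trajectory—a cue-driven, source-insensitive solution in the sense developed below.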

This is why Al and Bo’s revelation has the effect of mooting almost all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.

The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.

Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.

“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.

What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis a vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.

Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:

“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357

The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensable to a certain crucial ‘high level’ kind of causal explanation. He continues:

“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357

Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.

With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:

“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358

Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.

Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers can only take us so far in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we’re articulating a theory that can only take us so far in certain circumstances. If we confuse that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to confuse heuristic limits for general ones—to run afoul of the ‘only-game-in-town effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).

Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).

Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.

Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see, “On the Interpretation of Artificial Souls“). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit, absent the global theoretical framework provided here.)

Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.

Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? Certainly seems that way, given how we cobble the two modes together both in science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.

The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.

This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarce had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see, “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).

The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’— information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.

In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.

And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.

Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.

But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.

Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.

What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.

Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The amber light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two lights and given the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.

This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.

The degree to which meaning can be genuinely naturalized.

We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding how they work, where they work, and the ways they go wrong.

Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker



If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses–people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed. The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strike me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understanding ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that makes them empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organize those possessing authoritarian dispositions, rendering them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.