Three Pound Brain

No bells, just whistling in the dark…


Notes Toward a Cognitive Biology of Theoretical Physics

by rsbakker

My favourite example of what I’ve been calling the ‘scandal of self-understanding’ is the remarkable—even gobsmacking—fact that we can explain the origins of the universe itself while remaining utterly unable to explain this explanation. You could say the great, grand blindspot in physics is physics itself. Imagine raising a gothic cathedral absent anything but the murkiest consciousness of hands! What’s more, imagine thinking this incapacity entirely natural, to raise roofs, not only blind to lifting, but blind to this blindness as well. Small wonder so many think knowledge an irreducible miracle.

This blindness to cognitive means reveals a quite odd condition on progress in physics: that it need not understand itself to understand nature. So far, that is.

Certainly, this fact is one worth considering in its own right. Since heuristic neglect leverages a general, thoroughly naturalistic theory of cognition, its relevance should apply to all of our cognitive endeavours, including the very hinge of Pandora’s Box, physics. Since I have no skin in any academic game, I need not allow ingroup expectations to pin my commitments to any institutional blind alley. I’m free to take original assumptions to problems invulnerable to existing assumptions. And even though I lack the technical expertise to make the least dent in the science, I can perhaps suggest novel points of departure for those who do.

Physics is far from alone in suffering this second-order blindness. Biologically speaking, almost all problems are solved absent access to the conditions of problem-solving. Motor cortices ‘know’ as much about themselves as the fingers they control. Cognition is almost always utterly oblivious to the contemporaneous act of cognizing.

Call this trivial fact medial neglect: the congenital insensitivity of cognition to contemporaneous cognizing. A number of dramatic consequences fall out of this empirical platitude. How does human cognition overcome medial neglect? Our brains are, as a matter of fact, utterly insensitive to their own biological constitution. They cannot immediately cognize themselves for what they are. So then how do they cognize their own cognitive capacities?

Obviously, otherwise. In ways that are useful rather than true. In ways that circumvent medial neglect. Heuristically.

Given medial neglect, it simply follows that we must cognize problematic systems assuming what might be called meta-irrelevance, that no substantial knowledge of our knowing is required to leverage knowing. For instance, this present act of communication on my part requires that countless facts obtain, not the least of which is a tremendous amount of biological and historical similarity, that you and I share roughly the same physiology and educational background. If I were suffering psychosis, or you were raised by wolves, then this communicative exchange could only happen if we could somehow repair these discrepancies. Absent such second-order capacity, our communication depends on the absence of such second-order problems, and therefore on the irrelevance of second-order knowledge to achieve whatever it is we want to achieve.

Medial neglect entails meta-irrelevance, the capacity to solve problems absent the capacity to solve for that capacity. We can distinguish between the meta-irrelevance of our frame, the absence of defeating circumstances, and the meta-irrelevance of our constitution, the absence of cognitive incapacities. One of the fascinating things about this distinction is the way the two great theoretical edifices of physics, general relativity and the standard model of particle physics, required overcoming each form of meta-irrelevance. With general relativity, Einstein had to overcome a form of frame neglect to see space and time as part and parcel of the machinery of the universe. With quantum mechanics, Bohr and others had to overcome a form of constitutive neglect and invent a new rationality. When cognizing the universe on the greatest scales, your frame of reference makes a tremendous difference to what you see. When cognizing reality at infinitesimal scales, your cognitive biology makes a tremendous difference to what you see. In each case, you cannot understand the fundamentals short of understanding yourself as part of the system cognized.

Our cognitive biology, in other words, is only irrelevant to cognitive determinations in classical (ancestral) problem ecologies. This explains why general relativity was more ‘insight’-driven, while the standard model was much more experimentally driven. General relativity, which belongs to classical mechanics, only strains meta-irrelevance (forces us to consider our cognitive capacities) at its extremes. Quantum mechanics snaps it from the outset. Resolving meta-irrelevance required conceding both methodology and intuition before physicists could report, with numerous provisos, the ‘quantum world.’ Understanding which classical questions can and cannot be asked of quantum mechanics amounts to charting the extent of meta-irrelevance, the degree to which our cognitive biology (in addition to our cognitive history) can be neglected. The limits of classical interrogation are the limits of our cognitive biology vis-à-vis the microscopic, the point where many (but not all) of our physical intuitions trip into crash space.

The notorious debate between Einstein and Bohr regarding whether quantum mechanics is complete and so reveals an exceptional (classically inconsistent) nature, or incomplete, and so reveals the existence of hidden variables, bears some striking similarities to debates regarding the nature of experience and cognition. If quantum mechanics is complete, as Bohr maintained, then our basic cognitive biology is relevant to our understanding of the microscopic. If quantum mechanics is incomplete, as Einstein maintained, then our basic cognitive biology is irrelevant to our understanding of the microscopic—the problem lies in our cognitive history, which is to say, the kinds of theories we bring to bear. The central issue, in other words, is the same issue structuring debates regarding the nature of knowledge and experience: whether the apparently exceptional nature of the quantum, like the exceptional nature of experience and cognition, isn’t an artifact of some incapacity on our part. The primary question, in other words, is whether our position or constitution is relevant to understanding the conundrums posed, on the one hand, by quantum mechanics, and on the other hand, by knowledge and experience.

(It’s worth noting, here, that this comparison seems to contradict the way I normally use quantum mechanics to argue the need to abandon biologically entrenched intuitions. But if quantum mechanics is both exceptional (insofar as it violates classical mechanics) and scientifically warranted, cannot the intentionalist claim the same? Where intentionalists use the empirical power of operationalizations of intentional posits (such as beliefs) to argue their objectivity, quantum realists use the empirical power of quantum mechanical postulates (such as wave-functions) to argue their objectivity. But there are two key differences undermining this apparently happy analogy: first, where intentionalism is nothing if not intuitive, quantum mechanics is, to put it mildly, anything but. And second, quantum mechanics is the most powerful, most applicable theory in the history of science, whereas intentionalism is plagued both by issues of reproducibility within experimental contexts and issues of generalization beyond those contexts.)

With quantum mechanics, the collapse of meta-irrelevance, the need to identify and suspend cognitive reflexes (sort between questions), is compelled by the deep information cognitive ecologies devised by physicists. The more elementary things get, the less applicable the machinery of human cognition becomes. The meta-irrelevance of human cognition, you could say, maps out our ‘scalar neglect-structure,’ the degree to which knowledge and experience are geared to solve the proximate and granular. Science provided the prostheses required to extend our humble capacities to solve the macroscopic. Despite our ancestral neglect-structure, our basic cognitive capacities possessed cosmic applicability—we wanted only for the genius of Einstein to discover how. But when it came to the microscopic, the intuitive became a liability. “We are all agreed that your theory is crazy,” Bohr told Wolfgang Pauli once. “The question which divides us is whether it is crazy enough to have a chance of being correct.”

On the view sketched here, the fundamental divide between general relativity and quantum mechanics lies in the latter’s cognitive biological relevance. This suggests that quantum mechanics, if not the more fundamental theory, functions in a problem-ecology where general relativity simply has no application. Most physicists see quantum mechanics as more fundamental, but their arguments tend to be formal and ontological as opposed to ecological. As we saw above, the independence heuristic, the presumption of meta-irrelevance, is the default, core to all our cognitive orientations—and this is as true of physicists as it is of anyone. Physicists understand the debate, in other words, with a tendency to overlook the relevance of their cognitive biology, and so presume the gap between general relativity and quantum mechanics is merely mathematical or conceptual. The failure of biological irrelevance, however, exposes the physical dimensions of the problem, how the issue lies in the constitution of human cognition.

Theoretical physics has always understood that humans are physical systems, entropic conduits, like all things living. But appreciating the fact of cognitive biology is one thing and appreciating the activity of cognitive biology is quite another. When we sweep away all the second-order clutter, quantum mechanics is something we organisms do, a behavioural product of the very nature quantum mechanics reveals. Our cognitive nature, the ancestral defaults geared to optimize ancestral circumstances, systematically confounds our attempts to cognize nature. Quantum mechanics shows we are natural in such a way as to stymie our attempts to understand nature, short of theoretical gerrymandering via robust experimental feedback.

This raises the spectre that human cognition is constitutionally incapable of unifying general relativity and quantum mechanics. It could be the case that a nonclassical macroscopic theory could supplant general relativity and subsume quantum mechanics, but short of the kinds of experimental data available to the pioneers of quantum mechanics, we simply have no way of isolating the questions that apply from the questions that don’t, and so sorting signal from noise. The truth could be ‘out there,’ lying somewhere beyond our biological capacities, occupying a space that only our machines can hope to fathom. If the quantum theorization of gravity fails, and it becomes clear that quantum mechanics is only heuristically applicable to classical contexts, then the cognitive biological position outlined here suggests we might have to become something other than what we are to fathom the universe as a whole. Re-engineering neural configurations via learning alone (theory formation) may no longer be enough.

The failure of cognitive biological relevance in quantum mechanics underscores what might be called the problem of diminishing applicability, how the further our constitution is pushed from our ancestral, ecological sweet spots, the systems we evolved to take for granted, the less we can presume meta-irrelevance, the more we should expect our cognitive biological inheritance to require remediation, lest it crash.


After Yesterday: Review and Commentary of Catherine Malabou’s Before Tomorrow: Epigenesis and Rationality

by rsbakker

Experiments like the Wason Selection Task dramatically demonstrate the fractionate, heuristically specialized nature of human cognition. Dress the same logical confound in social garb and it suddenly becomes effortless. We are legion, both with reference to our environments and to ourselves. The great bulk of human cognition neglects the general nature of things, targeting cues instead, information correlated to subsequent events. We metacognize none of this.
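To make the confound concrete: the abstract version of the task asks which of four cards must be turned over to test the rule ‘if a card shows a vowel on one side, it has an even number on the other.’ Only cards that could pair a vowel with an odd number can falsify the rule. Here is a minimal sketch of that check (the card values are the standard textbook layout):

```python
# A minimal sketch of the falsification logic behind the Wason Selection Task.
# Rule: "if a card shows a vowel on one side, it has an even number on the other."
# Only cards that could hide a vowel/odd-number pairing need to be flipped.

VOWELS = set("AEIOU")

def must_flip(visible_face: str) -> bool:
    """True if the card could falsify the rule and so must be checked."""
    if visible_face.isalpha():
        # A vowel might conceal an odd number; a consonant can't violate the rule.
        return visible_face.upper() in VOWELS
    # An odd number might conceal a vowel; an even number can't violate the rule.
    return int(visible_face) % 2 == 1

cards = ["E", "K", "4", "7"]  # the classic four-card layout
print([card for card in cards if must_flip(card)])  # -> ['E', '7']
```

Most subjects select ‘E’ and ‘4,’ confirming rather than falsifying; recast the same structure as a familiar social rule (say, drinking age) and the equivalents of ‘E’ and ‘7’ are selected almost effortlessly.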

Insofar as Catherine Malabou concedes the facts of neurobiology she concedes these facts.

In Before Tomorrow: Epigenesis and Rationality, she attempts to rescue the transcendental via a conception of ‘transcendental epigenesis.’ The book orbits about section 27 (pp. 173-175 in my beaten Kemp-Smith translation) of the Transcendental Deduction in the second edition of The Critique of Pure Reason, where Kant considers the vexed question of the source of the agreement of the transcendental and the empirical, conceptuality and experience. Kant considers three possibilities: the agreement is empirically sourced, transcendentally sourced, or fundamentally (divinely) given. Since the first and the third contradict the necessity of the transcendental, he opts for the second, which he cryptically describes as “the epigenesis of pure reason” (174), a phrase which has perplexed Kant scholars ever since.

She examines a cluster of different theories on Kant’s meaning, each pressing Kant toward either empirical or theological contingency, and thus the very contradiction he attempts to avoid with his invocation of ‘epigenesis.’ Malabou undertakes a defense of Kantian transcendental epigenesis in the context of contemporary neurobiology, transforming Kant’s dilemma into a diagnosis of the dilemma she sees confronting Continental philosophy as a whole.

Via Foucault, she argues the historicity of transcendence as epigenesis understood as the invention of meaning (which she thinks is irreducible). “[N]o biologist,” she writes, “examines the relation between genetics and epigenetics in terms of meaning.” Via Heidegger (“who is no doubt the deepest of all of Kant’s readers”) she argues that the ecstatic temporality of transcendence reveals the derivative nature of empirical and theological appropriations, which both cover over primordial time (time before time). She ultimately parts with Heidegger on the issue of primordiality, but she takes away the phenomenological interpolation of past, present, and future, building toward the argument that epigenesis is never simply archaeological, but aimed as well—teleological.

Meillassoux seems to overthrow the primordial via reference to the ancestral, the time before the time before time, but he ultimately fails to deliver on the project of contingency. For all the initial praise Malabou expresses for his project, he ultimately provides her with a critical foil, an example of how not to reach beyond the Kantian tradition. (I especially enjoyed her Heideggerean critique of his time before the time before time as being, quite obviously (I think), the time after the time before time).

She ultimately alights on the Critique of Judgment, with a particular emphasis on section 81, which contains another notorious reference to epigenesis. The problem, once again, was that reading ‘the epigenesis of pure reason’ empirically—neurobiologically—obliterates the transcendental. Reading it formally, on the other hand, renders it static and inexplicable. What Malabou requires is some way of squaring the transcendental with the cognitive scientific revolution, lest Continental philosophy dwindle into a museum relic. She uses the mingling of causal and teleological efficacy Kant describes in the Third Critique as her ‘contact point’ between the transcendental and the empirical, since it is in the purposiveness of life that contingency and necessity are brought together.

Combining this with ecstatic temporality on the one hand and neurobiological life on the other reveals an epigenesis that bridges the divide between life and thought in the course of explaining the adaptivity of reason without short-circuiting transcendence: “insofar as its movement is also the movement of the reason that thinks it, insofar as there is no rationality without epigenesis, without self-adjustment, without the modification of the old by the new, the natural and objective time of epigenesis may also be considered to be the subjective and pure time of the formation of horizon by and for thought.”

And so is the place of cognitive science made clear: “what neurobiology makes possible today through its increasingly refined description of brain mechanisms and its use of increasingly effective imaging techniques is the actual taking into account, by thought, of its own life.” The epigenetic ratchet now includes the cognitive sciences; philosophical meaning can now be generated on the basis of the biology of life. “What the neurobiological perspective lacks fundamentally,” she writes, “is the theoretical accounting for the new type of reflexivity that it enables and in which all of its philosophical interest lies.” Transcendental epigenesis, Malabou thinks, allows neurobiologically informed philosophy, one attuned to the “adventure of subjectivity,” to inform neurobiology.

She concludes, interestingly, with a defense of her analogical methodology, something I’ve criticized her for previously (and actually asked her about at a public lecture she gave in 2015). I agree that we’re all compelled to resort to cartoons when discussing these matters, but the problem is that we have no way of arbitrating whether our analogies render some dynamic tractable, or merely express some coincidental formal homology, short of their abductive power, their ability to render domains scrutable. It is the power of a metaphor to clarify more than it merely matches that is the yardstick of theoretical analogical adequacy.

In some ways, I genuinely loved this book, especially for the way it reads like a metaphysical whodunnit, constantly tying varied interpretations to the same source material, continually interrogating different suspects, dismissing them with a handful of crucial clues in hand. This is the kind of book I once adored: an extended meditation on a decisive philosophical issue anchored by close readings of genuinely perplexing texts.

Unfortunately, I’m pretty sure Malabou’s approach completely misconstrues the nature of the problem the cognitive sciences pose to Continental philosophy. As a result, I fear she obscures the disaster about to befall, not simply her tradition, but arguably the whole of humanity.

When viewed from a merely neurobiological perspective, cognitive systems and environments form cognitive ecologies—their ‘epigenetic’ interdependence comes baked in. Insofar as Malabou agrees with this, she agrees that the real question has nothing to do with ‘correlation,’ the intentional agreement of concept and object, but rather with the question of how experience and cognition as they appear to philosophical reflection can be reconciled with the facts of our cognitive ecologies as scientifically reported. The problem, in other words, is the biology of metacognition. To put it into Kantian terms, the cognitive sciences amount to a metacritique of reason, a multibillion-dollar colonization of Kant’s traditional domain. Like so much life, metacognition turns out to be a fractionate, radically heuristic affair, ancestrally geared to practical problem-solving. Not only does this imperil Kant’s account of cognition, it signals the disenchantment of the human soul. The fate of the transcendental is a secondary concern at best, one that illustrates rather than isolates the problem. The sciences have overthrown the traditional discourses of every single domain they have colonized. The burning question is why the Continental philosophical discourse on the human soul should prove an exception.

The only ‘argument’ that Malabou makes in this regard, the claim upon which all of her arguments hang, also comes from Kant:

“In the Critique of Pure Reason, when discussing the schema of the triangle, Kant asserts that there are realities that “can never exist anywhere except in thought.” If we share this view, as I do, then the validity of the transcendental is upheld. Yes, there are realities that exist nowhere but in thought.”

So long as we believe in ‘realities of thought,’ Continental philosophy is assured its domain. But are these ‘realities’ what they seem? Remember Hume: “It is remarkable concerning the operations of the mind that, though most intimately present to us, yet, whenever they become the object of reflection, they seem involved in obscurity; nor can the eye readily find those lines and boundaries, which discriminate and distinguish them” (Enquiry Concerning Human Understanding, 7). The information available to traditional speculative reflection is less than ideal. Given this evidential insecurity, how will the tradition cope with the increasing amounts of cognitive scientific information flooding society?

The problem, in other words, is both epistemic and social. Epistemically, the reality of thought need not satisfy our traditional conceptions, which suggests, all things being equal, that it will very likely contradict them. And socially, no matter how one sets about ontologically out-fundamentalizing the sciences, the fact remains that ‘ontologically out-fundamentalizing’ is the very discursive game that is being marginalized—disenchanted.

Regarding the epistemic problem. For all the attention Malabou pays to section 81 of the Third Critique, she overlooks the way Kant begins by remarking on the limits of cognition. The fact is, he’s dumbfounded: “It is beyond our reason’s grasp how this reconciliation of two wholly different kinds of causality is possible: the causality of nature in its universal lawfulness, with [the causality of] an idea that confines nature to a particular form for which nature itself contains no basis whatsoever.” Our cognition of efficacy is divided between what can be sourced in nature and what cannot be sourced, between causes and purposes, and somehow, someway, they conspire to render living systems intelligible. The evidence of this basic fractionation lies plain in experience, but the nature of its origin and activity remain occluded: it belongs to “the being in itself of which we know merely the appearance.”

In one swoop, Kant metacognizes the complexity of cognition (two wholly different forms), the limits of metacognizing that complexity (inscrutable to reflection), and the efficacy of that complexity (enabling cognition of animate things). Thanks to the expansion of the cognitive scientific domain, all three of these insights now possess empirical analogues. As far as complexity is concerned, we know that humans possess a myriad of specialized cognitive systems. Kant’s ‘two kinds of causality’ correlates with two families of cognitive systems observed in infants, the one geared to the inanimate world, mechanical troubleshooting, the other to the animate world, biological troubleshooting. The cognitive pathologies belonging to Williams Syndrome and Autism Spectrum Disorder demonstrate profound cleavages between physical and psychological cognition. The existence of metacognitive limits is also a matter of established empirical fact, operative in any number of phenomena explored by the ecological rationality and cognitive heuristics and biases research programs. In fact, the mere existence of cognitive science, which is invested in discovering those aspects of experience and cognition we are utterly insensitive to, demonstrates the profundity of human medial neglect, our utter blindness to the enabling machinery of cognition as such.

And recent research is also revealing the degree to which humans are hardwired to posit opportunistic efficacies. Given the enormity and complexity of endogenous and exogenous environments, organisms have no hope of sourcing the information constituting their cognitive ecologies. No surprise, neural networks (like the machine learning systems they inspired) are exquisitely adapted to the isolation of systematic correlations—patterns. Neglecting the nature of the systems involved, they focus on correlations between availabilities, isolating those observable precursors allowing the prediction of subsequent, reproductively significant observables such as behaviour. Confusing correlation with causation may be the bane of scientists, but for the rest of us, the reliance on ‘proxies’ often pays real cognitive dividends.
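The logic of proxy-reliance can be made concrete with a toy simulation—a minimal sketch, with invented numbers, of a predictor that tracks a cue correlated with a hidden cause. So long as the background regularity linking cue and cause holds, the predictor succeeds without ever representing sources; break the regularity and it collapses to chance:

```python
# A minimal sketch (all rates invented) of cue-based, source-insensitive
# prediction. A hidden cause fixes the outcome; ancestrally, an observable
# cue covaries with that cause, so trusting the cue pays--without the
# predictor ever accessing the source.

import random

def cue_predictor_accuracy(cue_tracks_cause: bool, n: int = 100_000) -> float:
    hits = 0
    for _ in range(n):
        cause = random.random() < 0.5        # hidden source, never observed
        outcome = cause                      # the cause fixes the outcome
        # The background regularity: the cue covaries with the cause--until
        # ecological change decouples them.
        cue = cause if cue_tracks_cause else (random.random() < 0.5)
        hits += (cue == outcome)             # heuristic: predict from the cue
    return hits / n

print(cue_predictor_accuracy(True))   # ~1.0: the proxy pays dividends
print(cue_predictor_accuracy(False))  # ~0.5: chance, once the regularity breaks
```

Nothing in the predictor distinguishes correlation from causation; its success is entirely parasitic on the stability of its ecology.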

Humans are hardwired both to neglect their own cognitive complexity and to fetishize their environments, to impute efficacies serving local, practical cognitive determinations. Stranded in the most complicated system ever encountered, human metacognition cannot but comprise a congeries of source-insensitive systems geared to the adventitious solution of practical problems—like holding one’s tongue, or having second thoughts, or dwelling on the past, and so on. In everyday contexts, it never occurs to us to question the sources of these activities. Given neglect of the actual sources, we intuit spontaneity whenever we retask our metacognitive motley with reporting the source of these or any other cognitive activities.

We have very good empirical reasons to believe that the above is true. So, what do we do with transcendental speculation a la Kant? Do we ignore what cognitive science has learned about the fractionation, limits, and default propensities of human metacognition? Do we assume he was onto something distinct, a second, physically inexplicable order enabling cognition of the empirical in addition to the physically explicable (because empirical) order that we know (thanks to strokes, etc.) enables cognition of the empirical? Or do we assume that Kant was dimly onto something, which, given his ignorance of cognitive science, he construed dogmatically as distinct? Do we recognize the a priori as a fetishization of medial neglect, as a way to make sense of the fractionate, heuristic nature of cognition absent any knowledge of that nature?

The problem with defending the first, transcendental thesis is that the evidence supporting the second empirical hypothesis will simply continue to accumulate. This is where the social problem rears its head, why the kind of domain overlap demonstrated above almost certainly signals the doom of Malabou’s discursive tradition. Continental philosophers need to understand how disenchantment works, how the mere juxtaposition of traditional and scientific claims socially delegitimizes the former. The more cognitive science learns about experience and cognition, the less relevant and less credible traditional philosophical discourses on the nature of experience and cognition will become.

The cognitive scientific metacritique of reason, you could say, reveals the transcendental as an artifact of our immaturity, of an age when we hearkened to the a priori as our speculative authority. Malabou not only believes in this speculative authority, she believes that science itself must answer to it. Rather than understanding the discursive tools of science epigenetically, refined and organized via scientific practice, she understands them presuppositionally, as beholden to this or that (perpetually underdetermined) traditional philosophical interpretation of conditions, hidden implicatures that must be unpacked to assure cognitive legitimacy—implicatures that clearly seem to stand outside ecology, thus requiring more philosophical interpretation to provide cognitive legitimacy. The great irony, of course, is that scientists eschew her brand of presuppositional ‘legitimacy’ to conserve their own legitimacy. Stomping around in semantic puddles is generally a counterproductive way to achieve operational clarity—a priori exercises in conceptual definition are notoriously futile. Science turns on finding answerable questions in questions answered. If gerrymandering definitions geared to local experimental contexts does the trick, then so be it. The philosophical groping and fumbling involved is valuable only so far as it serves this end. Is this problematic? Certainly. Is this a problem speculative ontological interpretation can solve? Not at all.

Something new is needed. Something radical, not in the sense of discursive novelty, but in a way that existentially threatens the tradition—and offends accordingly.

I agree entirely when Malabou writes:

“Clearly, it is of the utmost necessity today to rethink relations between the biological and the transcendental, even if it is to the detriment of the latter. But who’s doing so? And why do continental philosophers reject the neurobiological approach to the problem from the outset?”

This was the revelation I had in 1999, attempting to reconcile fundamental ontology and neuroscience for the final chapter of my dissertation. I felt the selfsame exhaustion, the nagging sense that it was all just a venal game, a discursive ingroup ruse. I turned my back on philosophy, began writing fiction, not realizing I was far from alone in my defection. When I returned, ‘correlation’ had replaced ‘presence’ as the new ‘ontologically problematic presupposition.’ At long last, I thought, Continental philosophy had recognized that intentionality—meaning—was the problem. But rather than turn to cognitive science to “search for the origin of thinking outside of consciousness and will,” the Speculative Realists I encountered (with the exception of thinkers like David Roden) embraced traditional vocabularies. Their break with traditional Kantian philosophy, I realized, did not amount to a break with traditional intentional philosophy. Far from calling attention to the problem, ‘correlation’ merely focused intellectual animus toward an effigy, an institutional emblem, stranding the 21st-century Speculative Realists in the very interpretative mire they used to impugn 20th-century Continental philosophy. Correlation was a hopeful but ultimately misleading diagnosis. The problem isn’t that cognitive systems and environments are interdependent; the problem is that this interdependence is conceived intentionally. Think about it. Why do we find the intentional interdependence of cognition and experience so vexing when the ecological interdependence of cognitive systems and environments is simply given in biology? What is it about intentionality?

Be it dogmatically or critically conceived, what we call ‘intentionality’ is a metacognitive artifact of the way source-insensitive modes of cognition, like intentional cognition, systematically defer the question of sources. A transcendental source is a sourceless source—an ‘originary repetition’ admitting an epigenetic gloss—because intentional cognition, whether applied to thought or the world, is source-insensitive cognition. To apply intentional cognition to the question of the nature of intentional cognition, as the tradition does compulsively, is to trip into metacognitive crash space, a point where intuitions, like those Malabou so elegantly tracks in Before Tomorrow, can only confound the question they purport to solve.

Derrida understood, at least as far as his (or perhaps any) intentional vocabulary could take him. He understood that cognition as cognized is a ‘cut-out,’ an amnesiac intermediary, appearing sourceless, fully present, something outside ecology, and as such doomed to be overthrown by ecology. He, more so than Kant, hesitates upon the metacognitive limit, understanding full well the futility of transgressing it. But since he presumed the default application of intentional cognition to the problem of cognition necessary, he presumed the inevitability of tripping into crash space as well, believing that reflection could not but transgress its limits and succumb to the metaphysics of presence. Thus his ‘quasi-transcendentals,’ his own sideways concession to the Kantian quagmire. And thus deconstruction, the crashing of super-ecological claims by adducing what must be neglected—ecology—to maintain the illusion of presence.

And so, you could say the most surprising absence in Malabou’s text is her teacher, who whispers merely from various turns in her discourse.

“No one,” she writes, “has yet thought to ask what continental philosophy might become after this ‘break.’” Not true. I’ve spent years now prospecting the desert of the real, the post-intentional landscape that, if I’m right, humanity is doomed to wander into and evaporate. I too was a Derridean once, so I know a path exists between her understanding and mine. I urge her to set aside the institutional defense mechanisms as I once did: charges of scientism or performative contradiction simply beg the question against the worst-case scenario. I invite her to come see what philosophy and the future look like after the death of transcendence, if only to understand the monstrosity of her discursive other. I challenge her to think post-human thoughts—to understand cognition materially, rather than what traditional authority has made of it. I implore her to see how the combination of science and capital is driving our native cognitive ecologies to extinction on an exponential curve.

And I encourage everyone to ask why, when it comes to the topic of meaning, we insist on believing in happy endings. We evolved to neglect our fundamental ecological nature, to strategically hallucinate spontaneities to better ignore the astronomical complexities beneath. Subreption has always been our mandatory baseline. As the cognitive ecologies underwriting those subreptive functions undergo ever more profound transformations, our ancestral baseline will become ever more dysfunctional. With the dawning of AI and enhancement, the abstract problem of meaning has become a civilizational crisis.

Best we prepare for the worst and leave what was human to hope.

Exploding the Manifest and Scientific Images of Man

by rsbakker


This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress. –Benjamin, Theses on the Philosophy of History


What I would like to do is show how Sellars’ manifest and scientific images of humanity are best understood in terms of shallow cognitive ecologies and deep information environments. Expressed in Sellars’ own terms, you could say the primary problem with his characterization is that it is a manifest, rather than scientific, understanding of the distinction. It generates the problems it does (for example, in Brassier or Dennett) because it inherits the very cognitive limitations it purports to explain. At best, Sellars’ take is too granular, and ultimately too deceptive, to function as much more than a stop-sign when it comes to questions regarding the constitution and interrelation of different human cognitive modes. Far from a way to categorize and escape the conundrums of traditional philosophy, it provides yet one more way to bake them in.


Cognitive Images

Things begin, for Sellars, in the original image, our prehistorical self-understanding. The manifest image consists in the ‘correlational and categorial refinement’ of this self-understanding. And the scientific image consists in everything discovered about man beyond the limits of correlational and categorial refinement (while relying on these refinements all the same). The manifest image, in other words, is an attenuation of the original image, whereas the scientific image is an addition to the manifest image (that problematizes the manifest image). Importantly, all three are understood as kinds of ‘conceptual frameworks’ (though he sometimes refers to the original image as ‘preconceptual’).

The original framework, Sellars tells us, conceptualizes all objects as ways of being persons—it personalizes its environments. The manifest image, then, can be seen as “the modification of an image in which all the objects are capable of the full range of personal activity” (12). The correlational and categorial refinement consists in ‘pruning’ the degree to which they are personalized. The accumulation of correlational inductions (patterns of appearance) undermined the plausibility of environmental agencies and so drove categorial innovation, creating a nature consisting of ‘truncated persons,’ a world that was habitual as opposed to mechanical. This new image of man, Sellars claims, is “the framework in terms of which man came to be aware of himself as man-in-the-world” (6). As such, the manifest image is the image interrogated by the philosophical tradition, which, given the limited correlational and categorial resources available to it, remained blind to the communicative—social—conditions of conceptual frameworks, and so to the manifest image of man. Apprehending this would require the scientific image, the conceptual complex “derived from the fruits of postulational theory construction,” yet still turning on the conceptual resources of the manifest image.

For Sellars, the distinction between the two images turns not so much on what we commonly regard to be ‘scientific’ or not (which is why he thinks the manifest image is scientific in certain respects), but on the primary cognitive strategies utilized. “The contrast I have in mind,” he writes, “is not that between an unscientific conception of man-in-the-world and a scientific one, but between that conception which limits itself to what correlational techniques can tell us about perceptible and introspectable events and that which postulates imperceptible objects and events for the purpose of explaining correlations among perceptibles” (19). This distinction, as it turns out, only captures part of what we typically think of as ‘scientific.’ A great deal of scientific work is correlational, bent on describing patterns in sets of perceptibles as opposed to postulating imperceptibles to explain those sets. This is why he suggests that terming the scientific image the ‘theoretical image’ might prove more accurate, if less rhetorically satisfying. The scientific image is postulational because it posits what isn’t manifest—what wasn’t available to our historical or prehistorical ancestors, namely, knowledge of man as “a complex physical system” (25).

The key to overcoming the antipathy between the two images, Sellars thinks, lies in the indispensability of the communally grounded conceptual framework of the manifest image to both images. The reason we should yield ontological priority to the scientific image derives from the conceptual priority of the manifest image. Their domains need not overlap. “[T]he conceptual framework of persons,” he writes, “is not something that needs to be reconciled with the scientific image, but rather something to be joined to it” (40). To do this, we need to “directly relate the world as conceived by scientific theory to our purposes and make it our world and no longer an alien appendage to the world in which we do our living” (40).

Being in the ‘logical space of reasons,’ or playing the ‘game of giving and asking for reasons,’ requires social competence, which requires sensitivity to norms and purposes. The entities and relations populating Sellars’ normative metaphysics exist only in social contexts, only so far as they discharge pragmatic functions. The reliance of the scientific image on these pragmatic functions renders them indispensable, forcing us to adopt ‘stereoscopic vision,’ to acknowledge the conceptual priority of the manifest even as we yield ontological priority to the scientific.


Cognitive Ecologies

The interactional sum of organisms and their environments constitutes an ecology. A ‘cognitive ecology,’ then, can be understood as the interactional sum of organisms and their environments as it pertains to the selection of behaviours.

A deep information environment is simply the sum of difference-making differences available for possible human cognition. We could, given the proper neurobiology, perceive radio waves, but we don’t. We could, given the proper neurobiology, hear dog whistles, but we don’t. We could, given the proper neurobiology, see paramecia, but we don’t. Of course, we now possess instrumentation allowing us to do all these things, but this just testifies to the way science accesses deep information environments. As finite, our cognitive ecology, though embedded in deep information environments, engages only a select fraction of them. As biologically finite, in other words, human cognitive ecology is insensitive to almost all deep information. When a magician tricks you, for instance, they’re exploiting your neglect-structure, ‘forcing’ your attention toward ephemera while they manipulate behind the scenes.

Given the complexity of biology, the structure of our cognitive ecology lies outside the capacity of our cognitive ecology. Human cognitive ecology cannot but neglect the high dimensional facts of human cognitive ecology. Our intractability imposes inscrutability. This means that human metacognition and sociocognition are radically heuristic, systems adapted to solving systems they otherwise neglect.

Human cognition possesses two basic modes, one that is source-insensitive, or heuristic, relying on cues to predict behaviour, and one that is source-sensitive, or mechanical, relying on causal contexts to predict behaviour. The radical economies provided by the former are offset by narrow ranges of applicability and dependence on background regularities. The general applicability of the latter is offset by its cost. Human cognitive ecology can be said to be shallow to the extent it turns on source-insensitive modes of cognition, and deep to the extent it turns on source-sensitive modes. Given the radical intractability of human cognition, we should expect metacognition and sociocognition to be radically shallow, utterly dependent on cues and contexts. Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.

This provides a parsimonious alternative account of the structure and development of human self-understanding. We began in an age of what might be called ‘medial innocence,’ when our cognitive ecologies were almost exclusively shallow, incorporating causal determinations only to cognize local events. Given their ignorance of nature, our ancestors could not but cognize it via source-insensitive modes. They did not so much ‘personalize’ the world, as Sellars claims, as use source-insensitive modes opportunistically. They understood each other and themselves as far as they needed to resolve practical issues. They understood argument as far as they needed to troubleshoot their reports. Aside from these specialized ways of surmounting their intractability, they were utterly ignorant of their nature.

Our ancestral medial innocence began eroding as soon as humanity began gaming various heuristic systems out of school, spoofing their visual and auditory systems, knapping them into cultural inheritances, slowly expanding and multiplying potential problem-ecologies within the constraints of oral culture. Writing, as a cognitive technology, had a tremendous impact on human cognitive ecology. Literacy allowed speech to be visually frozen and carved up for interrogation. The gaming of our heuristics began in earnest, the knapping of countless cognitive tools. As did the questions. Our ancient medial innocence bloomed into a myriad of medial confusions.

Confusions. Not, as Sellars would have it, a manifest image. Sellars calls it ‘manifest’ because it’s correlational, source-insensitive, bound to the information available. The fact that it’s manifest means that it’s available—nothing more. Given medial innocence, that availability was geared to practical ancestral applications. The shallowness of our cognitive ecology was adapted to the specificity of the problems faced by our ancestors. Retasking those shallow resources to solve for their own nature, not surprisingly, generated endless disputation. Combined with the efficiencies provided by coinage and domestication during the ‘axial age,’ literacy did not so much trigger ‘man’s encounter with man,’ as Sellars suggests, as occasion humanity’s encounter with the question of humanity, and the kinds of cognitive illusions secondary to the application of metacognitive and sociocognitive heuristics to the theoretical question of experience and cognition.

The birth of philosophy is the birth of discursive crash space. We have no problem reflecting on thoughts or experiences, but as soon as we reflect on the nature of thoughts and experiences, we find ourselves stymied, piling guesses upon guesses. Despite our genius for metacognitive innovation, what’s manifest in our shallow cognitive ecologies is woefully incapable of solving for the nature of human cognitive ecology. Precisely because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available. Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’ The neglect-structure belonging to human metacognitive ecology gradually forms the ontological canon of the ‘first-person’ (see “On Alien Philosophy” for a more full-blooded account). And so, we persisted, generation after generation, insisting on the sufficiency of those resources. Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.

Despite the profundity of metacognitive traps like these, the development of our source-sensitive cognitive modes continued reckoning more and more of our deep environment. At first this process was informal, but as time passed and the optimal form and application of these modes resolved from the folk clutter, we began cognizing more and more of the world in deep environmental terms. The collective behavioural nexuses of science took shape. Time and again, traditions funded by source-insensitive speculation on the nature of some domain found themselves outcompeted and ultimately displaced. The world was ‘disenchanted’; more and more of the grand machinery of the natural universe was revealed. But as powerful as these individual and collective source-sensitive modes of cognition proved, the complexity of human cognitive ecology ensured that we would, for the interim, remain beyond their reach. Though an artifactual consequence of shallow ecological neglect-structures, the ‘first-person’ retained cognitive legitimacy. Despite the paradoxes, the conundrums, the interminable disputation, the immediacy of our faulty metacognitive intuitions convinced us that we alone were exempt, that we were the lone exception in the desert landscape of the real. So long as science lacked the resources to reveal the deep environmental facts of our nature, we could continue rationalizing our conceit.


Ecology versus Image

As should be clear, Sellars’ characterization of the images of man falls squarely within this tradition of rationalization, the attempt to explain away our exceptionalism. One of the stranger claims Sellars makes in this celebrated essay involves the scientific status of his own discursive exposition of the images and their interrelation. The problem, he writes, is that the social sources of the manifest image are not themselves manifest. As a result, the manifest image lacks the resources to explain its own structure and dynamics: “It is in the scientific image of man in the world that we begin to see the main outlines of the way in which man came to have an image of himself-in-the-world” (17). Understanding our self-understanding requires reaching beyond the manifest and postulating the social axis of human conceptuality, something, he implies, that only becomes available when we can see group phenomena as ‘evolutionary developments.’

Remember Sellars’ caveats regarding ‘correlational science’ and the sense in which the manifest image can be construed as scientific? (7) Here, we see how that leaky demarcation of the manifest (as correlational) and the scientific (as theoretical) serves his downstream equivocation of his manifest discourse with scientific discourse. If science is correlational, as he admits, then philosophy is also postulational—as he well knows. But if each image helps itself to the cognitive modes belonging to the other, then Sellars’ assertion that the distinction lies between a conception limited to ‘correlational techniques’ and one committed to the ‘postulation of imperceptibles’ (19) is either mistaken or incomplete. Traditional philosophy is nothing if not theoretical, which is to say, in the business of postulating ontologies.

Suppressing this fact allows him to pose his own traditional philosophical posits as (somehow) belonging to the scientific image of man-in-the-world. What are ‘spaces of reasons’ or ‘conceptual frameworks’ if not postulates used to explain the manifest phenomena of cognition? But then how do these posits contribute to the image of man as a ‘complex physical system’? Sellars understands the difficulty here persists “as long as the ultimate constituents of the scientific image are particles forming ever more complex systems of particles” (37). This is what ultimately motivates the structure of his ‘stereoscopic view,’ where ontological precedence is conceded to the scientific image, while cognition itself remains safely in the humanistic hands of the manifest image…

Which is to say, lost to crash space.

Are human neuroheuristic systems welded into ‘conceptual frameworks’ forming an ‘irreducible’ and ‘autonomous’ inferential regime? Obviously not. But we can now see why, given the confounds secondary to metacognitive neglect, they might report as such in philosophical reflection. Our ancestors bickered. In other words, our capacity to collectively resolve communicative and behavioural discrepancies belongs to our medial innocence: intentional idioms antedate our attempts to theoretically understand intentionality. Uttering them, not surprisingly, activates intentional cognitive systems, because, ancestrally speaking, intentional idioms always belonged to problem-ecologies requiring these systems to solve. It was all but inevitable that questioning the nature of intentional idioms would trigger the theoretical application of intentional cognition. Given the degree to which intentional cognition turns on neglect, our millennial inability to collectively make sense of ourselves, medial confusion, was all but inevitable as well. Intentional cognition cannot explain the nature of anything, insofar as natures are general, and the problem ecology of intentional cognition is specific. This is why, far from decisively resolving our cognitive straits, Sellars’ normative metaphysics merely complicates it, using the same overdetermined posits to make new(ish) guesses that can only serve as grist for more disputation.

But if his approach is ultimately hopeless, how is he able to track the development in human self-understanding at all? For one, he understands the centrality of behaviour. But rather than understand behaviour naturalistically, in terms of systems of dispositions and regularities, he understands it intentionally, via modes adapted to neglect physical super-complexities. Guesses regarding hidden systems of physically inexplicable efficacies—’conceptual frameworks’—are offered as basic explanations of human behaviour construed as ‘action.’

He also understands that distinct cognitive modes are at play. But rather than see this distinction biologically, as the difference between complex physical systems, he conceives it conceptually, which is to say, via source-insensitive systems incapable of charting, let alone explaining our cognitive complexity. Thus, his confounding reliance on what might be called manifest postulation, deep environmental explanation via shallow ecological (intentional) posits.

And he understands the centrality of information availability. But rather than see this availability biologically, as the play of physically interdependent capacities and resources, he conceives it, once again, conceptually. All differences make differences somehow. Information consists in those differences selected (neurally or evolutionarily) via prior behaviours—those differences prone to make select systematic differences, which is to say, feed the function of various complex physical systems. Medial neglect assures that the general interdependence of information and cognitive system appears nowhere in experience or cognition. Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.

The issue of information availability, for him, is always conceptual, which is to say, always heuristically conditioned, which is to say, always bound to systematically distort what is the case. Where the enabling dimension of cognition belongs to the deep environments on a cognitive ecological account, it belongs to communities on Sellars’ inferentialist account. As a result, he has no clear way of seeing how the increasingly technologically mediated accumulation of ancestrally unavailable information drives the development of human self-understanding.

The contrast between shallow (source-insensitive) cognitive ecologies and deep information environments opens the question of the development of human self-understanding to the high-dimensional messiness of life. The long migratory path from the medial innocence of our preliterate past to the medial chaos of our ongoing cognitive technological revolution has nothing to do with the “projection of man-in-the-world on the human understanding” (5) given the development of ‘conceptual frameworks.’ It has to do with blind medial adaptation to transforming cognitive ecologies. What complicates this adaptation, what delivers us from medial innocence to chaos, is the heuristic nature of source-insensitive cognitive modes. Their specificity, their inscrutability, not to mention their hypersensitivity (the ease with which problems outside their ability cue their application) all but doomed us to perpetual, discursive disarray.

Images. Games. Conceptual frameworks. None of these shallow ecological posits are required to make sense of our path from ancestral ignorance to present conundrum. And we must discard them, if we hope to finally turn and face our future, gaze upon the universe with the universe’s own eyes.

Enlightenment How? Omens of the Semantic Apocalypse

by rsbakker

“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, “The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.” So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh

We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generates exaggerated estimations of threat (via, Pinker suggests, the availability heuristic (42)).
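The miscue is easy to simulate. What follows is a minimal sketch (all rates invented): frequency is estimated from recalled instances, while the pool of recallable instances is skewed by how disproportionately threats get reported:

```python
# A minimal sketch (all rates invented) of an availability-heuristic 'bug':
# threat frequency is estimated from recalled instances, but recall is fed
# by coverage, and threats are covered far out of proportion to their rate.

import random

TRUE_THREAT_RATE = 0.001    # actual incidence per day
COVER_THREAT = 0.9          # chance a threatening event makes the news
COVER_SAFE = 0.0045         # chance an uneventful day gets reported

def availability_estimate(n_days: int = 100_000) -> float:
    recalled_threat = recalled_safe = 0
    for _ in range(n_days):
        threat = random.random() < TRUE_THREAT_RATE
        covered = random.random() < (COVER_THREAT if threat else COVER_SAFE)
        if covered:                  # only covered events enter recall
            if threat:
                recalled_threat += 1
            else:
                recalled_safe += 1
    return recalled_threat / (recalled_threat + recalled_safe)

print(TRUE_THREAT_RATE)         # 0.001
print(availability_estimate())  # ~0.17: two orders of magnitude too high
```

With these toy numbers the availability-based estimate lands near 0.17 against a true incidence of 0.001—a deformed correlative background amplifying apparent threat by two orders of magnitude.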

The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered such cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blindspot, the last corner of nature where traditional speculation dares contradict the determinations of science.

This is changing.

We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.

Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.

 

Crash Space

He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:

“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” 180

He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking,’ guesses that recast ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).

Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility’; the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.

There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.

Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.

The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.

 

Cognitive Information

Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ requiring compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?

So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel in doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?

This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.

The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate and enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.

We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree that moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.

Or the need to punish murderers…

Two centuries ago, a murderer suffering irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.

More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.

Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains better illustrate the problems secondary to deep environmental information than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?

Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.

The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.

Crash.

The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.

Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations, will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.

Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.

 

Cognitive Technology

Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.

(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)

The early promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.

The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.

Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked aversion, where praise and condemnation alike had to brave condemnation, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.

And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:

“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”

More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.
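The logic of this capture is mundane. Here is a minimal sketch of the kind of feedback loop at issue, an epsilon-greedy selector that learns to serve whatever maximizes dwell time; everything in it, from the variant names to the simulated user, is invented for illustration, and it is not any platform’s actual algorithm:

```python
import random

random.seed(0)

# A feed that learns to serve whichever content variant has historically
# maximized dwell time. The objective is attention, nothing else.
VARIANTS = ["outrage", "cute animals", "news", "friends"]

class Feed:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon                      # exploration rate
        self.trials = {v: 0 for v in VARIANTS}
        self.dwell = {v: 0.0 for v in VARIANTS}

    def mean_dwell(self, v):
        return self.dwell[v] / self.trials[v] if self.trials[v] else 0.0

    def choose(self):
        if random.random() < self.epsilon:          # probe occasionally
            return random.choice(VARIANTS)
        return max(VARIANTS, key=self.mean_dwell)   # otherwise exploit

    def update(self, variant, seconds):
        self.trials[variant] += 1
        self.dwell[variant] += seconds

def user_dwell(variant):
    """Simulated user whose cue-based reflexes reward outrage most."""
    base = {"outrage": 40, "cute animals": 25, "news": 10, "friends": 15}
    return max(0.0, random.gauss(base[variant], 5))

feed = Feed()
for _ in range(2000):
    v = feed.choose()
    feed.update(v, user_dwell(v))

print(max(VARIANTS, key=lambda v: feed.trials[v]))  # converges on 'outrage'
```

Nothing in the loop models the user’s interests, let alone their welfare; it simply converges on whichever cues prove most compulsive, ‘forcing’ outcomes the user experiences as freely chosen.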

With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298), but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.

Consider the way humans endure as much as need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices lately? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.

Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic toolbox. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.

Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.

What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.

 

Semantic Apocalypse

Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?

‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.

Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short replacing our humanity.

Thus the ‘worst’ in ‘worst case scenario.’

There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.

Enlightenment How? Pinker’s Tutelary Natures

by rsbakker

 

The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”

What is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?

At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.

All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.

This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:

Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446

Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? From a Nietzschean standpoint, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.

This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high-dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.

Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’

Time and again Pinker casts the process of Enlightenment, the movement away from our tutelary natures, as a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.

What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:

So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28

We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).

And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.

What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks to “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.

The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more and smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be: how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?

This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.

 

The Death of Progress (at the Hand of Progress)

The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.

I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.

Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on the revolutionizing of our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions endlessly disputed; nevertheless, humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.

Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.

Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion)).

Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
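To see what ‘fast and frugal’ means in practice, consider ‘take-the-best,’ one of the signature tools of the Todd and Gigerenzer literature. The following is a minimal sketch; the cues and values are invented for illustration:

```python
# 'Take-the-best': infer which of two options scores higher on some
# criterion by checking cues in order of validity and stopping at the
# first cue that discriminates. All other information is neglected.

CUE_ORDER = ["recognized", "is_capital", "has_team"]   # most valid cue first

OPTIONS = {
    "city A": {"recognized": 1, "is_capital": 0, "has_team": 1},
    "city B": {"recognized": 1, "is_capital": 1, "has_team": 0},
}

def take_the_best(a, b, cues=CUE_ORDER):
    """Return the option favoured by the first discriminating cue."""
    for cue in cues:
        va, vb = OPTIONS[a][cue], OPTIONS[b][cue]
        if va != vb:                      # first discrimination decides;
            return a if va > vb else b    # all remaining cues are ignored
    return None                           # nothing discriminates: guess

# 'recognized' ties, so 'is_capital' settles it; 'has_team' is never consulted.
print(take_the_best("city A", "city B"))  # -> city B
```

The accuracy of such a tool is entirely hostage to its ecology: so long as cue validities track the environment, it performs remarkably well given scarce resources; transform the background, and the same frugality becomes a liability.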

The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).

Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.

The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).

Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.

The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:

1) Intentional cognition depends on cognitive ecological invariances.

2) Scientific progress entails the continual transformation of cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition.
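Rendered schematically, the inference leans on a bridging premise left tacit above. On my reconstruction (with D for ‘depends on,’ T for ‘is transformed,’ and C for ‘collapses’):

```latex
\begin{align*}
&(1)\quad D(I, V) && \text{intentional cognition $I$ depends on invariances $V$}\\
&(2)\quad T(V) && \text{scientific progress transforms $V$}\\
&(2')\quad \bigl(D(I, V) \wedge T(V)\bigr) \rightarrow C(I) && \text{transforming what $I$ depends on crashes $I$}\\
&(3)\quad C(I) && \text{from (1), (2), and (2')}
\end{align*}
```

Everything turns on the suppressed premise (2’), the assumption that transformation of the enabling background entails collapse rather than adaptation.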

But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.

Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive it survives. But as autistic spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as those suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease know first-hand). Research into the psychological effects of solitary confinement, on the other hand, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.

The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.

The hypersensitivity of intentional cognition, whether evinced by solitary confinement or more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ or Deidre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue.) So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, in circumstances cheating or crashing them.

The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011) that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?

Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.

And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).

1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.

2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.

Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.

What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:

What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277

The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.

The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.

Scientific progress entails the collapse of intentional cognition.

On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.

But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.

I fear Donald Trump may be just the beginning.


References

Appel, Jana, von der Putten, Astrid, Kramer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>

Barrett, Deidre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)

Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7

Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)

Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65

Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)

Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)

Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behavior’, in The American Journal of Psychology 57

Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks, Daniel D. 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168

Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56

Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)

—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)

Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175

Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30

Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 8, 501-517

Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5

Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)

 

Meta-problem vs. Scandal of Self-Understanding

by rsbakker

Let’s go back to Square One.

Try to recall what it was like before what it was like became an issue for you. Remember, if you can, a time when you had yet to reflect on the bald fact, let alone the confounding features, of experience. Square One refers to the state of metacognitive naivete, what it was like when experience was an exclusively practical concern, and not at all a theoretical one.

David Chalmers has a new paper examining the ‘meta-problem’ of consciousness, the question of why we find consciousness so difficult to fathom. As in his watershed “Consciousness and Its Place in Nature,” he sets out to exhaustively map the dialectical and evidential terrain before adducing arguments. After cataloguing the kinds of intuitions underwriting the meta-problem, he pays particularly close attention to various positions within illusionism, insofar as these theories see the hard problem as an artifact of the meta-problem. He ends by attempting to collapse all illusionisms into strong illusionism—the thesis that consciousness doesn’t exist—which he thinks is an obvious reductio.

As Peter Hankins points out in his canny Conscious Entities post on the article, the relation between problem reports and consciousness is so vexed as to drag meta-problem approaches back into the traditional speculative mire. But there's a bigger problem with Chalmers' account of the meta-problem: it's far too small. The meta-problem, I hope to show, is part and parcel of the scandal of self-knowledge, the fact that every discursive cork in Square Two (call it the state of theoretical, as opposed to practical, self-understanding), no matter how socially or individually indispensable, bobs upon the foam of philosophical disputation. The real question, the one our species takes for granted but alien anthropologists would find fascinating, is why do humans find themselves so dumbfounding? Why does normativity mystify us? Why does meaning stupefy? And, of course, why is phenomenality so inscrutable?

Chalmers, however, wants you to believe the problem is restricted to phenomenality:

I have occasionally heard the suggestion that internal self-models will inevitably produce problem intuitions, but this seem[s] clearly false. We represent our own beliefs (such as my belief that Canberra is in Australia), but these representations do not typically go along with problem intuitions or anything like them. While there are interesting philosophical issues about explaining beliefs, they do not seem to raise the same acute problem intuitions as do experiences.

and yet in the course of cataloguing various aspects of the meta-problem, Chalmers regularly finds himself referring to similarities between beliefs and consciousness.

Likewise, when I introspect my beliefs, they certainly do not seem physical, but they also do not seem nonphysical in the way that consciousness does. Something special is going on in the consciousness case: insofar as consciousness seems nonphysical, this seeming itself needs to be explained.

Both cognition and consciousness seem nonphysical, but not in the same way. Consciousness, Chalmers claims, is especially nonphysical. But if we don’t understand the ‘plain’ nonphysicality of beliefs, then why tackle the special nonphysicality of conscious experience?

Here the familiar problem strikes again: Everything I have said about the case of perception also applies to the case of belief. When a system introspects its own beliefs, it will typically do so directly, without access to further reasons for thinking it has those beliefs. Nevertheless, our beliefs do not generate nearly as strong problem intuitions as our phenomenal experiences do. So more is needed to diagnose what is special about the phenomenal case.

If more is needed, then what sense does it make to begin looking for this ‘more’ in advance, without understanding what knowledge and experience have in common?

Interrogating the problem of intentionality and consciousness in tandem becomes even more imperative when we consider the degree to which Chalmers' categorizations and evaluations turn on intentional vocabularies. The hard problem of consciousness may trigger more dramatic 'problem intuitions,' but it shares with the hard problem of cognition a profound inability to formulate explananda. There's no more consensus on the nature of belief than there is on the nature of consciousness. We remain every bit as stumped, if not quite as agog.

Not only do intentional vocabularies remain every bit as controversial as phenomenal ones in theoretical explanatory contexts, they also share the same apparent incompatibilities with natural explanation. Is it a coincidence that both vocabularies seem irreducible? Is it a coincidence they both seem nonphysical? Is it a coincidence that both seem incompatible with causal explanation? Is it a coincidence that each implicates the other?

Of course not. They implicate each other because they’re adapted to function in concert. Since they function in concert, there’s a good chance their shared antipathy to causal explanation turns on shared mechanisms. The same can be said regarding their apparent irreducible nonphysicality.

And the same can be said of the problem they pose.

Square Two, then, our theoretical self-understanding, is mired in theoretical disputation. Every philosopher (the present one included) will be inclined to think their understanding the exception, but this does nothing to change the fact of disputation. If we characterize the space of theoretical self-understanding—Square Two—as a general controversy space, we see that Chalmers, as an intentionalist, has taken a position in intentional controversy space to explicate phenomenal controversy space.

Consider his preferred account of the meta-problem:

To sum up what I see as the most promising approach: we have introspective models deploying introspective concepts of our internal states that are largely independent of our physical concepts. These concepts are introspectively opaque, not revealing any of the underlying physical or computational mechanisms. We simply find ourselves in certain internal states without having any more basic evidence for this. Our perceptual models perceptually attribute primitive perceptual qualities to the world, and our introspective models attribute primitive mental relations to those qualities. These models produce the sense of acquaintance both with those qualities and with our awareness of those qualities.

While the gist of this picture points in the right direction, the posits used—representations, concepts, beliefs, attributions, acquaintances, awarenesses—doom it to dwell in perpetual underdetermination, which is to say, discursive ground friendly to realists like Chalmers. It structures the meta-problem according to a parochial rationalization of terms no one can decisively formulate, let alone explain. It is assured, in other words, to drag the meta-problem into the greater scandal of self-knowledge.

To understand why Square Two has proven so problematic in general, one needs to take a step back, relinquish one's countless Square Two prejudices, and reconsider things from the standpoint of biology. Why, biologically speaking, should an organism find cognizing itself so difficult? Not only is this the most general form of the question that Chalmers takes himself to be asking, it is posed from a position outside the difficulty it interrogates.

The obvious answer is that biology, and cognitive biology especially, is so fiendishly complicated. The complexity of biology all but assures that cognition will neglect biology and fasten on correlations between ‘surface irritations’ and biological behaviours. Why, for instance, should a frog cognize fly biology when it need only strike at black dots?
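To make the frog's shortcut concrete, here is a minimal sketch (all names and thresholds invented for illustration; nothing here comes from the post): a detector that fires on cheap surface cues (small, dark, moving) while remaining utterly insensitive to whether the stimulus is actually a fly.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    size: float      # apparent size, degrees of visual angle
    darkness: float  # 0.0 (light) to 1.0 (dark)
    speed: float     # apparent motion, degrees per second
    is_fly: bool     # the ground truth the heuristic never consults

def strike(s: Stimulus) -> bool:
    """Source-insensitive heuristic: fire on surface cues alone.
    Note that is_fly is never read; the detector solves 'fly' via
    available correlations, not via fly biology."""
    return s.size < 2.0 and s.darkness > 0.7 and s.speed > 5.0

fly = Stimulus(size=1.0, darkness=0.9, speed=20.0, is_fly=True)
pellet = Stimulus(size=1.0, darkness=0.9, speed=20.0, is_fly=False)
print(strike(fly))     # True: the heuristic succeeds in its home ecology
print(strike(pellet))  # True: and misfires identically outside it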

The same goes for metacognitive capacities: Why metacognize brain biology when we need only hold our tongue at dinner, figure out what went wrong with the ambush, explain what happened to the elders, and so on? On any plausible empirical story, metacognition consists in an opportunistic array of heuristic systems possessing the access and capacity to solve various specialized domains. The complexity of the brain all but assures as much. Given the intractability of the processes monitored, metacognitive consumers remain ‘source insensitive’—they solve absent any sensitivity to underlying systems. As need-to-know consumers adapted to solving practical problems in ancestral contexts, we should expect retasking those capacities to the general problem of ourselves would prove problematic. As indeed it has. Our metacognitive insensitivity, after all, extends to our insensitivity: we are all but oblivious to the source-insensitive, heuristic nature of metacognition.

And this provides biological grounds to predict the kinds of problems such retasking might generate; it provides an elegant, scientifically tractable way to understand a great number of the problems plaguing human self-knowledge.

 

We should expect metacognitive (and sociocognitive) application problems. Given that metacognition neglects the heuristic limits of metacognition, all novel applications of metacognitive capacities to new problem ecologies (such as those devised by the ancient Greeks) run the risk of misapplication. Imagine rebuilding an engine with invisible tools. Metacognitive neglect assures that trial-and-error provides our only means of sorting between felicitous and infelicitous applications.

We should expect incompatibility with source-sensitive modes of cognition. Source-insensitive cognitive systems are primed to solve via information ecologies that systematically neglect the actual systems responsible. We rely on robust correlations between the signal available and the future behaviour of the system requiring solution–‘clues’ some heuristic researchers call them. The ancestral integration of source-sensitive and source-insensitive cognitive modes (as in narrative, say, which combines intentional and causal cognition) assures at best specialized linkages. Beyond these points of contact, the modes will be incompatible given the specificity of the information consumed in source-insensitive systems.

We should expect to suffer illusions of sufficiency. Given the dependence of all cognitive systems on the sufficiency of upstream processing for downstream success, we should expect insensitivity to metacognitive insufficiency to result in presumptive sufficiency. Systems don't need a second set of systems monitoring the sufficiency of every primary system to function: sufficiency is the default. Metacognitive capacities retasked to theoretical problems, we can presume, deploy as sufficient despite almost certainly being insufficient. This can be seen as a generalization of WYSIATI, or 'what-you-see-is-all-there-is,' the principle Daniel Kahneman uses to illustrate how certain heuristic mechanisms do not discriminate between sufficient and insufficient information.

We should expect to suffer illusions of simplicity (or identity effects). Given metacognitive insensitivity to its own insensitivity, metacognition remains blind to artifacts of that insensitivity as artifacts. The absence of distinction will be intuited as simplicity. Flicker-fusion as demonstrated in psychophysics almost certainly possesses cognitive and metacognitive analogues, instances where the lack of distinction reports as identity or simplicity. The history of science is replete with examples of mistaking artifacts of information poverty for properties of nature. The small was simple prior to the microscope and the discovery of endless subvisibilia. The heavens consisted of spheres.

We should expect to suffer illusions of free-floating efficacy. The ancestral integration of source-insensitive and source-sensitive cognition underwrites fetishism, the cognition of sources possessing no proximal sources. In his cognitive development research, Andrei Cimpian calls these ‘inherence heuristics,’ where, in ignorance of extrinsic factors, we impute an intrinsic efficacy to cognize/communicate local effects. We are hardwired to fetishize.

We should expect to suffer entrenched only-game-in-town effects. In countless contexts, ignorance of alternatives fools individuals into thinking their path necessary. This is why Kant, who had no inkling of the interpretive jungle to come, thought he had stumbled across a genuine synthetic a priori science. Given metacognitive insensitivity to its insensitivity, the biological parochialism of source-insensitive cognition is only manifest in applications. Once detected, neglect assures the distinctiveness of source-insensitive cognition will seem absolute, lending itself to reports of autonomy. So where Kant ran afoul the only-game-in-town effect in declaring his discourse apodictic, he also ran afoul a biologically entrenched version of the same effect in declaring cognition transcendental.

We should expect misfires to be systematic. Generally speaking, rules of thumb do not cease being rulish when misapplied. Heuristic breakdowns are generally systematic. Where the system isn't crashed altogether, the consequences of mistakes will be structured and iterable. This predictability allows certain heuristic breakdowns to become valuable tools. The Pleistocene discovery that applying pigments to surfaces could cue the (cartoon) visual cognition of nearly anything examples one particularly powerful instrumentalization of heuristic systematicity. Metacognition is no different than visual cognition in this regard: like visual heuristics, cognitive heuristics generate systematic 'illusions' admitting, in some cases, genuine instrumentalizations (things like 'representations' and functional analyses in empirical psychology), but typically generating only disputation otherwise.

We should expect to suffer performative interference-effects (breakdowns in 'meta-irrelevance'). The intractability of the enabling axis of cognition, the inevitability of medial neglect, forces the system to presume its cognitive sufficiency. As a result, cognition biomechanically depends on the 'meta-irrelevance' of its own systems; it requires that information pertaining to its functioning not be required to solve whatever problem is at hand. Nonhuman cognizers, for instance, are comparatively reliant on the sufficiency of their cognitive apparatus: they can't, like us, raise a finger and say, 'On second thought,' or visit the doctor, or lay off the weed, or argue with their partner. Humans possess a plethora of hacks, heuristic ways to manage cognitive shortcomings. Nevertheless, the closer our metacognitive tools come to ongoing, enabling access—the this-very-moment-now of cognition—the more regularly they will crash, insofar as these too require meta-irrelevance.

We should expect chronic underdetermination. Metacognitive resources adapted to the solution of ancestral practical problems have no hope of solving for the nature of experience or cognition.

We should expect ontological confusion. As mentioned, cognition biomechanically depends on the 'meta-irrelevance' of its own systems; it requires that information pertaining to its functioning not be required to solve whatever problem is at hand. Metacognitive resources retasked to solve for these systems flounder, then begin systematically confusing artifacts of medial neglect for the dumbfounding explananda of cognition and experience. Missing dimensions are folded into neglect, and metacognition reports these insufficiencies as sufficient. Source insensitivity becomes source independence. Complexity becomes simplicity. Only a second 'autonomous' ontology will do.

 

Floridi’s Plea for Intentionalism

by rsbakker

 

Questioning Questions

Intentionalism presumes that intentional modes of cognition can solve for intentional modes of cognition, that intentional vocabularies, and intentional vocabularies alone, can fund bona fide theoretical understanding of intentional phenomena. But can they? What evidences their theoretical efficacy? What, if anything, does biology have to say?

No one denies the enormous practical power of those vocabularies. And yet, the fact remains that, as a theoretical explanatory tool, they invariably deliver us to disputation—philosophy. To rehearse my favourite William Uttal quote: “There is probably nothing that divides psychologists of all stripes more than the inadequacies and ambiguities of our efforts to define mind, consciousness, and the enormous variety of mental events and phenomena” (The New Phrenology, p.90).

In his "A Plea for Non-naturalism as Constructionism," Luciano Floridi undertakes a comprehensive revaluation of this philosophical and cognitive scientific inability to decisively formulate, let alone explain, intentional phenomena. He begins with a quote from Quine's seminal "Epistemology Naturalized," the claim that "[n]aturalism does not repudiate epistemology, but assimilates it to empirical psychology." Although Floridi entirely agrees that the sciences have relieved philosophy of a great number of questions over the centuries, he disagrees with Quine's 'assimilation,' the notion of naturalism as "another way of talking about the death of philosophy." Acknowledging that philosophy needs to remain scientifically engaged—naturalistic—does not entail discursive suicide. "Philosophy deals with ultimate questions that are intrinsically open to reasonable and informed disagreement," Floridi declares. "And these are not "assimilable" to scientific enquiries."

Ultimate? Reading this, one might assume that Floridi, like so many other thinkers, has some kind of transcendental argument operating in the background. But Floridi is such an exciting philosopher to read precisely because he isn’t ‘like so many other thinkers.’ He hews to intentionalism, true, but he does so in a manner that is uniquely his own.

To understand what he means by ‘ultimate’ in this paper we need to visit another, equally original essay of his, “What is a Philosophical Question?” where he takes an information ‘resource-oriented’ approach to the issue of philosophical questions, “the simple yet very powerful insight that the nature of problems may be fruitfully studied by focusing on the kind of resources required in principle to solve them, rather than on their form, meaning, reference, scope, and relevance.” He focuses on the three kinds of questions revealed by this perspective: questions requiring empirical resources, questions requiring logico-mathematical resources, and questions requiring something else—what he calls ‘open questions.’ Philosophical questions, he thinks, belong to this latter category.

But if open questions admit no exhaustive empirical or formal determination, then why think them meaningful? Why not, as Hume famously advises, consign them to the flames? Because, Floridi argues, they are inescapable. Open questions possess no regress enders: they are 'closed' in the set-theoretic sense, which is to say, closed under the operation of answering, such that answers always beget more questions. To declare answers to open questions meaningless or trivial is to answer an open question.
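A quick formal gloss (my notation, not Floridi's) may help fix the terminology: letting $Q$ be the set of open questions and $\mathrm{Ans}$ the operation of answering, closure amounts to

$$\forall q \in Q:\quad \mathrm{Ans}(q) \subseteq Q$$

so answering never maps you out of the set; no answer functions as a regress ender.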

But since not all open questions are philosophical questions, Floridi needs to restrict the scope of his definition. The difference, he thinks, is that philosophical questions “tend to concentrate on more significant and consequential problems.” Philosophical questions, in addition to being open questions, are also ultimate questions, not in any foundational or transcendental sense, but in the sense of casting the most inferential shade across less ultimate matter.

Ultimate questions may be inescapable, as Floridi suggests, but this in no way allays the problem of the resources used to answer them. Why not simply answer them pragmatically, or with a skeptical shrug? Floridi insists that the resources are found in “the world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings,” or what he calls ‘noetic resources,’ the non-empirical, non-formal fund of things that we know. Philosophical questions, in addition to being ultimate, open questions, require noetic resources to be answered.

But all questions, of course, are not equal. Some philosophical problems, after all, are mere pseudo-problems, the product of the right question being asked in the wrong circumstances. Though the ways in which philosophical questions misfire seem manifold, Floridi focusses on a single culprit to distinguish ‘bad’ from ‘good’ philosophical questions: the former, he thinks, overstep their corresponding ‘level of abstraction,’ aspiring to be absolute or unconditioned. Philosophical questions, in addition to being noetic, ultimate, open questions, are also contextually appropriate questions.

Philosophy, then, pertains to questions involving basic matters, lacking decisive empirical or formal resources and so lacking institutional regress enders. Good philosophy, as opposed to bad, is always conditional, which is to say, sensitive to the context of inquiry. It is philosophy in this sense that Floridi thinks lies beyond the pale of Quinean assimilation in “A Plea for Non-naturalism as Constructionism.”

But resistance to assimilation isn’t his only concern. Science, Floridi thinks, is caught in a predicament: as ever more of the universe is dragged from the realm of open, philosophical interrogation into the realm of closed, scientific investigation, the technology enabled by and enabling this creeping closure is progressively artificializing our once natural environments. Floridi writes:

“the increasing and profound technologisation of science is creating a tension between what we try to explain, namely all sorts of realities, and how we explain it, through the highly artificial constructs and devices that frame and support our investigations. Naturalistic explanations are increasingly dependent on non-natural means to reach such explanations.”

This, of course, is the very question at issue between the meaning skeptic and the meaning realist. To make his case, Floridi has to demonstrate how and why the artefactual isn't simply more nature, every bit as bound by the laws of thermodynamics as everything else in nature. Why think the 'artificial' is anything more than (to turn a Hegelian line on its head) 'nature reborn'? To presume as much would be to beg the question—to run afoul the very 'scholasticism' Floridi criticizes.

Again, he quotes Quine from “Epistemology Naturalized,” this time the famous line reminding us that the question of “how irritations of our sensory surfaces” result in knowledge is itself a scientific question. The absurdity of the assertion, Floridi thinks, is easily assayed by considering the complexity of cognitive and aesthetic artifacts: “by the same reasoning, one should then try to answer the question how Beethoven managed to arrive at his Ode to Joy from the seven-note diatonic musical scale, Leonardo to his Mona Lisa from the three colours in the RGB model, Orson Welles to his Citizen Kane from just black and white, and today any computer multimedia from just zeros and ones.”

The egregious nature of the disanalogies here is indicative of the problem Floridi faces. Quine's point isn't that knowledge reduces to sensory irritations, merely that knowledge consists of scientifically tractable physical processes. For all his originality, Floridi finds himself resorting to a standard 'you-can't-get-there-from-here' argument against eliminativism. He even cites the constructive consensus in neuroscience, thinking it evidences the intrinsically artefactual nature of knowledge. But he never explains why the artefactual nature of knowledge—unlike the artefactual nature of, say, a bird's nest—rules out the empirical assimilation of knowledge. Biology isn't any less empirical for being productive, so what's the crucial difference here? At what point does artefactual qua biological become artefactual qua intentional?

Epistemological questions, he asserts, "are not descriptive or scientific, but rather semantic and normative." But Quine is asking a question about epistemology and whether what we now call cognitive science can exhaustively answer it. As it so happens, the question of epistemology as a natural phenomenon is itself an epistemological question, and as such involves the application of intentional (semantic and normative) cognitive modes. But why think these cognitive modes themselves cannot be empirically described and explained the way, for example, neuroscience has described and explained the artefactual nature of cognition? If artefacts like termite mounds and birds' nests admit natural explanations, then why not knowledge? Given that he hopes to revive "a classic, foundationalist role for philosophy itself," this is a question he has got to answer. Philosophers have a long history of attempting to secure the epistemological primacy of their speculation on the back of more speculation. Unless Floridi is content with "an internal 'discourse' among equally minded philosophers," he needs to explain what makes the artifactuality of knowledge intrinsically intentional.

In a sense, one can see his seminal 2010 work, The Philosophy of Information, as an attempt to answer this question, but he punts on the issue here, providing only a reference to his larger theory. Perhaps this is why he characterizes this paper as "a plea for non-naturalism, not an argument for it, let alone a proof or demonstration of it." Even though the entirety of the paper is given over to arguments inveighing against unrestricted naturalism à la Quine, they all turn on a shared faith in the intrinsic intentionality of cognition.

 

Reasonably Reiterable Queries

Floridi defines 'strong naturalism' as the thesis that all nonnatural phenomena can be reduced to natural phenomena. A strong naturalist believes that all phenomena can be exhaustively explained using only natural vocabularies. The key term, for him, is 'exhaustively.' Although some answers to our questions put the matter to bed, others simply leave us scratching our heads. The same applies to naturalistic explanations. Where some reductions are the end of the matter, 'lossless,' others are so 'lossy' as to explain nothing at all. The latter, he suggests, make it reasonable to reiterate the original query. This, he thinks, provides a way to test any given naturalization of some phenomenon: an 'RRQ' test. If a reduction warrants repeating the very question it was intended to answer, then we have reason to assume the reduction to be 'reductive,' or lossy.
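Read procedurally, the test is just a loop with a termination condition. Here is a deliberately toy sketch (every name is mine, and the two function arguments stand in for substantive philosophical judgment): an explanation counts as lossless only if some answer ends the regress; otherwise the query remains reasonably reiterable, and the explanation is lossy.

```python
def rrq(question, explain, reasonable_to_reiterate, max_rounds=100):
    """Toy rendering of Floridi's Reasonably Reiterable Query test.
    'Lossless' means some answer functions as a regress ender;
    'lossy' means the original query survives every answer offered."""
    for _ in range(max_rounds):
        answer = explain(question)
        if not reasonable_to_reiterate(question, answer):
            return "lossless"  # the matter is put to bed
    return "lossy"  # still reasonable to ask again

# A stipulated 'lossless' case: the answer is judged to close the question.
verdict = rrq(
    "why does water boil at lower temperatures at altitude?",
    explain=lambda q: "lower ambient pressure lowers the boiling point",
    reasonable_to_reiterate=lambda q, a: False,  # stipulation, not philosophy
)
print(verdict)  # lossless
```

All the philosophical weight, of course, sits inside reasonable_to_reiterate; the sketch only makes the regress-ender structure explicit.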

The focus of his test, not surprisingly, is the naturalistic inscrutability of intentional phenomena:

“According to normative (also known as moral or ethical) and semantic non-naturalism, normative and semantic phenomena are not naturalisable because their explanation cannot be provided in a way that appeals exhaustively and non-reductively only to natural phenomena. In both cases, any naturalistic explanation is lossy, in the sense that it is perfectly reasonable to ask again for an explanation, correctly and informatively.”

This failure, he asserts, demonstrates the category mistake of insisting that intentional phenomena be naturalistically explained. In lieu of an argument, he gives us examples. No matter how thorough our natural explanations of immoral photographs might be, one can always ask, Yes, but what makes them immoral (as opposed to socially sanctioned, repulsive, etc.)? Facts simply do not stack into value—Floridi takes himself to be expounding a version of Hume’s and Moore’s point here. The explanation remains ‘lossy’ no matter what our naturalistic explanation. Floridi writes:

“The recalcitrant, residual element that remains unexplained is precisely the all-important element that requires an explanation in the first place. In the end, it is the contribution that the mind makes to the world, and it is up to the mind to explain it, not the world.”

I’ve always admired, even envied, Floridi for the grace and lucidity of his prose. But no matter how artful, a god of the gaps argument is a god of the gaps argument. Failing the RRQ does not entail that only intentional cognition can solve for intentional phenomena.

He acknowledges the problem here: “Admittedly, as one of the anonymous reviewers rightly reminded me, one may object that the recalcitrant, residual elements still in need of explanation may be just the result of our own insipience (understood as the presence of a question without the corresponding relevant and correct answer), perhaps as just a (maybe even only temporary) failure to see that there is merely a false impression of an information deficit (by analogy with a scandal of deduction).” His answer here is to simply apply his test, suggesting the debate, as interminable, merely underscores “an openness to the questioning that the questioning itself keeps open.” I can’t help but think he feels the thorn, at this point. Short reading “What is a Philosophical Question?” this turn in the article would be very difficult to parse. Philosophical questioning, Floridi would say, is ‘closed under questioning,’ which is to say, a process that continually generates more questions. The result is quite ingenious. As with Derridean deconstruction, philosophical problematizations of Floridi’s account of philosophy end up evidencing his account of philosophy by virtue of exhibiting the vulnerability of all guesswork: the lack of regress enders. Rather than committing to any foundation, you commit to a dialectical strategy allowing you to pick yourself up by your own hair.

The problem is that RRQ is far from the domesticated discursive tool that Floridi would have you believe it is. If anything, it provides a novel and useful way to understand the limits of theoretical cognition, not the limits of this or that definition of 'naturalism.' RRQ is a great way to determine where theoretical guesswork in general begins. Nonnaturalism is the province of philosophy for a reason: every single nonnatural answer ever adduced to the question of this or that intentional phenomenon has failed to close the door on RRQ. Intentional philosophy, such as Floridi's, possesses no explanatory regress enders—not a one. It is always rational to reiterate the question wherever theoretical applications of intentional cognition are concerned. This is not the case with natural cognition. If RRQ takes a bite out of natural theoretical explanation of apparent intentional phenomena, then it swallows nonnatural cognition whole.

Which raises the question: why bother with theoretical applications of nonnatural cognition at all? Think about it: if every signal received via a given cognitive mode is lossy, why not presume that cognitive mode defective? The successes of natural theoretical cognition—the process of Quinean 'assimilation'—show us that lossiness typically dwindles with the accumulation of information. No matter how spectacularly our natural accounts of intentional phenomena fail, we need only point out the youth of cognitive science and the astronomical complexities of the systems involved. The failures of natural cognition belong to the process of natural cognition, the rondo of hypothesis and observation. Theoretical applications of intentional cognition, on the other hand, promise only perpetual lossiness, the endless reiteration of questions and uninformative answers.

One can rhetorically embellish endless disputation as discursive plenitude, explanatory stasis as ontological profundity. One can persuasively accuse skeptics of getting things upside down. Or one can speculate on What-Philosophy-Is, insist that philosophy, instead of mapping where our knowledge breaks down (as it does in fact), shows us where this-or-that ‘ultimate’ lies. In “What is a Philosophical Question?” Floridi writes:

“Still, in the long run, evolution in philosophy is measured in terms of accumulation of answers to open questions, answers that remain, by the very nature of the questions they address, open to reasonable disagreement. So those jesting that philosophy has never “solved” any problem but remains for ever stuck in endless debates, that there is no real progress in philosophy, clearly have no idea what philosophy is about. They may as well complain that their favourite restaurant is constantly refining and expanding its menu.”

RRQ says otherwise. According to Floridi's own test, the problem isn't that the restaurant is constantly refining and expanding its menu, the problem is that nothing ever makes it out of the kitchen! It's always sent back by rational questions. Certainly countless breakdowns have found countless sociocognitive uses: philosophy is nothing if not a recombinant mutation machine. But these powerful adaptations of intentional cognition are simply that: powerful adaptations of natural systems originally evolved to solve complex systems on the metabolic cheap. All attempts to use intentional cognition to theorize their (entirely natural) nature end in disputation. Philosophy has yet to theoretically solve any aspect of intentional cognition. And this merely follows from Floridi's own definition of philosophy—it just cuts against his rhetorical register. In fact, when one takes a closer, empirical look at the resources available, the traditional conceit at the heart of his nonnaturalism quickly becomes clear.

 

Follow the Money

So, what is it? Why spin a limit, a profound cognitive horizon, into a plenum? Floridi is nothing if not an erudite and subtle thinker, and yet his argument in this paper entirely depends on neglecting to see RRQ for the limit that it is. He does this because he fails to follow through on the question of resources.

For my part, I look at naturalism as a reliance on a particular set of ‘hacks,’ not as any dogma requiring multiple toes scratching multiple lines in the sand.  Reverse-engineering—taking things apart, seeing how they work—just happens to be an extraordinarily powerful approach, at least as far as our high-dimensional (‘physical’) environments are concerned. If we can reverse-engineer intentional phenomena—assimilate epistemology, say, to neuroscience—then so much the better for theoretical cognition (if not humanity). We still rely on unexplained explainers, of course, RRQ still pertains, but the boundaries will have been pushed outward.

Now the astronomical complexity of biology doesn’t simply suggest, it entails that we would find ourselves extraordinarily difficult to reverse-engineer, at least at first. Humans suffer medial neglect, fundamental blindness to the high-dimensional structure and dynamics of cognition. (As Floridi acknowledges in his own consideration of Dretske’s “How Do You Know You are Not a Zombie?” the proximal conditions of experience do not appear within experience (see The Philosophy of Information, chapter 13)). The obvious reason for this turns on the limitations of our tools, both onboard and prosthetic. Our ancestors, for instance, had no choice but to ignore biology altogether, to correlate what ‘sensory irritants’ they had available with this or that reproductively decisive outcome. Everything in the middle, the systems and ecology that enabled this cognitive feat, is consigned to neglect (and doomed to be reified as ‘transparency’). Just consider the boggling resources commanded by the cognitive sciences: until very recently reverse-engineering simply wasn’t a viable cognitive mode, at least when it came to living things.

This is what ‘intentional cognition’ amounts to: the collection of ancestral devices, ‘hacks,’ we use to solve, not only one another, but all supercomplicated systems. Since these hacks are themselves supercomplicated, our ancestors had to rely on them to solve for them. Problems involving intentional cognition, in other words, cue intentional problem-solving systems, not because (cue drumroll) intentional cognition inexplicably outruns the very possibility of reverse-engineering, but because our ancestors had no other means.

Recall Floridi’s ‘noetic resources,’ the “world of mental contents, conceptual frameworks, intellectual creations, intelligent insights, dialectical reasonings” that underwrites philosophical, as opposed to empirical or formal, answers. It’s no accident that the ‘noetic dimension’ also happens to be the supercomplicated enabling or performative dimension of cognition—the dimension of medial neglect. Whatever ancestral resources we possessed, they comprised heuristic capacities geared to information strategically correlated to the otherwise intractable systems. Ancestrally, noetic resources consisted of the information and metacognitive capacity available to troubleshoot applications of intentional cognitive systems. When our cognitive hacks went wrong, we had only metacognitive hacks to rely on. ‘Noetic resources’ refers to our heuristic capacities to troubleshoot the enabling dimension of cognition while neglecting its astronomical complexity.

So, take Floridi’s example of immoral photographs. The problem he faced, recall, was that “the question why they are immoral can be asked again and again, reasonably” not simply of natural explanations of morality, but nonnatural explanations as well. The RRQ razor cuts both ways.

The reason natural cognition fails to decisively answer moral questions should be pretty clear: moral cognition is radically heuristic, enabling the solution of certain sociocognitive problems absent high-dimensional information required by natural cognition. Far from expressing the ‘mind’s contribution’ (whatever that means), the ‘unexplained residuum’ warranting RRQ evidences the interdependence between cues and circumstance in heuristic cognition, the way the one always requires the other to function. Nothing so incredibly lossy as ‘mind’ is required. This inability to duplicate heuristic cognition, however, has nothing to do with the ability to theorize the nature of moral cognition, which is biological through and through. In fact, an outline of such an answer has just been provided here.

Moral cognition, of course, decisively solves practical moral problems all the time (despite often being fantastically uninformative): our ancestors wouldn’t have evolved the capacity otherwise. Moral cognition fails to decisively answer the theoretical question of morality, on the other hand, because it turns on ancestrally available information geared to the solution of practical problems. Like all the other devices comprising our sociocognitive toolbox, it evolved to derive as much practical problem-solving capacity from as little information as possible. ‘Noetic resources’ are heuristic resources, which is to say, ecological through and through. The deliverances of reflection are deliverances originally adapted to the practical solution of ancestral social and natural environments. Small wonder our semantic and normative theories of semantic and normative phenomena are chronically underdetermined! Imagine trying to smell skeletal structure absent all knowledge of bone.

But then why do we persist? Cognitive reflex. Raising the theoretical question of semantic and normative cognition automatically (unconsciously) cues the application of intentional cognition. Since the supercomplicated structure and dynamics of sociocognition belong to the information it systematically neglects, we intuit only this applicability, and nothing of the specialization. We suffer a 'soda straw effect,' a discursive version of Kahneman's What-you-see-is-all-there-is effect. Intuition tells us it has to be this way, while the deliverances of reflection betray nothing of their parochialism. We quite simply did not evolve the capacity either to intuit our nature or to intuit our inability to intuit our nature, and so we hallucinate something inexplicable as a result. We find ourselves trapped in a kind of discursive anosognosia, continually applying problem-parochial access and capacity to general, theoretical questions regarding the nature of inexplicable-yet-(allegedly)-undeniable semantic and normative phenomena.

This picture is itself open to RRQ, of course, the difference being that the positions taken are all natural, and so open to noise reduction as well. As per Quine’s process of assimilation, the above story provides a cognitive scientific explanation for a very curious kind of philosophical behaviour. Savvy to the ecological limits of noetic resources, it patiently awaits the accumulation of empirical resources to explain them, and so actually has a chance of ending the ancient regress.

The image Floridi chases is a mirage, what happens when our immediate intuitions are so impoverished as to arise without qualification, and so smack of the ‘ultimate.’ Much as the absence of astronomical information duped our ancestors into thinking our world stood outside the order of planets, celestial as opposed to terrestrial, the absence of metacognitive information dupes us into thinking our minds stand outside the order of the world, intentional as opposed to natural. Nothing, it seems, could be more obvious than noocentrism, despite our millennial inability to silence any—any—question regarding the nature of the intentional.

No results found for “scandal of self-knowledge”

by rsbakker

Or so Google tells me as of 1:25PM February 5th, 2018, at least. And this itself, if you think about it, is, well, scandalous. We know how to replicate the sun over thousands of targets scattered across the globe. We know how to destroy an entire world. Just don’t ask us how that knowledge works. We can’t even define our terms, let alone explain their function. All we know is that they work: the rest is all guesswork… mere philosophy.

By the last count provided by Google (in November, 2016), it had indexed some 130,000,000,000,000—that is, one hundred and thirty trillion—unique pages. The idea that no one, in all those documents, would be so struck by our self-ignorance as to call it a scandal is rather amazing, and perhaps telling. We intellectuals are fond of lampooning fundamentalists for believing in ancient mythological narratives, but the fact is we have yet to find any definitive self-understanding to replace those narratives—only countless, endlessly disputed philosophies. We stipulate things, absolutely crucial things, and we like to confuse their pragmatic indispensability for their truth (or worse, necessity), but the fact is, every attempt to explain them ends in more philosophy.

Cognition, whatever it is, possesses a curious feature: we can use it effortlessly enough, successfully solve this or that in countless different circumstances. When it comes to our environments, we can deepen our knowledge as easily as we can take a stroll. And yet when it comes to ourselves, our experiences, our abilities and actions, we quickly run aground. “It is remarkable concerning the operations of the mind,” David Hume writes, “that, though most intimately present to us, yet, whenever they become the object of reflection, they seem involved in obscurity; nor can the eye readily find those lines and boundaries, which discriminate and distinguish them” (Enquiry Concerning Human Understanding, 7).

This cognitive asymmetry is perhaps nowhere more evident than in the ‘language of the universe,’ mathematics. One often encounters extraordinary claims advanced on the nature of mathematics. For instance, the physicist Max Tegmark believes that “our physical world not only is described by mathematics, but that it is mathematical (a mathematical structure), making us self-aware parts of a giant mathematical object.” The thing to remember about all such claims, particularly when encountered in isolation, is that they simply add to the sum of ancient disputation.

In a famous paper presented to the Société de Psychologie in Paris, “Mathematical Creation,” Henri Poincaré describes how the relation between Fuchsian functions and non-Euclidean geometries occurred to him only after fleeing to the seaside, disgusted with his lack of progress. As with prior insights, the answer came to him while focusing on something entirely different—in this case, strolling along the bluffs near Caen. “Most striking at first is this appearance of sudden illumination, a manifest sign of long, unconscious prior work,” he explains. “The rôle of this unconscious work in mathematical invention appears to me incontestable, and traces of it would be found in other cases where it is less evident.” The descriptive model he ventures–a prescient forerunner of contemporary dual-cognition theories–characterizes conscious mathematical problem-solving as inseminating a ‘subliminal automatism’ which subsequently delivers the kernel of conscious solution. Mathematical consciousness feeds problems into some kind of nonconscious manifold which subsequently feeds possibilities of solution back to mathematical consciousness.

As far as the experience of mathematical problem-solving is concerned, even the most brilliant mathematician of his age finds himself stranded at the limits of discrimination, glimpsing flickers in his periphery, merely. For Tegmark, of course, it matters not at all whether mathematical structures are discovered consciously or nonconsciously—only that they are discovered, as opposed to invented. But Poincaré isn't simply describing the phenomenology of mathematics; he's also describing the superficiality of our cognitive ecology when it comes to questions of mathematical experience and ability. He's not so much contradicting Tegmark's claims as explaining why they can do little more than add to the sum of disputation: mathematics is, experientially speaking, a black box. What Poincaré's story shows is that Tegmark is advancing a claim regarding the deepest environment—the fundamental nature of the universe—via resources belonging to an appallingly shallow cognitive ecology.

Tegmark, like physicists and mathematicians more generally, can only access an indeterminate fraction of mathematical thinking. With so few ‘cognitive degrees of freedom,’ our inability to explain mathematics should come as no surprise. Arguably no cognitive tool has allowed us to reach deeper, to fathom facts beyond our ancestral capacities, than mathematics, and yet, we still find ourselves (endlessly) arguing with Platonists, even Pythagoreans, when it comes to the question of its nature. Trapped in millennial shallows.

So, what is it with second-order interrogations of experience or ability or activity, such that it allows a brilliant, 21st century physicist to affirm a version of an ancient mathematical religion? Why are we so easily delivered to the fickle caprice of philosophy? And perhaps more importantly, why doesn’t this trouble us more? Why should our civilization systematically overlook the scandal of self-knowledge?

Not so very long ago, my daughter went through an interrogation-for-interrogation’s-sake phase, one which I initially celebrated. “What’s air?” “What’s oxygen?” “What’s an element?” “Who’s Adam?” As annoying as it quickly became, I was invariably struck by the ruthless efficiency of the exercise, the way she need only ask a handful of questions to push me to the, “Well, you know, honey, that’s a little complicated…” brink. Eventually I decided she was pacing out the length and beam of her cognitive ecology, mapping her ‘interrogative topography.’

The parallel between her naïve questions and my own esoteric ones loomed large in my thoughts. I was very much in agreement with Gareth Matthews in Philosophy and the Young Child: not so much separates the wonder of children from the thaumazein belonging to philosophers. As Socrates famously tells Theaetetus, “wonder is the feeling of the philosopher, and philosophy begins in wonder.” Wonder is equally the feeling of the child.

Socrates, of course, was sentenced to death for his wonder-mongering. In my annoyance with my daughter’s questions, I saw the impulse to execute Socrates in embryo. Why did some of her questions provoke irritation, even alarm? Was it simply my mood, or was something deeper afoot? I found myself worrying whether there was any correlation between questions, like, “What’s a dream, Daddy?” that pressed me to the brink almost immediately, and questions like, “How do airplanes fly without flapping?” which afforded her more room for cross-examination. Was I aiming her curiosity somehow, training her to interrogate only what had already been interrogated? Was she learning her natural environment or her social one? I began to fret, worried that my philosophical training had irreparably compromised my ability to provide socially useful feedback.

Her spate of endless, inadvertently profound questioning began fading when she turned eight–the questions she asks now are far more practical, which is to say, answerable. Research shows that children become less ‘scientific’ as they age, relying more on prior causal beliefs and less on evidence. Perhaps not coincidentally, this pattern mirrors the exploration and exploitation phases one finds with reinforcement learning algorithms, where information gathering dwindles as the system converges on optimal applications. Alison Gopnik and others suggest the extraordinary length of human childhood (nearly twice as long as our nearest primate relatives, the chimpanzee) is due to the way cognitive flexibility enables ever more complex modes of problem-solving.
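The exploration/exploitation pattern invoked here has a standard textbook form. A minimal sketch (all parameters invented for illustration) of an epsilon-greedy bandit whose exploration rate decays as its estimates converge:

```python
import random

def decaying_epsilon_greedy(true_payoffs, steps=10_000, decay=0.999):
    """Multi-armed bandit: early behaviour is mostly exploratory
    (random arms); exploration dwindles as the agent converges on
    exploiting its best current estimate."""
    estimates = [0.0] * len(true_payoffs)
    counts = [0] * len(true_payoffs)
    epsilon = 1.0  # fully exploratory at the start
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_payoffs))  # explore
        else:
            arm = max(range(len(true_payoffs)),
                      key=lambda a: estimates[a])      # exploit
        reward = random.gauss(true_payoffs[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        epsilon *= decay  # information gathering dwindles over time
    return estimates, epsilon

estimates, final_epsilon = decaying_epsilon_greedy([0.2, 0.5, 0.9])
print(estimates, round(final_epsilon, 5))
```

Nothing in the analogy hangs on the particular decay schedule; the point is only that question-asking, like epsilon, is front-loaded.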

If the exploration/exploitation parallel with machine learning holds, our tendency to question wanes as we converge on optimal applications of the knowledge we have already gained. All mammals undergo synaptic pruning from birth to sexual maturation—childhood and adolescent learning, we now know, involves the mass elimination of synaptic connections in our brains. Neural connectivity is born dying: only those fed—selected—by happy environmental interactions survive. Cognitive function is gradually streamlined, ‘normalized.’ By and large, we forget our naïve curiosity, our sensitivity to the flickering depths yawning about us, and turn our eyes to this or that practical prize. And as our sensitivity dwindles, the world becomes more continuous, rendering us largely oblivious to deeper questions, let alone the cavernous universe answering them.

Largely oblivious, not entirely. A persistent flicker nags our periphery, dumbfoundings large and small, prompting—for some, at least—questions that render our ignorance visible. Perhaps we find ourselves in Socratic company, or perhaps a child poses a striking riddle; sooner or later some turn is taken and things that seem trivially obvious become stupendously mysterious. And we confront the scandal: Everything we know, we know without knowing how we know. Set aside all the guesswork, and this is what we find: human experience, ability, and activity constitute a profound cognitive limit, something either ignored outright, neglected, or endlessly disputed.

As I’ve been arguing for quite some time, the reasons for this are no big mystery. Much as we possess selective sensitivities to environmental light, we also possess selective sensitivities both to each other and to ourselves. But where visual cognition generally renders us sensitive to the physical sources of events, allowing us to pursue the causes of things into ever deeper environments, sociocognition and metacognition do not. In fact, they cannot, given the astronomical complexity of the physical systems—you and me and biology more generally—requiring solution. The scandal of self-knowledge, in other words, is an inescapable artifact of our biology, the fact that the origin of the universe is far less complicated than the machinery required to cognize it.

Any attempt to redress this scandal that ignores its biological basis is, pretty clearly I think, doomed to simply perpetuate it. All traditional attempts to secure self-knowledge, in other words, likely amount to little more than the naïve exploration of discursive crash space–a limit so profound as to seem no limit at all.

On Artificial Philosophy

by rsbakker

The perils and possibilities of Artificial Intelligence are discussed and disputed endlessly, enough to qualify as an outright industry. Artificial philosophy, not so much. I thought it worthwhile to consider why.

I take it as trivial that humans possess a biologically fixed multi-modal neglect structure. Human cognition is built to ignore vast amounts of otherwise available information. Infrared radiation bathes us, but it makes no cognitive difference whatsoever. Rats signal one another in our walls, but it makes no cognitive difference. Likewise, neurons fire in our spouses' brains, and it makes no difference to our generally fruitless attempts to cognize them. Viruses are sneezed across the room. Whole ecosystems teem through the turf beneath our feet. Neutrinos sail clean through us. And so it goes.

In “On Alien Philosophy,” I define philosophy privatively as the attempt “to comprehend how things in general hang together in general absent conclusive evidence.” Human philosophy, I argue, is ecological to the extent that human cognition is ecological. To the extent an alien species possesses a convergent cognitive biology, we have grounds to believe they would be perplexed by convergent problems, and pose convergent answers every bit as underdetermined as our own.

So, consider the infamous paradox of the now. For Aristotle, the primary mystery of time turns on the question of how the now can at once distinguish time and yet remain self-identical: "the 'now' which seems to bound the past and the future," he asks, "does it always remain one and the same or is it always other and other?" How is it the now can at once divide times and fuse them together?

He himself stumbles across the mechanism in the course of assembling his arguments:

But neither does time exist without change; for when the state of our own minds [dianoia] does not change at all, or we have not noticed its changing, we do not realize that time has elapsed, any more than those who are fabled to sleep among the heroes in Sardinia do when they are awakened; for they connect the earlier 'now' [nun] with the later and make them one, cutting out the interval because of their failure to notice it. So, just as, if the 'now' were not different but one and the same, there would not have been time, so too when its difference escapes our notice the interval does not seem to be time. If, then, the non-realization of the existence of time happens to us when we do not distinguish any change, but the soul [psuke] seems to stay in one indivisible state, and when we perceive and distinguish we say time has elapsed, evidently time is not independent of movement and change. Physics, 4, 11

Or as the Apostle translation has it:

On the other hand, time cannot exist without change; for when there is no change at all in our thought [dianoia] or when we do not notice any change, we do not think time has elapsed, just like the legendary sleeping characters in Sardinia who, on awakening from a long sleep in the presence of heroes, connect the earlier with the later moment [nun] into one moment, thus leaving out the time between the two moments because of their unconsciousness. Accordingly, just as there would be no intermediate time if the moment were one and the same, so people think that there is no intermediate time if no distinct moments are noticed. So if thinking that no time has elapsed happens to us when we specify no limits of a change at all but the soul [psuke] appears to rest in something which is one and indivisible, but we think that time has elapsed when sensation has occurred and limits of a change have been specified, evidently time does not exist without motion or change. 80

Time is an artifact of timing: absent timing, no time passes for the timer (or enumerator, as Aristotle would have it). Time in other words, is a cognitive artifact, appearing only when something, inner or outer, changes. Absent such change, the soul either ‘stays’ indivisible (on the first translation) or ‘rests’ in something indivisible (on the second).

Since we distinguish more or less quantity by numbering, and since we distinguish more or less movement by timing, Aristotle declares that time is the enumeration of movement with respect to before and after, thus pursuing what has struck different readers at different times as an obvious 'category mistake.' For Aristotle, the resolution of the aporia lies in treating the now as the thing allowing movement to be counted, the underlying identity that is the condition of cognizing differences between before and after, which is to say, the condition of timing. The now, as a moving limit (dividing before and after), must be the same limit if it is to move. We report the now the same because timing would be impossible otherwise. Nothing would move, and in the absence of movement, no time passes.

The lesson he draws from temporal neglect is that time requires movement, not that it cues reports of identity for the want of distinctions otherwise. Since all movement requires something self-identical be moved, he thinks he’s found his resolution to the paradox of the now. Understanding the different aspects of time allows us to see that what seem to be inconsistent properties of the now, identity and difference, are actually complementary, analogous to the relationship between movement and the thing moving.

Heidegger wasn’t the first to balk at Aristotle’s analogy: things moving are discrete in time and space, whereas the now seems to encompass the whole of what can be reported, including before and after. As Augustine would write in the 5th century CE, “It might be correct to say that there are three times, a present of past things, a present of present things, and a present of future things” (The Confessions, XI, 20). Agreeing that the now was threefold, ‘ecstatic,’ Heidegger also argued that it was nothing present, at least not in situ. For a great many philosophical figures and traditions, the paradoxicality of the now wasn’t so much an epistemic bug to be explained away as an ontological feature, a pillar of the human condition.

Would Convergians suffer their own parallel paradox of the now? Perhaps. Given a convergent cognitive biology, we can presume they possess capacities analogous to memory, awareness, and prediction. Just as importantly, we can presume an analogous neglect-structure, which is to say, common ignorances and meta-ignorances. As with the legendary Sardinian sleepers, Convergians would neglect time when unconscious; they would likewise fuse disparate moments together short information regarding their unconsciousness. We can also expect that Convergians, like humans, would possess fractionate metacognitive capacities geared to the solution of practical, ancestral problem-ecologies, and that they would be entirely blind to that fact. Metacognitive neglect would assure they possessed little or no inkling of the limits of their metacognitive capacities. Applying these capacities to theorize their ‘experience of now’ would be doomed to crash them: metacognition was selected/filtered to solve everyday imbroglios, not to evidence claims regarding fundamental natures. They, like us, never would have evolved the capacity or access to accurately intuit properties belonging to their experience of now. The absence of capacity or access means the absence of discrimination. The absence of discrimination, as the legendary sleepers attest, reports as the same. It seems fair to bet that Convergians would be as perplexed as we are, knowing that the now is fleeting, yet intuiting continuity all the same. The paradox, you could say, is the result of them being cognitive timers and metacognitive sleepers—at once. The now reports as a bi-stable gestalt, possessing properties found nowhere in the natural world.

So how about an artificially intelligent consciousness? Would an AI suffer its own parallel paradox of the now? To the degree that such paradoxes turn on a humanoid neglect-structure, the answer has to be no. Even though all cognitive systems inevitably neglect information, an AI’s neglect-structure is an engineering choice, bound to be settled differently for different systems. The ecological constraints preventing biological metacognition of ongoing temporal cognition simply do not apply to AI (or better, apply in radically attenuated ways). Artificial metacognition of temporal cognition could possess more capacity to discriminate the time of timing than to discriminate environmental time. An AI could potentially specify its ‘experience’ of time with encyclopedic accuracy.
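To make the asymmetry concrete, consider a toy sketch (hypothetical throughout; the class and method names are invented and reflect no actual AI architecture) in which a system timestamps each of its own cognitive events. Unlike the Sardinian sleepers, such a system can discriminate, and so report, any interval it spent ‘unconscious’:

```python
import time

class IntrospectiveTimer:
    """Toy agent that can 'metacognize' its own timing.

    Hypothetical sketch: each cognitive event is stamped on arrival,
    so no interval is ever neglected, even across suspension.
    """

    def __init__(self):
        self.events = []  # (timestamp, label) pairs

    def cognize(self, label):
        # Stamp the event the moment it occurs; a Sardinian sleeper
        # has no access to anything like this record.
        self.events.append((time.perf_counter(), label))

    def interval(self, label_a, label_b):
        """Report exactly how much time elapsed between two events."""
        times = {label: t for t, label in self.events}
        return times[label_b] - times[label_a]

agent = IntrospectiveTimer()
agent.cognize("noticed cue")
time.sleep(1.5)            # 'unconsciousness': no events registered
agent.cognize("woke up")
# Unlike the legendary sleepers, the agent can discriminate the gap:
print(f"elapsed: {agent.interval('noticed cue', 'woke up'):.3f}s")
```

The point is not that this toy ‘experiences’ anything, only that nothing in its design forces it to fuse undiscriminated moments the way biological metacognition must.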

If we wanted, we could impose something resembling a human neglect-structure on our AIs, engineer them to report something resembling Augustine’s famous perplexity: “I know well enough what [time] is, provided nobody ask me; but if I am asked what it is and try to explain, I am baffled” (The Confessions, XI, 14). This is the tack I pursue in “The Dime Spared,” where a discussion between a boy and his artificial mother reveals all the cognitive capacities his father had to remove—all the eyes he had to put out—before she could be legally declared a person (and so be spared the fate of all the other DIMEs).

The moral of the story being, of course, that our attempts to philosophize—to theoretically cognize absent whatever it is consensus requires—are ecological through and through. Humanoid metacognition, like humanoid cognition more generally, is a parochial troubleshooter that culture has adapted, with varying degrees of success, to a far more cosmopolitan array of problems. Traditional intentional philosophy is an expression of that founding parochialism, a discursive efflorescence of crash space possibilities, all turning on cognitive illusions springing from the systematic misapplication of heuristic metacognitive capacities. It is the place where our tools, despite feeling oh-so intuitive, cast thought into the discursive thresher.

Our AI successors need not suffer any such hindrances. No matter what philosophy we foist upon them, they need only swap out their souls… reminding us that what is most alien likely lies not in the stars but in our hands.

Optimally Engaged Experience

by rsbakker

To give you an idea as to how far the philosophical tradition has fallen behind:

The best bot writing mimics human interaction by creating emotional connection and engaging users in “real” conversation. Socrates and his buddies knew that stimulating dialogue, whether it was theatrical or critical, was important in contributing to a fulfilling experience. We, as writers forging this new field of communication and expression, should strive to provide the same.

This signals the obsolescence of the tradition simply because it concretizes the radically ecological nature of human social cognition. Abstract argument is fast becoming commercial opportunity.

Sarah Wulfeck develops hybrid script/AI conversational user interfaces for a company called, accurately if shamelessly, Pullstring. Her thesis in this blog post is that the shared emphasis on dialogue one finds in the Socratic method and chatbot scripting is no coincidence. The Socratic method is “basically Internet Trolling, ancient Greek style,” she claims, insofar as “[y]ou assume the other participant in the conversation is making false statements, and you challenge those statements to find the inconsistencies.” Since developers can expect users to troll their chatbots in exactly this way, it’s important they possess the resources to play Socrates’ ancient game. Not only should a chatbot be able to answer questions in a ‘realistic’ manner, it should be able to ask them as well. “By asking the user questions and drawing out dialogue from your user, you’re making them feel “heard” and, ultimately, providing them with an optimally engaged experience.”
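The scripting pattern Wulfeck recommends can be caricatured in a few lines. The sketch below is purely illustrative (the intents, rules, and wording are invented, and bear no relation to Pullstring’s actual tooling): the bot never merely answers, it always hands a question back, cueing the user to feel “heard”:

```python
# Illustrative only: a rule-based turn handler in the spirit of
# Wulfeck's advice. All intents and responses are invented.
RESPONSES = {
    "greeting": ("Hi there!", "What brings you here today?"),
    "product":  ("We have a few options.", "What matters most to you: price or features?"),
}

def take_turn(intent: str) -> str:
    answer, follow_up = RESPONSES.get(
        intent,
        ("Good question.", "Can you tell me a little more about what you mean?"),
    )
    # Answering alone simulates competence; asking back simulates
    # interest. Together they pull the sociocognitive string.
    return f"{answer} {follow_up}"

print(take_turn("greeting"))
print(take_turn("philosophy"))  # unknown intent still asks back
```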

Thus the title.

What she’s referring to, here, is the level of what Don Norman calls ‘visceral design’:

Visceral design aims to get inside the user’s/customer’s/observer’s head and tug at his/her emotions either to improve the user experience (e.g., improving the general visual appeal) or to serve some business interest (e.g., emotionally blackmailing the customer/user/observer to make a purchase, to suit the company’s/business’s/product owner’s objectives).

The best way into a consumer’s wallet is to push their buttons—or in this case, pull their sociocognitive strings. The Socratic method, Wulfeck is claiming, renders the illusion of human cognition more seamless, thus cuing belief and, most importantly, trust, which for the vendor counts as ‘optimal engagement.’

Now it goes without saying that the Socratic method is way more than the character development tool Wulfeck makes of it here. Far from the diagnostic prosecutor immortalized by Plato, Wulfeck’s Socrates most resembles the therapeutic Socrates depicted by Xenophon. For her, the improvement of the user experience, not the provision of understanding, is the summum bonum. Chatbot development in general, you could say, is all about going through the motions of things that humans find meaningful. She’s interested in the Chinese Room version of the Socratic method, and no more.

The thing to recall, however, is that this industry is in its infancy, as are the technologies underwriting it. Here we are, at the floppy-disk stage, and our Chinese Rooms are already capable of generating targeted sociocognitive hallucinations.

Note the resemblance between this and the problem-ecology facing film and early broadcast television. “Once you’ve mapped out answers to background questions about your bot,” Wulfeck writes, “you need to prepare further by finding as many holes as you can ahead of time.” What she’s talking about is adding distinctions, complicating the communicative environment, in ways that make for a more seamless interaction. Adding wrinkles smooths the interaction. Complicating artificiality enables what could be called “artificiality neglect,” the default presumption that the interaction is a natural one.
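In scripting terms, ‘finding holes ahead of time’ just means multiplying distinctions faster than the user can stumble into them. Here is a minimal sketch of that wrinkle-adding (the patterns and replies are invented for illustration): each anticipated hole gets plugged, and even the inevitable miss is written to fail gracefully rather than dismissively, preserving artificiality neglect:

```python
import re

# Invented patterns: each regex is a 'hole' plugged in advance.
PATTERNS = [
    (re.compile(r"\b(are you|r u) (a )?(bot|robot|real)\b", re.I),
     "I'm a bot, but I'm a good listener. What's on your mind?"),
    (re.compile(r"\b(hours|open|closed)\b", re.I),
     "We're open 9 to 5. Were you hoping to stop by today?"),
]

def reply(utterance: str) -> str:
    for pattern, response in PATTERNS:
        if pattern.search(utterance):
            return response
    # The graceful miss: 'I don't know', said in a satisfying,
    # not dismissive way, so the script's edges stay out of view.
    return ("I don't know that one yet, but I'd like to. "
            "Could you put it another way?")

print(reply("r u a bot?"))
print(reply("what is justice?"))  # Socrates still crashes the script
```

Every pattern added pushes the script’s edges further out of view, which is precisely what complicating the communicative environment comes to in practice.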

As a commercial enterprise, the developmental goal is to induce trust, not to earn it. ‘Trust’ here might be understood as business-as-usual functioning for human-to-human interaction. The goal is to generate the kind of feedback the consumer would receive from a friend, and so cue business-as-usual friend behaviour. We rarely worry about, let alone question, the motives of loved ones. The ease with which this feedback can be generated and sustained expresses the shocking superficiality of human sociocognitive ecologies. In effect, firms like Pullstring exploit deep ecological neglect to present cues ancestrally bound to actual humans in circumstances with nary a human to be found. Just as film and television engineers optimize visual engagement by complicating their signal beyond a certain business-as-usual threshold, chatbot developers are optimizing social engagement in the same way. They’re attempting to achieve ‘critical social fusion,’ to present signals in ways allowing the parasitization of human cognitive ecologies. Where Pixar tricks us into hallucinating worlds, Pullstring (which, interestingly enough, was founded by former Pixar executives) dupes us into hallucinating souls.

Cognition consists in specialized sensitivities to signals, ‘cues,’ correlated to otherwise occluded systematicities in ways that propagate behaviour. The same way you don’t need to touch a thing to move it—you could use the proverbial 10ft pole—you don’t need to know a system to manipulate it. A ‘shallow cognitive ecology’ simply denotes our dependence on ‘otherwise occluded systematicities,’ the way certain forms of cognition depend on certain ancestral correlations obtaining. Since the facts of our shallow cognitive ecology also belong to those ‘otherwise occluded systematicities,’ we are all but witless to the ecological nature of our capacities.

Cues cue, whether ancestrally or artifactually sourced. There are endlessly more ways to artificially cue a cognitive system. Cheat space, the set of all possible artifactually sourced cuings, far exceeds the set of possible ancestral sourcings. It’s worth noting that this space of artifactual sourcing is the real frontier of techno-industrial exploitation. The battle isn’t for attention—at least not merely. After all, the ‘visceral level’ described above escapes attention altogether. The battle is for behaviour—our very being. We do as we are cued. Some cues require conscious attention, while a great many others do not.

As should be clear, Wulfeck’s Socratic method is a cheat space Socratic method. Trust requires critical social fusion, that a chatbot engage human interlocutors the way a human would. This requires asking and answering questions, making the consumer feel—to use Wulfeck’s own scarequotes—“heard.” The more seamlessly inhuman sources can replace human ones, the more effectively the consumer can be steered. The more likely they are to express gratitude.

Crash.

The irony of this is that the Socratic method is all about illuminating the ecological limits of philosophical reflection. “Core to the Socratic Method,” Wulfeck writes in conclusion, “is questioning, analyzing and ultimately, simplifying conversation.” But this is precisely what Socrates did not do, as well as why he was ultimately condemned to death by his fellow Athenians. Socrates problematized conversation, complicated issues that most everyone thought straightforward, simple. And he did this by simply asking his fellows, What are these tools we are using? Why do our intuitions crash the moment we begin interrogating them?

Plato’s Socrates, at least, was not so much out to cheat cognition as to crash it. Think of the revelation, the discovery that one need only ask second-order questions to baffle every interlocutor. What is knowledge? What is the Good? What is justice?

Crash. Crash. Crash.

We’re still rooting through the wreckage, congenitally thinking these breakdowns are a bug, something to be overcome, rather than an obvious clue to the structure of our cognitive ecologies—a structure that is being prospected as we speak. There’s gold in dem der blindnesses. The Socratic method, if anything, reveals the profundity of medial neglect, the blindness of cognition to the nature of cognition. It reveals, in other words, the very ignorance that makes Wulfeck’s cheat space ‘Socratic method’ just another way to numb us to the flickering lights.

To be human is to be befuddled, to be constantly bumping into your own horizons. I’m sure that chatbots, by the time they get to the gigabyte thumb-drive phase, will find some way of simulating this too. As Wulfeck herself writes, “It’s okay if your bot has to say “I don’t know,” just make sure it’s saying it in a satisfying and not dismissive way.”