Enlightenment How? Omens of the Semantic Apocalypse
“In those days the world teemed, the people multiplied, the world bellowed like a wild bull, and the great god was aroused by the clamor. Enlil heard the clamor and he said to the gods in council, ‘The uproar of mankind is intolerable and sleep is no longer possible by reason of the babel.’ So the gods agreed to exterminate mankind.” –The Epic of Gilgamesh
We know that human cognition is largely heuristic, and as such dependent upon cognitive ecologies. We know that the technological transformation of those ecologies generates what Pinker calls ‘bugs,’ heuristic miscues due to deformations in ancestral correlative backgrounds. In ancestral times, our exposure to threat-cuing stimuli possessed a reliable relationship to actual threats. Not so now, thanks to things like the nightly news, which generate (via, Pinker suggests, the availability heuristic (42)) exaggerated estimations of threat.
The toll of scientific progress, in other words, is cognitive ecological degradation. So far that degradation has left the problem-solving capacities of intentional cognition largely intact: the very complexity of the systems requiring intentional cognition has hitherto rendered cognition largely impervious to scientific renovation. Throughout the course of revolutionizing our environments, we have remained a blind-spot, the last corner of nature where traditional speculation dares contradict the determinations of science.
This is changing.
We see animals in charcoal across cave walls so easily because our visual systems leap to conclusions on the basis of so little information. The problem is that ‘so little information’ also means so easily reproduced. The world is presently engaged in a mammoth industrial research program bent on hacking every cue-based cognitive reflex we possess. More and more, the systems we evolved to solve our fellow human travelers will be contending with artificial intelligences dedicated to commercial exploitation. ‘Deep information,’ meanwhile, is already swamping the legal system, even further problematizing the folk conceptual (shallow information) staples that ground the system’s self-understanding. Creeping medicalization continues unabated, slowly scaling back warrant for things like character judgment in countless different professional contexts.
Now that the sciences are colonizing the complexities of experience and cognition, we can see the first clear-cut omens of the semantic apocalypse.
He assiduously avoids the topic in Enlightenment Now, but in The Blank Slate, Pinker devotes several pages to deflating the arch-incompatibility between natural and intentional modes of cognition, the problem of free will:
“But how can we have both explanation, with its requirement of lawful causation, and responsibility, with its requirement of free choice? To have them both we don’t need to resolve the ancient and perhaps irresolvable antinomy between free will and determinism. We have only to think clearly about what we want the notion of responsibility to achieve.” (180)
He admits there’s no getting past the ‘conflict of intuitions’ underwriting the debate. Since he doesn’t know what intentional and natural cognition amount to, he doesn’t understand their incompatibility, and so proposes we simply side-step the problem altogether by redefining ‘responsibility’ to mean what we need it to mean—the same kind of pragmatic redefinition proposed by Dennett. He then proceeds to adduce examples of ‘clear thinking’ by providing guesses regarding ‘holding responsible’ as deterrence, which is more scientifically tractable. “I don’t claim to have solved the problem of free will, only to show that we don’t need to solve it to preserve personal responsibility in the face of an increasing understanding of the causes of behaviour” (185).
Here we can see how profoundly Pinker (as opposed to Nietzsche and Adorno) misunderstands the profundity of Enlightenment disenchantment. The problem isn’t that one can’t cook up alternate definitions of ‘responsibility,’ the problem is that anyone can, endlessly. ‘Clear thinking’ is liable to serve Pinker as well as ‘clear and distinct ideas’ served Descartes, which is to say, as more grease for the speculative mill. No matter how compelling your particular instrumentalization of ‘responsibility’ seems, it remains every bit as theoretically underdetermined as any other formulation.
There’s a reason such exercises in pragmatic redefinition stall in the speculative ether. Intentional and mechanical cognitive systems are not optional components of human cognition, nor are the intuitions we are inclined to report. Moreover, as we saw in the previous post, intentional cognition generates reliable predictions of system behaviour absent access to the actual sources of that behaviour. Intentional cognition is source-insensitive. Natural cognition, on the other hand, is source-sensitive: it generates predictions of system behaviour via access to the actual sources of that behaviour.
Small wonder, then, that our folk intentional intuitions regularly find themselves scuttled by scientific explanation. ‘Free will,’ on this account, is ancestral lemonade, a way to make the best out of metacognitive lemons, namely, our blindness to the sources of our thought and decisions. To the degree it relies upon ancestrally available (shallow) saliencies, any causal (deep) account of those sources is bound to ‘crash’ our intuitions regarding free will. The free will debate that Pinker hopes to evade with speculation can be seen as a kind of crash space, the point where the availability of deep information generates incompatible causal intuitions and intentional intuitions.
The confusion here isn’t (as Pinker thinks) ‘merely conceptual’; it’s a bona fide, material consequence of the Enlightenment, a cognitive version of a visual illusion. Too much information of the wrong kind crashes our radically heuristic modes of cognizing decisions. Stipulating definitions, not surprisingly, solves nothing insofar as it papers over the underlying problem—this is why it merely adds to the literature. Responsibility-talk cues the application of intentional cognitive modes; it’s the incommensurability of these modes with causal cognition that’s the problem, not our lexicons.
Consider the laziness of certain children. Should teachers be allowed to hold students responsible for their academic performance? As the list of learning disabilities grows, incompetence becomes less a matter of ‘character’ and more a matter of ‘malfunction’ and providing compensatory environments. Given that all failures of competence redound on cognitive infelicities of some kind, and given that each and every one of these infelicities can and will be isolated and explained, should we ban character judgments altogether? Should we regard exhortations to ‘take responsibility’ as forms of subtle discrimination, given that executive functioning varies from student to student? Is treating children like (sacred) machinery the only ‘moral’ thing to do?
So far at least. Causal explanations of behaviour cue intentional exemptions: our ancestral thresholds for exempting behaviour from moral cognition served larger, ancestral social equilibria. Every etiological discovery cues that exemption in an evolutionarily unprecedented manner, resulting in what Dennett calls “creeping exculpation,” the gradual expansion of morally exempt behaviours. Once a learning impediment has been discovered, it ‘just is’ immoral to hold those afflicted responsible for their incompetence. (If you’re anything like me, simply expressing the problem in these terms rankles!) Our ancestors, resorting to systems adapted to resolving social problems given only the merest information, had no problem calling children lazy, stupid, or malicious. Were they being witlessly cruel in doing so? Well, it certainly feels like it. Are we more enlightened, more moral, for recognizing the limits of that system, and curtailing the context of application? Well, it certainly feels like it. But then how do we justify our remaining moral cognitive applications? Should we avoid passing moral judgment on learners altogether? It’s beginning to feel like it. Is this itself moral?
This is theoretical crash space, plain and simple. Staking out an argumentative position in this space is entirely possible—but doing so merely exemplifies, as opposed to solves, the dilemma. We’re conscripting heuristic systems adapted to shallow cognitive ecologies to solve questions involving the impact of information they evolved to ignore. We can no more resolve our intuitions regarding these issues than we can stop Necker Cubes from spoofing visual cognition.
The point here isn’t that gerrymandered solutions aren’t possible, it’s that gerrymandered solutions are the only solutions possible. Pinker’s own ‘solution’ to the debate (see also, How the Mind Works, 54-55) can be seen as a symptom of the underlying intractability, the straits we find ourselves in. We can stipulate, enforce solutions that appease this or that interpretation of this or that displaced intuition: teachers who berate students for their laziness and stupidity are not long for their profession—at least not anymore. As etiologies of cognition continue to accumulate, as more and more deep information permeates our moral ecologies, the need to revise our stipulations, to engineer them to discharge this or that heuristic function, will continue to grow. Free will is not, as Pinker thinks, “an idealization of human beings that makes the ethics game playable” (HMW 55), it is (as Bruce Waller puts it) stubborn, a cognitive reflex belonging to a system of cognitive reflexes belonging to intentional cognition more generally. Foot-stomping does not change how those reflexes are cued in situ. The free-will crash space will continue to expand, no matter how stubbornly Pinker insists on this or that redefinition of this or that term.
We’re not talking about a fall from any ‘heuristic Eden,’ here, an ancestral ‘golden age’ where our instincts were perfectly aligned with our circumstances—the sheer granularity of moral cognition, not to mention the confabulatory nature of moral rationalization, suggests that it has always slogged through interpretative mire. What we’re talking about, rather, is the degree that moral cognition turns on neglecting certain kinds of natural information. Or conversely, the degree to which deep natural information regarding our cognitive capacities displaces and/or crashes once straightforward moral intuitions, like the laziness of certain children.
Or the need to punish murderers…
Two centuries ago, a murderer suffering from irregular sleep characterized by vocalizations and sometimes violent actions while dreaming would have been prosecuted to the full extent of the law. Now, however, such a murderer would be diagnosed as suffering an episode of ‘homicidal somnambulism,’ and could very likely go free. Mammalian brains do not fall asleep or awaken all at once. For some yet-to-be-determined reason, the brains of certain individuals (mostly men older than 50) suffer a form of partial arousal causing them to act out their dreams.
More and more, neuroscience is making an impact in American courtrooms. Nita Farahany (2016) has found that between 2005 and 2012 the number of judicial opinions referencing neuroscientific evidence has more than doubled. She also found a clear correlation between the use of such evidence and less punitive outcomes—especially when it came to sentencing. Observers in the burgeoning ‘neurolaw’ field think that for better or worse, neuroscience is firmly entrenched in the criminal justice system, and bound to become ever more ubiquitous.
Not only are responsibility assessments being weakened as neuroscientific information accumulates, social risk assessments are being strengthened (Gkotsi and Gasser 2016). So-called ‘neuroprediction’ is beginning to revolutionize forensic psychology. Studies suggest that inmates with lower levels of anterior cingulate activity are approximately twice as likely to reoffend as those with relatively higher levels of activity (Aharoni et al 2013). Measurements of ‘early sensory gating’ (attentional filtering) predict the likelihood that individuals suffering addictions will abandon cognitive behavioural treatment programs (Steele et al 2014). Reduced gray matter volumes in the medial and temporal lobes identify youth prone to commit violent crimes (Cope et al 2014). ‘Enlightened’ metrics assessing recidivism risks already exist within disciplines such as forensic psychiatry, of course, but “the brain has the most proximal influence on behavior” (Gaudet et al 2016). Few scientific domains illustrate the problems secondary to deep environmental information more starkly than the issue of recidivism. Given the high social cost of criminality, the ability to predict ‘at risk’ individuals before any crime is committed is sure to pay handsome preventative dividends. But what are we to make of justice systems that parole offenders possessing one set of ‘happy’ neurological factors early, while leaving others possessing an ‘unhappy’ set to serve out their entire sentence?
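As a purely illustrative sketch (the counts below are invented, not drawn from Aharoni et al), ‘twice as likely to reoffend’ amounts to a relative risk of 2, the kind of ratio a forensic assessment would compute from follow-up data:

```python
# Hypothetical follow-up counts for two groups of parolees, split by a
# neuromarker (low vs. high anterior cingulate activity). All numbers
# are invented for illustration only.
low_acc = {"reoffended": 30, "total": 100}   # low-activity group
high_acc = {"reoffended": 15, "total": 100}  # high-activity group

def relative_risk(exposed, control):
    """Rate of reoffending in the 'exposed' group relative to the control group."""
    risk_exposed = exposed["reoffended"] / exposed["total"]
    risk_control = control["reoffended"] / control["total"]
    return risk_exposed / risk_control

print(relative_risk(low_acc, high_acc))  # 2.0
```

The moral dilemma in the text begins exactly where this arithmetic ends: the ratio says nothing about what a parole board ought to do with it.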
Nothing, I think, captures the crash of ancestral moral intuitions in modern, technological contexts quite so dramatically as forensic danger assessments. Consider, for instance, the way deep information in this context has the inverse effect of deep information in the classroom. Since punishment is indexed to responsibility, we generally presume those bearing less responsibility deserve less punishment. Here, however, it’s those bearing the least responsibility, those possessing ‘social learning disabilities,’ who ultimately serve the longest. The very deficits that mitigate responsibility before conviction actually aggravate punishment subsequent to conviction.
The problem is fundamentally cognitive, and not legal, in nature. As countless bureaucratic horrors make plain, procedural decision-making need not report as morally rational. We would be mad, on the one hand, to overlook any available etiology in our original assessment of responsibility. We would be mad, on the other hand, to overlook any available etiology in our subsequent determination of punishment. Ergo, less responsibility often means more punishment.
The point, once again, is to describe the structure and dynamics of our collective sociocognitive dilemma in the age of deep environmental information, not to eulogize ancestral cognitive ecologies. The more we disenchant ourselves, the more evolutionarily unprecedented information we have available, the more problematic our folk determinations become. Demonstrating this point demonstrates the futility of pragmatic redefinition: no matter how Pinker or Dennett (or anyone else) rationalizes a given, scientifically-informed definition of moral terms, it will provide no more than grist for speculative disputation. We can adopt any legal or scientific operationalization we want (see Parmigiani et al 2017); so long as responsibility talk cues moral cognitive determinations, however, we will find ourselves stranded with intuitions we cannot reconcile.
Considered in the context of politics and the ‘culture wars,’ the potentially disastrous consequences of these kinds of trends become clear. One need only think of the oxymoronic notion of ‘commonsense’ criminology, which amounts to imposing moral determinations geared to shallow cognitive ecologies upon criminal contexts now possessing numerous deep information attenuations. Those who, for whatever reason, escaped the education system with something resembling an ancestral ‘neglect structure’ intact, those who have no patience for pragmatic redefinitions or technical stipulations will find appeals to folk intuitions every bit as convincing as those presiding over the Salem witch trials in 1692. Those caught up in deep information environments, on the other hand, will be ever more inclined to see those intuitions as anachronistic, inhumane, immoral—unenlightened.
Given the relation between education and information access and processing capacity, we can expect that education will increasingly divide moral attitudes. Likewise, we should expect a growing sociocognitive disconnect between expert and non-expert moral determinations. And given cognitive technologies like the internet, we should expect this dysfunction to become even more profound still.
Given the power of technology to cue intergroup identifications, the internet was—and continues to be—hailed as a means of bringing humanity together, a way of enacting the universalistic aspirations of humanism. My own position—one foot in academe, another foot in consumer culture—afforded me a far different perspective. Unlike academics, genre writers rub shoulders with all walks, and often find themselves debating outrageously chauvinistic views. I realized quite quickly that the internet had rendered rationalizations instantly available, that it amounted to pouring marbles across the floor of ancestral social dynamics. The cost of confirmation had plummeted to zero. Prior to the internet, we had to test our more extreme chauvinisms against whomever happened to be available—which is to say, people who would be inclined to disagree. We had to work to indulge our stone-age weaknesses in post-war 20th century Western cognitive ecologies. No more. Add to this phenomena such as the online disinhibition effect, as well as the sudden visibility of ingroup intellectual piety, and the growing extremity of counter-identification struck me as inevitable. The internet was dividing us into teams. In such an age, I realized, the only socially redemptive art was art that cut against this tendency, art that genuinely spanned ingroup boundaries. Literature, as traditionally understood, had become a paradigmatic expression of the tribalism presently engulfing us. Epic fantasy, on the other hand, still possessed the relevance required to inspire book burnings in the West.
(The past decade has ‘rewarded’ my turn-of-the-millennium fears—though in some surprising ways. The greatest attitudinal shift in America, for instance, has been progressive: it has been liberals, and not conservatives, who have most radically changed their views. The rise of reactionary sentiment and populism is presently rewriting European politics—and the age of Trump has all but overthrown the progressive political agenda in the US. But the role of the internet and social media in these phenomena remains a hotly contested one.)
The earlier promoters of the internet had banked on the notional availability of intergroup information to ‘bring the world closer together,’ not realizing the heuristic reliance of human cognition on differential information access. Ancestrally, communicating ingroup reliability trumped communicating environmental accuracy, stranding us with what Pinker (following Kahan 2011) calls the ‘tragedy of the belief commons’ (Enlightenment Now, 358), the individual rationality of believing collectively irrational claims—such as, for instance, the belief that global warming is a liberal myth. Once falsehoods become entangled with identity claims, they become the yardstick of true and false, thus generating the terrifying spectacle we now witness on the evening news.
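The ‘tragedy of the belief commons’ has the structure of a collective action problem, which a toy payoff model makes explicit (all payoff numbers here are invented for illustration; this is a sketch of the logic, not Kahan’s or Pinker’s formalism):

```python
# Toy model of the 'tragedy of the belief commons' (payoffs invented).
# Professing the ingroup belief earns a private social reward regardless
# of its truth; widespread false belief imposes a cost (failed policy,
# degraded discourse) shared by everyone.
def payoff(professes: bool, fraction_professing: float) -> float:
    social_reward = 2.0 if professes else 0.0   # ingroup standing
    shared_cost = 3.0 * fraction_professing     # borne by all alike
    return social_reward - shared_cost

# Professing is individually rational whatever everyone else does...
assert payoff(True, 0.5) > payoff(False, 0.5)
# ...yet universal profession leaves everyone worse off than universal dissent.
assert payoff(True, 1.0) < payoff(False, 0.0)
```

The dominant individual strategy (profess) produces the collectively irrational outcome, which is precisely the sense in which believing the ingroup falsehood is ‘rational’ for each while ruinous for all.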
The provision of ancestrally unavailable social information is one thing, so long as it is curated—censored, in effect—as it was in the mass media age of my childhood. Confirmation biases have to swim upstream in such cognitive ecologies. Rendering all ancestrally unavailable social information available, on the other hand, allows us to indulge our biases, to see only what we want to see, to hear only what we want to hear. Where ancestrally, we had to risk criticism to secure praise, no such risks need be incurred now. And no surprise, we find ourselves sliding back into the tribalistic mire, arguing absurdities haunted—tainted—by the death of millions.
Jonathan Albright, the research director at the Tow Center for Digital Journalism at Columbia, has found that the ‘fake news’ phenomenon, as the product of a self-reinforcing technical ecosystem, has actually grown worse since the 2016 election. “Our technological and communication infrastructure, the ways we experience reality, the ways we get news, are literally disintegrating,” he recently confessed in a NiemanLab interview. “It’s the biggest problem ever, in my opinion, especially for American culture.” As Alexis Madrigal writes in The Atlantic, “the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”
The individual cost of fantasy continues to shrink, even as the collective cost of deception continues to grow. The ecologies once securing the reliability of our epistemic determinations, the invariants that our ancestors took for granted, are being levelled. Our ancestral world was one where seeking praise risked aversion, a world where praise and condemnation alike had to brave criticism, where lazy judgments were punished rather than rewarded. Our ancestral world was one where geography and the scarcity of resources forced permissives and authoritarians to intermingle, compromise, and cooperate. That world is gone, leaving the old equilibria to unwind in confusion, a growing social crash space.
And this is only the beginning of the cognitive technological age. As Tristan Harris points out, social media platforms, given their commercial imperatives, cannot but engineer online ecologies designed to exploit the heuristic limits of human cognition. He writes:
“I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.”
More and more of what we encounter online is dedicated to various forms of exogenous attention capture, maximizing the time we spend on the platform, so maximizing our exposure not just to advertising, but to hidden metrics, algorithms designed to assess everything from our likes to our emotional well-being. As with instances of ‘forcing’ in the performance of magic tricks, the fact of manipulation escapes our attention altogether, so we always presume we could have done otherwise—we always presume ourselves ‘free’ (whatever this means). We exhibit what Clifford Nass, a pioneer in human-computer interaction, calls ‘mindlessness,’ the blind reliance on automatic scripts. To the degree that social media platforms profit from engaging your attention, they profit from hacking your ancestral cognitive vulnerabilities, exploiting our shared neglect structure. They profit, in other words, from transforming crash spaces into cheat spaces.
With AI, we are set to flood human cognitive ecologies with systems designed to actively game the heuristic nature of human social cognition, cuing automatic responses based on boggling amounts of data and the capacity to predict our decisions better than our intimates, and soon, better than we can ourselves. And yet, as the authors of the 2017 AI Index report state, “we are essentially ‘flying blind’ in our conversations and decision-making related to AI.” A blindness we’re largely blind to. Pinker spends ample time domesticating the bogeyman of superintelligent AI (296-298), but he completely neglects this far more immediate and retail dimension of our cognitive technological dilemma.
Consider the way humans endure as much as need one another: the problem is that the cues signaling social punishment and reward are easy to trigger out of school. We’ve already crossed the bourne where ‘improving the user experience’ entails substituting artificial for natural social feedback. Noticed the plethora of nonthreatening female voices of late? The promise of AI is the promise of countless artificial friends, voices that will ‘understand’ your plight, your grievances, in some respects better than you do yourself. The problem, of course, is that they’re artificial, which is to say, not your friend at all.
Humans deceive and manipulate one another all the time, of course. And false AI friends don’t rule out true AI defenders. But the former merely describes the ancestral environments shaping our basic heuristic tool box. And the latter simply concedes the fundamental loss of those cognitive ecologies. The more prosthetics we enlist, the more we complicate our ecology, the more mediated our determinations become, the less efficacious our ancestral intuitions become. The more we will be told to trust to gerrymandered stipulations.
Corporate simulacra are set to deluge our homes, each bent on cuing trust. We’ve already seen how the hypersensitivity of intentional cognition renders us liable to hallucinate minds where none exist. The environmental ubiquity of AI amounts to the environmental ubiquity of systems designed to exploit granular sociocognitive systems tuned to solve humans. The AI revolution amounts to saturating human cognitive ecology with invasive species, billions of evolutionarily unprecedented systems, all of them camouflaged and carnivorous. It represents—obviously, I think—the single greatest cognitive ecological challenge we have ever faced.
What does ‘human flourishing’ mean in such cognitive ecologies? What can it mean? Pinker doesn’t know. Nobody does. He can only speculate in an age when the gobsmacking power of science has revealed his guesswork for what it is. This was why Adorno referred to the possibility of knowing the good as the ‘Messianic moment.’ Until that moment comes, until we find a form of rationality that doesn’t collapse into instrumentalism, we have only toothless guesses, allowing the pointless optimization of appetite to command all. It doesn’t matter whether you call it the will to power or identity thinking or negentropy or selfish genes or what have you, the process is blind and it lies entirely outside good and evil. We’re just along for the ride.
Human cognition is not ontologically distinct. Like all biological systems, it possesses its own ecology, its own environmental conditions. And just as scientific progress has brought about the crash of countless ecosystems across this planet, it is poised to precipitate the crash of our shared cognitive ecology as well, the collapse of our ability to trust and believe, let alone to choose or take responsibility. Once every suboptimal behaviour has an etiology, what then? Once every one of us has artificial friends, heaping us with praise, priming our insecurities, doing everything they can to prevent non-commercial—ancestral—engagements, what then?
‘Semantic apocalypse’ is the dramatic term I coined to capture this process in my 2008 novel, Neuropath. Terminology aside, the crashing of ancestral (shallow information) cognitive ecologies is entirely of a piece with the Anthropocene, yet one more way that science and technology are disrupting the biology of our planet. This is a worst-case scenario, make no mistake. I’ll be damned if I see any way out of it.
Humans cognize themselves and one another via systems that take as much for granted as they possibly can. This is a fact. Given this, it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible. Why should we evolve the extravagant capacity to intuit our nature beyond the demands of ancestral life? The shallow cognitive ecology arising out of those demands constitutes our baseline self-understanding, one that bears the imprimatur of evolutionary contingency at every turn. There’s no replacing this system short of replacing our humanity.
Thus the ‘worst’ in ‘worst case scenario.’
There will be a great deal of hand-wringing in the years to come. Numberless intentionalists with countless competing rationalizations will continue to apologize (and apologize) while the science trundles on, crashing this bit of traditional self-understanding and that, continually eroding the pilings supporting the whole. The pieties of humanism will be extolled and defended with increasing desperation, whole societies will scramble, while hidden behind the endless assertions of autonomy, beneath the thundering bleachers, our fundamentals will be laid bare and traded for lucre.
So, basically the natural universe is at the mercy of an out-of-control process of reengineering reality to cheat or pander to human cognition.
About free will… I’ve been thinking about that and its connection with the idea of choice. It seems to me that we have evolved into apes with ridiculously large brains, because large brains give us more potential behaviors, or more ways to solve the environment, so that more options open up for us to deal with our environment. Large brains give more perspective.
So it seems to me that nature itself shows, in its selection for large brains, that opening up the metaphysical space for making choices gives an evolutionary advantage. And that therefore a large brain is proof from nature that such a thing as making choices exists. If so, does free will then also exist? The two concepts are often confused. Or does a large brain simply generate more pathways for potential deterministic processes, and is this a natural delineation of making choices? Could we then augment our ability to make choices and so expand our intuitively felt free will?
Are you suggesting that the evolution of large brains proves the existence of free will?
No, I wouldn’t dare. But it might prove the existence of choice.
This is basically Dennett’s argument in Freedom Evolves, where he advocates understanding free will in terms of evolved versatility. But the problem, again, isn’t that we can’t reinterpret our folk idioms, only that we can’t do so in any decisive way, and so either end up with bald stipulation or more interminable disputation—while our intuitions trip us up more and more. Dennett (citing Wegner) admits that our ‘intuitively felt free will’ is illusory, but he’s loath to adduce the problems arising out of this. So he would say that the discourse built around ‘choice talk’ can be preserved (given his pragmatic rationalizations) while recognizing that our metacognitive deliverances are fundamentally deceptive. This strikes me as a good way to keep philosophers employed, but little else.
Bakkerian Jihad… From my comments on the previous post, I think soft totalitarianism makes a lot of sense as a politics that pays lip service to but does not actually depend on the illusions of free will and moral responsibility. Of course if there is hope it’s the hope that machine intelligence will create sufficient wealth to eliminate the need for politics and morality, which are both ultimately ways to allocate scarce resources. On the other hand greed has been such a successful evolutionary strategy for so long that possibly no amount of wealth will be enough to seem like enough.
This is the million dollar question. My guess is we get an increasingly ‘akratic’ society, with the consumer proletariat slipping ever deeper into atavistic fantasy worlds, and the administrative technocracy becoming ever more ruthlessly manipulative.
The chimeric corporate machinery engineering all this has a fundamental problem – how to find and recruit the small percentage of humans who can master the symbolic complexities needed to implement the “program.” I think that’s why Zuckerberg offered to provide rural India with free internet access. What better way to discover and capture the genetic Jewels in the Crown than by monitoring the activities of all those Indian villagers?
Point being, the “machine” isn’t just silicon. It requires the intellectual and creative efforts of a large contingent of highly gifted, trained, educated and motivated humans. That’s not liable to change anytime soon. It may be an Achilles heel. Not sure about that.
If you’ve ever toured a silicon fab house producing 14 nm AMD chips you’ll come away wondering just how sustainable such a technology can be.
I suspect the gods of entropy will destroy this Golem, and we’ll end up better off that natural selection has provided us with brains that exploit “shallow ecologies”–or more simply put, the challenges of tribal life in natural environments. ‘Cause that’s how we’ll be living after all this comes crashing down.
Humans are absolutely essential to this, agreed. The image I have of future development is a deepening ‘akratic society,’ with the bulk of humanity withdrawing into atavistic simulacra–fantasy worlds–leaving an administrative caste increasingly ‘out of touch’ with their humanity. Everyone existentially dependent. Everyone immured. No one the wiser.
If I were seven feet tall and more than usually agile the NBA would have had no trouble finding me. Given the rewards available I would have done everything in my power to be found. Similarly, given the rewards available to those who can master the symbolic complexities I would guess kids from MIT and Caltech are trying as hard to make their names with the scouts from Google and Facebook as the kids from Kentucky and Kansas are trying to make their names with the scouts from the Lakers and the Celtics. That having been said, between climate change, genetically engineered (or naturally evolved) superbugs, nuclear war, fisheries collapse and a host of other troubles, it may all come crashing down yet.
First of all, great and very engaging post, as always. This series on Pinker has all of us hooked. Just one idea that popped into my head.
“it is not only possible, but exceedingly probable, that we would find squaring our intuitive self-understanding with our scientific understanding impossible.” Isn’t this what mathematics has been doing for at least a hundred years now, maybe even longer? Becoming increasingly divorced from our capacity to intuit what is going on? The very first bifurcation from intentional understanding? In a word, hasn’t mathematics been the locus of crash space all along–only that we didn’t realize? It looks like the affiliation of AI, neuroscientific models, computation… with mathematical models might be pointing this way. In any case, I’m just wondering what relation you think mathematics–maybe even physics?–might have with crash space and its role in the semantic apocalypse.
Damn that Pythagoras!
Thank you infinitographies. I see mathematics as an ‘arch-heuristic,’ a kind of quantitative determinacy box, a way to purge ecology and so track systematicities ad infinitum. Perfect prediction of absolutely unprecedented deliberation (explaining the ‘scandal of deduction’). This second-order account, however, exemplifies the crash space intrinsic to our attempts to determine the nature of mathematics via reflection. In this second order sense, it’s been a crash space since Pythagoras. Is it this second order sense you have in mind?
Yes. Thank you. It’s always such a wonder that this ecologically-shallow mammalian nervous system has been able to even get a minimal hold of this arch-heuristic. I’m eager to see what neuroscience has to say about this. At the moment, the only thing that seems clear is that we’re just along for the ride.
“I see mathematics as an ‘arch-heuristic,’ a kind of quantitative determinacy box, a way to purge ecology and so track systematicities ad infinitum.”
It is staggeringly weird that we have this ability. Some animals have the ability to count in a rudimentary fashion, but to abstract ‘a number’ away from all its possible incarnations is seemingly unique.
A very compelling idea to me is that humans don’t inherently have much mathematical ability at all (at least, no sharp discontinuity between us and chimps in that regard), but that mathematics is a culturally inherited heuristic rather than a genetic/neuronal one (which is not to say that some people might not be genetically or developmentally primed for receiving it and using it). First language, then symbolic language, and then mathematics as an emergent property thereof.
Pure speculation, and I don’t have an explanation for why a cultural heuristic is “better” than a genetic one. Maybe because a cultural one gets error-checked across ecological and genetic contexts? It’s less parochial. Ideal for use in science.
But if you think about it, pretty much everything in our toolkit consists of ‘knappables,’ neurally selected routines adapted from far, far more granular evolutionarily selected capacities. Basic numeracy was hammered into us by our environments. The knapping of this into mathematics suggests that there’s really nothing unreasonable about the effectiveness of mathematics at all. It just happens to be a particularly rewarding heuristic treasure chest. The efficacy of E=mc² is just a gobsmacking extension of the efficacy of running when confronting two opponents instead of one.
Maybe we can think of Jorge’s idea as mathematics belonging to the accelerated evolutionary time that emerged with the acquisition of language. Mathematics is at the same time coupled and decoupled: coupled to our (sometimes ancient) biologically selected routines, decoupled from the environments whose conditions it was selected to operate on.
In this way it only makes sense that E=mc² is effective, because any (cognitive) behavioral pattern can only have been acquired under the adaptive conditions of our ecology. But, as is repeatedly recalled on this blog, our ecology is always inevitably shallow, attached to tribal and, above all, intentional thinking procedures. The problem, then, is: can this context account for the emergence/possibility/etc. of a heuristic tool that is capable of tracking systematicities ad infinitum?
It looks like we find a chasm between the irreducible deficiency of our finitely, ecologically-based cognitive apparatus and the existence of this tool that is indifferent to our ecological finitude.
Like, what we should expect of any acquired toolkit that comes from us is not mathematics! Or, is it?
infinitographies, I know it’s earlier in Bakker’s elucidations (and probably could use a contemporary update) but Mathematics and the Russian Doll Structure of, Like, the Whole Universe.
Hey Mike thank you. Appreciated.
One of the things I think worthwhile to remember about mathematics is that it’s much harder to program a computer to accurately infer human emotions from facial expressions, vocal inflections and body language than it is to program a computer to do calculus. That suggests that there is some sense in which social interaction is computationally more demanding than mathematics. Perhaps human beings are better at social interaction than math because we devote more neurological resources to social interaction than to math and because social interaction has a longer evolutionary pedigree than math. If something like that is correct, it suggests that the neurological hardware being used for mathematics is actually repurposed from more demanding tasks. If mathematical ability is more rare than social skill perhaps it’s not so much that math is harder as that the ability to repurpose the needed resources is rare.
And the cool thing about language is that you only need to have a clever idea once. “…to abstract ‘a number’ away from all its possible incarnations…” only had to happen one time. And I would guess that once we have language we have the capacity for abstraction. If we get into the habit of evaluating statements as true or false it’s a short conceptual step to the idea of truth. Similarly, once we get into the habit of counting things it’s a short conceptual step to the idea of twoness. If abstraction is the miracle I’d suppose once that miracle happened and got spread around everything from then on is just, as Scott says, knapping.
One thing that struck me as I was rereading ‘Mathematics and the Russian Doll…’ is that math is like hunger in the sense that the phenomenal experience of hunger might be caused by low blood sugar levels, anxiety which we self-medicate with food (thus the term ‘comfort food’) or any other combination of neurological and endocrinological reasons. We don’t have conscious access to things like blood sugar levels (unless we are diabetic and we test them with technology). We don’t have conscious access to the neurological foundations of mathematical activity just like we don’t have conscious access to the endocrinological foundations of hunger. The difference is that we can affect the phenomenological experience of hunger by the physical activity of eating, thus demonstrating to ourselves that hunger is physical. Similarly, we can affect the phenomenological experience of lust by masturbating or copulating. We don’t have any mathematical analog of food or sex to prove to ourselves that the phenomenological experience of ‘doing mathematics’ is also physical, so it seems ‘Platonic.’
I love how magical you make disenchantment. This has to be one of your clearest articulations of crash space. Fantastic read.
crash space is cheat space which means it’s game space
And games are fun! Come at me Mr. Administrative Technocracy, *show me what u got*
Really airtight post, Scott! It’s like all the escape holes have been plugged with cork, then all the holes in the cork got plugged as well.
What I think is really useful is the separation of cognition into naturalistic cognition and intentional cognition. So to make yet another inapplicable comment, as I am wont to do, I’d propose naturalistic cognition can grasp the physical dimensions involved in stories. Theory tends to fall to intentional cognition to interpret–which is probably why the ‘performative contradiction’ stuff comes up, since it’s being ‘mailed’ to intentional cognition yet conflicting with it. It’d be like sending a novel about a car repair shop to a fantasy author, as an argument for just talking about motor mechanics–and they’re like ‘But you just sent a novel as your way of not wanting to talk about novels? Pahformatav contradicshun!’. But then again I’m distracting with intentional thinking here when the topic is natural thinking.
Could there be a story version of this post, whose physical depiction properties would help it reach naturalistic cognition rather than just default to intentional cognition? Some people have mentioned ‘A dime spared’ as clarifying – though heavy on philosophy (IMO), that involved depiction of physical space. This may help the message/scenario get to naturalistic cognition for consideration. That’s my inapplicable idea for the day!
One last review:
He’s a computer scientist of some renown. I wonder what he might have to say about the Semantic Apocalypse. I put links to your review in the comments to his review and I hope he responds…
[…] authority has made of it. I implore her to see how the combination of science and capital is driving our native cognitive ecologies to extinction on an exponential […]
[…] we are living through a “semantic apocalypse,” a likely implication is that the signal-to-noise ratio in most explicit political debates is not […]
[…] one regard: so far modernity has been a fantastic deal. We could plunder the ecologies about us, while largely ignoring the ecologies between. But now that science and technology are becoming cognitive, we ourselves are becoming the […]
[…] now use the same words in radically different ways that appear, so far, irreversible and irreconcilable. Because our basic cognitive capacities — such as moral intuitions — evolved in low-tech […]
[…] Listeners who enjoy this podcast might check out Bakker's What is the Semantic Apocalypse? and Enlightenment How? Omens of the Semantic Apocalypse. […]
[…] in human cognitive ecology, they find themselves cheek and jowl, causing the former to crash with greater and greater frequency. This crash occurs, not because people are confusing ‘ontologically distinct levels of […]