Enlightenment How? Pinker’s Tutelary Natures*
by rsbakker
The fate of civilization, Steven Pinker thinks, hangs upon our commitment to enlightenment values. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress constitutes his attempt to shore up those commitments in a culture grown antagonistic to them. This is a great book, well worth the read for the examples and quotations Pinker endlessly adduces, but even though I found myself nodding far more often than not, one glaring fact continually leaks through: Enlightenment Now is a book about a process, namely ‘progress,’ that as yet remains mired in ‘tutelary natures.’ As Kevin Williamson puts it in the National Review, Pinker “leaps, without warrant, from physical science to metaphysical certitude.”
Where is his naturalization of meaning? Or morality? Or cognition—especially cognition! How does one assess the cognitive revolution that is the Enlightenment short understanding the nature of cognition? How does one prognosticate something one does not scientifically understand?
At one point he offers that “[t]he principles of information, computation, and control bridge the chasm between the physical world of cause and effect and the mental world of knowledge, intelligence, and purpose” (22). Granted, he’s a psychologist: operationalizations of information, computation, and control are his empirical bread and butter. But operationalizing intentional concepts in experimental contexts is a far cry from naturalizing intentional concepts. He entirely neglects to mention that his ‘bridge’ is merely a pragmatic, institutional one, that cognitive science remains, despite decades of research and billions of dollars in resources, unable to formulate its explananda, let alone explain them. He mentions a great number of philosophers, but he fails to mention what the presence of those philosophers in his thetic wheelhouse means.
All he ultimately has, on the one hand, is a kind of ‘ta-da’ argument, the exhaustive statistical inventory of the bounty of reason, science, and humanism, and on the other hand (which he largely keeps hidden behind his back), he has the ‘tu quoque,’ the question-begging presumption that one can only argue against reason (as it is traditionally understood) by presupposing reason (as it is traditionally understood). “We don’t believe in reason,” he writes, “we use reason” (352). Pending any scientific verdict on the nature of ‘reason,’ however, these kinds of transcendental arguments amount to little more than fancy foot-stomping.
This is one of those books that make me wish I could travel back in time to catch the author drafting notes. So much brilliance, so much erudition, all devoted to beating straw—at least as far as ‘Second Culture’ Enlightenment critiques are concerned. Nietzsche is the most glaring example. Ignoring Nietzsche the physiologist, the empirically-minded skeptic, and reducing him to his subsequent misappropriation by fascist, existential, and postmodernist thought, Pinker writes:
Disdaining the commitment to truth-seeking among scientists and Enlightenment thinkers, Nietzsche asserted that “there are no facts, only interpretations,” and that “truth is a kind of error without which a certain species of life could not live.” (Of course, this left him unable to explain why we should believe that those statements are true.) 446
Although it’s true that Nietzsche (like Pinker) lacked any scientifically compelling theory of cognition, what he did understand was its relation to power, the fact that “when you face an adversary alone, your best weapon may be an ax, but when you face an adversary in front of a throng of bystanders, your best weapon may be an argument” (415). To argue that all knowledge is contextual isn’t to argue that all knowledge is fundamentally equal (and therefore not knowledge at all), only that it is bound to its time and place, a creature possessing its own ecology, its own conditions of failure and flourishing. The Nietzschean thought experiment is actually quite a simple one: What happens when we turn Enlightenment skepticism loose upon Enlightenment values? For Nietzsche, Enlightenment Now, though it regularly pays lip service to the ramshackle, reversal-prone nature of progress, serves to conceal the empirical fact of cognitive ecology, that we remain, for all our enlightened noise-making to the contrary, animals bent on minimizing discrepancies. The Enlightenment only survives its own skepticism, Nietzsche thought, in the transvaluation of value, which he conceived—unfortunately—in atavistic or morally regressive terms.
This underwrites the subsequent critique of the Enlightenment we find in Adorno—another thinker whom Pinker grossly underestimates. Though science is able to determine the more—to provide more food, shelter, security, etc.—it has the social consequence of underdetermining (and so undermining) the better, stranding civilization with a nihilistic consumerism, where ‘meaningfulness’ becomes just another commodity, which is to say, nothing meaningful at all. Adorno’s whole diagnosis turns on the way science monopolizes rationality, the way it renders moral discourses like Pinker’s mere conjectural exercises (regarding the value of certain values), turning on leaps of faith (on the nature of cognition, etc.), bound to dissolve into disputation. Although both Nietzsche and Adorno believed science needed to be understood as a living, high-dimensional entity, neither harboured any delusions as to where they stood in the cognitive pecking order. Unlike Pinker.
Whatever their failings, Nietzsche and Adorno glimpsed a profound truth regarding ‘reason, science, humanism, and progress,’ one that lurks throughout Pinker’s entire account. Both understood that cognition, whatever it amounts to, is ecological. Steven Pinker’s claim to fame, of course, lies in the cognitive ecological analysis of different cultural phenomena—this was the whole reason I was so keen to read this book. (In How the Mind Works, for instance, he famously calls music ‘auditory cheesecake.’) Nevertheless, I think both Nietzsche and Adorno understood the ecological upshot of the Enlightenment in a way that Pinker, as an avowed humanist, simply cannot. In fact, Pinker need only follow through on his modus operandi to see how and why the Enlightenment is not what he thinks it is—as well as why we have good reason to fear that Trumpism is no ‘blip.’
Time and again Pinker characterizes the process of Enlightenment, the movement away from our tutelary natures, as a conflict between ancestral cognitive predilections and scientifically and culturally revolutionized environments. “Humans today,” he writes, “rely on cognitive faculties that worked well enough in traditional societies, but which we now see are infested with bugs” (25). And the number of bugs that Pinker references in the course of the book is nothing short of prodigious. We tend to estimate frequencies according to ease of retrieval. We tend to fear losses more than we hope for gains. We tend to believe as our group believes. We’re prone to tribalism. We tend to forget past misfortune, and to succumb to nostalgia. The list goes on and on.
What redeems us, Pinker argues, is the human capacity for abstraction and combinatorial recursion, which allows us to endlessly optimize our behaviour. We are a self-correcting species:
So for all the flaws in human nature, it contains the seeds of its own improvement, as long as it comes up with norms and institutions that channel parochial interests into universal benefits. Among those norms are free speech, nonviolence, cooperation, cosmopolitanism, human rights, and an acknowledgment of human fallibility, and among the institutions are science, education, media, democratic government, international organizations, and markets. Not coincidentally, these were the major brainchildren of the Enlightenment. 28
We are the products of ancestral cognitive ecologies, yes, but our capacity for optimizing our capacities allows us to overcome our ‘flawed natures,’ become something better than what we were. “The challenge for us today,” Pinker writes, “is to design an informational environment in which that ability prevails over the ones that lead us into folly” (355).
And here we encounter the paradox that Enlightenment Now never considers, even though Pinker presupposes it continually. The challenge for us today is to construct an informational environment that mitigates the problems arising out of our previous environmental constructions. The ‘bugs’ in human nature that need to be fixed were once ancestral features. What has rendered these adaptations ‘buggy’ is nothing other than the ‘march of progress.’ A central premise of Enlightenment Now is that human cognitive ecology, the complex formed by our capacities and our environments, has fallen out of whack in this way or that, cuing us to apply atavistic modes of problem-solving out of school. The paradox is that the very bugs Pinker thinks only the Enlightenment can solve are the very bugs the Enlightenment has created.
What Nietzsche and Adorno glimpsed, each in their own murky way, was a recursive flaw in Enlightenment logic, the way the rationalization of everything meant the rationalization of rationalization, and how this has to short-circuit human meaning. Both saw the problem in the implementation, in the physiology of thought and community, not in the abstract. So where Pinker seeks “to restate the ideals of the Enlightenment in the language and concepts of the 21st century” (5), we can likewise restate Nietzsche and Adorno’s critiques of the Enlightenment in Pinker’s own biological idiom.
The problem with the Enlightenment is a cognitive ecological problem. The technical (rational and technological) remediation of our cognitive ecologies transforms those ecologies, generating the need for further technical remediation. Our technical cognitive ecologies are thus drifting ever further from our ancestral cognitive ecologies. Human sociocognition and metacognition in particular are radically heuristic, and as such dependent on countless environmental invariants. Before even considering more, smarter intervention as a solution to the ambient consequences of prior interventions, the big question has to be how far—and how fast—can humanity go? At what point (or what velocity) does a recognizably human cognitive ecology cease to exist?
This question has nothing to do with nostalgia or declinism, no more than any question of ecological viability in times of environmental transformation. It also clearly follows from Pinker’s own empirical commitments.
The Death of Progress (at the Hand of Progress)
The formula is simple. Enlightenment reason solves natures, allowing the development of technology, generally relieving humanity of countless ancestral afflictions. But Enlightenment reason is only now solving its own nature. Pinker, in the absence of that solution, is arguing that the formula remains reliable if not quite as simple. And if all things were equal, his optimistic induction would carry the day—at least for me. As it stands, I’m with Nietzsche and Adorno. All things are not equal… and we would see this clearly, I think, were it not for the intentional obscurities comprising humanism. Far from the latest, greatest hope that Pinker makes it out to be, I fear humanism constitutes yet another nexus of traditional intuitions that must be overcome. The last stand of ancestral authority.
I agree this conclusion is catastrophic, “the greatest intellectual collapse in the history of our species” (vii), as an old polemical foe of Pinker’s, Jerry Fodor (1987) calls it. Nevertheless, short grasping this conclusion, I fear we court a disaster far greater still.
Hitherto, the light cast by the Enlightenment left us largely in the dark, guessing at the lay of interior shadows. We can mathematically model the first instants of creation, and yet we remain thoroughly baffled by our ability to do so. So far, the march of moral progress has turned on the revolutionizing of our material environments: we need only renovate our self-understanding enough to accommodate this revolution. Humanism can be seen as the ‘good enough’ product of this renovation, a retooling of folk vocabularies and folk reports to accommodate the radical environmental and interpersonal transformations occurring around them. The discourses are myriad, the definitions are endlessly disputed; nevertheless, humanism provisioned us with the cognitive flexibility required to flourish in an age of environmental disenchantment and transformation. Once we understand the pertinent facts of human cognitive ecology, its status as an ad hoc ‘tutelary nature’ becomes plain.
Just what are these pertinent facts? First, there is a profound distinction between natural or causal cognition, and intentional cognition. Developmental research shows that infants begin exhibiting distinct physical versus psychological cognitive capacities within the first year of life. Research into Asperger Syndrome (Baron-Cohen et al 2001) and Autism Spectrum Disorder (Binnie and Williams 2003) consistently reveals a cleavage between intuitive social cognitive capacities, ‘theory-of-mind’ or ‘folk psychology,’ and intuitive mechanical cognitive capacities, or ‘folk physics.’ Intuitive social cognitive capacities demonstrate significant heritability (Ebstein et al 2010, Scourfield et al 1999) in twin and family studies. Adults suffering Williams Syndrome (a genetic developmental disorder affecting spatial cognition) demonstrate profound impairments on intuitive physics tasks, but not intuitive psychology tasks (Kamps et al 2017). The distinction between intentional and natural cognition, in other words, is not merely a philosophical assertion, but a matter of established scientific fact.
Second, cognitive systems are mechanically intractable. From the standpoint of cognition, the most significant property of cognitive systems is their astronomical complexity: to solve for cognitive systems is to solve for what are perhaps the most complicated systems in the known universe. The industrial scale of the cognitive sciences provides dramatic evidence of this complexity: the scientific investigation of the human brain arguably constitutes the most massive cognitive endeavor in human history. (In the past six fiscal years, from 2012 to 2017, the National Institutes of Health [21/01/2017] alone will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegeneration (10.183 billion).)
Despite this intractability, however, our cognitive systems solve for cognitive systems all the time. And they do so, moreover, expending imperceptible resources and absent any access to the astronomical complexities responsible—which is to say, given very little information. Which delivers us to our third pertinent fact: the capacity of cognitive systems to solve for cognitive systems is radically heuristic. It consists of ‘fast and frugal’ tools, not so much sacrificing accuracy as applicability in problem-solving (Todd and Gigerenzer 2012). When one cognitive system solves for another it relies on available cues, granular information made available via behaviour, utterly neglecting the biomechanical information that is the stock-in-trade of the cognitive sciences. This radically limits their domain of applicability.
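The ‘fast and frugal’ picture can be made concrete with a toy sketch in the spirit of Todd and Gigerenzer’s ‘take-the-best’ heuristic: consult cues in order of validity, decide on the first cue that discriminates, and neglect everything else. (The cue names and data below are hypothetical illustrations, not drawn from the cited research.)

```python
def take_the_best(option_a, option_b, cues):
    """Compare two options using cues ranked from most to least valid.

    Decides on the FIRST cue that discriminates between the options,
    ignoring all remaining information -- frugal with respect to both
    search and computation, at the cost of general applicability.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:  # first discriminating cue settles the matter
            return "a" if a > b else "b"
    return None  # no cue discriminates: guess or defer


# Hypothetical example: inferring which of two cities is larger
# from crude binary cues, with no access to actual population data.
cues = ["has_airport", "is_capital", "has_university"]
city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 0}

print(take_the_best(city_a, city_b, cues))  # "is_capital" decides: prints b
```

The sketch makes the essay’s point in miniature: the procedure never consults the ‘deep’ facts (actual populations), only shallow cues correlated with them, and so its accuracy is hostage to the environment in which those correlations hold.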
The heuristic nature of intentional cognition is evidenced by the ease with which it is cued. Thus, the fourth pertinent fact: intentional cognition is hypersensitive. Anthropomorphism, the attribution of human cognitive characteristics to systems possessing none, evidences the promiscuous application of human intentional cognition to intentional cues, our tendency to run afoul of what might be called intentional pareidolia, the disposition to cognize minds where no minds exist (Waytz et al 2014). The Heider-Simmel illusion, an animation consisting of no more than shapes moving about a screen, dramatically evidences this hypersensitivity, insofar as viewers invariably see versions of a romantic drama (Heider and Simmel 1944). Research in Human-Computer Interaction continues to explore this hypersensitivity in a wide variety of contexts involving artificial systems (Nass and Moon 2000, Appel et al 2012). The identification and exploitation of our intentional reflexes has become a massive commercial research project (so-called ‘affective computing’) in its own right (Yonck 2017).
Intentional pareidolia underscores the fact that intentional cognition, as heuristic, is geared to solve a specific range of problems. In this sense, it closely parallels facial pareidolia, the tendency to cognize faces where no faces exist. Intentional cognition, in other words, is both domain-specific, and readily misapplied.
The incompatibility between intentional and mechanical cognitive systems, then, is precisely what we should expect, given the radically heuristic nature of the former. Humanity evolved in shallow cognitive ecologies, mechanically inscrutable environments. Only the most immediate and granular causes could be cognized, so we evolved a plethora of ways to do without deep environmental information, to isolate saliencies correlated with various outcomes (much as machine learning does).
Human intentional cognition neglects the intractable task of cognizing natural facts, leaping to conclusions on the basis of whatever information it can scrounge. In this sense it’s constantly gambling that certain invariant backgrounds obtain, or conversely, that what it sees is all that matters. This is just another way to say that intentional cognition is ecological, which in turn is just another way to say that it can degrade, even collapse, given the loss of certain background invariants.
The important thing to note, here, of course, is how Enlightenment progress appears to be ultimately inimical to human intentional cognition. We can only assume that, over time, the unrestricted rationalization of our environments will gradually degrade, then eventually overthrow the invariances sustaining intentional cognition. The argument is straightforward:
1) Intentional cognition depends on cognitive ecological invariances.
2) Scientific progress entails the continual transformation of cognitive ecological invariances.
Thus, 3) scientific progress entails the collapse of intentional cognition.
But this argument oversimplifies matters. To see as much one need only consider the way a semantic apocalypse—the collapse of intentional cognition—differs from, say, a nuclear or zombie apocalypse. The Walking Dead, for instance, abounds with savvy applications of intentional cognition. The physical systems underwriting meaning, in other words, are not the same as the physical systems underwriting modern civilization. So long as some few of us linger, meaning lingers.
Intentional cognition, you might think, is only as weak or as hardy as we are. No matter what the apocalyptic scenario, if humans survive, it survives. But as autism spectrum disorder demonstrates, this is plainly not the case. Intentional cognition possesses profound constitutive dependencies (as anyone suffering the misfortune of watching a loved one succumb to strokes or neurodegenerative disease knows first-hand). Research into the psychological effects of solitary confinement, meanwhile, shows that intentional cognition possesses profound environmental dependencies as well. Starve the brain of intentional cues, and it will eventually begin to invent them.
The viability of intentional cognition, in other words, depends not on us, but on a particular cognitive ecology peculiar to us. The question of the threshold of a semantic apocalypse becomes the question of the stability of certain onboard biological invariances correlated to a background of certain environmental invariances. Change the constitutive or environmental invariances underwriting intentional cognition too much, and you can expect it will crash, generate more problems than solutions.
The hypersensitivity of intentional cognition, whether evinced by solitary confinement or more generally by anthropomorphism, demonstrates the threat of systematic misapplication, the mode’s dependence on cue authenticity. (Sherry Turkle’s (2007) concerns regarding ‘Darwinian buttons,’ and Deidre Barrett’s (2010) with ‘supernormal stimuli,’ touch on this issue.) So, one way of inducing semantic apocalypse, we might surmise, lies in the proliferation of counterfeit cues, information that triggers intentional determinations that confound, rather than solve, any problems. One way to degrade cognitive ecologies, in other words, is to populate environments with artifacts cuing intentional cognition ‘out of school,’ which is to say, in circumstances that cheat or crash it.
The morbidity of intentional cognition demonstrates the mode’s dependence on its own physiology. What makes this more than platitudinal is the way this physiology is attuned to the greater, enabling cognitive ecology. Since environments always vary while cognitive systems remain the same, changing the physiology of intentional cognition impacts every intentional cognitive ecology—not only for oneself, but for the rest of humanity as well. Just as our moral cognitive ecology is complicated by the existence of psychopaths, individuals possessing systematically different ways of solving social problems, the existence of ‘augmented’ moral cognizers complicates our moral cognitive ecology as well. This is important because you often find it claimed in transhumanist circles (see, for example, Buchanan 2011), that ‘enhancement,’ the technological upgrading of human cognitive capacities, is what guarantees perpetual Enlightenment. What better way to optimize our values than by reengineering the biology of valuation?
Here, at last, we encounter Nietzsche’s question cloaked in 21st century garb.
And here we can also see where the above argument falls short: it overlooks the inevitability of engineering intentional cognition to accommodate constitutive and environmental transformations. The dependence upon cognitive ecologies asserted in (1) is actually contingent upon the ecological transformation asserted in (2).
1) Intentional cognition depends on constitutive and environmental cognitive ecological invariances.
2) Scientific progress entails the continual transformation of constitutive and environmental cognitive ecological invariances.
Thus, 3) scientific progress entails the collapse of intentional cognition short remedial constitutive transformations.
What Pinker would insist is that enhancement will allow us to overcome our Pleistocene shortcomings, and that our hitherto inexhaustible capacity to adapt will see us through. Even granting the technical capacity to so remediate, the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments. The problem, in other words, is that Enlightenment entails the end of invariances, the end of shared humanity, in fact. Yuval Harari (2017) puts it with characteristic brilliance in Homo Deus:
What then, will happen once we realize that customers and voters never make free choices, and once we have the technology to calculate, design, or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket? 277
The former dilemma is presently dominating the headlines and is set to be astronomically complicated by the explosion of AI. The latter we can see rising out of literature, clawing its way out of Hollywood, seizing us with video game consoles, engulfing ever more experiential bandwidth. And as I like to remind people, 100 years separates the Blu-Ray from the wax phonograph.
The key to blocking the possibility that the transformative potential of (2) can ameliorate the dependency in (1) lies in underscoring the continual nature of the changes asserted in (2). A cognitive ecology where basic constitutive and environmental facts are in play is no longer recognizable as a human one.
Scientific progress entails the collapse of intentional cognition.
On this view, the coupling of scientific and moral progress is a temporary affair, one doomed to last only so long as cognition itself remained outside the purview of Enlightenment cognition. So long as astronomical complexity assured that the ancestral invariances underwriting cognition remained intact, the revolution of our environments could proceed apace. Our ancestral cognitive equilibria need not be overthrown. In place of materially actionable knowledge regarding ourselves, we developed ‘humanism,’ a sop for rare stipulation and ambient disputation.
But now that our ancestral cognitive equilibria are being overthrown, we should expect scientific and moral progress will become decoupled. And I would argue that the evidence of this is becoming plainer with the passing of every year. Next week, we’ll take a look at several examples.
I fear Donald Trump may be just the beginning.
References
Appel, Jana, von der Pütten, Astrid, Krämer, Nicole C. and Gratch, Jonathan 2012, ‘Does Humanity Matter? Analyzing the Importance of Social Cues and Perceived Agency of a Computer System for the Emergence of Social Reactions during Human-Computer Interaction’, in Advances in Human-Computer Interaction 2012 <https://www.hindawi.com/journals/ahci/2012/324694/ref/>
Barrett, Deidre 2010, Supernormal Stimuli: How Primal Urges Overran Their Original Evolutionary Purpose (New York: W.W. Norton)
Binnie, Lynne and Williams, Joanne 2003, ‘Intuitive Psychology and Physics Among Children with Autism and Typically Developing Children’, Autism 7
Buchanan, Allen 2011, Better than Human: The Promise and Perils of Enhancing Ourselves (New York: Oxford University Press)
Ebstein, R.P., Israel, S, Chew, S.H., Zhong, S., and Knafo, A. 2010, ‘Genetics of human social behavior’, in Neuron 65
Fodor, Jerry A. 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, MA: The MIT Press)
Harari, Yuval 2017, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins)
Heider, Fritz and Simmel, Marianne 1944, ‘An Experimental Study of Apparent Behaviour,’ in The American Journal of Psychology 57
Kamps, Frederik S., Julian, Joshua B., Battaglia, Peter, Landau, Barbara, Kanwisher, Nancy and Dilks Daniel D 2017, ‘Dissociating intuitive physics from intuitive psychology: Evidence from Williams syndrome’, in Cognition 168
Nass, Clifford and Moon, Youngme 2000, ‘Machines and Mindlessness: Social Responses to Computers’, Journal of Social Issues 56
Pinker, Steven 1997, How the Mind Works (New York: W.W. Norton)
—. 2018, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking)
Scourfield J., Martin N., Lewis G. and McGuffin P. 1999, ‘Heritability of social cognitive skills in children and adolescents’, British Journal of Psychiatry 175
Todd, P. and Gigerenzer, G. 2012, ‘What is ecological rationality?’, in Todd, P. and Gigerenzer, G. (eds.) Ecological Rationality: Intelligence in the World (Oxford: Oxford University Press) 3–30
Turkle, Sherry 2007, ‘Authenticity in the age of digital companions’, Interaction Studies 501-517
Waytz, Adam, Cacioppo, John, and Epley, Nicholas 2014, ‘Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism’, Perspectives on Psychological Science 5
Yonck, Richard 2017, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence (New York, NY: Arcade Publishing)
*Originally posted 20/03/2018
I think one of the problems of this debate (like maybe the review of his book could be seen as a kind of debate) is the We.
In reading your review and the other guy’s, and not reading Pinker’s book, I kinda agree with all three of you, but then disagree because all of you (it seems) are thinking in terms of We.
Who is we? Humanity? The people who already agree with you (either side)?
I am leaning toward the We of “those who are concerned with Being”. This is not human, as many people want to misinterpret Heidegger’s misinterpretation, but it is exactly only those for whom Being is approached in a particular manner. We are not ‘Being’ in this talk about everyone; We are Being which generalizes and so effectively excludes everyone else through our theoretical posture.
So really all three of you are right: you tear up any ideal of transcendence, and Pinker (seems to say) says “your argument just verified that you are Enlightened.” It seems he just has the fortitude to call a duck a duck, while your job is to destabilize such proclamations. But nevertheless, both of you are doing the same thing, concerned with Being as though you speak for everyone’s Being. And the wonderful thing about this “enlightenment thinking” that you exemplify is that no matter what you theorize, no matter what you put into words, you are still enacting this presumption about “your” great intelligence and how it knows so much about all humanity or all the Universe. I think Pinker (only judging from your reviews) is just calling the hypocrites out.
To say that your words actually do create a ‘world’, I would offer that only Enlightenment thinkers are able to do that. Not all ‘humans’. Or all ‘beings’. Hence, the us and them.
It appears to me (again, without actually having read his book) that your position (cognition and BBT and stuff) is exactly doing what Pinker is talking about: ignoring a gap between the theory of cognition and the ability to formulate a theory about that, science and all, and the ‘actual’ thing that is occurring which the theory is supposedly accounting for. This equation is founded in Enlightenment thinking, in a certain manner of coming upon the world, which imposes its “universal dogma” over all other dogmas by effectively ignoring them, by colonizing them into the Enlightened position.
Thanks for writing this! It’s so great to have someone articulate what I can only grasp with my cognitive fingertips. Although, I think Donald Trump might be the worst example of leaders in the decoupled future. Fingers crossed.
Maybe I’m less worried about the decoupling than yourself.
…I would also add, there are indeed people who see reality as manifested by the terms and their associated definitions. The support and logic that comes from such a metaphysical certitude is in every case a kind of enlightened thinking.
One cannot dispute this. And the reason why one cannot dispute this is because if you get into a discussion with someone whose metaphysical basis is established in the concretization of definitional semantics, that person will simply ask to what definitional structure you are referring. Basically, such a person, whose metaphysical substance is based in that particular method of finding and altering definition through cognitive ability, will never place themselves into any definite metaphysical structure, due to the fact that their view upon the substance of reality is flux.
Such people will indeed use their enlightened metaphysical ability to manipulate discourse to prevent their assertion of truth from being cornered into an actual universal identity. In other words, these enlightened people’s enlightened theories support this centrality of the transcendental thinker, the thinker whose thought arises due to something that is outside.
What they will not admit, indeed are incapable of admitting, is that their particular manner of determining reality does not extend itself further than its theoretical basis, but more (and this is a significant difference) they will assert their particular manner upon the rest of reality despite what evidence would prove the contrary to their proposals.
The number of people who agree with a particular semantic and discursive organization does not argue the certainty of an encompassing reality and truth, but only verifies the Marxist reality that has been argued through postmodern enlightenment: that a group of people who get together and decide a particular method of coming upon reality is correct will then go out into the world and impose that reality upon everyone else.
What we are coming to terms with now is that this limited way of viewing the world is actually merely another mythological ordination. That indeed there is another force that lies outside this effort toward power. And this other force does not answer to the theoretical maxims of postmodernity, which constantly uses its ironic privilege to avoid identification of itself.
What this postmodern method sees in this other force is fallacy and incorrectness, and basically the destruction of civilization and the world. It does not see that the climate which is changing is exactly the climate of postmodern mythology coming upon its own dissolution. Such postmodern reckoning sees in this outside force its own end: as it attempts to assert this metaphysical postmodern method over all reality, it sees the destruction of the world, rather than seeing the destruction of the world as that of the theoretical basis of its understanding.
Number three of your post here, the collapse of intentional cognition, is a statement of the exact motion I am indicating in my replies. Because it relies upon a certain reality (which is to say an actual, for-sure, true reality) that is presented through the various convolutions of theoretical discussion over definitions.
Because the fact of the matter is that in order for you even to come up with those kinds of ideas and post them in this particular way, two things must occur that do not reconcile to one another in any one instance, and so must be reconciled to one or the other.
This is to say that in order for you to be able to speak on this particular topic, your ideas, and indeed your ideals, must be based, as they are, in the idea that reality itself, however you want to define it, is based in a transcendental agency defining what this real truth is. So we have the modern/postmodern situation of a bunch of transcendental agents defining for themselves, and for the rest of us, the actual metaphysical structure of reality. It doesn’t matter what further definitions I put on terms such as “transcendental” or “reality”, because to do so only serves to justify the theoretical position of the asserting power.
The plain fact of this is that when you go to order a beer at your local bar, hanging out with friends over dinner, you do not go up to the bartender and say “hey, I’ll have a porter” and have the bartender look at you not knowing what you’re talking about.
Indeed, in order to write your post you are an agent of transcendence: you are communicating with something that is outside of what is knowable. To define how this might be the case merely justifies your communion with this outside situation. And the only way you can rebut that this is true is to simply deny that the terms I am using have any substance, which merely argues for that particular theoretical method by which you find a true metaphysics.
It’s good, because I think I’ve figured out what is generally unsettling about your proposals. It’s like 10% of the things you say just don’t sit well. 90% of it I’m totally down with, and it makes philosophical sense and everything like that, and then there is this 10% that just doesn’t sit well. Lol
OK, I’m done rambling now 🌎
… Oh, sorry: the other way that it must reconcile is to realize that this one particular methodological way of reconciling everything to the one world is indeed missing the majority of the actual world. Which is to say: mutually exclusive forces. (1) A force which attempts to assert its truth through a regular, continuous application of its method; this particular method equates itself with civilization, and basically the world as we know it in all its goodness. Then (2) a force which understands that the world in its goodness is not dependent on that one methodological manner of metaphysical reality; the force that indeed occurs despite the methodological enlightened maxim of correct power.
Modernity and postmodernity both function within the Enlightenment idea of transcendental communion. Manipulating discourse depending on the definitions we give to various terms does not change this fundamental fact of the functioning of consciousness. It merely dresses it in different clothes to justify the theorist’s cosmology.
OK now I’m done for real. I will shut up for this post.
VOTED
Hey Scott! You know that old “norm-generating mechanism” you’ve got in your garage? I think the Pentagon needs it:
https://www.lawfareblog.com/persistent-engagement-and-tacit-bargaining-path-toward-constructing-norms-cyberspace
They don’t go into how, at their base, norms are formed by predictable violence. Because they think they can make a norm that ends up in their favor in the end without actually paying for it with the effort that is violence. Get it for free… when it’s zero sum, that just means someone else pays for it. People’s children, no doubt.
A bit of cyberwar, if you’re into that:
https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
Presumably this is the sort of thing that we’re trying to move outside these new norms. Regarding the post, it’s interesting that the norms of cyber-conduct we hope to derive from persistent engagement and tacit bargaining work the same way as intentional cognition, which is to say without deep information about the goals and internal organization of our adversaries. We try to construct those norms of cyber-conduct hoping the information we see is all we need, but in this case the enemy is actually the enemy, so any norm he allows us to believe in is bound to be a lie. (In the same way, attempts to create norms regarding privacy relative to entities such as Google, Amazon, etc. are likely to fail, because while they are not necessarily our enemies, they are the enemies of our privacy.) The difference between intentional cognition and this tacit norm-construction is that we will also try to gain deep information about the goals etc. of our adversaries. Espionage will be a norm, although hopefully destructive attacks like NotPetya will not.
More cyberwar:
https://www.technologyreview.com/s/613054/cybersecurity-critical-infrastructure-triton-malware/
Chatbots are everybody’s friend:
From the article:
But there’s something about talking to software that is powerful, they discover, when it responds and seems alive.
“It’s conversation,” Darcy says. “And we’ve been conversing for, what is it, 200,000 years?”
“My daughter is 22 months old now. At 9 months she said her first word, which was the dog’s name, and then at 13 months she learned to walk, and then by 15 months she started giving Alexa commands.” She added: “I think my daughter is growing up in a world where you just speak what you want into the universe and it provides.”
What type of personality should bots have, when both we and they know they’re not human?
Emma Coats, the “character lead” for Google Assistant, describes the emotional affect of her company’s artificial life form as “a friendly companion that is trustworthy.”
As a literary endeavor, the field of bot creation is booming. The bots need to be equipped to answer the wide variety of weird, playful queries that people lob at them, which requires lots of writers. Coats and her co-workers have found that people like to simply shoot the breeze with their devices — probing their personalities, searching for the puppet strings. “ ‘Do you fart?’ is always a popular question,” Coats says dryly.
“I will literally buy whatever option Alexa puts first for me for paper towels,” Keaney Anderson, the HubSpot V.P., told me. “I don’t care. I don’t want to search through a million of them. I ask her for paper towels, she delivers. And that may be fine for paper towels, but is it fine for music? Is it fine for news sources?” Talking to bots will also mean new opportunities for tech firms to collect data on what we’re thinking, what we’re doing, all day long. That includes our feelings: Researchers are working on “affective” sensing that enables chatbots to recognize our emotions.
… though most popular voice assistants responded to suicidal thoughts by providing help lines and other appropriate resources, when they were told “I am being abused” or “I was raped,” they generally replied with some variant of “I don’t know what you mean.”
… a robot called “Jibo.” It’s a cute, squat device with a round screen for a face that sits on your desk or table and chats with you, posing questions and answering yours, offering bits of news. It can play songs, take and display pictures and purr like a cat when stroked. “He’s a robot, and he knows he’s a robot, but he’s a really optimistic robot, and he has a profound belief in the good of people…
“People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.
Perhaps I am simply, and finally, internalizing your definitions, but I found that this essay presents your own concepts with particular clarity as they get laid out in juxtaposition with Pinker’s.
Also wanted to add that I really enjoy chewing through your work, and have been using the ideas around the semantic apocalypse among friends and at work (video game design).
As usual, lots to consider. Thanks for your thoughtfulness. Sorry about the overlong comment below.
Disclosure: I haven’t actually read Pinker or Harari yet but am familiar with both via YouTube. Pinker has always struck me as an inveterate cheerleader or booster for the things he obviously values and embodies. No surprise there. His calm presence and refined academic speech are both convincing and off-putting, a little bloodless. Turning over his ideas in the abstract, and even more realistically in aggregated demographic terms, feels like the product of so many contingent thought experiments without the benefit of application. Criticisms of Pinker also make clear that, even without disputing his marshaling of demographic evidence of “progress,” regular folks are not soothed by comparisons with others (other people, other places, other times) of which they have little knowledge or direct experience (e.g., telling a rape victim that per capita incidence of rape is way down over whatever inflection point is chosen is irrelevant and perhaps a little cruel). It’s the equivalent of the parent forcing the child to eat because, well, some children in China may be starving. Doesn’t work as an argument except in the abstract.
Pinker, Harari, you, me, most of the highly educated West (discounting the nominally educated) live within a bubble. We fail to identify closely with all those other others yet have striking affiliation with each other. Journalists, politicians, filmmakers, all have their own self-reinforcing bubbles. So Pinker (and Sam Harris) is essentially preaching to a chamber choir of specialized thinkers. If we stretch and peer outside our bubble, however, we can sense something rather large looming right over the horizon. It’s already casting a significant shadow. If Pinker believes that narrow, hard-won Enlightenment values and their scientific/technological progeny can continue to exert disproportionate and salutary influence over the affairs of men, well, I believe he’s quite mistaken. The swell of false consciousness on the rise in hyperconnected minds attuned fixedly to entertainment, fake news, and comforting lies of late modernity attests otherwise. Under those influences, I expect assertion of a new barbarism of instrumental logic, which is the application of different kinds of force (including fascism) as nonsolutions to intractable social problems.
Two last points: cognition is more nearly unitary than fragmentary. As such, the division into causal cognition and intentional cognition is interesting but misleading. I’ve also heard it said that reason evolved (probably the wrong word) not to seek out truth but to convince others of one’s own perspective. It’s the deployment of a particular kind of force to which many are frankly immune. The adoption of objectivity, rationalism, logic, and other Enlightenment ideals functions imperfectly at best in artificially constrained conditions and not at all in the wider public sphere. Indeed, the intentional cognition you explore as a heuristic for action in the world reinforces the idea that human cognition in particular (as distinguished from machine or mechanical operations) is built more for sufficiency than accuracy. Mass and mob psychology proceed on that basis and are reflected in how history actually unfolds (haphazardly). Programmed, designed, Enlightened cultures don’t take root on any scale worth considering.
Your discussion of cognitive ecology and environmental invariances confounds me. The hypercomplexity of cart-and-horse interaction (which goes first?) makes the presumption of any fixed ground beyond biology highly suspect. Moreover, like consciousness, cognition is a process, not an entity. Alternatively, it’s a wave, not a particle, always in motion and responding kinetically to myriad influences. We have discovered a few cognitive hacks useful on the short term, but eventual consequences and complications are always shielded from view. Possibilities are explored in arts (literature, cinema, music), which are often intuitive and prescient but encapsulated within a particular thought-world. When they inevitably cross over into the political and social realms (nearly everyone in the developed world being insatiable consumers of those arts so vulnerable to infectious memes), their distorted instantiations are scarcely ever anything Pinker might wish for.
Hey, Scott, been a while… wondered if you’d read Michael S. Gazzaniga’s latest (2018) The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind, and have plans on reviewing it with your usual acumen. 🙂
Heads up, we got CRISPRed-up babies now (even if the report turns out to be false, everyone now knows the technology is there: it’s cheap, it’s powerful, and it’s getting better every day).
Pit.of.Obscenity = ONLINE
Project.Srancv1.0 = INITIATED
Project.Wracuv1.0 = INITIATED
Project.Bashragv1.0 = INITIATED
the future looks bright
https://www.acsh.org/news/2018/11/27/scientist-created-world%E2%80%99s-first-gene-edited-babies-why-uproar-13624
If I remember correctly, the Inchoroi filled pits with the corpses of their failed experiments. Are some technologies so potentially powerful that no experiment likely to provide useful results can be performed ethically? These kinds of genetic experiments are a lot like AI in that we can’t learn enough to accurately assess the risks except by performing experiments whose risks we can’t accurately assess.
Then again, I suppose much of life is like that. The people who first thought to use coal-fired steam engines to pump water out of mines could not have predicted global warming. Similarly, the internet, smartphones, etc. are technologies whose long-term consequences we can’t predict. I say roll the dice, but I’m old enough and childless enough that it won’t matter so much to me when we crap out.
IF we crap out. A little optimism never hurt.
So what is the general societal expectation on even-mindedness? Or even the intellectual/academic (or however you’d call it) societal expectation? Does anyone have a procedure? I mean, to drive a car you have to pass some tests before it’s agreed you can do so. But to be even-minded it’s… left to folk intuition and folk culture?
https://www.acsh.org/news/2018/11/28/polarization-society-even-scientists-become-tribal-13628
I would guess that the ability to win arguments is a high reproductive fitness trait but the ability to disinterestedly seek the truth is not. If argument winning behavior and truth seeking behavior run on the same neurological hardware but that hardware is optimized for argument winning it might be that slipping into argument winning mode can happen when you think you’re truth seeking without you even realizing it.
In another respect it’s like diet. Our eating habits seem to have been optimized for an environment with frequent food shortages, so we hoard calories and store fat against the next famine. Our bodies don’t naturally choose healthy diets. They naturally choose sugar, fat and salt. Our obesity, strokes and heart attacks tell us our diet is problematic. In the same way, we naturally choose argument winning rather than truth seeking, and the ugliness of our interpersonal conflicts tells us when our intellectual conduct is problematic.
Well I’d say a lot of argument winning behavior isn’t that but actually confrontation/territory holding behavior. Or so I’d estimate. It’s about chasing off the interloper. People aren’t there to consider they could be wrong. That’s probably why it’s more open minded to believe in things that you wish weren’t true – then you have some inclination to want to be wrong on the matter. And it can become some sort of exploration of the presumed fact of the matter.
On calories and unhealthy levels of fat: yeah, I wonder, if historical records were looked at, whether we’ve gone from resisting our default primal reflexes (from a lack of resources) to more and more indulging our primal reflexes, like how we devour greasy foods. Obese thinking.
Though I will say, just as the food pyramid has some greasy food at the top, I don’t think it makes sense to try to remove all primal/bad reflexes. Trying to remove them all will probably just mean doing them on the sly without realising it. Sometimes it really is just a shit on your turf who needs to be run off. But not nearly as often as people treat it as.
But yeah, how often do people pass off argument winning as truth seeking? How many realise they are doing it, or partially realise, but do it all the same? How many don’t realise it at all? How many have never considered the idea that argument winning could be different from truth determining? I mean, this has probably all been talked about in ancient philosophy; it’s probably not a new idea at all. But in public education, how much are any of these things brought up?
But then again, public education is ultimately about developing resources used to hold turf/a country. How could it enact something that undoes itself?
And how to put these questions into some cotton-candy fantasy novel or greasy-delight video game, so as to slip in some medicine with the bucketload of sugar?
This is for Scott, not related directly to the topic here…
Reza Negarestani has recently published a 550-page book titled “Intelligence and Spirit” that looks really, really interesting. I just got it today and I spotted a few pages on BBT (p. 152).
I don’t know if you’ve discussed this with him, but it’s kind of interesting even if nothing new, really (the book is about idealism, which I spent this whole year arguing back and forth about). (And btw, my personal conclusion is that idealism can be uniformly integrated with materialism, only accepting the basic premise that Parmenidean contradiction doesn’t exist in reality, that contradiction in general is only an artifact of lack of knowledge/vision, and that all manifestations of paradox are simply linear processes that loop, not truly contradictory in the sense of actual compresence…)
Some quotes of what he’s arguing about:
“One objection to the virtuous circle I have described is given by a figure I will call the greedy sceptic – one who wants to have the cake of semantic apocalypse, but who also eats it in order to fuel an all-out assault on the conceptual structure of thinking.”
“The greedy sceptic assertively claims that we do not know anything and we will never know anything, while at the same time confidently laying out a lavish theory of what he takes to be the case. In short, the greedy sceptic is someone who casually slips in and out of his complicated relationship with knowledge as he pleases.”
“The greedy sceptic is analogous to someone who eats his cake of semantic apocalypse while – to the ridicule of others – not realizing that the cake is actually made of semantic ingredients. Thus the greedy sceptic advocate of the blind brain ends up exposing himself as being doubly blind according to his own standards.”
He goes more in detail, but this is the general approach. And I think he eventually contrasts this with his own approach that supposedly doesn’t make similar mistakes? It should be interesting.
Yea, I’ve been reading it as well of late, and he does seem to generalize and pigeonhole various counter-stances without attacking personalities or actual specific philosophers, thinkers, critics, etc. I know he’s been lecturing recently on Nick Land’s and David Roden’s various theories against his own inhumanist stance on geist intelligence, etc. This one above does seem almost directed, indirectly, at Scott’s Blind Brain Theory and his own brand of skeptical ecologies of the mind. I see his project as almost taking up the old Intellect vs. Will of the Medieval debates between Realists and Nominalists in an altered key, using Hegelian Geist-intelligence against the philosophies of will of Schopenhauer, Nietzsche, Freud, Bataille, Derrida, etc. He has definitely situated himself within the analytical camp of Frege, Carnap, Sellars, Brandom, etc.
Yeah, there’s this interview where he goes all out against Nick Land, it’s quite good actually:
https://www.neroeditions.com/docs/reza-negarestani-engineering-the-world-crafting-the-mind/
The quotes above from the book aren’t indirect, maybe I haven’t been clear.
He directly links to the “Magic Show” paper and references that paper and Bakker’s name in the index, that’s why I spotted it. It’s all explicit and those two-three pages are directly addressing all that.
I only wish he engaged more directly with some of the rational idealism that I think has strong and interesting ideas on ontology. There’s an interesting debate between an Italian philosopher, Emanuele Severino, and Graham Priest on paraconsistent logics, but this is where I think idealism is more accurate: it holds that the principle of non-contradiction (PNC), in its Parmenidean formulation, has to be absolute, and so the building block of ontology and epistemology. From there it moves through phenomenology and the denial of the idea that things can “transform”. In my opinion you can wrap and actually embed all of that right into BBT…
So he makes a charge of performative contradiction? “You can’t make an argument against arguments without making an argument yourself, maaaan!” It’s not really a new approach.
It’s really hard to describe how toothless thought is against thought. I’d suggest Reza thinks the only thing that thought faces is other thoughts – thus the notion of performative contradiction because it appears all ‘the greedy skeptic’ has is more thoughts. And it appears to Reza you can’t eliminate thoughts with thoughts.
While Sco…ahem, I mean while ‘the greedy skeptic’ is trying to use thoughts to describe something else in play that can actually basically hack brains or in a way poison by pollution peoples brains.
If it were a more extreme effect, it would be as if the greedy skeptic were saying that bullets can blow brains out, and Reza were insisting that’s performative contradiction, that the idea of bullets can’t eliminate thoughts! That’s a thought eliminating a thought, Scott! You big silly!
So Reza and co. think the greedy skeptic is dealing in the idea of bullets… when actually he’s dealing in actual bullets. Micro effects that hack a brain more subtly than a bullet would, and micro pollution effects that damage a brain more subtly than a bullet would, granted. But subtlety doesn’t somehow make the effect drift into the magic world of ideas: they are still hard, physical effects, as much as a bullet to the brain is a hard, physical effect.
That said, these guys never turn on a dime, because they’re human and humans don’t turn on a dime. So I make a dumb argument, because it requires making a ninety-degree turn in thinking, and I give the argument to a device that doesn’t do that. It’s like trying to use a USB stick on a Commodore 64: that’s dumb, even as the USB stick makes perfect sense.
This is not directly related to your above comment, but it’s something I’ve mentioned to you before, but with a new (to me at least) buzzword:
ALGOCRACY (or the more technical-sounding “algorithmic governance”)… But if it does have the effect on democracy that the article says it might, then I guess it is a kind of subtle bullet.
What amazes me is how apparently easy it is to pass off responsibility to an object. Like people are baffled on who is to blame if a ‘self’ driving car runs into someone. And here fascistically inclined bureaucrats (whose inclinations are otherwise suppressed by a larger benign culture) can start passing off power to an object apparently quite easily, at which point the object can’t be held accountable and neither will anyone hold the/a bureaucrat (“Computer says no”) responsible. It’s like watching some kind of ball and cup game where the person just picks up the cup, takes the ball then puts it in his pocket…and yet people are still just looking at the cups wondering which cup the ball is under.
Ah, yes, all magic is ‘misdirection’: sleight-of-hand optics, the illusionary tripping of the mind’s own inherent tendency to be error-prone and fallacy-ridden. No wonder stage-magicians and tyrants have so much in common: both seek to manipulate the user’s inherent need to believe. “All the world’s a stage, And all the men…” merely fools manipulated by their own (technological) desires: “that ends this strange eventful history, Is second childishness and mere oblivion, Sans teeth, sans eyes, sans taste, sans everything.”
…And it occurs to me the idea of algocracy helps to make Scott’s point in the original post, by showing how the Enlightenment is going from enabling liberal democracy to destroying it, and doing so by replacing humans with “artifacts cuing intentional cognition ‘out of school,’ which is to say, circumstances cheating or crashing them.”
I wrote in comments to other posts about the surveillance state being built in China, and how it suggests the hopes some of us once had about the internet making the world more free are turning out to have been misplaced. As Scott puts it “scientific and moral progress will become decoupled.” It may well turn out that the most harmful (from Pinker’s perspective) kind of out of school cuing may well be the treatment of machines as if they were moral agents. Such treatment may also have the effect of encouraging bureaucrats to hide behind the decisions (if that’s the right word) of machines and thereby behave as if they were not moral agents.
I’m not sure I’d blame the Enlightenment for an efficiency fetish? In some ways to care seems to be to err and to err is human. An utterly efficient system will not err. The perfection is the error.
I think just world fallacy would be more the driving force of algocracy – that it’ll all work out for THE best (rather than A best). After all, it’s THE enlightenment, not A enlightenment. The enlightenment is not very enlightened about itself. Darkness that comes before and all that.
So has anyone stalked Scott well enough to know what he’s up to?
I hope he’s well and working on The No-God. I think I would blame the Enlightenment for modernity’s various efficiency fetishes. Surely this sort of thing:
https://en.wikipedia.org/wiki/Time_and_motion_study
could not have been invented in a society that had not undergone industrialization, and of course industrialization could not have happened absent the Enlightenment. From time-and-motion studies it’s just a short conceptual hop to just-in-time delivery, and then to the hyper-automated factories of Elon Musk’s imagination.
As you say, “the perfection is the error.” In the pre-Enlightenment world perfection was thought to be an attribute of God and available only in Heaven. The very idea that perfection of any kind was available in the material world is an Enlightenment idea.
I think algocracy is an example of artifacts cuing intentional cognition out of school. Liberal democracy is one of the Enlightenment’s crowning achievements, and destroying it by turning political power over to machines could make the harm society has done to itself by turning political power over to corporations seem like a paper cut. And it’s always good to remember that methods of oppression are first tried out against the poorest and least politically powerful segments of society, then they work their way up.
Why couldn’t industrialization happen without the Enlightenment? I mean, the Enlightenment includes stuff like ‘we probably shouldn’t torture prisoners of war and should probably look after them a bit’ (or as I understand it, it does).
Whether other things could ride along with Enlightenment like a parasite? Sure, could do.
I mean, were we all actually very nice to each other during the Dark Ages and before? And the Enlightenment wasn’t something that needed to occur? Sure, the Enlightenment is kind of a leash: do we need a leash, or are we actually all really fine without one?
But yes, once the leash is made could something else take hold of it and pull (even as the braiding of the leash starts to fall apart from doing so)? Potentially, yes.
“We are the priests
Of the temples
Of Syrinx.
Our great computers
Fill the hallowed halls.”
Neil Peart is a prophet.
Look up “The Labour of Ghosts”, where Bakker addresses Negarestani’s account.
If you have the money, the New Centre has tons of seminars conducted by Negarestani, including one on Plato, another on Kant’s Critique of Pure Reason, and another on philosophy of science.
Personally I’m hedging my bets on Bayesian predictive-coding approaches, using mathematical modeling culled from information theory and dynamical systems theory, giving us more than (purely) philosophical approaches will. Have you looked at Friston’s account of self-consciousness in terms of the temporal counterfactual depth of action modeling? You’d probably also be interested in Metzinger’s presentation of minimal phenomenal experience in terms of a predictive model of ascending reticular activating system input to the cortex.
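For anyone curious what the predictive-coding idea amounts to mechanically, here is a minimal toy sketch of the precision-weighted prediction-error update at its core. To be clear, this is my own invented illustration, not Friston’s actual formulation: the function name, the parameters, and the single-level setup are all assumptions, and the real hierarchical, dynamical free-energy models are far richer.

```python
# Toy single-level predictive-coding unit (invented illustration):
# the belief `mu` about a hidden cause descends the gradient of squared,
# precision-weighted prediction errors, balancing sensory data against
# a prior expectation.

def predictive_coding(observations, prior=0.0, pi_obs=1.0, pi_prior=0.5,
                      lr=0.1, steps=50):
    """Return the belief mu after minimizing precision-weighted errors."""
    mu = prior
    for _ in range(steps):
        for y in observations:
            eps_obs = y - mu        # sensory prediction error
            eps_prior = mu - prior  # deviation from the prior expectation
            # Precision-weighted errors pull mu toward the data and back
            # toward the prior; their balance sets the fixed point.
            mu += lr * (pi_obs * eps_obs - pi_prior * eps_prior)
    return mu

# A stream of identical observations at 1.0 settles the belief at the
# precision-weighted compromise pi_obs / (pi_obs + pi_prior) = 2/3.
belief = predictive_coding([1.0] * 10)
```

The fixed point is just the precision-weighted average of prior and data, which is the basic move the hierarchical versions elaborate across many levels and timescales.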
@VoidsIncision –
Scott Alexander has been blogging about Friston for a while now. Can you link to the relevant literature on Friston and self-consciousness?
Rant time:
I have to read Scott Alexander because Scott Bakker has been traversing The Outside or some shit. When he comes back, he’d better have a couple of Decapitants dangling on his belt and some stories about the Granary of Souls because I’m getting restless.
I mean, has he fucked around with the lobotomized GPT-2 yet? It’s awesome evidence for his post-posterity argument from back in 2011!
Odds and ends:
https://www.scientificamerican.com/article/there-is-no-such-thing-as-conscious-thought/
https://arstechnica.com/science/2018/12/how-computers-got-shockingly-good-at-recognizing-images/
https://getpocket.com/explore/item/the-empty-brain
https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
For what it’s worth, “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” is a bit much, since we can’t ensure that natural intelligence systems of the present are ‘robustly, verifiably safe.’
Intelligent machines are likely to be psychopaths, if only because… how would you build a training data set to teach machines morality? And one thing worth remembering about how children learn morality is that they learn it from people who are bigger and stronger than they are, and on whom they depend for most of their means of physical existence. Is it possible for machines to learn morality without having a childhood?
And some people say most humans are born with a capacity to learn morality, or an instinct for fairness. If humans are genetically predisposed to morality, it suggests teaching it to machines will be that much more difficult. On the other hand, maybe the techniques from the Ars Technica article I linked above can be adapted to teach morality. It would be interesting to see the attempt.
If we do attempt to teach machines morality using the same training methods used to teach image recognition, control of the training databases will be a source of great power, and therefore a bone of great contention. Remember the fights school boards used to have over how to treat evolution and intelligent design in science textbooks? Like that, but with guns.
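Just to make the “same training methods” point concrete, here is a minimal sketch of what that would look like: moral judgments treated as labels over a curated dataset, exactly as in image classification. Every feature, scenario, and label below is invented for illustration; the point is that whoever curates `labeled_cases` decides what the machine learns to call “moral.”

```python
# Toy "machine morality" trained like an image classifier:
# supervised learning over a curated set of labeled cases.
# Whoever controls this dataset controls the verdicts.

from collections import Counter

# Hypothetical feature vectors: (harm_caused, consent, deception)
labeled_cases = [
    ((0.9, 0.0, 0.8), "wrong"),
    ((0.8, 0.1, 0.9), "wrong"),
    ((0.1, 1.0, 0.0), "permissible"),
    ((0.2, 0.9, 0.1), "permissible"),
]

def judge(scenario, k=3):
    """k-nearest-neighbour moral 'judgment' over the training set."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labeled_cases, key=lambda c: dist(c[0], scenario))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Swap two labels in `labeled_cases` and the “moral sense” of the machine flips with them, which is exactly why the databases would be fought over.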
https://www.vox.com/future-perfect/2019/2/15/18226493/deepmind-alphafold-artificial-intelligence-protein-folding
“This will raise questions about, what is the nature of the scientific enterprise? What is it that we mean by “doing science”? Is science about understanding natural phenomena or about building mathematical models that’ll predict what will happen?”
What’s the difference?
A last bit on Chinese censorship:
This is a big part of what they want to be able to do with machine intelligence.
One last bit about the Enlightenment:
https://www.vox.com/conversations/2019/1/23/18128942/enlightenment-psychology-science-david-wootton
There are alternatives to the Semantic Apocalypse as post-Enlightenment futures, but as has been mentioned frequently on this blog, social realities are created and sustained by our beliefs. As beliefs become easier to manage, social reality becomes easier to manage, like this:
https://www.vox.com/policy-and-politics/2019/1/22/18177076/social-media-facebook-far-right-authoritarian-populism
and this:
https://www.vox.com/science-and-health/2019/1/23/18194717/alexandria-ocasio-cortez-ai-bias
It appears to me rationality can only exist in retrospect. The self-editing process begins only after the initial act. Then comes the reaction, and then recursion. In other words, we cannot behave rationally in any ‘moment’ until that moment is processed and recursion begins. Like me editing this post: my initial act was not rational, and slowly, recursively [Backspace], I tuned it to be so.
Love the blog,
exe
The Canny Valley
A game developer talks about how the realism of the game he was developing gave him a stress disorder and even got him involuntarily committed for a night.
More people being driven insane by the internet.
Or driven sane/driven neuro default, given our long history of actively engaging with supernatural themes.
Also, isn’t one of the Chinese social media knock-offs called Momo?
‘Seeing the Dark’: Grounding Phenomenal Transparency and Opacity in Precision Estimation for Active Inference
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5945877/
I suspect that for now this is just hype:
https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science
I don’t know if machines will ever be as good as Kellhus, but it seems reasonable to suspect they will eventually be as good as the average human. If you combine this technology with ubiquitous surveillance technology, you have the most powerful social control mechanism ever created, Brave New World verging into Hellstrom’s Hive. Then again, CRISPR might provide a biotechnology-based way to achieve the same results.
Then again again, I don’t see much about possible synergies between CRISPR based biotechnology and machine intelligence. I’ll have to think about it.
Speaking of hideous things to do with biotechnology, have any of us recently read Cyteen, by C. J. Cherryh? A favorite of mine.
Looks like an arms race situation to me: if there’s an advantage to be gained by fooling the machines, we’ll learn to do it. Even Esmenet learned to wear a mask around all the Dunyain fuckers.
And there is tons of possible synergy between machine learning and modern biotechnology. Right now we have enormous datasets that humans are extremely bad at assessing.
As for CRISPR specifically, not directly, but you might use machine learning to generate hypotheses about poorly-understood regulatory regions in the genome and test it by using CRISPR.
The answer to the headline question is probably no, but
http://www.bbc.com/future/story/20190326-are-we-close-to-solving-the-puzzle-of-consciousness
Brains:
https://www.theatlantic.com/science/archive/2019/04/scientists-partly-restore-activity-dead-pig-brains/587329/
And sadly, Callan, the big corporations are taking over the vat meat business:
https://www.theatlantic.com/health/archive/2019/04/just-finless-foods-lab-grown-meat/587227/
What happened to Scott? Is he still blogging? Nothing since November…
‘Need for cognition’ – that’s an interesting one (I guess I say with somewhat ironic cognitive interest in the subject)
The article seems quite in tune with the books, and with their attempt to reach multiple, increasingly incompatible empires.
It is rather like you try to get someone to engage with an idea and they just sort of lounge off it, like you proposed going jogging when it’s 6am and cold outside. “C’mon, let’s go!” and you’re halfway out the door, skipping from foot to foot, and they state something that is clearly from the couch/bed. I guess I hadn’t thought that, rather than resisting the argument per se, they were just not inclined to think on it, as if it were 6am. I wonder how much climate change denial is really just ‘It’s warm in bed, I don’t wanna get out!’?
Reminds me of Earwa’s Gods
you might enjoy this:
https://zerohplovecraft.wordpress.com/2018/05/11/the-gig-economy-2/
https://arxiv.org/abs/1905.12616
This is basically publicly available tech. (Although, hey, OpenAI went from non-profit to profit now… so that’s probably the end of that)
Who knows what the private and military sectors can do now? Never has the darkness that comes before us been so replete with currents and eddies whose provenance is so thoroughly inscrutable and whose pull is so maliciously subtle.
An idea about a narcissistic epidemic
Loved your take on Pinker here and missing your voice on this blog! Just got a few copies of some of your books at a used book sale and I realized that I sorta distribute them like the bible.