Writing After the Death of Meaning
by rsbakker
[Presented June 2nd, 2015, for the Posthuman Aesthetics Research Group at Aarhus University]
Abstract: For centuries now, science has been making the invisible visible, thus revolutionizing our understanding of and power over different traditional domains of knowledge. Nearly all the speculative phantoms have been exorcised from the world, ‘disenchanted,’ and now, at long last, the insatiable institution has begun making the human visible for what it is. Are we the last ancient delusion? Is the great, wheezing heap of humanism more an artifact of ignorance than insight? We have ample reason to think so, and as the cognitive sciences creep ever deeper into our biological convolutions, the ‘worst case scenario’ only looms darker on the horizon. To be a writer in this age is to stand astride this paradox, to trade in communicative modes at once anchored to our deepest notions of authenticity and in the process of being dismantled, or worse, simulated. If writing is a process of making visible, communicating some recognizable humanity, how does it proceed in an age where everything is illuminated and inhuman? All revolutions require experimentation, but all too often experimentation devolves into closed circuits of socially inert production and consumption. The present revolution, I will argue, requires cultural tools we do not yet possess (or know how to use), and a sensibility that existing cultural elites can only regard as anathema. Writing in the 21st century requires abandoning our speculative past, and seeing ‘literature’ as praxis in a time of unprecedented crisis, as ‘cultural triage.’ Most importantly, writing after the death of meaning means communicating to what we in fact are, and not to the innumerable conceits of obsolescent tradition.
So, we all recognize the revolutionary potential of technology and the science that makes it possible. This is just to say that we all expect science will radically remake those traditional domains that fall within its bailiwick. Likewise, we all appreciate that the human is just such a domain. We all realize that some kind of revolution is brewing…
The only real question is one of how radically the human will be remade. Here, everyone differs, and in quite predictable ways. No matter what position people take, however, they are saying something about the cognitive status of traditional humanistic thought. Science makes myth of traditional ontological claims, relegates them to the history of ideas. So, all things being equal, we should suppose that science will make myth of traditional ontological claims regarding the human as well. Declaring that traditional ontological claims regarding the human will not suffer the fate of other traditional ontological claims more generally amounts to declaring that all things are not equal when it comes to the human, that in this one domain at least, traditional modes of cognition actually tell us what is the case.
Let’s call this pole of argumentation humanistic exceptionalism. Any position that contends or assumes that science will not fundamentally revolutionize our understanding of the human supposes that something sets the human apart. Not surprisingly, given the underdetermined nature of the subject-matter, the institutionally entrenched nature of the humanities, and the human propensity to rationalize conceit and self-interest, the vast majority of theorists find themselves occupying this pole. There are, we now know, many, many ways to argue exceptionalism, and no way whatsoever to decisively arbitrate between any of them.
What all of them have in common, I think it’s fair to say, is the signature theoretical function they accord to meaning. Another feature they share is a common reliance on pejoratives to police the boundaries of their discourse. Any time you encounter the terms ‘scientism’ or ‘positivism’ or ‘reductionism’ deployed without any corresponding consideration of the case against traditional humanism, you are almost certainly reading an exceptionalist discourse. One of the great limitations of committing to status-quo underdetermined discourses, of course, is the infrequency with which adherents encounter the limits of their discourse, and thus run afoul of the same fluency and ‘only game in town’ effects that render all dogmatic pieties self-perpetuating.
My artistic and philosophical project can be fairly summarized, I think, as a sustained critique of humanistic exceptionalism, an attempt to reveal these positions as the latest (and therefore most difficult to recognize) attempts to intellectually rationalize what are ultimately run-of-the-mill conceits, specious ways to set humanity—or select portions of it at least—apart from nature.
I occupy the lonely pole of argumentation, the one that says humans are not ontologically special in any way, and that accordingly, we should expect the scientific revolution of the human to be as profound as the scientific revolution of any other domain. My whole career is premised on arguing the worst case scenario, the future where humanity finds itself every bit as disenchanted—every bit as debunked—as the cosmos.
I understand why my pole of the debate is so lonely. One of the virtues of my position, I think anyway, lies in its ability to explain its own counter-intuitiveness.
Think about it. What does it mean to say meaning is dead? Surely this is metaphorical hyperbole, or worse yet, irresponsible alarmism. What could my own claims mean otherwise?
‘Meaning,’ on my account, will die two deaths, one theoretical or philosophical, the other practical or functional. Where the first death amounts to a profound cultural upheaval on a par with, say, Darwin’s theory of evolution, the second death amounts to a profound biological upheaval, a transformation of cognitive habitat more profound than any humanity has ever experienced.
‘Theoretical meaning’ simply refers to the endless theories of intentionality humanity has heaped on the question of the human. Pretty much the sum of traditional philosophical thought on the nature of humanity. And this form of meaning I think is pretty clearly dead. People forget that every single cognitive scientific discovery amounts to a feature of human nature that human nature is prone to neglect. We are, as a matter of empirical fact, fundamentally blind to what we are and what we do. Like traditional theoretical claims belonging to other domains, all traditional theoretical claims regarding the human neglect the information driving scientific interpretations. The question is one of what this naturally neglected information—or ‘NNI’—means.
The issue NNI poses for the traditional humanities is existential. If one grants that the sum of cognitive scientific discovery is relevant to all senses of the human, one could safely say the traditional humanities are already dwelling in a twilight of denial. The traditionalist’s strategy, of course, is to subdivide the domain, to adduce arguments and examples that seem to circumscribe the relevance of NNI. The problem with this strategy, however, is that it completely misconstrues the challenge that NNI poses. The traditional humanities, as cognitive disciplines, fall under the purview of the cognitive sciences. One can concede that various aspects of humanity need not account for NNI, yet still insist that all our theoretical cognition of those aspects does…
And quite obviously so.
The question, ‘To what degree should we trust ‘reflection upon experience’?’ is a scientific question. Just for example, what kind of metacognitive capacities would be required to abstract ‘conditions of possibility’ from experience? Likewise, what kind of metacognitive capacities would be required to generate veridical descriptions of phenomenal experience? Answers to these kinds of questions bear powerfully on the viability of traditional semantic modes of theorizing the human. On the worst case scenario, the answers to these and other related questions are going to systematically discredit all forms of ‘philosophical reflection’ that fail to take account of NNI.
NNI, in other words, means that philosophical meaning is dead.
‘Practical meaning’ refers to the everyday functionality of our intentional idioms, the ways we use terms like ‘means’ to solve a wide variety of practical, communicative problems. This form of meaning lives on, and will continue to do so, only with ever-diminishing degrees of efficacy. Our everyday intentional idioms function effortlessly and reliably in a wide variety of socio-communicative contexts despite systematically neglecting everything cognitive science has revealed. They provide solutions despite the scarcity of data.
They are heuristic, part of a cognitive system that relies on certain environmental invariants to solve what would otherwise be intractable problems. They possess adaptive ecologies. We quite simply could not cope if we were to rely on NNI, say, to navigate social environments. Luckily, we don’t have to, at least when it comes to a wide variety of social problems. So long as human brains possess the same structure and capacities, the brain can quite literally ignore the brain when solving problems involving other brains. It can leap to conclusions absent any natural information regarding what actually happens to be going on.
But, to riff on Uncle Ben, with great problem-solving economy comes great problem-making potential. Heuristics are ecological; they require that different environmental features remain invariant. Some insects, most famously moths, use ‘transverse orientation,’ flying at a fixed angle to the moon to navigate. Porch lights miscue this heuristic mechanism, causing the insect to chase the angle into the light. The transformation of environments, in other words, has cognitive consequences, depending on the kind of shortcut at issue. Heuristic efficiency means dynamic vulnerability.
And this means not only that heuristics can be short-circuited, they can also be hacked. Think of the once omnipresent ‘bug zapper.’ Or consider reed warblers, which provide one of the most dramatic examples of heuristic vulnerability nature has to offer. The system they use to recognize eggs and offspring is so low resolution (and therefore economical) that cuckoos regularly parasitize their nests, leaving what are, to human eyes, obviously oversized eggs and (brood-killing) chicks that the warbler dutifully nurses to adulthood.
All cognitive systems, insofar as they are bounded, possess what might be called a Crash Space describing all the possible ways they are prone to break down (as in the case of porch lights and moths), as well as an overlapping Cheat Space describing all the possible ways they can be exploited by competitors (as in the case of reed warblers and cuckoos, or moths and bug-zappers).
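To make the ecological point concrete, here is a minimal sketch in Python (an invented toy for this post, not anything presented at Aarhus; the fly() routine, the coordinates, and the bearing are all assumptions chosen for illustration). The same fixed-bearing rule yields a nearly straight flight path when the light sits at an effectively infinite distance, like the moon, and tightens into a capture spiral around a nearby porch light.

```python
import math

def fly(moth_xy, light_xy, bearing_deg=60.0, step=1.0, n_steps=200):
    """Transverse orientation: at every step, steer so the light sits at a
    fixed bearing off the current heading, then move one step forward."""
    x, y = moth_xy
    lx, ly = light_xy
    offset = math.radians(bearing_deg)
    path = [(x, y)]
    for _ in range(n_steps):
        to_light = math.atan2(ly - y, lx - x)  # direction of the light now
        heading = to_light - offset            # hold the light at a fixed angle
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

def distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

# 'Moon': a light so distant its bearing barely changes. The heuristic
# works, producing an almost perfectly straight path.
moon_path = fly((0.0, 0.0), (1e9, 5e8))
straightness = distance(moon_path[0], moon_path[-1]) / (200 * 1.0)
print("moon: displacement / path length =", round(straightness, 3))  # ~1.0

# 'Porch light': the identical rule, but the invariant (distance) is gone.
# The fixed bearing now sweeps the heading around, spiralling the moth in.
porch = (50.0, 30.0)
porch_path = fly((0.0, 0.0), porch)
print("porch: start distance", round(distance((0.0, 0.0), porch), 1),
      "-> final distance", round(distance(porch_path[-1], porch), 1))
```

Note that the rule itself never changes; only the environmental invariant it silently depends on does. Place the light deliberately, as a bug zapper does, and the same Crash Space becomes Cheat Space.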
The death of practical meaning simply refers to the growing incapacity of intentional idioms to reliably solve various social problems in radically transformed sociocognitive habitats. Even as we speak, our environments are becoming more ‘intelligent,’ more prone to cue intentional intuitions in circumstances that quite obviously do not warrant them. We will, very shortly, be surrounded by countless ‘pseudo-agents,’ systems devoted to hacking our behaviour—exploiting the Cheat Space corresponding to our heuristic limits—via NNI. Combined with intelligent technologies, NNI has transformed consumer hacking into a vast research programme. Our social environments are transforming, our native communicative habitat is being destroyed, stranding us with tools that will increasingly let us down.
Where NNI itself delegitimizes traditional theoretical accounts of meaning (by revealing the limits of reflection), it renders practical problem-solving via intentional idioms (practical meaning) progressively more ineffective by enabling the industrial exploitation of Cheat Space. Meaning is dead, both as a second-order research programme and, more alarmingly, as a first-order practical problem-solver. This—this is the world that the writer, the producer of meaning, now finds themselves writing in as well as writing to. What does it mean to produce ‘content’ in such a world? What does it mean to write after the death of meaning?
This is about as open as a question can be. It reveals just how radical this particular juncture in human thought is about to become. Everything is new here, folks. The slate is wiped clean.
[I used the following possibilities to organize the subsequent discussion]
Post-Posterity Writing
The Artist can no longer rely on posterity to redeem ingroup excesses. He or she must either reach out, or risk irrelevance and preposterous hypocrisy. Post-semantic writing is post-posterity writing, the production of narratives for the present rather than some indeterminate tomorrow.
High Dimensional Writing
The Artist can no longer pretend to be immaterial. Nor can they pretend to be something material magically interfacing with something immaterial. They need to see the apparent lack of dimensionality pertaining to all things ‘semantic’ as the product of cognitive incapacity, not ontological exceptionality. They need to understand that thoughts are made of meat. Cognition and communication are biological processes, open to empirical investigation and high dimensional explanations.
Cheat Space Writing
The Artist must exploit Cheat Spaces as much as reveal Cheat Spaces. NNI is not simply an industrial and commercial resource; it is also an aesthetic one.
Cultural Triage
The Artist must recognize that it is already too late, that the processes involved cannot be stopped, let alone reversed. Extremism is the enemy here, the attempt to institute, either via coercive simplification (a la radical Islam, for instance) or via technical reduction (a la totalized surveillance, for instance), Orwellian forms of cognitive hygiene.
while you’re at it
http://www.theonion.com/article/existentialist-firefighter-delays-3-deaths-17500
Those guys at The Onion take things way, way too seriously…
just 4 u: http://www.theonion.com/article/frustrated-novelist-no-good-at-describing-hands-33239
http://philosophybites.libsyn.com/pat_churchland_on_eliminative_materialism
Scott, thought this might interest you.
http://www.newstatesman.com/2015/05/neil-gaiman-kazuo-ishiguro-interview-literature-genre-machines-can-toil-they-can-t-imagine
Thanks for this. A fascinating, and curiously disheartening discussion.
Reblogged this on dark ecologies and commented:
Scott is at it again, deflating the bubbles of that last remaining bastion of humanistic belief: human exceptionalism.
“My artistic and philosophical project can be fairly summarized, I think, as a sustained critique of humanistic exceptionalism, an attempt to reveal these positions as the latest (and therefore most difficult to recognize) attempts to intellectually rationalize what are ultimately run-of-the-mill conceits, specious ways to set humanity—or select portions of it at least—apart from nature.”
I had no idea this had published to the main page! Here I thought I had tucked it into the Essays… I was planning to do an update linking it this Sunday.
But this suits me just fine. Thanks, Craig!
Haha… it’s like one of those Freudian slips of the tongue… but in this case a web slip with unexpected consequences!!
is there not a residue of exception in that apparently only humans do science? here i suppose i would ask whether the biomechanical conception of cognition forces a revision in the understanding of what science is. how, in other words, do you see science as being continuous with prescientific activity in the organic world? i take it that you do think there is a kind of discontinuity here, but you just don’t think it warrants thick ontological posits in order to account for it?
Precisely, no more than any other game-changing adaptation, anyway. I’m far, far from having any complete account of scientific cognition, but I see it as operating in a somewhat similar manner to the way I see ‘consciousness’ acting: as a ‘workspace’ allowing the selection, ‘broadcast,’ and ‘stabilization’ of information, allowing the application of existing cognitive resources to new problem ecologies. Institutional regimentations knap this into ever more effective behavioural ensembles.
In some ways you’re describing what D&G term an Abstract Machine:
A Thousand Plateaus:
The abstract machine in itself is destratified, deterritorialized; it has no form of its own (much less substance) and makes no distinction within itself between content and expression, even though outside itself it presides over that distinction and distributes it in strata, domains, territories. An abstract machine in itself is not physical or corporeal, any more than it is semiotic; it is diagrammatic (it knows nothing of the distinction between the artificial and the natural either). It operates by matter, not by substance; by function, not by form. Substances and forms are of expression “or” content. But functions are not yet “semiotically” formed, and matters are not yet “physically” formed. The abstract machine is pure Matter-Function – a diagram independent of the forms and substances, expressions and contents it will distribute. (A Thousand Plateaus, p. 141)
In this sense it’s a Heuristic Machine as functional device and global workspace for data mining reality…
All I mean is just meat and bones, as tracked and hacked via scientific cognition. I fear Deleuze just strikes me as a step back into dogmatism anymore, a place to strike matches just to see how long they’ll burn. Meanwhile there’s nothing paradoxical about science uncovering its own natural conditions, so why not curtail one’s ontological commitments? Shrug our shoulders a bit sooner than we’re used to.
I understand your point, just not the path… think of your own writing in that sense: why write fantasy since science is the better path? Money? Economics? Fame? etc. What’s the point of even blogging? Why question anything if you’re not a scientist? What’s the point?
Why? Because we are not just scientists: and these other facets of life aren’t science, philosophy isn’t trying to answer the same questions nor to solve the same puzzles you assume they are. It’s a category mistake to assume philosophical speculation should be reduced to scientific description, or vice versa… they are of different levels and domains of thought and problematique. Even if some philosophers take science as one of the conditions for their thought… it is not the only one.
In what you’re saying: Why write at all, since the semantic apocalypse forbids human knowledge other than scientific statement? Let’s all quit thinking and writing and let our Masters, the specialists, the scientists of all the various sciences do our thinking for us; and, by the way, let them rule us from their high towers too.
Sometimes your statements end in what they are: the dust heap of meaninglessness. Your very blogging supports the fact that you want to communicate and argue with philosophers about their very tool-sets and methodologies; as well as get feedback on your own philosophical, not scientific, theoretic – the Blind Brain Theory. A theory, not a fact, but rather an interpretative or heuristic framework or enframing of certain aspects of a domain of facts – a level of abstraction onto that domain.
Why this continuing battering of the obvious? I’m no philosopher, but I don’t agree with you that it’s just going to go away, nor that there isn’t value in the questions it poses. We’re all building models, designing heuristic devices to probe reality with various tools. Why restrict it to one set: why reduce it all to the monocultural blinkers of the neurosciences or any other branch of the sciences? Why not become a science writer rather than a fantasy writer? Since this seems to be the mode of truth you find so powerful.
Continually you berate philosophers in the sense of just as you say: shrug… it’s all moot anyway. I don’t see Deleuze as a dogmatist of any stripe or form… his tools changed according to the very nature of the problems… whereas you go in the same round of basic claims in a form of endless repetition that never escapes the fetish of your own impartial dogmatism. There is no room for other forms of thought, even from various scientists that you’ve dealt with. Why? Why do you think you have a corner on truth? Why are you yourself so dogmatic? Ask yourself that, take a step back and look hard…
I like you, respect you, but sometimes you are exasperating. With you there is no actual conversation, because you’ve got your mind made up… that’s dogmatism, my friend, pure and simple.
(Hopefully you won’t take this to heart! Just being my usual bullish self.)
It ain’t nearly so complicated. Time is short. Traditional philosophy is largely a parlour game. Everywhere around us, genuinely unprecedented shit, existentially momentous shit, is happening. The tradition couldn’t solve any of the traditional questions, so why suppose it’ll solve anything now?
I got nothing cornered. I don’t need to, to eschew metaphysics. A yawn serves as counter-argument enough. We need fucking answers man, not another hit from the opium bowl!
Yet, to lay one’s hands down and think that science will save us from ourselves is a mistake as well. Yes, it may be the last great hope for mankind (Babylon 5 :)), but I’m not going to quit arguing, fussing, thinking, inventing possibilities, fictionalizing, philosophizing… just because I’m in a sandbox with the other morons. I might as well put in my own moronic blather… what else should I do with my old age? So far scientists themselves can’t agree on climate change or what we should do either; nor is science even prepared to take over the ethical and normative, nor the economic or political domains… so who will? Obviously philosophy is not the ruler of its own house much less of anyone else’s… but I don’t think those who have actually made a difference – let’s say about 15 philosophers in 2000 years – would say they actually made much difference in their own times either… they’d more than likely side with you on that one. But then have any scientists solved the world’s issues in the past 2000 years? Their domain of study, research, and knowledge – the methodology of the sciences – is situated to study the natural order in its extent, including humanity in so far as it is part of that order…. the rest is part of that fantasy world of culture we call the social world. That’s a no man’s land of ifs rather than facts…
So here’s an opium bowl from one moron to… shall I say, friend, to another 🙂
True. My argument is just that science is going to lay out the facts of human nature, and that there’s no more pretending these facts aren’t relevant than there is not going to the doctor. Scientific cognition will not take over from intentional cognition, it’ll just tell us what intentional cognition is, and if I’m right, then that will turn out to be nothing intentional. The hope is that empirically understanding social cognition will allow us to adapt to the crazy problem-ecologies developing all around us, to begin philosophizing anew. It’s a long shot, but it’s the only one that I can see. If it is just another brand of opium, then it behooves us to make the dreams interesting at least!
Yea, I know we both agree on this. Philosophers must take into consideration the findings of the sciences. Hell, that’s a central point in Badiou, and Zizek. Badiou in fact insists on it: that the sciences are a condition of philosophy, not the other way round. And, to me philosophy should not pretend to deal with what the sciences understand in the factual sense. Philosophy is not science… yet, as we both agree, there is a need for some form of collective approach, a non-intentional or even folk-psychological portrait that brings these pure mathematical and empirical data into the parlance of the human speak-easy. Instead of a philosophy of Mind, etc. that turns in circles and can never be done… there are other aspects of existence that exist outside the box of philosophical discussions of Mind, Consciousness, etc. Philosophy in this sense will not go away… it was never bound to the world of perception, consciousness, etc. Only certain individual philosophers and their methodologies in the past two hundred years bound us to this mind/world problem, etc.
Time to move on… and, furthermore, why even philosophy? Maybe as in Deleuze & Guattari, Badiou, Laruelle and others a form of non-philosophy or anti-philosophy moves forward…. People won’t quit thinking just because science has declared thought off limits… People will go on, and even include the scientific findings within their work, just like they always have. Why this whole need to do away with philosophy as if it was always centered only on consciousness, etc.? Cognition is the least of it. Political philosophy, technology, etc… As you say, some scientists like E.O. Wilson are already doing this: the notion of sociality and how it touches on our neural and physical substrates is very much a needed aspect of our understanding. There seems to be so much misunderstanding as to what philosophy is. Rather than continuing to see philosophy as just a DIY Science Tool-Kit that is no longer needed… maybe you should broaden the terms. I know your enemy is the phenomenological tradition: the intentionalists, etc. These seem to be your main culprits. But there are other forms of speculation out there…
It’s triage time. I’ve found a plausible way to clear aside a good number of the old confusions; the question for me is one of how to move forward. This is what Crash Space and Cheat Space are all about: creating ways of conceptualizing our dilemma that plug into our growing understanding of human nature (and the boggling possibilities leveraged by this understanding).
Now we’re talking… go on… Fill this conceptuality out in pragmatic form, apply it to specific parts of the material world and sociality. Show us how such concepts can be applied to actual objects, things, entities; and, processes in the real world.
An abstract machine sounds a little bit like a corporation.
Michael Murden, it’s more general than that. In “War In the Age of Intelligent Machines” De Landa gives two senses to abstract machines. One would be implementation-independent models of systems: Turing machines, the basic architecture of negative feedback loops, etc. The other, I think more aligned with D/G’s original notion, is the abstracted features of an assemblage which shape the circuit between the virtual and the actual for that machine. For complex systems with many kinds of behaviors or functions there may be many associated abstract machines. He gives the example of a choke bore for a shotgun. Varying the diameter of the choke changes the probability distribution of the spread of the shot. Whereas the color the bore is painted is irrelevant to the abstract machine of “concentrating projectiles”. Corporations per se are not abstract machines but have their associated abstract machines. You might say you know that real wages won’t increase 500% over the next two decades the way corporate executive salaries increased over 500% since the 80s because you know something about the abstract machine which shapes the possible regime of admissible behaviors for corporate entities.
This is a great example of the problem: What does the posit ‘abstract machine’ add to understanding in this instance? Why not say you know employee wages won’t increase 500% because you know economics? Either it’s an interpretative/communicative metaphor or it’s a genuine posit. If the former, then there’s nothing shaping ‘regimes of admissible behaviours’ apart from the system itself, and if the latter, then it is a wheel that does not seem to turn.
It’s no different with Normativism: you scaffold phenomena with generalizations, then claim those generalizations EXPLAIN (drive, make possible) the phenomena so scaffolded. You don’t need to read much traditional philosophy to realize humans have a weakness for this approach (despite millennia of chronic underdetermination), but on a naturalist view, it pretty clearly gets things backwards.
the de landa lecture on topological thinking goes into abstract machines. For simple physical systems it’s just the phase portrait with its associated singularities which shape and parse the behavior of the system into its different regimes.
I’m not familiar with De Landa’s work – does he have a theory of meaning?
If you read Deleuze against the backdrop of pre-Kantian philosophy (as I once did), you see quite quickly that pre-Kantian philosophy is what he’s doing (which is what makes him so attractive to those allergic to ‘correlation’). Since the only way to pluck conceptual exaptations from conceptual ‘Crash Space’ is via efficacies, actual systematic difference-making differences, the proof will always be in the pudding. These are interesting ways of looking at things, to be sure, but there are innumerable interesting ways.
Curious as to why you think Deleuze / Guattari are dogmatic here, Scott?
In the Kantian sense: because they are ontologically short an account of HOW the entities/relations posited could be cognized.
amusingly, in the question session of his lecture on deleuze and death that was Ray’s issue with the derridean suite of quasi-concepts: iteration, trace, and so on. How did he know them and what explanatory purchase did they have? Intensive difference good, iteration bad. This I think is what you are saying. There’s no cognitive criterion of adequacy to choose between any of these competing stories. it’s analogous to religion here. i’ve actually made the defense of deleuze before that he is used as an influence in art. but… so are religious discourses. de landa claims to have a social theory that is based off of assemblage theory, and i figure if anywhere that’s where he’d give an account of meaning. in the lecture on social systems he just takes signification and significance as two aspects of meaning which are real / ineliminable / efficacious or whatever.
Exactly. I do think the skeptics have always had the better argument–far and away so. The difference now is that cognitive science is providing ways to see why this is so, and why underdetermination in contexts like philosophy and religion should be understood symptomatically, as examples of cognitive ‘crash space.’
No continental philosopher has a theory of meaning, so in a sense it’s a trick question. The point is to simply underscore how little any of these guys understand their own problematics, let alone the posits they adduce to ‘solve’ them. How can they assert the ‘ontological priority’ of this or that intentional discourse when they have no clue as to what intentionality could be?
Click to access RayBrassierDelevelingAgainstFlatOntologies.pdf
Here Brassier somewhat agrees that De Landa is dogmatic in the Kantian sense. Brassier thinks that De Landa confuses causation and justification when he claims that the philosopher or even the experimental scientist knows just by virtue of being able to set up an isomorphism, where the abstract machine of their embodied practical know-how “draws out” the interesting problem singularization from the virtual. On this account knowledge becomes less about true or false representations and more about sieving out the interesting problems from the routine or dull problems, which amounts to De Landa privileging nonlinearity over linearity, far-from-equilibrium behavior over equilibrium behavior, variety over redundancy, intensity over inactivity, production over stasis, and so on. For Brassier this just dislodges isomorphism from the level of linguistic representation (“bad” propositional knowing-that, which from De Landa’s view is symptomatic of transcendentalism) to sub-representational knowing-how (for De Landa models aren’t linguistic propositional structures), and it becomes spooky because how the philosopher or scientist is able to ‘become the quasi-causal operator’ to sieve out virtual intensities is absolutely unclear. How do models causally relate to the processes they model? For Deleuze it was simply intuition, but this is obviously a bad answer for someone trying to marshal a case for the empirical responsibility of an ontological account, and it’s completely unclear on De Landa’s story.
The big problem he raises is the problem I always raise as well simply because it’s the problem faced by speculative realism in general: the need for some plausible theory of meaning. (Were you around back when I had Bryant wriggling from that hook, Div?) Brassier writes:
I actually think of this as the Humean problem (which Kant attempts to solve), but the upshot is clear: the problem of flattening ontologies, in other words, is the problem of naturalizing meaning. As far as I can tell, the spec-realism crew simply shun, ignore, and stomp their feet on the issue. They seem to think that identifying an issue (under a neologism, ‘correlation’) allows them to obviate it. Could you imagine Chalmers recasting Levine’s explanatory gap as the ‘hard problem’ then declaring it solved! This is the spec-realism ‘strategy.’
Brassier’s Sellarsian solution is to adopt a make-believe transcendentalism, to render unto science what belongs to science, and no more–without, of course, reading any science. You raise a transcendental normative metaphysics on the basis of intellectually perceived ‘functions’ that somehow ‘supervene’ on the empirical realities involved without any account of intellectual perception or the ontological status of the functions perceived or the nature of their emergence from or interaction with nature. The fact that they’re make-believe (yet necessarily binding!) is supposed to be enough to provide a ‘fat-as-flat-can-be’ ontology.
My answer is so much more simple… and frightening.
Do you have the link to that exchange with Bryant?
I am looking through Deleuze’s book The Logic of Sense. In the span of a few pages he likens sense to ‘non-being’, to being a ‘thin film’, a ‘frontier of mist’, and ‘the mere result of bodies and passions’, or enclosure within a sphere, the indifference of what is denoted to the sense of the denotation, and even something like what you described as performance-reference asymmetry (‘my impotence to say the sense of what i say, at the same time to say something and its meaning’)
“All art constantly aspires to the condition of music.”
Walter Pater
What would a non-representational literature look like? I think that the seeming emphasis within some literary fiction on elegantly crafted sentences is an attempt to make literature that is more musical and less representational. If meaning is really dead in the ways you describe, it will no longer be possible to write something like Hamlet or Don Quixote non-ironically. Indeed, your argument suggests it may in the not too distant future be possible for literary works which present seemingly real persons to be created by machines. If so, and if the literature machines present themselves as flesh and blood people, the theoretical and practical deaths of meaning will be elegantly united.
This suggests to me that in the future there will be two kinds of literary art. The first will be increasingly abstract and musical, and created by humans fighting a rear guard action against the rise of the machines. The second will be Akratic, Disneylike, and created by machines to deliver us to and soften us up for advertisers.
Have you checked this out?
That’s a cool possibility–very cool. Depending on the complexities of the machines, tho, there’s a good chance that the rear guard is actually the cutting edge!
I did, just now. I wonder if neurological science will do something to literature like what photography has done to painting. That’s the nearest example I can think of in terms of a technology forcing an artistic practice to reinvent itself. I suppose one might also argue that the novel superseded the epic poem as the typical long narrative around the same time the modern conception of the self was being invented. If the modern conception of the self that you call ‘humanistic exceptionalism’ passes into myth and if the novel is the characteristic literary form that goes with that concept the novel may be superseded at some point in the not too distant literary future. Perhaps video games will become the typical long narrative form of the 21st century. If human beings come to see themselves as defined by their actions more than by their thoughts a video game that allows human beings to act, as themselves or as avatars, might be better suited to the conditions of the 21st century than literature.
Life as the pursuit of fitness indicators. The apotheosis of Akratic Society is pretty much what you describe, I think. The proliferation of virtual worlds possessing virtual problems awaiting virtual solution. This is where I head with the next Disney piece, anyway.
but it’s only intensifying. here i think this is where nick land’s own account of capitalism is just empirically wrong, and where baudrillard’s story made more sense to me. forms of subjectivation are proliferating along with the proliferation of how people today brand their lifestyles along narrow lines, as every activity and consumption carries with itself its corresponding narrativisations, norms, values, and so forth– with the advent of shared-interest spaces, one can have many different non-overlapping lines of subjectivation which require a “metasemanticization narrative” (Floridi) or an even more inflated metacognitive self to coordinate all these different disjoint self-expressions and self-constructions.
just to use an example. back in the 70s if you were into BDSM maybe you occasionally would get together at a party, go to a dungeon, see your dominatrix or whatever, but now you can be logged into Fetlife… all the fucking time. Go there and read some of the stuff people write if you think the self is going anywhere. You know all that hardcore pushing of experiential boundaries, peak experiences, limit experiences, pain, transgressive sex, fetishistic objectification… it’s all as intentional if not more so than what you encounter at work, or with your friends and family. Very sophisticated normative and subjective narrativization occurs within those spaces. So, you see, the proliferation of access to narrow informational spaces which encourage certain subjectivation processes are able to self-amplify as these spaces become progressively microinfoecological enclosures that have few lines of connection to the traditional shared social spaces that everyone in a society is accommodated to (again, it was already fractionated… I remember thinking how radically different different friends’ families were growing up, but there were at least common structural terms which enabled coordination).
The difficulty here becomes that many of these spaces are being juggled at the same time by people, and the different spaces themselves often don’t overlap. I wonder if the brain has limits to the amount of switching and coordination it can do between these spaces, and I wonder if this itself could account for much of psychopharmacological drug use. What I could see here is heuristic overextension and gear grinding. And this is even empirically verified for basic social relational differentiation in modernity. They have found that fathers who are the primary breadwinner, who have to have calculative concerns in the forefront of their mind, actually have decreased capability for emotional relatedness and emotional nuance compared to mothers who spend most of the time raising the children, and this is confirmed with brain imaging.
It’s almost as if the modern metacognitive self was a necessary invention, the agent who can posit the linkages that bring the proliferating functionally specialized social informational spaces of modernity into some kind of coherence.
I agree with the thrust of what you’re saying, but ‘subjectivation’ is still too bound to semiotic ways of looking at this problem, I think. Looking at individuals as ‘dispositional nodes’ (and ‘subjectivity’ as an artifact of various communicative dispositions) provides a way of looking at cultural balkanization that has longer legs. If you think about the classic example of socialization shock, the returning veteran, the problem is far better understood in terms of mismatches between expectations and behaviours rather than the possession of ‘different selves.’ It also allows you to see the way social media dissociates expectations and behaviours within individuals: as soon as you possess the technical means to police the information available to others, to manipulate their expectations, then you have essentially begun self-policing as well, condemned yourself to endless metacognitive vigilance, the need to nip those expectation-busting behaviours before they even happen. Before we could simply lie. Now we have to live the lie… or keep our damn mouths shut, which seems to be the favourite strategy of a great many today.
The problem then becomes one of keeping track of one’s claims vis-à-vis expectations. The more one has to track, the more stress one feels, the more alienated, and so on and so on.
I still gotta get the new Floridi book.
I did not know until just now, but apparently you can buy tools and weapons within a role playing video game for real world money.
And theft and crime are rampant!
His work is interesting. It’s Kantian but without the horrorist tinge that you find in Negarestani, although he and Negarestani come to many of the same formulations. Floridi’s forthcoming paper ‘Plea for Antinaturalism’ should be a real knee slapper.
and just like Negarestani, he doesn’t think philosophy is mere conceptual science, as someone like McGinn does, but he thinks it’s full-blown conceptual *engineering*
A craftsman must hock his wares, even if they happen to be shower curtain rings…
Are you tired of your concepts letting you down?
Do you hate the way other philosophers roll their eyes in your company?
Try Eezee Norms from Pragmatico!
re expectations. i think the concern would usually be something like ‘what holds them together’: why do some behavioral and expectation clusters tend to form constellations? did you see where wolfendale proposed to identify ‘concepts’ with deleuzian virtual-multiplicities? he even proposes to read idioms like ‘slippery slope’ in terms of the trajectories in the possibility space of argumentation! you could see how this could lead straightforwardly to a concept of a subject as the abstract machine or virtual multiplicity whose ‘critical points’ stitch together or orient the various commitments and expectations of their corresponding ‘actual person’. but i think he kind of ran afoul of his own treatment of brandom (ie his claim that brandom is compatible with eliminativism). in deleuze virtual multiplicities are REAL, so I don’t see how he gets away with saying concepts are virtual, but not under the purview of the natural sciences (which can investigate state spaces / phase spaces and so on just fine in other areas, so what would stop them from doing it on the lonely island of the conceptual?)
while i tend to want to agree with you, it more amazes me how the very notion, the meaning, you have gained, is then applied to a progress in the stature or conceptual arena of humanity that then really only and conveniently avoids the meaning you have gained. I can only take this contradiction to mean, then, that your meaning only has credence to the extent that you are attempting to assert your primacy despite the collapse of meaning you propose upon. In other words: you are making a proposal of the repercussions of ‘non-exceptional’ being based upon your having an exceptional position by which to make the proposal. Maybe you have accounted for that somewhere, but it seems plain to me that any proposal of some ‘non’ human (over-human) progress is based entirely within a transcendental-exceptional field that is denied in the power vested of discourse: magical thinking.
In other words I think I’ve won the Magical Belief Lottery! Yes. The degree to which I realize I’m just another clown (these are the terms I use) is the degree to which I rely on the science to break my way. So yes, I feel lucky… I think things are breaking my way in cognitive science.
But I’m not sure how that feeds into any paradox.
“The Artist must exploit Cheat Spaces as much as reveal Cheat Spaces. NNI is not simply an industrial and commercial resource; it is also an aesthetic one.”
And maybe admit your trespasses as well?
Oh, I don’t know: I’m just a man thinking; although, I might be a machine thinking he’s a man. In that case the point is moot: how could you cheat your own programmatic mind? The neglect ratio would be left out of the loop.
It was a question of trespass upon others and admitting it to them.
As to cheating your own mind, I guess it depends if you can perceive a rough model of certain functionalities – if so then it’s fairly easy to figure ways those functionalities could be undercut. But if you don’t and it’s just a sort of ‘free will’ landscape – well, you still have functionalities that can be undercut, but it’s gunna seem like nothing can be cheated. Neglected babies out with the neglected bathwater.
haha … good one!
[…] my presentation in Denmark, I thought it worthwhile reposting this little gem from the TPB […]
Reblogged this on Liam Uber's Blog and commented:
A very interesting wake-up call.
Given our agonizingly dystopian past, a dystopian future is almost guaranteed. However, assuming that more and improved information about the human represents progress, there might even be a glimmer of new hope:
* better understanding of human biology will allow for more human friendly arrangements.
* we have been able to flourish under circumstances of dire ignorance and prejudice for thousands of years. True enlightenment should be harmful mainly to the old order.
* revolutions have been a regular feature of the past but the majority at the time were preoccupied with other, more important matters.
Perhaps we should worry less and be more brave.
Cool beans. I would say that worry is a necessary condition of bravery. We’re just oblivious otherwise.
What if “circumstances of dire ignorance and prejudice” are optimal for human flourishing? That seems to be part of what Scott is getting at with his talk of semantic apocalypse.
If only we knew what is optimal for human flourishing! I’m hopeful that a neo-narrative can lead to a neo-humanism. I agree that our fascination with technology is creating new possibilities for exploitation and oppression; most importantly, we are also being encouraged to neglect our biological selves – being conservative isn’t necessarily all bad.
You’re a much better writer than a “philosopher”. Just publish the book already.
It’s been out of my hands for months now already. Care to elaborate on my relative weaknesses? My guess is that you don’t have a clue what you’re talking about on either account.
But in a way Miodrag’s comment is a compliment. It shows that the books work as entertainments even for people who don’t get (or don’t want) the philosophy.
I appreciate that. One of the things I hate about academically oriented sites, tho, is how CYA or ‘civil’ communication is. You come here to rap knuckles, and you will get your knuckles rapped, unless you happen to know what you’re talking about. Sounds fair to me!
What’s going on with Through the Brain Darkly btw?
I’m still hung up on the introduction. Getting closer, though.
This is the best explanation I’ve read so far on the theoretical and practical implications of BBT.
Ultimately, as you mention at the end, it is ‘too late’. I remember watching an interview with a climatologist who explained that the goal was to ‘avoid the unmanageable and manage the unavoidable’. The problem I suppose is that the industrialization and commercialization of Cheat Space threatens to create a situation both unmanageable and unavoidable. The manipulation of heuristics or Cheat Spaces by writers (like you) in order to ostensibly ‘reveal’ the Cheat Spaces via (or is it ‘along with’?) their underlying function can never hope to reach (and thereby compete with or check) the scale of industrial/commercial applications of manipulations of these same Cheat Spaces – at least not without tossing out traditionalist views of literature which themselves are also competing with the project you articulate here (a project that I think you make an excellent case for).
This has the dual effect of imparting both a sense of urgency and a sense of futility. Still though, I support the gesture, no matter how futile. And I’m starting to feel like the punchline of an offensive joke you’re telling, which is another way of saying all this stuff is starting to keep me awake at night.
Cheers!
[…] One must turn away from the old ways, the old ideas, and dare to look hard at the prospect of a post-intentional future. The horrific […]
[…] And this is basically the foundational premise of the Semantic Apocalypse: intentional cognition, as a radically specialized system, is especially vulnerable to both crashing and cheating. The very power of our sociocognitive systems is what makes them so liable to be duped (think religious anthropomorphism), as well as so easy to dupe. When Sherry Turkle, for instance, bemoans the ease with which various human-computer interfaces, or ‘HCIs,’ push our ‘Darwinian buttons’ she is talking about the vulnerability of sociocognitive cues to various cheats (but since she, like Barrett, lacks any theory of meaning, she finds herself in similar explanatory straits). In a variety of experimental contexts, for instance, people have been found to trust artificial interlocutors over human ones. Simple tweaks in the voices and appearance of HCIs have a dramatic impact on our perceptions of those encounters—we are in fact easily manipulated, cued to draw erroneous conclusions, given what are quite literally cartoonish stimuli. So the so-called ‘internet of things,’ the distribution of intelligence throughout our artifactual ecologies, takes on a far more sinister cast when viewed through the lens of human sociocognitive specialization. Populating our ecologies with gadgets designed to cue our sociocognitive capacities ‘out of school’ will only degrade the overall utility of those capacities. Since those capacities underwrite what we call meaning or ‘intentionality,’ the collapse of our ancestral sociocognitive ecologies signals the ‘death of meaning.’ […]
“The death of practical meaning simply refers to the growing incapacity of intentional idioms to reliably solve various social problems in radically transformed sociocognitive habitats. Even as we speak, our environments are becoming more ‘intelligent,’ more prone to cue intentional intuitions in circumstances that quite obviously do not warrant them. We will, very shortly, be surrounded by countless ‘pseudo-agents,’ systems devoted to hacking our behaviour—exploiting the Cheat Space corresponding to our heuristic limits—via NNI. Combined with intelligent technologies, NNI has transformed consumer hacking into a vast research programme. Our social environments are transforming, our native communicative habitat is being destroyed, stranding us with tools that will increasingly let us down.”
Seems like the observations are on point but I don’t think the implications are so dire. We’ve never been able to “solve” social problems completely. We’ve always been surrounded by ‘pseudo-agents’ who hack our systems. They just didn’t realize they were hacking our systems. They’re called mass movements. Now they’re just called marketers. And the scale is no different. Now it’s just in the name of profit instead of humanistic ideals. Unless you think net worth makes someone worth something. That would actually be kind of humanistic. It seems like the tools have always let us down and that does not seem like it’s going to change any time soon.
“What does it mean to write after the death of meaning?”
The same thing it meant before. Entertainment, mostly. Since we know that old meanings and narratives were vacant, that’s what the writers of the past were. Nothing is really that different. Humans are remarkably adaptable. We’ll make do.
I agree with your observation that meaning has always been messy business, but the differences are more than simply radical in this case. Intentional cognition turns on cues, which is to say, a background that can be ignored simply because it is stable. Intentional cognition is ecological. I’m arguing we’re staring down the barrel of the death of all stable backgrounds, an ‘anarcho-ecology.’ If this is the case, then intentional cognition is doomed.
[…] meaning? How do we employ 5G as an analogy for the intellectual labor of tomorrow? I am reminded of Bakker on writing after the death of meaning: will the fictions and metaphors of tomorrow be in fact more like advertisements, honed clarity […]