Three Pound Brain

No bells, just whistling in the dark…

Goosing the Rumour Mill

by rsbakker

Just got back to find there’s been some developments! I’d resolved to say nothing in anticipation of anything–it just makes me feel foolish anymore. Until we have all the details hammered out, there’s not much I can say except that Overlook’s July 2016 date is tentative. I fear I can’t comment on their press release, either. Things seem to be close, though.

I know it’s been a preposterously long haul, folks, but hold on just a bit longer. The laws of physics are bound to kick in at some point, after which I can start delivering some more reliable predictions.

The Mental as Rule of Thumb

by rsbakker

What are mental functions? According to Blind Brain Theory, they are quasi-mechanical posits explaining the transformations between regimented inputs and observed outputs in ways that seem to admit generalization. We know the evidence is correlative, but we utilize mechanical cognition nonetheless, producing a form of correlatively anchored ‘quasi-causal explanation.’ There often seems to be some gain in understanding, and thus are ‘mental functions’ born.

Mental functions famously don’t map across our growing understanding of neural mechanisms because the systematicity tracked is correlative, rather than causal. Far from ‘mechanism sketches,’ mental functions are ‘black box conceits,’ low dimensional constructs that need only solve some experimental ecology (that may or may not generalize). The explanatory apparatus of the ‘mental’ indirectly tracks the kinds of practical demands made on human cognition as much as the hidden systematicities of the brain. It possesses no high-dimensional reality—real reality—otherwise. How could it? What sense does it make to suppose that our understanding of the mental, despite being correlatively anchored, nevertheless tracks something causal within subjects? Very little. Correlations abound, to the point of obscuring causes outright. Though correlative cognition turns on actual differential relations to the actual mechanisms involved, it nevertheless neglects those relations, and therefore neglects the mechanisms as well. To suggest that correlative posits possess some kind of inexplicable intrinsic efficacy is to simply not understand the nature of correlative cognition, which is to make do in the absence of behavioural sensitivities to the high-dimensional mechanics of our environments.

Why bother arguing for something spooky when ‘mental functions’ are so obviously heuristic conceits, ways to understand otherwise opaque systems, nothing more or less?

Of course there’s nothing wrong with heuristics, so long as they’re recognized as such, ways for other brains to cognize neural capacities short of cognizing neural mechanisms. To the extent that experimental findings generalize to real world contexts, there’s a great deal to be learned from ‘black box psychology.’ But we should not expect to find any systematic, coherent account of ‘mind’ or the ‘mental,’ simply because the correlative possibilities are potentially limitless. So long as new experimental paradigms can be improvised, new capacities/incapacities can be isolated. Each ‘discovery,’ in other words, is at once an artifact, an understanding specific (as all correlative understandings are) to some practical ecology, one which is useful to the degree it can be applied in various other practical ecologies.

And there you have it: a concise eliminativist explanation of why mental functions seem to have no extension and yet seem to provide a great deal of knowledge anyway. ‘Mental functions’ are essentially a way to utilize our mechanical problem-solving capacity in black box ecologies. The time has come to start calling them what they are: heuristic conceits. The ‘mind’ is a way to manage a causal system absent any behavioural sensitivity to the mechanics of that system, a way to avoid causal cognition. To suggest that it is somehow fundamentally causal nonetheless is to simply misunderstand it, to confuse, albeit in an exotic manner, correlation for causation.

The Real Problem with ‘Correlation’

by rsbakker

Since presuming that intentional cognition can get behind intentional cognition belongs to the correlation problem, any attempt to understand the problem requires we eschew theoretical applications of intentional idioms. Getting a clear view, in other words, requires that we ‘zombify’ human cognition, adopt a thoroughly mechanical vantage that simply ignores intentionality and intentional properties. As it so happens, this is the view that commands whatever consensus one can find regarding these issues. Though the story I’ll tell is a complicated one, it should also be a noncontroversial one, at least insofar as it appeals to nothing more than naturalistic platitudes.

I first started giving these ‘zombie interpretations’ of different issues in philosophy and cognitive science a few years back.[1] Everyone in cognitive science agrees that consciousness and cognition turn on the physical somehow. This means that purely mechanical descriptions of the activities typically communicated via intentional idioms have to be relevant somehow (so long as they are accurate, at least). The idea behind ‘zombie interpretation’ is to explain as much as possible using only the mechanistic assumptions of the biological sciences—to see how far generalizing over physical processes can take our perennial attempt to understand meaning.

Zombies are ultimately only a conceit here, a way for the reader to keep the ‘explanatory gap’ clearly in view. In the institutional literature, ‘p-zombies’ are used for a variety of purposes, most famously to anchor arguments against physicalism. If a complete physical description of the world need not include consciousness, then the brute fact of consciousness implies that physicalism is incomplete. However, since this argument itself turns on the correlation problem, it will not concern us here. The point, oddly enough, is to adhere to an explanatory domain where we all pretty much agree, to speculate using only facts and assumptions belonging to the biological sciences—the idea being, of course, that these facts and assumptions are ultimately all that’s required. Zombies allow us to do that.

So then, devoid of intentionality, zombies lurch through life possessing only contingent, physical comportments to their environment. Far from warehousing ‘representations’ possessing inexplicable intentional properties, their brains are filled with systems that dynamically interact with their world, devices designed to isolate select signals from environmental noise. Zombies do not so much ‘represent their world’ as possess statistically reliable behavioural sensitivities to their environments.

So where ‘subjects’ possess famously inexplicable semantic relations to the world, zombies possess only contingent, empirically tractable relations to the world. Thanks to evolution and learning, they just happen to be constituted such that, when placed in certain environments, gene conserving behaviours tend to reliably happen. Where subjects are thought to be ‘agents,’ perennially upstream sources of efficacy, zombies are components, subsystems at once upstream and downstream the superordinate machinery of nature. They are astounding subsystems to be sure, but they are subsystems all the same, just more nature—machinery.

What makes them astounding lies in the way their neurobiological complexity leverages behaviour out of sensitivity. Zombies do not possess distributed bits imbued with the occult property of aboutness; they do not model or represent their worlds in any intentional sense. Rather, their constitution lets ongoing environmental contact tune their relationship to subsequent environments, gradually accumulating the covariant complexities required to drive effective zombie behaviour. Nothing more is required. Rather than possessing ‘action enabling knowledge,’ zombies possess behaviour enabling information, where ‘information’ is understood in the bald sense of systematic differences making systematic differences.

A ‘cognitive comportment,’ as I’ll use it here, refers to any complex of neural sensitivities subserving instances of zombie behaviour. It comes in at least two distinct flavours: causal comportments, where neurobiology is tuned to what generally makes what happen, and correlative comportments, where zombie neurobiology is tuned to what generally accompanies what happens. Both systems allow our zombies to predict and systematically engage their environments, but they differ in a number of crucial respects. To understand these differences we need some way of understanding what positions zombies upstream their environments–or what leverages happy zombie outcomes.

The zombie brain, much like the human brain, confronts a dilemma. Since all perceptual information consists of sensitivity to selective effects (photons striking the eye, vibrations the ear, etc.), the brain needs some way of isolating the relevant causes of those effects (a rushing tiger, say) to generate the appropriate behavioural response (trip your mother-in-law, then run). The problem, however, is that these effects are ambiguous: a great many causes could be responsible. The brain is confronted with a version of the inverse problem, what I will call the medial inverse problem for reasons that will soon be clear. Since it has nothing to go on but more effects, which are themselves ambiguous, how could it hope to isolate the causes it needs to survive?

By allowing sensitivities to discrepancies between the patterns initially cued and subsequent sensory effects to select—and ultimately shape—the patterns subsequently cued. As it turns out, zombie brains are Bayesian brains.[2] Allowing discrepancies to both drive and sculpt the pattern-matching process automatically optimizes the process, allowing the system to bootstrap wide-ranging behavioural sensitivities to environments in turn. In the intentionality laden idiom of theoretical neuroscience, the brain is a ‘prediction error minimization’ machine, continually testing occurrent signals against ‘guesses’ (priors) triggered by earlier signals. Success (discrepancy minimization) quite automatically begets success, allowing the system to continually improve its capacity to make predictions—and here’s the important thing—using only sensory signals.[3]
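For readers who like their conceits concrete, here’s a toy sketch of this discrepancy minimization (a minimal illustration with invented numbers, not any particular model from the predictive processing literature): a system that never touches the hidden cause of its signals, only noisy effects, yet converges on that cause simply by letting prediction error drive updates to its prior.

```python
import random

# Toy discrepancy minimization: the system never accesses the hidden
# cause, only noisy sensory effects, yet bootstraps a reliable 'guess'
# by testing predictions against subsequent signals.

HIDDEN_CAUSE = 5.0   # the distal state (tiger distance, say); never observed
SENSORY_NOISE = 1.0  # effects are ambiguous: many causes fit any one signal

guess = 0.0          # the current prior ('best guess' about the cause)
weight = 0.1         # how strongly the prior is trusted so far

for _ in range(50):
    signal = random.gauss(HIDDEN_CAUSE, SENSORY_NOISE)  # proximal effect only
    error = signal - guess                              # prediction error
    gain = 1.0 / (weight + 1.0)  # weak priors move a lot, strong ones little
    guess += gain * error        # minimize the discrepancy
    weight += 1.0                # each tested prediction entrenches the prior

print(f"hidden cause: {HIDDEN_CAUSE:.2f}  converged guess: {guess:.2f}")
```

Nothing in the loop ‘knows’ the hidden cause; sensory signals alone do all the work.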

But isolating the entities/behaviour causing sensory effects is one thing; isolating the entities/behaviour causing those entities/behaviour is quite another. And it’s here that the chasm between causal cognition and correlative cognition yawns wide. Once our brain’s discrepancy minimization processes isolate the relevant entities/behaviours—solve the medial inverse problem—the problem of prediction simply arises anew. It’s not enough to recognize avalanches as avalanches or tigers as tigers, we have to figure out what they will do. The brain, in effect, faces a second species of inverse problem, what might be called the lateral inverse problem. And once again, it’s forced to rely on sensitivities to patterns (to trigger predictions to test against subsequent signals, and so on).[4]

Nature, of course, abounds with patterns. So the problem is one of tuning a Bayesian subsystem like the zombie brain to the patterns (such as ‘avalanche behaviour’ or ‘tiger behaviour’) it needs to engage its environments given only sensory effects. The zombie brain, in other words, needs to wring behavioural sensitivities to distal processes out of a sensitivity to proximal effects. Though they are adept at comporting themselves to what causes their sensory effects (to solving the medial inverse problem), our zombies are almost entirely insensitive to the causes behind those causes. The etiological ambiguity behind the medial inverse problem pales in comparison to the etiological ambiguity comprising the lateral inverse problem, simply because sensory effects are directly correlated to the former, and only indirectly correlated to the latter. Given the limitations of zombie cognition, in other words, zombie environments are ‘black box’ environments, effectively impenetrable to causal cognition.

Part of the problem is that zombies lack any ready means of distinguishing causality from correlation on the basis of sensory information alone. Not only are sensory effects ambiguous between causes, they are ambiguous between causes and correlations as well. Cause cannot be directly perceived. A broader, engineered signal and greater resources are required to cognize its machinations with any reliability—only zombie science can furnish zombies with ‘white box’ environments. Fortunately for their prescientific ancestors, evolution only required that zombies solve the lateral inverse problem so far. Mere correlations, despite burying the underlying signal, remain systematically linked to that signal, allowing for a quite different way of minimizing discrepancies.

Zombies, once again, are subsystems whose downstream ‘componency’ consists in sensitivities to select information. The amount of environmental signal that can be filtered from that information depends on the capacity of the brain. Now any kind of differential sensitivity to an environment stands organisms in good stead. To advert to the famous example, frogs don’t need the merest comportment to fly mechanics to catch flies. All they require is a select comportment to select information reliably related to flies and fly behaviour, not to what constitutes flies and fly behaviour. And if a frog did need as much, then it would have evolved to eat something other than flies. Simple, systematic relationships are not only all that is required to solve a great number of biological problems, they are very often the only way those problems can be solved, given evolutionary exigencies. This is especially the case with complicated systems such as those comprising life.
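The frog’s predicament is simple enough to render as a toy (the stimuli and the three-feature cue are invented for illustration, though frogs really are famously fooled by small dark moving lures):

```python
# Toy correlative comportment: the 'solution' keys on cues that merely
# accompany flies in the ancestral ecology (small, dark, moving), not on
# anything constituting flies or fly mechanics.

def frog_snaps(stimulus):
    """Black box heuristic: snap at anything small, dark, and moving."""
    return stimulus["small"] and stimulus["dark"] and stimulus["moving"]

fly    = {"small": True, "dark": True,  "moving": True,  "is_fly": True}
leaf   = {"small": True, "dark": False, "moving": True,  "is_fly": False}
pellet = {"small": True, "dark": True,  "moving": True,  "is_fly": False}

# Inside the ancestral problem ecology, the cue tracks the target:
for s in (fly, leaf):
    assert frog_snaps(s) == s["is_fly"]

# Outside it, the parasitic nature of the comportment is exposed:
print(frog_snaps(pellet))  # True: the heuristic 'solves' for a pellet
```

The heuristic never errs so long as the differential relation between cue and fly holds; change the ecology and it fails without ever registering the failure.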

So zombies, for instance, have no way of causally cognizing other zombies. They likewise have no way of causally cognizing themselves, at least absent the broader signal and greater computational resources provided by zombie science. As a result, they possess at best correlative comportments both to each other and to themselves.

So what does this mean? What does it mean to solve systems on the basis of inexpensive correlative comportments as opposed to far more expensive causal comportments? And more specifically, what does it mean to be limited to extreme versions of such comportments when it comes to zombie social cognition and metacognition?

In answer to the first question, at least three interrelated differences can be isolated:

Unlike causal (white box) comportments, correlative (black box) comportments are idiosyncratic. As we saw above, any number of behaviourally relevant patterns can be extracted from sensory signals. How a particular problem is solved depends on evolutionary and learning contingencies. Causal comportments, on the other hand, involve behavioural sensitivity to the driving environmental mechanics. They turn on sensitivities to upstream systems that are quite independent of the signal and its idiosyncrasies.

Unlike causal (white box) comportments, correlative (black box) comportments are parasitic, or differentially mediated. To say that correlative comportments are ‘parasitic’ is to say they depend upon occluded differential relations between the patterns extracted from sensory effects and the environmental mechanics they ultimately solve. Frogs, once again, need only a systematic sensory relation to fly behaviour, not fly mechanics, which they can neglect, even though fly mechanics drives fly behaviour. A ‘black box solution’ serves. The patterns available in the sensory effects of fly behaviour are sufficient for fly catching given the cognitive resources possessed by frogs. Correlative comportments amount to the use of ‘surface features’—sensory effects—to anticipate outcomes driven by otherwise hidden mechanisms. Causal comportments, which consist of behavioural sensitivities (also derived from sensory effects) to the actual mechanics involved, are not parasitic in this sense.

Unlike causal (white box) comportments, correlative (black box) comportments are ecological, or problem relative. Both causal comportments and correlative comportments are ‘ecological’ insofar as both generate solutions on the basis of finite information and computational capacity. But where causal comportments solve the lateral inverse problem via genuine behavioural sensitivities to the mechanics of their environments, correlative comportments (such as that belonging to our frog) solve it via behavioural sensitivities to patterns differentially related to the mechanics of their environments. Correlative comportments, as we have seen, are idiosyncratically parasitic upon the mechanics of their environments. The space of possible solutions belonging to any correlative comportment is therefore relative to the particular patterns seized upon, and their differential relationships to the actual mechanics responsible. Different patterns possessing different systematic relationships will possess different ‘problem ecologies,’ which is to say, different domains of efficacy. Since correlative comportments are themselves causal, however, causal comportments apply to all correlative domains. Thus the manifest ‘objectivity’ of causal cognition relative to the ‘subjectivity’ of correlative cognition. 

So far, so good. Correlative comportments are idiosyncratic, parasitic, and ecological in a way that causal comportments are not. In each case, what distinguishes causal comportments is an actual behavioural sensitivity to the actual mechanics of the system. Zombies are immersed in potential signals, awash in causal differences, information, that could make a reproductive difference. The difficulties attendant upon the medial and lateral inverse problems, the problems of what and what-next, render the extraction of causal signals enormously difficult, even when the systems involved are simple. The systematic nature of their environments, however, allows them to use behavioural sensitivities as ‘cues,’ signals differentially related to various systems, to behaviourally interact with those systems despite the lack of any behavioural sensitivity to their particulars. So in research on contingencies, for instance, the dependency of ‘contingency inferences’ on ‘sampling,’ the kinds of stimulus input available, has long been known, as have the kinds of biases and fallacies that result. Only recently, however, have researchers realized the difficulty of accurately making such inferences given the kinds of information available in vivo, and the degree to which we out and out depend on so-called ‘pseudocontingency heuristics.’[5] Likewise, research into ‘spontaneous explanation’ and ‘essentialism,’ the default attribution of intrinsic traits and capacities in everyday explanation, clearly suggests that low-dimensional opportunism is the rule when it comes to human cognition.[6] The more we learn about human cognition, in other words, the more obvious the above story becomes.

So then what is the real problem with correlation? The difficulty turns on the fact that black box cognition, solving systems via correlative cues, can itself only be cognized in black box terms.

Given their complexity, zombies are black boxes to themselves as much as to others. And this is what has cued so much pain behaviour in so many zombie philosophers. As a black box, zombies cannot cognize themselves as black boxes: the correlative nature of their correlative comportments utterly escapes them (short, once again, the information provided by zombie science). Zombie metacognition is blind to the structure and dynamics of zombie metacognition, and thus prone to what might be called ‘white box illusions.’ Absent behavioural sensitivity to the especially constrained nature of their correlative comportments to themselves, insufficient data is processed in the same manner as sufficient data, thus delivering the system to ‘crash space,’ domains rendered intractable by the systematic misapplication of tools adapted to different problem ecologies. Unable to place themselves downstream their incapacity, they behave as though no such incapacity exists, suffering what amounts to a form of zombie anosognosia.

Perhaps this difficulty shouldn’t be considered all that surprising: after all, the story told here is a white box story, a causal one, and therefore one requiring extraction from the ambiguities of effects and correlations. The absence of this information effectively ‘black-boxes’ the black box nature of correlative cognition. Zombies cued to solve for that efficacy accordingly run afoul of the problem of processing woefully scant data as sufficient, black boxes as white boxes, thus precluding the development of effective behavioural sensitivities to the actual processes involved. The real Problem of Correlation, in other words, is that correlative modes systematically confound cognition of correlative comportments. Questions regarding the nature of our correlative comportments simply do not lie within the problem space of our correlative comportments—and how could they, when they’re designed to solve absent sensitivity to what’s actually going on?

And this is why zombies not only have philosophers, they have a history of philosophy as well. White box illusions have proven especially persistent, despite the spectacular absence of systematic one-to-one correspondences between the apparent white box that zombies are disposed to report as ‘mind’ and the biological white box emerging out of zombie science. Short any genuine behavioural sensitivity to the causal structure of their correlative comportments, zombies can at most generate faux-solutions, reports anchored to the systematic nature of their conundrum, and nothing more. Like automatons, they endlessly report low-dimensional, black box posits the way they report high-dimensional environmental features—and here’s the thing—using the very same terms that humans use. Zombies constantly utter terms like ‘minds,’ ‘experiences,’ ‘norms,’ and so on. Zombies, you could say, possess a profound disposition to identify themselves and each other as humans.

Just like us.

Notes

[1] See Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation, The Blind Mechanic, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts, Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument, and Zombie Mary versus Zombie God and Jesus: Against Lawrence Bonjour’s “Against Materialism”

[2] For an overview of Bayesian approaches, see Andy Clark, “Whatever next? Predictive brains, situated agents, and the future of cognitive science.”

[3] The following presumes an ecological (as opposed to an inferential) understanding of the Bayesian brain. See Nico Orlandi, “Bayesian perception is ecological perception.”

[4] Absent identification there is no possibility of prediction. The analogy between this distinction and the ancient distinction between being and becoming (or even the modern one between the transcendental and the empirical) is interesting to say the least.

[5] See Klaus Fiedler et al., “Pseudocontingencies: Logically Unwarranted but Smart Inferences.”

[6] See Andrei Cimpian, “The Inherence Heuristic: Generating Everyday Explanations,” or Cimpian and Salomon, “The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.”

How Science Reveals the Limits of ‘Nooaesthetics’ (A Reply to Alva Noë)

by rsbakker

As a full-time artist (novelist) who long ago gave up on the ability of traditional aesthetics (or, as I’ll refer to it here, ‘nooaesthetics’) to do much more than recontextualize art in ways that yoke it to different ingroup agendas, I look at the ongoing war between the sciences and the scholarly traditions of the human as profoundly exciting. The old, perpetually underdetermined convolutions are in the process of being swept away—and good riddance! Alva Noë, however, sees things differently.

So much of rhetoric turns on asking only those questions that flatter your view. And far too often, this amounts to asking the wrong questions, in particular, those questions that only point your way. All the other questions, you pass over in strategic silence. Noë provides a classic example of this tactic in “How Art Reveals the Limits of Neuroscience,” his recent critique of ‘neuroaesthetics’ in The Chronicle of Higher Education.

So for instance, it seems pretty clear that art is a human activity, a quintessentially human activity according to some. As a human activity, it seems pretty clear that our understanding of art turns on our understanding of humanity. As it turns out, we find ourselves in the early stages of the most radical revolution in our understanding of the human ever… Period. So it stands to reason that a revolution in our understanding of the human will amount to a revolution in our understanding of human activities—such as art.

The problem with revolutions, of course, is that they involve the overthrow of entrenched authorities, those invested in the old claims and the old ways of doing business. This is why revolutions always give rise to apologists, to individuals possessing the rhetorical means of rationalizing the old ways, while delegitimizing the new.

Noë, in this context at least, is pretty clearly the apologist, applying words as poultices, ways to soothe those who confuse old, obsolete necessities with absolute ones. He could have framed his critique of neuroaesthetics in this more comprehensive light, but that would have the unwelcome effect of raising other questions, the kind that reveal the poverty of the case he assembles. The fact is, for all the purported shortcomings of neuroaesthetics he considers, he utterly fails to explain why ‘nooaesthetics,’ the analysis, interpretation, and evaluation of art using the resources of the tradition, is any better.

The problem, as Noë sees it, runs as follows:

“The basic problem with the brain theory of art is that neuroscience continues to be straitjacketed by an ideology about what we are. Each of us, according to this ideology, is a brain in a vat of flesh and bone, or, to change the image, we are like submariners in a windowless craft (the body) afloat in a dark ocean of energy (the world). We know nothing of what there is around us except what shows up on our internal screens.”

As a description of parts of neuroscience, this is certainly the case. But as a high-profile spokesperson for enactive cognition, Noë knows full well that the representational paradigm is a fiercely debated one in the cognitive sciences. But it suits his rhetorical purposes to choose the most theoretically ill-equipped foes, because, as we shall see, his theoretical equipment isn’t all that capable either.

As a one-time Heideggerean, I recognize Noë’s tactics as my own from way back when: charge your opponent with presupposing some ‘problematic ontological assumption,’ then show how this or that cognitive register is distorted by said assumption. Among the most venerable of those problematic assumptions has to be the charge of ‘Cartesianism,’ one that has become so overdetermined as to be meaningless without some kind of qualification. Noë describes his understanding as follows:

“Crucially, this picture — you are your brain; the body is the brain’s vessel; the world, including other people, are unknowable stimuli, sources of irradiation of the nervous system — is not one of neuroscience’s findings. It is rather something that has been taken for granted by neuroscience from the start: Descartes’s conception with a materialist makeover.”

In cognitive science circles, Noë is notorious for the breezy way he consigns cognitive scientists to his ‘Cartesian box.’ As a fellow anti-representationalist, I often find his disregard for the nuances posed by his detractors troubling. Consider:

“Careful work on the conceptual foundations of cognitive neuroscience has questioned the plausibility of straightforward mind-brain reduction. But many neuroscientists, even those not working on such grand issues as the nature of consciousness, art, and love, are committed to a single proposition that is, in fact, tantamount to a Cartesian idea they might be embarrassed to endorse outright. The momentous proposition is this: Every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in your brain. We may not know how the brain manages this feat, but, so it is said, we are beginning to understand. And this new knowledge — of how the organization of bits of matter inside your head can be your personality, thoughts, understanding, wonderings, religious or sexual impulses — is surely among the most exciting and important in all of science, or so it is claimed.”

I hate to say it, but this is a mischaracterization. One has to remember that before cognitive science, theory was all we had when it came to the human. Guesswork, profound to the extent that we consider ourselves profound, but guesswork all the same. Cognitive science, in its many-pronged attempt to scientifically explain the human, has inherited all this guesswork. What Noë calls ‘careful work’ simply refers to his brand of guesswork, enactive cognition, and its concerns, like the question of how the ‘mind’ is related to the ‘brain,’ are as old as the hills. ‘Straightforward mind-brain reduction,’ as he calls it, has always been questioned. This mystery is a bullet that everyone in the cognitive sciences bites in some way or another. The ‘momentous proposition’ that the majority of neuroscientists assume isn’t that “[e]very thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in [our] brain,” but rather that every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition involves our brain. Noë’s Cartesian box assumption is nowhere so simple or so pervasive as he would have you believe.

He knows this, of course, which is why he devotes the next paragraph to dispatching those scientists who want (like Noë himself does, ultimately) to have it both ways. He needs his Cartesian box to better frame the contest in clear-cut ‘us against them’ terms. The fact that cognitive science is a muddle of theoretical dissension—and moreover, that it knows as much—simply does not serve his tradition redeeming narrative. So you find him claiming:

“The concern of science, humanities, and art, is, or ought to be, the active life of the whole, embodied, environmentally and socially situated animal. The brain is necessary for human life and consciousness. But it can’t be the whole story. Our lives do not unfold in our brains. Instead of thinking of the Creator Brain that builds up the virtual world in which we find ourselves in our heads, think of the brain’s job as enabling us to achieve access to the places where we find ourselves and the stuff we share those places with.”

These, of course, are platitudes. In philosophical debates, when representationalists critique proponents of embodied or enactive cognition like Noë, they always begin by pointing out their agreement with claims like these. They entirely agree that environments condition experience, but disagree (given ‘environmentally off-line’ phenomena such as mental imagery or dreams) that they are directly constitutive of experience. The scientific view is de facto a situated view, a view committed to understanding natural systems in context, as contingent products of their environments. As it turns out, the best way to do this involves looking at these systems mechanically, not in any ‘clockwork’ deterministic sense, but in the far richer sense revealed by the life sciences. To understand how a natural system fits into its environment, we need to understand it, statistically if not precisely, as a component of larger systems. The only way to do this is to figure out how, as a matter of fact, it works, which is to say, to understand its own components. And it just so happens that the brain is the most complicated machine we have ever encountered.

The overarching concern of science is always the whole; it just so happens that the study of minutiae is crucial to understanding the whole. Does this lead to institutional myopia? Of course it does. Scientists are human like anyone else, every bit as prone to map local concerns across global ones. The same goes for English professors and art critics and novelists and Noë. The difference, of course, is the kind of cognitive authority possessed by scientists. Where the artistic decisions I make as a novelist can potentially enrich lives, discoveries in science can also save them, perhaps even create new forms of life altogether.

Science is bloody powerful. This, ultimately, is what makes the revolution in our human self-understanding out and out inevitable. Scientific theory, unlike theory elsewhere, commands consensus, because scientific theory, unlike theory elsewhere, reliably provides us with direct power over ourselves and our environments. Scientific understanding, when genuine, cannot but revolutionize. Nooaesthetic understanding, like religious or philosophical understanding, simply has no way of arbitrating its theoretical claims. It is, compared to science at least, toothless.

And it always has been. Only the absence of any real scientific understanding of the human has allowed us to pretend otherwise all these years, to think our armchair theory games were more than mere games. And that’s changing.

So of course it makes sense to be wary of scientific myopia, especially given what science has taught us about our cognitive foibles. Humans oversimplify, and science, like art and traditional aesthetics, is a human enterprise. The difference is that science, unlike traditional aesthetics, revolutionizes our collective understanding of ourselves and the world.

The very reason we need to guard against scientific myopia, in other words, is also the very reason why science is doomed to revolutionize the aesthetic. We need to be wary of things like Cartesian thinking simply because it really is the case that our every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition turns on our biology in some fundamental respect. The only real question is how.

But Noë is making a far different and far less plausible claim: that contemporary neuroscience has no place in aesthetics.

“Neuroscience is too individual, too internal, too representational, too idealistic, and too antirealistic to be a suitable technique for studying art. Art isn’t really a phenomenon at all, not in the sense that photosynthesis or eyesight are phenomena that stand in need of explanation. Art is, rather, a mode of investigation, a style of research, into what we are. Art also gives us an opportunity to observe ourselves in the act of knowing the world.”

The reason for this, Noë is quick to point out, isn’t that the sciences of the human don’t have important things to say about a human activity such as art—of course they do—but because “neuroscience has failed to frame a plausible conception of human nature and experience.”

Neuroscience, in other words, possesses no solution to the mind-body problem. Like biology before the institutionalization of evolution, cognitive science lacks the theoretical framework required to unify the myriad phenomena of the human. But then, so does Noë, who only has philosophy to throw at the problem, philosophy that, by his own admission, neuroscience does not find all that compelling.

Which at last frames the question of neuroaesthetics the way Noë should have framed it in the beginning. Say we agree with Noë, and decide that neuroaesthetics has no place in art criticism. Okay, so what does? The possibility that neuroaesthetics ‘gets art wrong’ tells us nothing about the ability of nooaesthetics, traditional art criticism turning on folk-psychological idioms, to get art right. After all, the fact that science has overthrown every single traditional domain of speculation it has encountered strongly suggests that nooaesthetics has got art wrong as well. What grounds do we have for assuming that, in this one domain at least, our guesswork has managed to get things right? Like any other domain of traditional speculation on the human, theorists can’t even formulate their explananda in a consensus-commanding way, let alone explain them. Noë can confidently claim to know ‘What Art Is’ if he wants, but ultimately he’s taking a very high number in a very long line at a wicket that, for all anyone knows, has always been closed.

The fact is, despite all the verbiage Noë has provided, it seems pretty clear that neuroaesthetics—even if inevitably myopic in this, the age of its infancy—will play an ever more important role in our understanding of art, and that the nooaesthetic conceits of our past will correspondingly dwindle ever further into the mists of prescientific fable and myth.

As this artist thinks they should.

Anarcho-ecologies and the Problem of Transhumanism

by rsbakker

So a couple weeks back I posed the Augmentation Paradox:

The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.

I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’

Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. Now we know that such an ecology is possible because we know that heuristics use cues possessing stable differential relations to systems to solve systems. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or perhaps, to be more precise (and apropos this post), the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to the target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
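To make the definition concrete, here’s a minimal simulation sketch (all parameters invented for illustration): a heuristic relearns its cue-target relation at a fixed rate while that relation drifts. Reliability holds so long as the drift is slow relative to the relearning, and collapses into crash space once it isn’t.

```python
import random

# Toy anarcho-ecology: a heuristic relearns its cue-target relation at a
# fixed rate while the relation itself drifts. Reliability collapses once
# the environment changes faster than cuing can track.

def cue_reliability(drift, learning_rate=0.2, steps=2000, tolerance=0.5):
    relation = 0.0   # the true differential relation (an offset, say)
    estimate = 0.0   # the heuristic's learned stand-in for that relation
    hits = 0
    for _ in range(steps):
        relation += random.gauss(0.0, drift)               # the world moves
        estimate += learning_rate * (relation - estimate)  # cuing catches up
        if abs(relation - estimate) < tolerance:           # problem solved?
            hits += 1
    return hits / steps

for drift in (0.01, 0.1, 0.5, 2.0):
    print(f"drift {drift:>4}: reliability {cue_reliability(drift):.2f}")
```

Run it and the reliability column falls off a cliff as drift outpaces learning: a stable ecology, then a noisy one, and finally an anarcho-ecology.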

I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.

The argument, as I’ve been posing it, looks like this:

1) Heuristic cognition depends on stable, taken-for-granted backgrounds.

2) Intentional cognition is heuristic cognition.

/3) Intentional cognition depends on stable, taken-for-granted backgrounds.

4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.

/5) Transhumanism entails the collapse of intentional cognition.

Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.

Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction–chemical, acoustic, and tactile communicative protocols–between individuals scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols–different chemicals, different sounds, different sensitivities–it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.

Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails, they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.

So much for AAAT.

Except that it entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women), it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities–or so they hope. And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.

To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain ant-like blind problem-solvers in this respect. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.

Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.

But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…

Because as it turns out, ‘human’ is a heuristic construct through and through.

Akrasis

by rsbakker

Akrasis (or, social akrasis) refers to the technologically driven socio-economic process, already underway at the beginning of the 20th century, which would eventually lead to Choir.

Where critics in the early 21st century continued to decry the myriad cruelties of the capitalist system, they failed to grasp the greater peril hidden in the way capitalism panders to human yens. Quick to exploit the discoveries arising out of cognitive science, market economies spontaneously retooled to ever more effectively cue and service consumer demand, eventually reconfiguring the relation between buyer and seller into subpersonal circuits (triggering the notorious shift to ‘whim marketing,’ the data tracking of ‘desires’ independent of the individuals hosting them). The ecological nature of human cognition all but assured the mass manipulative character of this transformation. The human dependency on proximal information to cue what amount to ancestral guesses regarding the nature of their social and natural environments provided sellers with countless ways to game human decision making. The global economy was gradually reorganized to optimize what amounted to human cognitive shortcomings. We became our own parasite.

Just as technological transformation (in particular, the scaling of AI) began crashing the utility of our heuristic modes of meaning making, it began to provide virtual surrogates, ways to enable the exercise of otherwise unreliable cognitive capacities. In other words, even as the world became ever more inhuman, our environments became ever more anthropomorphic, ever more ‘smart’ and ‘immersive.’ Thus ‘akrasis,’ the ancient term referring to the state of acting against one’s judgment, which here describes a society acting against the human capacity to judge altogether, a society bent upon the systematic substitution of simulated autonomy for actual autonomy.

Humans, after all, have evolved to leverage the signal of select upstream interventions, assuming it a reliable component of their environments. Once we developed the capacity to hack these latter signals, the world effectively became a drug.

Akrasis has a long history, as long as life itself, according to certain theories. Before the 21st century, the process appeared ‘enlightening,’ but only because the limitations of the technologies involved (painting, literacy, etc.) rendered the resulting transformations manageable. But the rate of transformation continued to accelerate, while the human capacity to adapt remained constant. The outcome was inevitable. As the bandwidth of our interventions approached then surpassed the bandwidth of our central nervous systems, the simulation of meaning became the measure of meaning. Our very frame of reference had been engulfed. For billions, the only obvious direction of success—the direction of ‘cognitive comfort’—lay away from the world and into technology. So they defected in their billions, embracing signals, environments, manufactured entirely from predatory code. Culture became indistinguishable from cheat space—as did, for those embracing virtual fitness indicators, experience itself.

By 2050, we had become an advanced akratic civilization, a species whose ancestral modes of meaning-making had been utterly compromised. Art was an early casualty, though decades would be required to recognize as much. Fantasy, after all, was encouraged in all forms, especially those, like art or religion, laying claim to obsolete authority gradients. To believe in art was to display market vulnerabilities, or to be so poor as to be insignificant. No different than believing in God.

Social akrasis is now generally regarded as a thermodynamic process intrinsic to life, the mechanical outcome of biology falling within the behavioural purview of biology. Numerous simulations have demonstrated that ‘outcome convergent’ or ‘optimizing’ systems, once provided the base capacity required to extract excess capacity from their environments, will simply bootstrap until they reach a point where the system detaches from its environment altogether, begins converging upon the signal of some environmental outcome, rather than any actual environmental outcome.

Thus the famous ‘Junkie Solution’ to Fermi’s Paradox (as recently confirmed by the Gala Semantic Supercomputer at MIT).

And thus Choir.

The Augmentation Paradox

by rsbakker

So, thanks to the great discussion on the ‘Knowledge of Wisdom Paradox,’ here’s a sharper way to characterize the ecological stakes of the posthuman:

The Augmentation Paradox: The more you ‘improve’ some ancestral capacity, the more you degrade all ancestral capacities turning on the ancestral form of that capacity.

It’s not a paradox in the formal sense, of course. Also note that the dependency between ancestral capacities can be a dependency within or between individuals. Imagine a ‘confabulation detector,’ a device that shuts down your verbal reporting system whenever the neural signature of confabulation is detected, freeing you from the dream world we all inhabit while effectively exiling you from all social activities requiring confabulation (you now trigger ‘linguistic pause’ alerts), and perhaps dooming you to suffer debilitating depression.

It seems to me that something like this has to be floating around somewhere–in debates regarding transhumanism especially. If almost all artificial augmentations entail natural degradations, then the question becomes one of what is gained overall. One can imagine, for instance, certain capacities degrading gracefully while others (like the socio-cognitive capacities of those conned by Ashley Madison bots) collapse catastrophically. So the question has to be, What guarantee do we have that augmentations will recoup degradations?

The point being, of course, that we’re not tinkering with cognitive technologies on the ground so much as on the 115th floor. It’s 3.8 billion years down!

Either way, the plausibility of the transhumanist project pretty clearly depends on somehow resolving the Augmentation Paradox in their favour.

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario–but “The inherence heuristic” is getting close, very close, especially the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” 464

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality, intentional cognition, relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve it at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ him ‘knowledge,’ something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So to give a brief example. Once one understands the degree to which intentional idioms turn on ‘inherence heuristics’–ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems–you can understand the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, the fact that intentional cognition solves problems neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is, ‘intentional stances’ leave us with a very low dimensional understanding of our actual straits when it comes to understanding cognition–as we should expect, given that it utilizes a low dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, and more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

The Knowledge of Wisdom Paradox

by rsbakker

Consider: We’ve evolved to solve environments using as little information as possible. This means we’ve evolved to solve environments ignoring as much information as possible. This means we’ve evolved to take as much of our environments for granted as possible. This means evolution has encoded an extraordinary amount of implicit knowledge into our cognitive systems. You could say that each and every one of us constitutes a kind of solution to an ‘evolutionary frame problem.’

Thus the ‘Knowledge of Wisdom Paradox.’ The more explicit knowledge we accumulate, the more we can environmentally intervene. The more we environmentally intervene, the more we change the taken-for-granted backgrounds. The more we change taken-for-granted backgrounds, the less reliable our implicit knowledge becomes.

In other words, the more robust/reliable our explicit knowledge tends to become, the less robust/reliable our implicit knowledge tends to become. Has anyone come across a version of this paradox anywhere? It actually strikes me as a very parsimonious way to make sense of how intelligence manages to make such idiots of some individuals. And its implications for our future are nothing if not profound.

Alienating Philosophies

by rsbakker

I still have no dates to report for The Unholy Consult, but I’m hoping that all the pieces will begin falling together this week. As soon as I know, I will post, I promise. In the meantime, for those interested, I do have some linkage to share.

Buzzfeed Books were kind enough to include The Prince of Nothing in their Top 51 Fantasy Series Ever Written a few days back, proving yet again why I need to get off my ass and get some real publicity shots.

As well, my “Alien Philosophy” piece from the previous two weeks has garnered some thoughtful responses both from Peter Hankins at Conscious Entities, and from Rick Searle at both Utopia or Dystopia and the Institute for Ethics and Emerging Technologies. The discussion is just getting warmed up, so by all means, join in!

I didn’t want to say anything until the post had a chance to be judged on its own merits, but “Alien Philosophy” is actually an extract from my attempt to write a “reader friendly” introduction to Through the Brain Darkly. Though I think it works well enough as a stand-alone article, I’ve all but given up on it as intro material, and quite frankly, feel like a fool for ever thinking it possibly could be. Soooooo it’s back to the drawing board for me…
