Three Pound Brain

No bells, just whistling in the dark…


The Truth Behind the Myth of Correlationism

by rsbakker

A wrong turn lies hidden in the human cultural code, an error that has scuttled our every attempt to understand consciousness and cognition. So much philosophical activity reeks of dead ends: we try and we try, and yet we find ourselves mired in the same ancient patterns of disputation. The majority of thinkers believe the problem is local, that they need only tinker with the tools they’ve inherited. They soldier on, arguing that this or that innovative modification will overcome our confusion. Some, however, believe the problem lies deeper. I’m one of those thinkers, as is Meillassoux. I think the solution lies in speculation bound to the hip of modern science, in something I call ‘heuristic neglect.’ For me, the wrong turn lies in the application of intentional cognition to solve the theoretical problem of intentional cognition. Meillassoux thinks it lies in what he calls ‘correlationism.’

Since I’ve been accused of ‘correlationism’ on a couple of occasions now, I thought it worthwhile tackling the issue in more detail. This will not be an institutional critique a la Golumbia, who manages to identify endless problems with Meillassoux’s presentation while somehow entirely missing his skeptical point: once cognition becomes artifactual, it becomes very… very difficult to understand. Cognitive science is itself fractured over Meillassoux’s issue.

What follows will be a constructive critique, an attempt to explain the actual problem underwriting what Meillassoux calls ‘correlationism,’ and why his attempt to escape that problem simply collapses into more interminable philosophy. The problem that artifactuality poses to the understanding of cognition is very real, and it also happens to fall into the wheelhouse of Heuristic Neglect Theory (HNT). For those souls growing disenchanted with Speculative Realism, but unwilling to fall back into the traditional bosom, I hope to show that HNT not only offers the radical break with tradition that Meillassoux promises, it remains inextricably bound to the details of this, the most remarkable age.

What is correlationism? The experts explain:

Correlation affirms the indissoluble primacy of the relation between thought and its correlate over the metaphysical hypostatization or representational reification of either term of the relation. Correlationism is subtle: it never denies that our thoughts or utterances aim at or intend mind-independent or language-independent realities; it merely stipulates that this apparently independent dimension remains internally related to thought and language. Thus contemporary correlationism dismisses the problematic of scepticism, and of epistemology more generally, as an antiquated Cartesian hang-up: there is supposedly no problem about how we are able to adequately represent reality, since we are ‘always already’ outside ourselves and immersed in or engaging with the world (and indeed, this particular platitude is constantly touted as the great Heideggerean-Wittgensteinian insight). Note that correlationism need not privilege “thinking” or “consciousness” as the key relation—it can just as easily replace it with “being-in-the-world,” “perception,” “sensibility,” “intuition,” “affect,” or even “flesh.” Ray Brassier, Nihil Unbound, 51

By ‘correlation’ we mean the idea according to which we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other. We will henceforth call correlationism any current of thought which maintains the unsurpassable character of the correlation so defined. Consequently, it becomes possible to say that every philosophy which disavows naive realism has become a variant of correlationism. Quentin Meillassoux, After Finitude, 5

Correlationism rests on an argument as simple as it is powerful, and which can be formulated in the following way: No X without givenness of X, and no theory about X without a positing of X. If you speak about something, you speak about something that is given to you, and posited by you. Consequently, the sentence: ‘X is’, means: ‘X is the correlate of thinking’ in a Cartesian sense. That is: X is the correlate of an affection, or a perception, or a conception, or of any subjective act. To be is to be a correlate, a term of a correlation . . . That is why it is impossible to conceive an absolute X, i.e., an X which would be essentially separate from a subject. We can’t know what the reality of the object in itself is because we can’t distinguish between properties which are supposed to belong to the object and properties belonging to the subjective access to the object. Quentin Meillassoux, “Time without Becoming”

The claim of correlationism is the corollary of the slogan that ‘nothing is given’ to understanding: everything is mediated. Once knowing becomes an activity, then the objects insofar as they are known become artifacts in some manner: reception cannot be definitively sorted from projection and as a result no knowledge can be said to be absolute. We find ourselves trapped in the ‘correlationist circle,’ trapped in artifactual galleries, never able to explain the human-independent reality we damn well know exists. Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions. Any theory unable to decisively explain objectivity is a theory that cannot explain cognition. Ergo, correlationism names a failed (cognitivist) philosophical endeavour.

It’s a testament to the power of labels in philosophy, I think, because as Meillassoux himself acknowledges there’s nothing really novel about the above sketch. Explaining the ‘cognitive difference’ was my dissertation project back in the 90’s, after all, and as smitten as I was with my bullshit solution back then, I didn’t think the problem itself was anything but ancient. Given this whole website is dedicated to exploring and explaining consciousness and cognition, you could say it remains my project to this very day! One of the things I find so frustrating about the ‘critique of correlationism’ is that the real problem—the ongoing crisis—is the problem of meaning. If correlationism fails because correlationism cannot explain cognition, then the problem of correlationism is an expression of a larger problem, the problem of cognition—or in other words, the problem of intentionality.

Why is the problem of meaning an ongoing crisis? In the past six fiscal years, from 2012 to 2017, the National Institutes of Health will have spent more than 113 billion dollars funding research bent on solving some corner of the human soul. [1] And this is just one public institution in one nation involving health-related research. If you include the cognitive sciences more generally—research into everything from consumer behaviour to AI—you could say that solving the human soul commands more resources than any other domain in history. The reason all this money is being poured into the sciences rather than philosophy departments is that the former possesses real-world consequences: diseases cured, soap sold, politicians elected. As someone who tries to keep up with developments in Continental philosophy, I already find the disconnect stupendous, how whole populations of thinkers continue discoursing as if nothing significant has changed, bitching about traditional cutlery in the shadow of the cognitive scientific tsunami.

Part of the popularity of the critique of correlationism derives from anxieties regarding the growing overlap of the sciences of the human and the humanities. All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science. What some neuroscientists term ‘internal models,’ which monopolize our access to ourselves and the world, is nothing if not a theoretical correlation of environments and cognition, trapping us in models of models. The very science that Meillassoux thinks argues against correlationism in one context explicitly turns on it in another. The mediation of knowledge is the domain of cognitive science—full stop. A naturalistic understanding of cognition is a biological understanding is an artifactual understanding: this is why the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit.

A kind of arche-fossil.

If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast sums of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.

Only more philosophy, Meillassoux thinks, can overcome the ‘scandal of philosophy.’ But how is mere opinion supposed to provide bona fide knowledge of knowledge? Speculation on mathematics does nothing to ameliorate this absurdity: even though paradigmatic of objectivity, mathematics remains as inscrutable as knowledge itself. Perhaps there is some sense to be found in the notion of interrogating/theorizing objects in a bid to understand objectivity (cognition), but given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.

I just don’t know how to make the ‘critique of correlationism’ workable, short of ignoring the very science it takes as its motivation, or just as bad, subordinating empirical discoveries to some school of ‘fundamental ontological’ speculation. If you’re willing to take such a leap of theoretical faith, you can be assured that no one in the vicinity of cognitive science will take it with you—and that you will make no difference in the mad revolution presently crashing upon us.

We know that knowledge is somehow an artifact of neural function—full stop. Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?

After all, we’ve learned a tremendous amount about this how in the past decades: the idea of dismissing all this detail on the basis of a priori guesswork seems more than a little suspect. The track record would suggest extreme caution. As the boggling scale of the cognitive scientific project should make clear, everything turns on the biological details of cognition. We now know, for instance, that the brain employs legions of special purpose devices to navigate its environments. We know that cognition is thoroughly heuristic, that it turns on cues, bits of available information statistically correlated to systems requiring solution.

Almost all systems in our environment shed information enabling the prediction of subsequent behaviours absent the mechanical particulars of that information. The human brain is exquisitely tuned to identify and exploit the correlation of available information and subsequent behaviours. The artifactuality of biology is an evolutionary one, and as such geared to the thrifty solution of high-impact problems. To say that cognition (animal or human) is heuristic is to say it’s organized according to the kinds of problems our ancestors needed to solve, and not according to those belonging to academics. Human cognition consists of artifactualities, subsystems dedicated to certain kinds of problem ecologies. Moreover, it consists of artifactualities selected to answer questions quite different from those posed by philosophers.
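The point about shed information can be made concrete with a toy sketch (my own, and only a sketch: the damped oscillator, the sampling rate, and the lag-two regression are all hypothetical choices, nothing from the literature). A hidden mechanism sheds a signal, and a simple statistical predictor solves the system’s future behaviour from that signal alone, neglecting the mechanical particulars entirely:

```python
# Minimal sketch: prediction from shed information, neglecting mechanism.
# The 'heuristic' predictor never sees the physics below, only the signal.
import numpy as np

rng = np.random.default_rng(0)

# Hidden mechanism: a damped oscillator (mass, damping, restoring force).
t = np.arange(0.0, 20.0, 0.1)
signal = np.exp(-0.05 * t) * np.cos(2.0 * t) + rng.normal(0.0, 0.02, t.size)

# Cue exploitation: regress x[k] on (x[k-1], x[k-2]), a statistical
# regularity in the shed information, with no physical variables anywhere.
X = np.column_stack([signal[1:-1], signal[:-2]])
y = signal[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
print("mean absolute prediction error:", np.mean(np.abs(pred - y)))
```

The predictor is thrifty in exactly the sense at issue: two lagged values suffice to anticipate the system’s behaviour, while the high-dimensional mechanics go entirely neglected.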

These two facts drastically alter the landscape of the apparent problem posed by ‘correlationism.’ We have ample theoretical and empirical reasons to believe that mechanistic cognition and intentional cognition comprise two quite different cognitive regimes, the one dedicated to explanation via high-dimensional (physical) sourcing, the other dedicated to explanation absent that sourcing. As an intentional phenomenon, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information? Since intentional cognition turns on specific cues to leverage solutions, and since those cues appear sufficient (to be the only game in town where that behaviour is concerned), the high-dimensional sourcing of that same behaviour generates a philosophical crash space—and a storied one at that! What seems sourceless and self-evident becomes patently impossible.

Short of magic, cognitive systems possess the environmental relationships they do thanks to super-complicated histories of natural and neural selection—evolution and learning. Let’s call this their orientation, understood as the nonintentional (‘zombie’) correlate of ‘perspective.’ The human brain is possibly the most complex thing we know of in the universe (a fact which should render suspect any theory of the human that neglects that complexity). Our cognitive systems, in other words, possess physically intractable orientations. How intractable? Enough that billions of dollars in research have merely scratched the surface.

Any capacity to cognize this relationship will perforce be radically heuristic, which is to say, provide a means to solve some critical range of problems—a problem ecology—absent natural historical information. The orientation heuristically cognized, of course, is the full-dimensional relationship we actually possess, only hacked in ways that generate solutions (repetitions of behaviour) while neglecting the physical details of that relationship.

Most significantly, orientation neglects the dimension of mediation: thought and perception (whatever they amount to) are thoroughly blind to their immediate sources. This cognitive blindness to the activity of cognition, or medial neglect, amounts to a gross insensitivity to our physical continuity with our environments, the fact that we break no thermodynamic laws. Our orientation, in other words, is characterized by a profound, structural insensitivity to its own constitution—its biological artifactuality, among other things. This auto-insensitivity, not surprisingly, includes insensitivity to the fact of this insensitivity, and thus the default presumption of sufficiency. Specialized sensitivities are required to flag insufficiencies, after all, and like all biological devices, they do not come for free. Not only are we blind to our position within the superordinate systems comprising nature, we’re blind to our blindness, and so, unable to distinguish table-scraps from a banquet, we are duped into affirming inexplicable spontaneities.

‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect. You could say it’s a way to advertise clockwork positioning (functional sufficiency) absent any inkling of the clock. ‘Objectivity,’ the term denoting the supposed general property of being true apart from individual perspectives, is a deliberative contrivance derived from practical applications of ‘truth’—the product of ‘philosophical reflection.’ The problem with objectivity as a phenomenon (as opposed to ‘objectivity’ as a component of some larger cognitive articulation) is that the sufficiency of iterable orientations within superordinate systems is always a contingent affair. Whether ‘truth’ occasions sufficiency is always an open question, since the system provides, at best, a rough and ready way to communicate and/or troubleshoot orientation. Unpredictable events regularly make liars of us all. The notion of facts ‘being true’ absent the mediation of human cognition, ‘objectivity,’ also provides a rough and ready way to communicate and/or troubleshoot orientation in certain circumstances. We regularly predict felicitous orientations without the least sensitivity to their artifactual nature, absent any inkling how their pins lie in intractable high-dimensional coincidences between buzzing brains. This insensitivity generates the illusion of absolute orientation, a position outside natural regularities—a ‘view from nowhere.’ We are a worm in the gut of nature convinced we possess disembodied eyes. And so long as the consequences of our orientations remain felicitous, our conceit need not be tested. Our orientations might as well ‘stand nowhere’ absent cognition of their limits.

Thus can ‘truth’ and ‘objectivity’ be naturalized and their peculiarities explained.

The primary cognitive moral here is that lacking information has positive cognitive consequences, especially when it comes to deliberative metacognition, our attempts to understand our nature via philosophical reflection alone. Correlationism evidences this in a number of ways.

As soon as the problem of cognition is characterized as the problem of thought and being, it becomes insoluble. Intentional cognition is heuristic: it neglects the nature of the systems involved, exploiting cues correlated to the systems requiring solution instead. The application of intentional cognition to theoretical explanation, therefore, amounts to the attempt to solve natures using a system adapted to neglect natures. A great deal of traditional philosophy is dedicated to the theoretical understanding of cognition via intentional idioms—via applications of intentional cognition. Thus the morass of disputation. We presume that specialized problem-solving systems possess general application. Lacking the capacity to cognize our inability to cognize the theoretical nature of cognition, we presume sufficiency. Orientation, the relation between neural systems and their proximal and distal environments—between two systems of objects—becomes perspective, the relation between subjects (or systems of subjects) and systems of objects (environments). If one conflates the manifest artifactual nature of orientation with the artifactual nature of perspective (subjectivity), then objectivity itself becomes a subjective artifact, and therefore nothing objective at all. Since orientation characterizes our every attempt to solve for cognition, conflating it with perspective renders perspective inescapable, and objectivity all but inexplicable. Thus the crash space of traditional epistemology.

Now I know from hard experience that the typical response to the picture sketched above is to simply insist on the conflation of orientation and perspective, to assert that my position, despite its explanatory power, simply amounts to more of the same, another perspectival Klein Bottle distinctive only for its egregious ‘scientism.’ Only my intrinsically intentional perspective, I am told, allows me to claim that such perspectives are metacognitive artifacts, a consequence of medial neglect. But asserting perspective before orientation on the basis of metacognitive intuitions alone not only begs the question, it also beggars explanation, delivering the project of cognizing cognition to never-ending disputation—an inability to even formulate explananda, let alone explain anything. This is why I like asking intentionalists how many centuries of theoretical standstill we should expect before that oft advertised and never delivered breakthrough finally arrives. The sin Meillassoux attributes to correlationism, the inability to explain cognition, is really just the sin belonging to intentional philosophy as a whole. Thanks to medial neglect, metacognition, blind to both its sources and its source blindness, insists we stand outside nature. Tackling this intuition with intentional idioms leaves our every attempt to rationalize our connection underdetermined, a matter of interminable controversy. The Scandal dwells on, eternal.

I think orientation precedes perspective—and obviously so, having watched loved ones dismantled by brain disease. I think understanding the role of neglect in orientation explains the peculiarities of perspective, provides a parsimonious way to understand the apparent first-person in terms of the neglect structure belonging to the third. There’s no problem with escaping the dream tank and touching the world simply because there’s no ontological distinction between ourselves and the cosmos. We constitute a small region of a far greater territory, the proximal attuned to the distal. Understanding the heuristic nature of ‘truth’ and ‘objectivity,’ I restrict their application to adaptive problem-ecologies, and simply ask those who would turn them into something ontologically exceptional why they would trust low-dimensional intuitions over empirical data, especially when those intuitions pretty much guarantee perpetual theoretical underdetermination. Far better trust to our childhood presumptions of truth and reality, in the practical applications of these idioms, than in any one of the numberless theoretical misapplications ‘discovering’ this trust fundamentally (as opposed to situationally) ‘naïve.’

The cognitive difference, what separates the consequences of our claims, has never been about ‘subjectivity’ versus ‘objectivity,’ but rather intersystematicity, the integration of ever-more sensitive orientations possessing ever more effectiveness into the superordinate systems encompassing us all. Physically speaking, we’ve long known that this has to be the case. Short of actual difference-making differences, be they photons striking our retinas or compression waves striking our eardrums, no difference is made. Even Meillassoux acknowledges the necessity of physical contact. What we’ve lacked is a way of seeing how our apparently immediate intentional intuitions, be they phenomenological, ontological, or normative, fit into this high-dimensional—physical—picture.

Heuristic Neglect Theory not only provides this way, it also explains why it has proven so elusive over the centuries. HNT explains the wrong turn mentioned above. The question of orientation immediately cues the systems our ancestors developed to circumvent medial neglect. Solving for our behaviourally salient environmental relationships, in other words, automatically formats the problem in intentional terms. The automaticity of the application of intentional cognition renders it apparently ‘self-evident.’

The reason the critique of correlationism and speculative realism suffer all the problems of underdetermination their proponents attribute to correlationism is that they take this very same wrong turn. How is Meillassoux’s ‘hyper-chaos,’ yet another adventure in a priori speculation, anything more than another pebble tossed upon the heap of traditional disputation? Novelty alone recommends such speculations. Otherwise they leave us every bit as mystified, every bit as unable to accommodate the torrent of relevant scientific findings, and therefore every bit as irrelevant to the breathtaking revolutions even now sweeping us and our traditions out to sea. Like the traditions they claim to supersede, they peddle cognitive abjection, discursive immobility, in the guise of fundamental insight.

Theoretical speculation is cheap, which is why it’s so frightfully easy to make any philosophical account look bad. All you need do is start worrying definitions, then let the conceptual games begin. This is why the warrant of any account is always a global affair, why the power of Evolutionary Theory, for example, doesn’t so much lie in the immunity of its formulations to philosophical critique, but in how much it explains on nature’s dime alone. The warrant of Heuristic Neglect Theory likewise turns on the combination of parsimony and explanatory power.

Anyone arguing that HNT necessarily presupposes some X, be it ontological or normative, is simply begging the question. Doesn’t HNT presuppose the reality of intentional objectivity? Not at all. HNT certainly presupposes applications of intentional cognition, which, given medial neglect, philosophers posit as functional or ontological realities. On HNT, a theory can be true even though, high-dimensionally speaking, there is no such thing as truth. Truth talk possesses efficacy in certain practical problem-ecologies, but because it participates in solving something otherwise neglected, namely the superordinate systematicity of orientations, it remains beyond the pale of intentional resolution.

Even though sophisticated critics of eliminativism acknowledge the incoherence of the tu quoque, I realize this remains a hard twist for many (if not most) to absorb, let alone accept. But this is exactly as it should be, both insofar as something has to explain why isolating the wrong turn has proven so stupendously difficult, and because this is precisely the kind of trap we should expect, given the heuristic and fractionate nature of human cognition. ‘Knowledge’ provides a handle on the intersection of vast, high-dimensional histories, a way to manage orientations without understanding the least thing about them. To know knowledge, we will come to realize, is to know there is no such thing, simply because ‘knowing’ is a resolutely practical affair, almost certainly inscrutable to intentional cognition. When you’re in the intentional mode, this statement simply sounds preposterous—I know it once struck me as such! It’s only when you appreciate how far your intuitions have strayed from those of your childhood, back when your only applications of intentional cognition were practical, that you can see the possibility of a more continuous, intersystematic way to orient ourselves to the cosmos. There was a time before you wandered into the ancient funhouse of heuristic misapplication, when you could not distinguish between your perspective and your orientation. HNT provides a theoretical way to recover that time and take a radically different path.

As a bona fide theory of cognition, HNT provides a way to understand our spectacular inability to understand ourselves. HNT can explain ‘aporia.’ The metacognitive resources recruited for the purposes of philosophical reflection possess alarm bells—sensitivities to their own limits—relevant only to their ancestral applications. The kinds of cognitive aporias (crash spaces) characterizing traditional philosophy are precisely those we might expect, given the sudden ability to exercise specialized metacognitive resources out of school, to apply, among other things, the problem-solving power of intentional cognition to the question of intentional cognition.

As a bona fide theory of cognition, HNT bears as much on artificial cognition as on biological cognition, and as such, can be used to understand and navigate the already radical and accelerating transformation of our cognitive ecologies. HNT scales, from the subpersonal to the social, and this means that HNT is relevant to the technological madness of the now.

As a bona fide empirical theory, HNT, unlike any traditional theory of intentionality, will be sorted. Either science will find that metacognition actually neglects information in the ways I propose, or it won’t. Either science will find this neglect possesses the consequences I theorize, or it won’t. Nothing exceptional or contentious is required. With our growing understanding of the brain and consciousness comes a growing understanding of information access and processing capacity—and the neglect structures that fall out of them. The human brain abounds in bottlenecks, none of which are more dramatic than consciousness itself.

Cognition is biomechanical. The ‘correlation of thought and being,’ on my account, is the correlation of being and being. The ontology of HNT is resolutely flat. Once we understand that we only glimpse as much of our orientations as our ancestors required for reproduction, and nothing more, we can see that ‘thought,’ whatever it amounts to, is material through and through.

The evidence of this lies strewn throughout the cognitive wreckage of speculation, the alien crash site of philosophy.

 

Notes

[1] This includes, in addition to the neurosciences proper, research into Basic Behavioral and Social Science (8.597 billion), Behavioral and Social Science (22.515 billion), Brain Disorders (23.702 billion), Mental Health (13.699 billion), and Neurodegenerative (10.183 billion). https://report.nih.gov/categorical_spending.aspx (accessed 21/01/2017).

 

Framing “On Alien Philosophy”…

by rsbakker


Peter Hankins of Conscious Entities fame has a piece considering “On Alien Philosophy.” The debate is just getting started, but I thought it worthwhile explaining why I think this particular paper of mine amounts to more than just another interpretation to heap onto the intractable problem of ourselves.

Consider the four following claims:

1) We have biologically constrained (in terms of information access and processing resources) metacognitive capacities ancestrally tuned to the solution of various practical problem ecologies, and capable of exaptation to various other problems.

2) ‘Philosophical reflection’ constitutes such an exaptation.

3) All heuristic exaptations inherit, to some extent, the problem-solving limitations of the heuristic exapted.

4) ‘Philosophical reflection’ inherits the problem-solving limitations of deliberative metacognition.

Now I don’t think there’s much of anything controversial about any of these claims (though, to be certain, there are a great many devils lurking in the details adduced). So note what happens when we add the following:

5) We should expect human philosophical practice will express, in a variety of ways, the problem-solving limitations of deliberative metacognition.

Which seems equally safe. But note how the terrain of the philosophical debate regarding the nature of the soul has changed. Any claim purporting the exceptional nature of this or that intentional phenomenon now needs to run the gauntlet of (5). Why assume we cognize something ontologically exceptional when we know we are bound to be duped somehow? All things being equal, mediocre explanations will always trump exceptional ones, after all.

The challenge of (5) has been around for quite some time, but if you read (precritical) eliminativists like Churchland, Stich, or Rosenberg, this is where the battle grinds to a standstill. Why? Because they have no general account of how the inevitable problem-solving limitations of deliberative metacognition would be expressed in human philosophical practice, let alone how they would generate the appearance of intentional phenomena. Since all they have are promissory notes and suggestive gestures, ontologically exceptional accounts remain the only game in town. So, despite the power of (5), the only way to speak of intentional phenomena remains the traditional, philosophical one. Science is blind without theory, so absent any eliminativist account of intentional phenomena, it has no clear way to proceed with their investigation. So it hews to exceptional posits, trusting in their local efficacy, and assuming they will be demystified by discoveries to come.

Thus the challenge posed by Alien Philosophy. By giving real, abductive teeth to (5), my account overturns the argumentative terrain between eliminativism and intentionalism by transforming the explanatory stakes. It shows us how stupidity, understood ecologically, provides everything we need to understand our otherwise baffling intuitions regarding intentional phenomena. “On Alien Philosophy” challenges the Intentionalist to explain more with less (the very thing, of course, he or she cannot do).

Now I think I’ve solved the problem, that I have a way to genuinely naturalize meaning and cognition. The science will sort my pretensions in due course, but in the meantime, the heuristic neglect account of intentionality, given its combination of mediocrity and explanatory power, has to be regarded as a serious contender.

Scripture become Philosophy become Fantasy

by rsbakker


Cosmos and History has published “From Scripture to Fantasy: Adrian Johnston and the Problem of Continental Fundamentalism” in their most recent edition, which can be found here. This is a virus that needs to infect as many continental philosophy graduate students as possible, lest the whole tradition be lost to irrelevance. The last millennium’s radicals have become this millennium’s Pharisees with frightening speed, and now only the breathless have any hope of keeping pace.

ABSTRACT: Only the rise of science allowed us to identify scriptural ontologies as fantastic conceits, as anthropomorphizations of an indifferent universe. Now that science is beginning to genuinely disenchant the human soul, history suggests that traditional humanistic discourses are about to be rendered fantastic as well. Via a critical reading of Adrian Johnston’s ‘transcendental materialism,’ I attempt to show both the shape and the dimensions of the sociocognitive dilemma presently facing Continental philosophers as they appear to their outgroup detractors. Trusting speculative a priori claims regarding the nature of processes and entities under scientific investigation already excludes Continental philosophers from serious discussion. Using such claims, as Johnston does, to assert the fundamentally intentional nature of the universe amounts to anthropomorphism. Continental philosophy needs to honestly appraise the nature of its relation to the scientific civilization it purports to decode and guide, lest it become mere fantasy, or worse yet, conceptual religion.

KEYWORDS: Intentionalism; Eliminativism; Humanities; Heuristics; Speculative Materialism

All transcendental indignation welcome! I was a believer once.

It Is What It Is (Until Notified Otherwise)

by rsbakker


The thing to always remember when one finds oneself in the middle of some historically intractable philosophical debate is that path-dependency is somehow to blame. This is simply to say that the problem is historical, in that squabbles regarding theoretical natures always arise from some background of relatively problem-free practical application. At some point, some turn is taken and things that seem trivially obvious suddenly seem stupendously mysterious. St. Augustine, in addition to giving us one of the most famous quotes in philosophy, gives us a wonderful example of this in The Confessions when he writes:

“What, then, is time? If no one asks of me, I know; if I wish to explain to him who asks, I know not.” XI, XIV, 17

But the rather sobering fact is that this is the case with a great number of the second-order questions we can pose. What is mathematics? What’s a rule? What’s meaning? What’s cause? And of course, what is phenomenal consciousness?

So what is it with second-order interrogations? Why is ‘time talk’ so easily and effortlessly used even though we find ourselves gobsmacked each and every time someone asks what time qua time is? It seems pretty clear that either we lack the information required or the capacity required or some nefarious combination of both. If framing the problem like this sounds like a no-brainer, that’s because it is a no-brainer. The remarkable thing lies in the way it recasts the issue at stake, because as it turns out, the question of the information and capacity we have available is a biological one, and this provides a cognitive ecological means of tackling the problem. Since practical solving for time (‘timing’) is obviously central to survival, it makes sense that we would possess the information access and cognitive capacity required to solve a wide variety of timing issues. Given that theoretical solving for time (time-qua-time) isn’t central to survival (no species does it, and only our species attempts it), it makes sense that we wouldn’t possess the information access and cognitive capacity required, that we would suffer time-qua-time blindness.

From a cognitive ecological perspective, in other words, St. Augustine’s perplexity should come as no surprise at all. Of course solving time-qua-time is mystifying: we evolved the access and capacity required for solving the practical problems of timing, and not the theoretical problem of time. Now I admit if the cognitive ecological approach ground to a halt here it wouldn’t be terribly illuminating, but there’s quite a bit more to be said: it turns out cognitive ecology is highly suggestive of the different ways we might expect our attempts to solve things like time-qua-time to break down.

What would it be like to reach the problem-solving limits of some practically oriented problem-solving mode? Well, we should expect our assumptions/intuitions to stop delivering answers. My daughter is presently going through a ‘cootie-catcher’ phase and is continually instructing me to ask questions, then upbraiding me when my queries don’t fit the matrix of possible ‘answers’ provided by the cootie-catcher (yes, no, and versions of maybe). Sometimes she catches these ill-posed questions immediately, and sometimes she doesn’t catch them until the cootie-catcher generates a nonsensical response.


Now imagine your child never revealed their cootie-catcher to you: you asked questions, then picked colours or numbers or animals, and it turned out some were intelligibly answered, and some were not. Very quickly you would suss out the kinds of questions that could be asked, and the kinds that could not. Now imagine unbeknownst to you that your child replaced their cootie-catcher with a computer running two separately tasked, distributed AlphaGo type programs, the first trained to provide well-formed (if not necessarily true) answers to basic questions regarding causality and nothing else, the second trained to provide well-formed (if not necessarily true) answers to basic questions regarding goals and intent. What kind of conclusions would you draw, or more importantly, assume? Over time you would come to suss out the questions generating ill-formed answers versus questions generating well-formed ones. But you would have no way of knowing that two functionally distinct systems were responsible for the well-formed answers: causal and purposive modes would seem the product of one cognitive system. In the absence of distinctions you would presume unity.
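A throwaway sketch of the scenario (every name here is hypothetical) makes the point about unity vivid: two functionally distinct modules hide behind a single interface, and nothing in that interface’s behaviour betrays the split.

```python
# Two functionally distinct answer-systems behind one opaque interface.
# From outside, nothing distinguishes one 'program' from two.

def causal_module(question: str) -> str:
    # Well-formed answers to 'why did X happen?' questions only.
    return "Because of some antecedent event."

def purposive_module(question: str) -> str:
    # Well-formed answers to 'what is X for?' questions only.
    return "In order to achieve some goal."

def oracle(question: str) -> str:
    """The only interface the questioner ever sees."""
    q = question.lower()
    if q.startswith("why did"):
        return causal_module(question)
    if q.startswith("what is") and " for" in q:
        return purposive_module(question)
    return "Colourless green ideas sleep furiously."  # ill-posed: nonsense out

print(oracle("Why did the window break?"))   # well-formed (causal module)
print(oracle("What is a hammer for?"))       # well-formed (purposive module)
print(oracle("What is time?"))               # nonsense; the split stays hidden
```

You could learn, by trial and error, which questions come back well-formed, but no pattern of answers would ever reveal that two systems, rather than one, were doing the answering.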

Think of the difference between Plato likening memory to an aviary in the Theaetetus and the fractionate, generative memory we now know to be the case. The fact that Plato assumed as much, unity and retrieval, shouts something incredibly important once placed in a cognitive ecological context. What it suggests is that purely deliberative attempts to solve second-order problems, to ask questions like what is memory-qua-memory, will almost certainly run afoul of the problem of default identity, the identification that comes about for the want of distinctions. To return to our cootie-catcher example, it’s not simply that we would report unity regarding our child’s two AlphaGo type programs the way Plato did with memory, it’s that information involving its dual structure would play no role in our cognitive economy whatsoever. Unity, you could say, is the assumption built into the system. (And this applies as much to AI as it does to human beings. The first ‘driverless fatality’ died because his Tesla Model S failed to distinguish a truck trailer from the sky.)

Default identity, I think, can play havoc with even the most careful philosophical interrogations—such as the one Eric Schwitzgebel gives in the course of rebutting Keith Frankish, both on his blog and in his response in The Journal of Consciousness Studies, “Phenomenal Consciousness, Defined and Defended as Innocently as I Can Manage.”

According to Eric, “Illusionism as a Theory of Consciousness” presents the phenomenal realist with a dilemma: either they commit to puzzling ontological features such as simplicity, ineffability, intrinsicality, and so on, or they commit to explaining those features away, which is to say, to some variety of Illusionism. Since Eric both believes that phenomenal consciousness is real, and that the extraordinary properties attributed to it are likely not real, he proposes a third way, a formulation of phenomenal experience that neither inflates it into something untenable, nor deflates it into something that is plainly not phenomenal experience. “The best way to meet Frankish’s challenge,” he writes, “is to provide something that the field of consciousness studies in any case needs: a clear definition of phenomenal consciousness, a definition that targets a phenomenon that is both substantively interesting in the way that phenomenal consciousness is widely thought to be interesting but also innocent of problematic metaphysical and epistemological assumptions” (2).

It’s worth noting the upshot of what Eric is saying here: the scientific study of phenomenal consciousness cannot, as yet, even formulate their primary explanandum. The trick, as he sees it, is to find some conceptual way to avoid the baggage, while holding onto some semblance of a wardrobe. And his solution, you might say, is to wear as many outfits as he possibly can. He proposes that definition by example is uniquely suited to anchor an ontologically and epistemologically innocent concept of phenomenal consciousness.

He has but one caveat: any adequate formulation of phenomenal consciousness has to account or allow for what Eric terms its ‘wonderfulness’:

If the reduction of phenomenal consciousness to something physical or functional or “easy” is possible, it should take some work. It should not be obviously so, just on the surface of the definition. We should be able to wonder how consciousness could possibly arise from functional mechanisms and matter in motion. Call this the wonderfulness condition. 3

He concedes the traditional properties ascribed to phenomenal experience outrun naturalistic credulity, but the feature of beggaring belief remains to be explained. This is the part of Eric’s position to keep an eye on, because it means his key defense against eliminativism is abductive. Whatever phenomenal consciousness is, it seems safe to say it is not something easily solved. Any account purporting to solve phenomenal consciousness that leaves the wonderfulness condition unsatisfied is likely missing phenomenal consciousness altogether.

And so Eric provides a list of positive examples including sensory and somatic experiences, conscious imagery, emotional experience, thinking and desiring, dreams, and even other people, insofar as we continually attribute these very same kinds of experiences to them. By way of negative examples, he mentions a variety of intimate, yet obviously not phenomenally conscious processes, such as fingernail growth, intestinal lipid absorption, and so on.

He writes:

Phenomenal consciousness is the most folk psychologically obvious thing or feature that the positive examples possess and that the negative examples lack. I do think that there is one very obvious feature that ties together sensory experiences, imagery experiences, emotional experiences, dream experiences, and conscious thoughts and desires. They’re all conscious experiences. None of the other stuff is experienced (lipid absorption, the tactile smoothness of your desk, etc.). I hope it feels to you like I have belabored an obvious point. Indeed, my argumentative strategy relies upon this obviousness. 8

Intuition, the apparent obviousness of his examples, is what he stresses here. The beauty of definition by example is that offering instances of the phenomenon at issue allows you to remain agnostic regarding the properties possessed by that phenomenon. It actually seems to deliver the very metaphysical and epistemological innocence Eric needs to stave off the charge of inflation. It really does allow him to ditch the baggage and travel wearing all his clothes, or so it seems.

Meanwhile the wonderfulness condition, though determining the phenomenon, does so indirectly, via the obvious impact it has on human attempts to cognize experience-qua-experience. Whatever phenomenal consciousness is, contemplating it provokes wonder.

And so the argument is laid out, as spare and elegant as all of Eric’s arguments. It’s pretty clear these are examples of whatever it is we call phenomenal consciousness. Of course, there’s something about them that we find downright stupefying. Surely, he asks, we can be phenomenal realists in this austere respect?

For all its intuitive appeal, the problem with this approach is that it almost certainly presumes a simplicity that human cognition does not possess. Conceptually, we can bring this out with a single question: Is phenomenal consciousness the most folk psychologically obvious thing or feature the examples share, or is it obvious in some other respect? Eric’s claim amounts to saying the recognition of phenomenal consciousness as such belongs to everyday cognition. But is this the case? Typically, recognition of experience-qua-experience is thought to be an intellectual achievement of some kind, a first step toward the ‘philosophical’ or ‘reflective’ or ‘contemplative’ attitude. Shouldn’t we say, rather, that phenomenal consciousness is the most obvious thing or feature these examples share upon reflection, which is to say, philosophically?

This alternative need only be raised to drag Eric’s formulation back into the mire of conceptual definition, I think. But on a cognitive ecological picture, we can actually reframe this conceptual problematization in path-dependent terms, and so more forcefully insist on a distinction of modes and therefore a distinction in problem-solving ecologies. Recall Augustine, how we understand time without difficulty until we ask the question of time qua time. Our cognitive systems have no serious difficulty with timing, but then abruptly break down when we ask the question of time as such. Even though we had the information and capacity required to solve any number of practical issues involving time, as soon as we pose the question of time-qua-time that fluency evaporates and we find ourselves out-and-out mystified.

Eric’s definition by example, as an explicitly conceptual exercise, clearly involves something more than everyday applications of experience talk. The answer intuitively feels as natural as can be—there must be some property X these instances share or exclude, certainly!—but the question strikes most everyone as exceptional, at least until they grow accustomed to it. Raising the question, as Augustine shows us, is precisely where the problem begins, and as my daughter would be quick to remind Eric, cootie-catchers only work if we ask the right question. Human cognition is fractionate and heuristic, after all.


All organisms are immersed in potential information, difference-making differences that could spell the difference between life and death. Given the difficulties involved in the isolation of causes, they often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information organisms have, evolved and learned sensitivities to effects systematically correlated to those environmental systems relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers adapted to deep information environments, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible.

We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems and the resources required to solve them are wildly disparate, not all access is equal.

Information access, I think, divides cognition into two distinct forms, two different families of ‘AlphaGo type’ programs. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the necessity, the efficacy, and the limitations of source insensitive (cue correlative) cognition. Andrei Cimpian’s lab and the work of Klaus Fiedler (as well as that of the Adaptive Behaviour and Cognition Research Group more generally) are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit absent the global theoretical framework provided here.)
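For what it’s worth, the difference between the two regimes can be caricatured in a few lines (a sketch of my own, not drawn from any of the research above): a predictor trained on a cue merely correlated with the source works well in its home ecology and crashes the moment that correlation breaks, while a predictor with access to the source itself carries on untroubled.

```python
# Source-insensitive vs source-sensitive cognition, caricatured.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
cause = rng.normal(size=n)              # the actual source
cue = cause + rng.normal(0.0, 0.1, n)   # shed information, correlated w/ source
target = 2.0 * cause                    # the behaviour requiring solution

w_cue = np.polyfit(cue, target, 1)      # source-insensitive: cue only
w_src = np.polyfit(cause, target, 1)    # source-sensitive: the source itself

# Ecology shift: the cue decouples from the source.
cause2 = rng.normal(size=n)
cue2 = rng.normal(size=n)               # correlation broken
target2 = 2.0 * cause2

err_cue = np.mean((np.polyval(w_cue, cue2) - target2) ** 2)
err_src = np.mean((np.polyval(w_src, cause2) - target2) ** 2)
print(f"cue-based error after shift:    {err_cue:.2f}")   # large: crash space
print(f"source-based error after shift: {err_src:.4f}")   # still negligible
```

The cue-based predictor isn’t so much wrong as ecologically stranded: heuristic cognition applied out of school.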

So what are we to make of Eric’s attempt to innocently (folk psychologically) pose the question of experience-qua-experience in light of this rudimentary distinction?

If one takes the brain’s ability to cognize its own cognitive functions as a condition of ‘experience talk,’ it becomes very clear very quickly that experience talk belongs to a source insensitive cognitive regime, a system adapted to exploit correlations between the information consumed (cues) and the vastly complicated systems (oneself and others) requiring solution. This suggests that Eric’s definition by example is anything but theoretically innocent, assuming, as it does, that our source insensitive, experience-talk systems pick out something in the domain of source sensitive cognition… something ‘real.’ Defining by example cues our experience-talk system, which produces indubitable instances of recognition. Phenomenal consciousness becomes, apparently, an indubitable something. Given our inability to distinguish between our own cognitive systems (given ‘cognition-qua-cognition blindness’), default identity prevails; suddenly it seems obvious that phenomenal experience somehow, minimally, belongs to the order of the real. And once again, we find ourselves attempting to square ‘posits’ belonging to sourceless modes of cognition with a world where everything has a source.

We can now see how the wonderfulness condition, which Eric sees working in concert with his definition by example, actually cuts against it. Experience-qua-experience provokes wonder precisely because it delivers us to crash space, the point where heuristic misapplication leads our intuitions astray. Simply by asking this question, we have taken a component from a source insensitive cognitive system relying (qua heuristic) on strategic correlations to the systems requiring solution, and asked a completely different, source sensitive system to make sense of it. Philosophical reflection is a ‘cultural achievement’ precisely because it involves using our brains in new ways, applying ancient tools to novel questions. Doing so, however, inevitably leaves us stumbling around in a darkness we cannot see, running afoul of confounds we have no way of intuiting, simply because they impacted our ancestors not at all. Small wonder ‘phenomenal consciousness’ provokes wonder. How could the most obvious thing possess so few degrees of cognitive freedom? How could light itself deliver us to darkness?

I appreciate the counterintuitive nature of the view I’m presenting here, the way it requires seeing conceptual moves in terms of physical path-dependencies, as belonging to a heuristic gearbox where our numbness to the grinding perpetually convinces us that this time, at long last, we have slipped from neutral into drive. But recall the case of memory, the way blindness to its neurocognitive intricacies led Plato to assume it simple. Only now can we run our (exceedingly dim) metacognitive impressions of memory through the gamut of what we know, see it as a garden of forking paths. The suggestion here is that posing the question of experience-qua-experience poses a crucial fork in the consciousness studies road, the point where a component of source-insensitive cognition, ‘experience,’ finds itself dragged into the court of source sensitivity, and productive inquiry grinds to a general halt.

When I employ experience talk in a practical, first-order way, I have a great deal of confidence in that talk. But when I employ experience talk in a theoretical, second-order way, I have next to no confidence in that talk. Why would I? Why would anyone, given the near-certainty of chronic underdetermination? Even more, I can see no way (short of magic) for our brain to have anything other than radically opportunistic and heuristic contact with its own functions. Either specialized, simple heuristics comprise deliberative metacognition or deliberative metacognition does not exist. In other words, I see no way of avoiding experience-qua-experience blindness.

This flat out means that on a high-dimensional view (one open to as much relevant physical information as possible), there is just no such thing as ‘phenomenal consciousness.’ I am forced to rely on experience-related talk in theoretical contexts all the time, as do scientists in countless lines of research. There is no doubt whatsoever that experience-talk draws water from far more than just ‘folk psychological’ wells. But this just means that various forms of heuristic cognition can be adapted to various experimentally regimented cognitive ecologies—experience-talk can be operationalized. It would be strange if this weren’t the case, and it does nothing to alleviate the fact that solving for experience-qua-experience delivers us, time and again, to crash space.

One does not have to believe in the reality of phenomenal consciousness to believe in the reality of the systems employing experience-talk. As we are beginning to discover, the puzzle has never been one of figuring out what phenomenal experiences could possibly be, but rather figuring out the biological systems that employ them. The greater our understanding of this, the greater our understanding of the confounds characterizing that perennial crash space we call philosophy.

Real Systems

by rsbakker

THE ORDER WHICH IS THERE

Now I’ve never had any mentors; my path has been too idiosyncratic, for the better, since I think it’s the lack of institutional constraints that has allowed me to experiment the way I have. But if I were pressed to name any spiritual mentor, Daniel Dennett would be the first name to cross my lips—without the least hesitation. Nevertheless, I see the theoretical jewel of his project, the intentional stance, as the last gasp of what will one day, I think, count as one of humanity’s great confusions… and perhaps the final one to succumb to science.

A great many disagree, of course, and because I’ve been told so many times to go back to “Real Patterns” to discover the error of my ways, I’ve decided I would use it to make my critical case.

Defenders of Dennett (including Dennett himself) are so quick to cite “Real Patterns,” I think, because it represents his most sustained attempt to situate his position relative to his fellow philosophical travelers. At issue is the reality of ‘intentional states,’ and how the traditional insistence on some clear-cut binary answer to this question—real/unreal—radically underestimates the ontological complexity characterizing both everyday life and the sciences. What he proposes is “an intermediate doctrine” (29), a way of understanding intentional states as real patterns.

I have claimed that beliefs are best considered to be abstract objects rather like centers of gravity. Smith considers centers of gravity to be useful fictions while Dretske considers them to be useful (and hence?) real abstractions, and each takes his view to constitute a criticism of my position. The optimistic assessment of these opposite criticisms is that they cancel each other out; my analogy must have hit the nail on the head. The pessimistic assessment is that more needs to be said to convince philosophers that a mild and intermediate sort of realism is a positively attractive position, and not just the desperate dodge of ontological responsibility it has sometimes been taken to be. I have just such a case to present, a generalization and extension of my earlier attempts, via the concept of a pattern. My aim on this occasion is not so much to prove that my intermediate doctrine about the reality of psychological states is right, but just that it is quite possibly right, because a parallel doctrine is demonstrably right about some simpler cases. 29

So what does he mean by ‘real patterns’? Dennett begins by considering a diagram with six rows of five black boxes, each characterized by varying degrees of noise, so extreme in some cases as to completely obscure the boxes. He then, following the grain of his characteristic genius, provides a battery of different ways these series might find themselves used.

This crass way of putting things—in terms of betting and getting rich—is simply a vivid way of drawing attention to a real, and far from crass, trade-off that is ubiquitous in nature, and hence in folk psychology. Would we prefer an extremely compact pattern description with a high noise ratio or a less compact pattern description with a lower noise ratio? Our decision may depend on how swiftly and reliably we can discern the simple pattern, how dangerous errors are, how much of our resources we can afford to allocate to detection and calculation. These “design decisions” are typically not left to us to make by individual and deliberate choices; they are incorporated into the design of our sense organs by genetic evolution, and into our culture by cultural evolution. The product of this design evolution process is what Wilfrid Sellars calls our manifest image, and it is composed of folk physics, folk psychology, and the other pattern-making perspectives we have on the buzzing blooming confusion that bombards us with data. The ontology generated by the manifest image has thus a deeply pragmatic source. 36

The moral is straightforward: the kinds of patterns that data sets yield are both perspectival and pragmatic. In each case, the pattern recognized is quite real, but bound upon some potentially idiosyncratic perspective possessing some potentially idiosyncratic needs.
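
Dennett’s trade-off can be put in concrete compression terms: a pattern is ‘real’ to the extent that a description exploiting it is shorter than the verbatim bit map, and tracking the noise is precisely what costs. A toy sketch of the decision he describes, in Python (the 5% noise rate and every name here are my own illustrative choices, not Dennett’s):

import random, zlib

random.seed(1)
base = bytes([0, 1] * 500)                                  # the simple underlying pattern
noisy = bytes(b ^ (random.random() < 0.05) for b in base)   # ~5% of the bits flipped

verbatim = len(zlib.compress(noisy, 9))   # cost of tracking every fleck of noise
compact = len(zlib.compress(base, 9))     # cost of the compact pattern description
print(verbatim, compact)                  # the compact description is far cheaper, at the
                                          # price of misdescribing ~5% of the bits

Whether the savings are worth the errors depends, just as Dennett says, on how dangerous errors are and how much of our resources we can spare, a ‘design decision’ evolution has largely made for us.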

He then takes this moral to Conway’s Game of Life, a computer program where cells in a grid are switched on or off in successive turns depending on the number of adjacent cells switched on. The marvelous thing about this program lies in the kinds of dynamic complexities arising from this simple template and single rule, subsystems persisting from turn to turn, encountering other subsystems with predictable results. Despite the determinism of this system, patterns emerge that only the design stance seems to adequately capture, a level possessing “its own language, a transparent foreshortening of the tedious descriptions one could give at the physical level” (39).
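
The ‘single rule’ is compact enough to state in a few lines of code. A minimal Python sketch (the wrap-around grid and the glider example are my own illustrative choices):

def step(grid):
    # Apply the single rule once to every cell; the grid wraps at the edges.
    rows, cols = len(grid), len(grid[0])
    def new_cell(r, c):
        n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
        return 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return [[new_cell(r, c) for c in range(cols)] for r in range(rows)]

# A 'glider,' one of those subsystems persisting from turn to turn: after four
# applications of the rule it reappears intact, shifted one cell diagonally.
glider = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    glider[r][c] = 1
for _ in range(4):
    glider = step(glider)

Note the design stance point: ‘glider’ names nothing at the level of the rule, and yet it is far and away the most compact way of saying what such a grid will do next.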

For Dennett, the fact that one can successfully predict via the design stance clearly demonstrates that it’s picking out real patterns somehow. He asks us to imagine transforming the Game into a supersystem played out on a screen miles wide and using the patterns picked out to design a Turing Machine playing chess against itself. Here, Dennett argues, prediction via the determinate microphysical picture is either intractable or impracticable, yet we need only take up a chess stance or a computational stance to make, from a naive perspective, stunning predictions as to what will happen next.

And this is of course as true of real life as it is of the Game of Life: “Predicting that someone will duck if you throw a brick at him is easy from the folk-psychological stance; it is and will always be intractable if you have to trace the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42). His supersized Game of Life, in other words, makes plain the power and the limitations of heuristic cognition.

This brings him to his stated aim of clarifying his position vis a vis his confreres and Fodor. As he points out, everyone agrees there’s some kind of underlying “order which is there,” as Anscombe puts it in Intention. The million dollar question, of course, is what this order amounts to:

Fodor and others have claimed that an interior language of thought is the best explanation of the hard edges visible in “propositional attitude psychology.” Churchland and I have offered an alternative explanation of these edges… The process that produces the data of folk psychology, we claim, is one in which the multidimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision, thanks to the discreteness of words. 44-45

So for traditional realists, like Fodor, the structure beliefs evince in reflection and discourse expresses the structure beliefs must possess in the head. For Dennett, on the other hand, the structure beliefs evince in reflection and discourse expresses, among other things, the structure of reflection and discourse. How could it be otherwise, he asks, given the ‘stupendous scale of compression’ (42) involved?

As Haugeland points out in “Pattern and Being,” this saddles Dennett’s account of patterns with a pretty significant ambiguity: if the patterns characteristic of intentional states express the structure of reflection and discourse, then the ‘order which is there’ must be here as well. Of course, this much is implicit in Dennett’s preamble: the salience of certain patterns depends on the perspective we possess on them. But even though this implicit ‘here-there holism’ becomes all but explicit when Dennett turns to Radical Translation and the distinction between his and Davidson’s views, his emphasis nevertheless remains on the order out there. As he writes:

Davidson and I both like Churchland’s alternative idea of propositional-attitude statements as indirect “measurements” of a reality diffused in the behavioral dispositions of the brain (and body). We think beliefs are quite real enough to call real just so long as belief talk measures these complex behavior-disposing organs as predictively as it does. 45-46

Rhetorically (even diagrammatically if one takes Dennett’s illustrations into account), the emphasis is on the order there, while here is merely implied as a kind of enabling condition. Call this the ‘epistemic-ontological ambiguity’ (EOA). On the one hand, it seems to make eminent sense to speak of patterns visible only from certain perspectives and to construe them as something there, independent of any perspective we might take on them. But on the other hand, it seems to make jolly good sense to speak of patterns visible only from certain perspectives and to construe them as around here, as something entirely dependent on the perspective we find ourselves taking. Because of this, it seems pretty fair to ask Dennett which kind of pattern he has in mind here. To speak of beliefs as dispositions diffused in the brain seems to pretty clearly imply the first. To speak of beliefs as low dimensional, communicative projections, on the other hand, seems to clearly imply the latter.

Why this ambiguity? Do the patterns underwriting belief obtain in individual believers, dispositionally diffused as he says, or do they obtain in the communicative conjunction of witnesses and believers? Dennett promised to give us ‘parallel examples’ warranting his ‘intermediate realism,’ but by simply asking the whereabouts of the patterns, whether we will find them primarily out there as opposed to around here, we quickly realize his examples merely recapitulate the issue they were supposed to resolve.

 

THE ORDER AROUND HERE

Welcome to crash space. If I’m right then you presently find yourself strolling through a cognitive illusion generated by the application of heuristic capacities outside their effective problem ecology.

Think of how curious the EOA is. The familiarity of it should be nothing short of gobsmacking: here, once again we find ourselves stymied by the same old dichotomies: here versus there, inside versus outside, knowing versus known. Here, once again we find ourselves trapped in the orbit of the great blindspot that still, after thousands of years, stumps the wise of the world.

What the hell could be going on?

Think of the challenge facing our ancestors attempting to cognize their environmental relationships for the purposes of communication and deliberate problem-solving. The industrial scale of our ongoing attempt to understand as much demonstrates the intractability of that relationship. Apart from our brute causal interactions, our ability to cognize our cognitive relationships is source insensitive through and through. When a brick is thrown at us, “the photons from brick to eyeball, the neurotransmitters from optic nerve to motor nerve, and so forth” (42) all go without saying. In other words, the whole system enabling cognition of the brick throwing is neglected, and only information relevant to ancestral problem-solving—in this case, brick throwing—finds its way to conscious broadcast.

In ancestral cognitive ecologies, our high-dimensional (physical) continuity with nature mattered as much as it matters now, but it quite simply did not exist for them. They belonged to any number of natural circuits across any number of scales, and all they had to go on was the information that mattered (disposed them to repeat and optimize behaviours) given the resources they possessed. Just as Dennett argues, human cognition is heuristic through and through. We have no way of cognizing our position within any number of the superordinate systems science has revealed in nature, so we have to make do with hacks, subsystems allowing us to communicate and troubleshoot our relation to the environment while remaining almost entirely blind to it. About talk belongs to just such a subsystem, a kluge communicating and troubleshooting our relation to our environments absent cognition of our position in larger systems. As I like to say, we’re natural in such a way as to be incapable of cognizing ourselves as natural.

About talk facilitates cognition and communication of our worldly relation absent any access to the physical details of that relation. And as it turns out, we are that occluded relation’s most complicated component—we are the primary thing neglected in applications of about talk. As the thing most neglected, we are the thing most presumed, the invariant background guaranteeing the reliability of about talk (this is why homuncular arguments are so empty). This combination of cognitive insensitivity to and functional dependence upon the machinations of cognition (what I sometimes refer to as medial neglect) suggests that about talk would be ideally suited to communicating and troubleshooting functionally independent systems, processes generally insensitive to our attempts to cognize them. This is because the details of cognition make no difference to the details cognized: the automatic distinction about talk draws between the cognizing system and the system cognized poses no impediment to understanding functionally independent systems. As a result, we should expect about talk to be relatively unproblematic when it comes to communicating and troubleshooting things ‘out there.’

Conversely, we should expect about talk to generate problems when it comes to communicating and troubleshooting functionally dependent systems, processes somehow sensitive to our attempts to cognize them. Consider ‘observer effects,’ the problem researchers themselves pose when their presence or their tools/techniques interfere with the process they are attempting to study. Given medial neglect, the researchers themselves always constitute a black box. In the case of systems functionally sensitive to the activity of cognition, as is often the case in psychology and particle physics, understanding the system requires we somehow obviate our impact on the system. As the interactive, behavioural components of cognition show, we are in fact quite good (though far from perfect) at inserting and subtracting our interventions in processes. But since we remain a black box, since our position in the superordinate systems formed by our investigations remains occluded, our inability to extricate ourselves, to gerrymander functional independence, say, undermines cognition.

Even if we necessarily neglect our positions in superordinate systems, we need some way of managing the resulting vulnerabilities, to appreciate that patterns may be artifacts of our position. This suggests one reason, at least, for the affinity of mechanical cognition and ‘reality.’ The more our black box functions impact the system to be cognized, the less cognizable that system becomes in source sensitive terms. We become an inescapable source of noise. Thus our intuitive appreciation of the need for ‘perspective,’ to ‘rise above the fray’: The degree to which a cognitive mode preserves (via gerrymandering if not outright passivity) the functional independence of a system is the degree to which that cognitive mode enables reliable source sensitive cognition is the degree to which about talk can be effectively applied.

The deeper our entanglements, on the other hand, the more we need to rely on source insensitive modes of cognition to cognize target systems. Even if our impact renders the isolation of source signals impossible, our entanglement remains nonetheless systematic, meaning that any number of cues correlated in any number of ways to the target system can be isolated (which is really all ‘radical translation’ amounts to). Given that metacognition is functionally entangled by definition, it becomes easy to see why the theoretical question of cognition causes about talk to crash in the spectacular ways it does: our ability to neglect the machinations of cognition (the ‘order which is here’) is a boundary condition for the effective application of ‘orders which are there’—or seeing things as real. Systems adapted to work around the intractability of our cognitive nature find themselves compulsively applied to the problem of our cognitive nature. We end up creating a bestiary of sourceless things, things that, thanks to the misapplication of the aboutness heuristic, have to belong to some ‘order out there,’ and yet cannot be sourced like anything else out there… as if they were unreal.

The question of reality cues the application of about talk, our source insensitive means of communicating and troubleshooting our cognitive relation to the world. For our ancient ancestors, who lacked the means to distinguish between source sensitive and source insensitive modes of cognition, asking, ‘Are beliefs real?’ would have sounded insane. HNT, in fact, provides a straightforward explanation for what might be called our ‘default dogmatism,’ our reflex for naive realism: not only do we lack any sensitivity to the mechanics of cognition, we lack any sensitivity to this insensitivity. This generates the persistent illusion of sufficiency, the assumption (regularly observed in different psychological phenomena) that the information provided is all the information there is.

Cognition of cognitive insufficiency always requires more resources, more information. Sufficiency is the default. This is what makes the novel application of some potentially ‘good trick,’ as Dennett would say, such tricky business. Consider philosophy. At some point, human culture acquired the trick of recruiting existing metacognitive capacities to explain the visible in terms of the invisible in unprecedented (theoretical) ways. Since those metacognitive capacities are radically heuristic, specialized consumers of select information, we can suppose retasking those capacities to solve novel problems—as philosophers do when they, for instance, ‘ponder the nature of knowledge’—would run afoul of some pretty profound problems. Even if those specialized metacognitive consumers possessed the capacity to signal cognitive insufficiency, we can be certain the insufficiency flagged would be relative to some adaptive problem-ecology. Blind to the heuristic structure of cognition, the first philosophers took the sufficiency of their applications for granted, much as very many do now, despite the millennia of prior failure.

Philosophy inherited our cognitive innocence and transformed it, I would argue, into a morass of competing cognitive fantasies. But if it failed to grasp the heuristic nature of much cognition, it did allow, as if by delayed exposure, a wide variety of distinctions to blacken the photographic plate of philosophical reflection—those between is and ought, fact and value, among them. The question, ‘Are beliefs real?’ became more a bona fide challenge than a declaration of insanity. Given insensitivity to the source insensitive nature of belief talk, however, the nature of the problem entirely escaped them. Since the question of reality cues the application of about talk, source insensitive modes of cognition struck them as the only game in town. Merely posing the question springs the trap (for as Dennett says, such ‘design decisions’ are “typically not left to us to make by individual and deliberate choices” (36)). And so they found themselves attempting to solve the hidden nature of cognition via the application of devices adapted to ignore hidden natures.

Dennett runs into the epistemic-ontological ambiguity because the question of the reality of intentional states cues the about heuristic out of school, cedes the debate to systems dedicated to gerrymandering solutions absent high-dimensional information regarding our cognitive predicament—our position within superordinate systems. Either beliefs are out there, real, or they’re merely in here, an enabling figment of some kind. And as it turns out, IST is entirely amenable to this misapplication, in that ‘taking the intentional stance’ involves cuing the about heuristic, thus neglecting our high-dimensional cognitive predicament. On Dennett’s view, recall, an intentional system is any system that can be predicted/explained/manipulated via the intentional stance. Though the hidden patterns can only be recognized from the proper perspective, they are there nonetheless, enough, Dennett thinks, to concede them reality as intentional systems.

Heuristic Neglect Theory allows us to see how this amounts to mistaking a CPU for a PC. On HNT, the trick is to never let the superordinate systems enabling and necessitating intentional cognition out of view. Recall the example of the gaze heuristic from my prior post, how fielders essentially insert—functionally entangle—themselves into the pop fly system to let the ball itself guide them in. The same applies to beliefs. When your tech repairs your computer, you have no access to her personal history, the way thousands of hours have knapped her trouble-shooting capacities, and even less access to her evolutionary history, the way continual exposure to problematic environments has sculpted her biological problem-solving capacities. You have no access, in other words, to the vast systems of quite natural relata enabling her repair. The source sensitive story is unavailable, so you call her ‘knowledgeable’ instead; you presume she possesses something—a fetish, in effect—endowed with the sourceless efficacy explaining her almost miraculous ability to make your PC run: a mass of true beliefs (representations) regarding personal computer repair. You opt for a source insensitive means that correlates with her capacities well enough to neglect the high-dimensional facts—the natural and personal histories—underwriting her ability.

So then where does the ‘real pattern’ underwriting the reality of belief lie? The realist would say in the tech herself. This is certainly what our (heuristic) intuitions tell us in the first instance. But as we saw above, squaring sourceless entities in a world where most everything has a source is no easy task. The instrumentalist would say in your practices. This certainly lets us explain away some of the peculiarities crashing our realist intuitions, but at the cost of other, equally perplexing problems (this is crash space, after all). As one might expect, substituting the use heuristic for the about heuristic merely passes the hot potato of source insensitivity. ‘Pragmatic functions’ are no less difficult to square with the high-dimensional than beliefs.

But it should be clear by now that the simple act of pairing beliefs with patterns amounts to jumping the same ancient shark. The question, ‘Are beliefs real?’ was a no-brainer for our preliterate ancestors simply because they lived in a seamless shallow information cognitive ecology. Outside their local physics, the sources of things eluded them altogether. ‘Of course beliefs are real!’ The question was a challenge for our philosophical ancestors because they lived in a fractured shallow information ecology. They could see enough between the cracks to appreciate the potential extent and troubling implications of mechanical cognition, its penchant to crash our shallow (ancestral) intuitions. ‘It has to be real!’

With Dennett, entire expanses of our shallow information ecology have been laid low and we get, ‘It’s as real as it needs to be.’ He understands the power of the about heuristic, how ‘order out there’ thinking effects any number of communicative solutions—thus his rebuttal of Rorty. He understands, likewise, the power of the use heuristic, how ‘order around here’ thinking effects any number of communicative solutions—thus his rebuttal of Fodor. And most importantly, he understands the error of assuming the universal applicability of either. And so he concludes:

Now, once again, is the view I am defending here a sort of instrumentalism or a sort of realism? I think that the view itself is clearer than either of the labels, so I shall leave that question to anyone who stills find [sic] illumination in them. 51

What he doesn’t understand is how it all fits together—and how could he, when IST strands him with an intentional theorization of intentional cognition, a homuncular or black box understanding of our contemporary cognitive predicament? This is why “Real Patterns” both begins and ends with EOA, why we are no closer to understanding why such ambiguity obtains at all. How are we supposed to understand how his position falls between the ‘ontological dichotomy’ of realism and instrumentalism when we have no account of this dichotomy in the first place? Why the peculiar ‘bi-stable’ structure? Why the incompatibility between them? How can the same subject matter evince both? Why does each seem to inferentially beg the other?

 

THE ORDER

The fact is, Dennett was entirely right to eschew outright realism or outright instrumentalism. This hunch of his, like so many others, was downright prescient. But the intentional stance only allows him to swap between perspectives. As a one-time adherent I know first-hand the theoretical versatility IST provides, but the problem is that explanation is what is required here.

HNT argues that simply interrogating the high-dimensional reality of belief, the degree to which it exists out there, covers over the very real system—the cognitive ecology—explaining the nature of belief talk. Once again, our ancestors needed some way of communicating their cognitive relations absent source-sensitive information regarding those relations. The homunculus is a black box precisely because it cannot source its own functions, merely track their consequences. The peculiar ‘here dim’ versus ‘there bright’ character of naive ontological or dogmatic cognition is a function of medial neglect, our gross insensitivity to the structure and dynamics of our cognitive capacities. Epistemic or instrumental cognition comes with learning from the untoward consequences of naive ontological cognition—the inevitable breakdowns. Emerging from our ancestral, shallow information ecologies, the world was an ‘order there’ world simply because humanity lacked the ability to discriminate the impact of ‘around here.’ The discrimination of cognitive complexity begets intuitions of cognitive activity, undermines our default ‘out there’ intuitions. But since ‘order there’ is the default and ‘around here’ the cognitive achievement, we find ourselves in the peculiar position of apparently presuming ‘order there’ when making ‘around here’ claims. Since ‘order there’ intuitions remain effective when applied in their adaptive problem-ecologies, we find speculation splitting along ‘realist’ versus ‘anti-realist’ lines. Because no one has any inkling of any of this, we find ourselves flipping back and forth between these poles, taking versions of the same obvious steps to trod the same ancient circles. Every application is occluded, and so ‘transparent,’ as well as an activity possessing consequences.

Thus EOA… as well as an endless parade of philosophical chimera.

Isn’t this the real mystery of “Real Patterns,” the question of how and why philosophers find themselves trapped on this rickety old teeter-totter? “It is amusing to note,” Dennett writes, “that my analogizing beliefs to centers of gravity has been attacked from both sides of the ontological dichotomy, by philosophers who think it is simply obvious that centers of gravity are useful fictions, and by philosophers who think it is simply obvious that centers of gravity are perfectly real” (27). Well, perhaps not so amusing: Short of solving this mystery, Dennett has no way of finding the magic middle he seeks in this article—the middle of what? IST merely provides him with the means to recapitulate EOA and gesture to the possibility of some middle, a way to conceive all these issues that doesn’t deliver us to more of the same. His instincts, I think, were on the money, but his theoretical resources could not take him where he wanted to go, which is why, from the standpoint of his critics, he just seems to want to have it both ways.

On HNT we can see, quite clearly, I think, the problem with the question, ‘Are beliefs real?’ absent an adequate account of the relevant cognitive ecology. The bitter pill lies in understanding that the application conditions of ‘real’ have real limits. Dennett provides examples where those application conditions pretty clearly seem to obtain, then suggests more than argues that these examples are ‘parallel’ in all the structurally relevant respects to the situation with belief. But to distinguish his brand from Fodor’s ‘industrial strength’ realism, he has no choice but to ‘go instrumental’ in some respect, thus exposing the ambiguity falling out of IST.

It’s safe to say belief talk is real. It seems safe to say that beliefs are ‘real enough’ for the purposes of practical problem-solving—that is, for shallow (or source insensitive) cognitive ecologies. But it also seems safe to say that beliefs are not real at all when it comes to solving high-dimensional cognitive ecologies. The degree to which scientific inquiry is committed to finding the deepest (as opposed to the most expedient) account should be the degree to which it views belief talk as a component of real systems and views ‘belief’ as a source insensitive posit, a way to communicate and troubleshoot both oneself and one’s fellows.

This is crash space, so I appreciate the kinds of counter-intuitiveness involved in the view I’m advancing. But since tramping intuitive tracks has hitherto only served to entrench our controversies and confusions, we have good reason to choose explanatory power over intuitive appeal. We should expect that synthesis in the cognitive sciences will prove every bit as alienating to traditional presumption as it was in biology. There’s more than a little conceit involved in thinking we had any special inside track on our own nature. In fact, it would be a miracle if humanity had not found itself in some version of this very dilemma. Given only source insensitive means to troubleshoot cognition, to understand ourselves and each other, we were all but doomed to be stumped by the flood of source sensitive cognition unleashed by science. (In fact, given some degree of interstellar evolutionary convergence, I think one can wager that extraterrestrial intelligences will have suffered their own source insensitive versus source sensitive cognitive crash spaces. See my “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming))

IST brings us to the deflationary limit of intentional philosophy. HNT offers a way to ratchet ourselves beyond, a form of critical eliminativism that can actually explain, as opposed to simply dispute, the traditional claims of intentionality. Dennett, of course, reserves his final criticism for eliminativism, perhaps because so many critics see it as the upshot of his interpretivism. He acknowledges the possibility “that neuroscience will eventually—perhaps even soon—discover a pattern that is so clearly superior to the noisy pattern of folk psychology that everyone will readily abandon the former for the latter” (50), but he thinks it unlikely:

For it is not enough for Churchland to suppose that in principle, neuroscientific levels of description will explain more of the variance, predict more of the “noise” that bedevils higher levels. This is, of course, bound to be true in the limit—if we descend all the way to the neurophysiological “bit map.” But as we have seen, the trade-off between ease of use and immunity from error for such a cumbersome system may make it profoundly unattractive. If the “pattern” is scarcely an improvement over the bit map, talk of eliminative materialism will fall on deaf ears—just as it does when radical eliminativists urge us to abandon our ontological commitments to tables and chairs. A truly general-purpose, robust system of pattern description more valuable than the intentional stance is not an impossibility, but anyone who wants to bet on it might care to talk to me about the odds they will take. 51

The elimination of theoretical intentional idiom requires, Dennett correctly points out, some other kind of idiom. Given the operationalization of intentional idioms across a wide variety of research contexts, they are not about to be abandoned anytime soon, and not at all if the eliminativist has nothing to offer in their stead. The challenge faced by the eliminativist, Dennett recognizes, is primarily abductive. If you want to race at psychological tracks, you either enter intentional horses or something that can run as fast or faster. He thinks this unlikely because he thinks no causally consilient (source sensitive) theory can hope to rival the combination of power and generality provided by the intentional stance. Why might this be? Here he alludes to ‘levels,’ suggesting that any causally consilient account would remain trapped at the microphysical level, and so remain hopelessly cumbersome. But elsewhere, as in his discussion of ‘creeping depersonalization’ in “Mechanism and Responsibility,” he readily acknowledges our ability to treat one another as machines.

And again, we see how the limited resources of IST have backed him into a philosophical corner—and a traditional one at that. On HNT, his claim amounts to saying that no source sensitive theory can hope to supplant the bundle of source insensitive modes comprising intentional cognition. On HNT, in other words, we already find ourselves on the ‘level’ of intentional explanation, already find ourselves with a theory possessing the combination of power and generality required to eliminate a particle of intentional theorization: namely, the intentional stance. A way to depersonalize cognitive science.

Because IST primarily provides a versatile way to deploy and manage intentionality in theoretical contexts rather than any understanding of its nature, the disanalogy between ‘center of gravity’ and ‘beliefs’ remains invisible. In each case you seem to have an entity that resists any clear relation to the order which is there, and yet finds itself regularly and usefully employed in legitimate scientific contexts. Our brains are basically short-cut machines, so it should come as no surprise that we find heuristics everywhere, in perception as much as cognition (insofar as they are distinct). It also should come as no surprise that they comprise a bestiary, as with most all things biological. Dennett is comparing heuristic apples and oranges, here. Centers of gravity are easily anchored to the order which is there because they economize otherwise available information. They can be sourced. Such is not the case with beliefs, belonging as they do to a system gerrymandering for the want of information.

So what is the ultimate picture offered here? What could reality amount to outside our heuristic regimes? Hard to think, as it damn well should be. Our species’ history posed no evolutionary challenges requiring the ability to intuitively grasp the facts of our cognitive predicament. It gave us a lot of idiosyncratic tools to solve high impact practical problems, and as a result, Homo sapiens fell through the sieve in such a way as to be dumbfounded when it began experimenting in earnest with its interrogative capacities. We stumbled across a good number of tools along the way, to be certain, but we remain just as profoundly stumped about ourselves. On HNT, the ‘big picture view’ is crash space, in ways perhaps similar to the subatomic, a domain where our biologically parochial capacities actually interfere with our ability to understand. But it offers a way of understanding the structure and dynamics of intentional cognition in source sensitive terms, and in so doing, explains why crashing our ancestral cognitive modes was inevitable. Just consider the way ‘outside heuristic regimes’ suggests something ‘noumenal,’ some uber-reality lost at the instant of transcendental application. The degree to which this answer strikes you as natural or ‘obvious’ is the degree to which you have been conditioned to apply that very regime out of school. With HNT we can demand that those who want to stuff us into this or that intellectual Klein bottle define their application conditions, convince us this isn’t just more crash space mischief.

It’s trivial to say some information isn’t available, so why not leave well enough alone? Perhaps the time has come to abandon the old, granular dichotomies and speak in terms of dimensions of information available and cognitive capacities possessed. Imagine that.

Moving on.

Dennett’s Black Boxes (Or, Meaning Naturalized)

by rsbakker

“Dennett’s basic insight is that there are under-explored possibilities implicit in contemporary scientific ideas about human nature that are, for various well understood reasons, difficult for brains like ours to grasp. However, there is a familiar remedy for this situation: as our species has done throughout its history when restrained by the cognitive limitations of the human brain, the solution is to engineer new cognitive tools that enable us to transcend these limitations.”

—T. W. Zawidzki, “As close to the definitive Dennett as we’re going to get.”

So the challenge confronting cognitive science, as I see it, is to find some kind of theoretical lingua franca, a way to understand different research paradigms relative to one another. This is the function that Darwin’s theory of evolution plays in the biological sciences, that of a common star chart, a way for myriad disciplines to chart their courses vis a vis one another.

Taking a cognitive version of ‘modern synthesis’ as the challenge, you can read Dennett’s “Two Black Boxes: a Fable” as an argument against the need for such a synthesis. What I would like to show is the way his fable can be carved along different joints to reach a far different conclusion. Beguiled by his own simplifications, Dennett trips into the same cognitive ‘crash space’ that has trapped traditional speculation on the nature of cognition more generally, fooling him into asserting explanatory limits that are merely apparent.

Dennett’s fable tells the story (originally found in Darwin’s Dangerous Idea, 412-27) of a group of researchers stranded with two black boxes, each containing a supercomputer with a database of ‘true facts’ about the world, one in English, the other in Swedish. One box has two buttons labeled alpha and beta, while the second box has three lights coloured yellow, red, and green. Unbeknownst to the researchers, the button box simply transmits a true statement from the one supercomputer when the alpha button is pushed, which the other supercomputer acknowledges by lighting the red bulb for agreement, and a false statement when the beta button is pushed, which the bulb box acknowledges by lighting the green bulb for disagreement. The yellow bulb illuminates only when the bulb box can make no sense of the transmission, which is always the case when the researchers disconnect the boxes and, being entirely ignorant of any of these details, substitute signals of their own.
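
Stripped of the supercomputers, the hidden protocol is only a few lines long, which is part of what makes the researchers’ bafflement so striking. A toy model in Python, assuming (as the fable stipulates) that both machines draw on the same stock of world-facts; the sample ‘facts’ and every name here are mine, purely for illustration:

FACTS = {"water is wet": True, "fire is cold": False, "snow is white": True}

def button_box(button):
    # Transmit a true claim for alpha, a false one for beta.
    wanted = (button == "alpha")
    return next(claim for claim, truth in FACTS.items() if truth == wanted)

def bulb_box(signal):
    # Red for agreement, green for disagreement, yellow for nonsense.
    if signal not in FACTS:
        return "yellow"        # the researchers' substituted signals land here
    return "red" if FACTS[signal] else "green"

assert bulb_box(button_box("alpha")) == "red"
assert bulb_box(button_box("beta")) == "green"
assert bulb_box("researcher interference") == "yellow"

Everything the researchers observe follows from these few lines. What they lack is not mechanical access but the correlation between the signals and a shared stock of truths.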

The intuitive power of the fable turns on the ignorance of the researchers, who begin by noting the manifest relations above, how pushing alpha illuminates red, pushing beta illuminates green, and how interfering with the signal between the boxes invariably illuminates yellow. Until the two hackers who built the supercomputers arrive, they have no way of explaining why the three actions—alpha pushing, beta pushing, and signal interfering—illuminate the lights they do. Even when they crack open the boxes and begin reverse engineering the supercomputers within, they find themselves no closer to solving the problem. This is what makes their ignorance so striking: not even the sustained, systematic application of mechanical cognition paradigmatic of science can solve the problem. Certainly a mechanical account of all the downstream consequences of pushing alpha or beta or interfering with the signal is possible, but this inevitably cumbersome account nevertheless fails to explain the significance of what is going on.

Dennett’s black boxes, in other words, are actually made of glass. They can be cracked open and mechanically understood. It’s their communication that remains inscrutable, the fact that no matter what resources the researchers throw at the problem, they have no way of knowing what is being communicated. The only way to do this, Dennett wants to argue, is to adopt the ‘intentional stance.’ This is exactly what Al and Bo, the two hackers responsible for designing and building the black boxes, provide when they finally let the researchers in on their game.

Now Dennett argues that the explanatory problem is the same whether or not the hackers simply hide themselves in the black boxes, Al in one and Bo in the other, but you don’t have to buy into the mythical distinction between derived and original intentionality to see this simply cannot be the case. The fact that the hackers are required to resolve the research conundrum pretty clearly suggests they cannot simply be swapped out with their machines. As soon as the researchers crack open the boxes and find two human beings are behind the communication the whole nature of the research enterprise is radically transformed, much as it is when they show up to explain their ‘philosophical toy.’

This underscores a crucial point: Only the fact that Al and Bo share a vast background of contingencies with the researchers allows for the ‘semantic demystification’ of the signals passing between the boxes. If anything, cognitive ecology is the real black box at work in this fable. If Al and Bo had been aliens, their appearance would have simply constituted an extension of the problem. As it is, they deliver a powerful, but ultimately heuristic, understanding of what the two boxes are doing. They provide, in other words, a black box understanding of the signals passing between our two glass boxes.

The key feature of heuristic cognition is evinced in the now widely cited gaze heuristic, the way fielders fix the ball in their visual field while running to keep the ball in place. The most economical way to catch pop flies isn’t to calculate angles and velocities but to simply ‘lock onto’ the target, orient locomotion to maintain its visual position, and let the ball guide you in. Heuristic cognition solves problems not via modelling systems, but via correlation, by comporting us to cues, features systematically correlated to the systems requiring solution. IIR heat-seeking missiles, for instance, need understand nothing of the targets they track and destroy. Heuristic cognition allows us to solve environmental systems (including ourselves) without the need to model those systems. It enables, in other words, the solution of environmental black boxes, systems possessing unknown causal structures, via known environmental regularities correlated to those structures.
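
A toy simulation makes the economy of the strategy vivid. All the numbers below are illustrative, and the constant-angle version sketched here exaggerates how far a real fielder would drift; the point is only that no trajectory is ever computed:

import math

def run_to_catch(fielder_x=25.0, dt=0.01, g=9.81):
    bx, by, vx, vy = 0.0, 1.5, 8.0, 16.0       # a pop fly, struck toward the fielder
    lock = math.atan2(by, fielder_x - bx)      # 'lock on': fix the ball in the visual field
    while by > 0.0:
        bx += vx * dt                          # the world moves the ball...
        vy -= g * dt
        by = max(by + vy * dt, 0.0)
        fielder_x = bx + by / math.tan(lock)   # ...and the fielder moves however she must
                                               # to keep the gaze angle from changing
    return abs(fielder_x - bx)

print(run_to_catch())   # 0.0: hold the angle, and the ball guides you in

The fielder solves the catch without representing gravity, velocity, or spin; she exploits a cue, the drift of the ball in her visual field, systematically correlated to the system requiring solution.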

This is why Al and Bo’s revelation has the effect of mooting most all of the work the researchers had done thus far. The boxes might as well be black, given the heuristic nature of their explanation. The arrival of the hackers provides a black box (homuncular) ‘glassing’ of the communication between the two boxes, a way to understand what they are doing that cannot be mechanically decomposed. How? By identifying the relevant cues for the researchers, thereby plugging them into the wider cognitive ecology of which they and the machines are a part.

The communication between the boxes is opaque to the researchers, even when the boxes are transparent, because it is keyed to the hackers, who belong to the same cognitive ecology as the researchers—only unbeknownst to the researchers. As soon as they let the researchers in on their secret—clue (or ‘cue’) them in—the communication becomes entirely transparent. What the boxes are communicating becomes crystal clear because it turns out they were playing the same game with the same equipment in the same arena all along.

Now what Dennett would have you believe is that ‘understanding the communication’ is exhausted by taking the intentional stance, that the problem of what the machines are communicating is solved as far as it needs to be solved. Sure, there is a vast, microcausal story to be told (the glass box one), but it proves otiose. The artificiality of the fable facilitates this sense: the machines, after all, were designed to compare true or false claims. This generates the sense of some insuperable gulf segregating the two forms of cognition. One second the communication was utterly inscrutable, and the next, Presto! it’s transparent.

“The debate went on for years,” Dennett concludes, “but the mystery with which it began was solved” (84). This seems obvious, until one asks whether plugging the communication into our own intentional ecology answers our original question. If the question is, ‘What do the three lights mean?’ then of course the question is answered, as well it should be, given the question amounts to, ‘How do the three lights plug into the cognitive ecology of human meaning?’ If the question is, ‘What are the mechanics of the three lights, such that they mean?’ then the utility of intentional cognition simply provides more data. The mystery of the meaning of the communication is dissolved, sure, but the problem of relating this meaning to the machinery remains.

What Dennett is attempting to provide with this analogy is a version of ‘radical interpretation,’ an instance that strips away our preconceptions, and forces us to consider the problem of meaning from ‘conceptual scratch,’ you might say. To see the way his fable is loaded, you need only divorce the machines from the human cognitive ecology framing them. Make them alien black-cum-glass boxes and suddenly mechanical cognition is all our researchers have—all they can hope to have. If Dennett’s conclusions vis a vis our human black-cum-glass boxes are warranted, then our researchers might as well give up before they begin, “because there really is no substitute for semantic or intentional predicates when it comes to specifying the property in a compact, generative, explanatory way” (84). Since we don’t share the same cognitive ecology as the aliens, their cues will make no implicit or homuncular sense to us at all. Even if we could pick those cues out, we would have no way of plugging them into the requisite system of correlations, the cognitive ecology of human meaning. Absent homuncular purchase, what the alien machines are communicating would remain inscrutable—if Dennett is to be believed.

Dennett sees this thought experiment as a decisive rebuttal to those critics who think his position entails semantic epiphenomenalism, the notion that intentional posits are causally inert. Not only does he think the intentional stance answers the researchers’ primary question, he thinks it does so in a manner compatible (if not consilient) with causal explanation. Truthhood can cause things to happen:

“the main point of the example of the Two Black Boxes is to demonstrate the need for a concept of causation that is (1) cordial to higher-level causal understanding distinct from an understanding of the microcausal story, and (2) ordinary enough in any case, especially in scientific contexts.” “With a Little Help From my Friends,” Dennett’s Philosophy: A Comprehensive Assessment, 357

The moral of the fable, in other words, isn’t so much intentional as it is causal, to show how meaning-talk is indispensable to a certain crucial ‘high level’ kind of causal explanation. He continues:

“With regard to (1), let me reemphasize the key feature of the example: The scientists can explain each and every instance with no residual mystery at all; but there is a generalization of obviously causal import that they are utterly baffled by until they hit upon the right higher-level perspective.” 357

Everything, of course, depends on what ‘hitting upon the right higher level perspective’ means. The fact is, after all, causal cognition funds explanation across all ‘levels,’ and not simply those involving microstates. The issue, then, isn’t simply one of ‘levels.’ We shall return to this point below.

With regard to (2), the need for an ‘ordinary enough’ concept of cause, he points out the sciences are replete with examples of intentional posits figuring in otherwise causal explanations:

“it is only via … rationality considerations that one can identify or single out beliefs and desires, and this forces the theorist to adopt a higher level than the physical level of explanation on its own. This level crossing is not peculiar to the intentional stance. It is the life-blood of science. If a blush can be used as an embarrassment-detector, other effects can be monitored in a lie detector.” 358

Not only does the intentional stance provide a causally relevant result, it does so, he is convinced, in a way that science utilizes all the time. In fact, he thinks this hybrid intentional/causal level is forced on the theorist, something which need cause no concern because this is simply the cost of doing scientific business.

Again, the question comes down to what ‘higher level of causal understanding’ amounts to. Dennett has no way of tackling this question because he has no genuinely naturalistic theory of intentional cognition. His solution is homuncular—and self-consciously so. The problem is that homuncular solvers can only take us so far in certain circumstances. Once we take them on as explanatory primitives—the way he does with the intentional stance—we’re articulating a theory that can only take us so far in certain circumstances. If we mistake that theory for something more than a homuncular solver, the perennial temptation (given neglect) will be to mistake heuristic limits for general ones—to run afoul of the ‘only-game-in-town effect.’ In fact, I think Dennett is tripping over one of his own pet peeves here, confusing what amounts to a failure of imagination with necessity (Consciousness Explained, 401).

Heuristic cognition, as Dennett claims, is the ‘life-blood of science.’ But this radically understates the matter. Given the difficulties involved in the isolation of causes, we often settle for correlations, cues reliably linked to the systems requiring solution. In fact, correlations are the only source of information humans have, evolved and learned sensitivities to effects systematically correlated to those environmental systems (including ourselves) relevant to reproduction. Human beings, like all other living organisms, are shallow information consumers, sensory cherry pickers, bent on deriving as much behaviour from as little information as possible (and we are presently hellbent on creating tools that can do the same).

Humans are encircled, engulfed, by the inverse problem, the problem of isolating causes from effects. We only have access to so much, and we only have so much capacity to derive behaviour from that access (behaviour which in turn leverages capacity). Since the kinds of problems we face outrun access, and since those problems are wildly disparate, not all access is equal. ‘Isolating causes,’ it turns out, means different things for different kinds of problem solving.

Information access, in fact, divides cognition into two distinct families. On the one hand we have what might be called source sensitive cognition, where physical (high-dimensional) constraints can be identified, and on the other we have source insensitive cognition, where they cannot.

Since every cause is an effect, and every effect is a cause, explaining natural phenomena as effects always raises the question of further causes. Source sensitive cognition turns on access to the causal world, and to this extent, remains perpetually open to that world, and thus, to the prospect of more information. This is why it possesses such wide environmental applicability: there are always more sources to be investigated. These may not be immediately obvious to us—think of visible versus invisible light—but they exist nonetheless, which is why once the application of source sensitivity became scientifically institutionalized, hunting sources became a matter of overcoming our ancestral sensory bottlenecks.

Since every natural phenomenon has natural constraints, explaining natural phenomena in terms of something other than natural constraints entails neglect of natural constraints. Source insensitive cognition is always a form of heuristic cognition, a system adapted to the solution of systems absent access to what actually makes them tick. Source insensitive cognition exploits cues, accessible information invisibly yet sufficiently correlated to the systems requiring solution to reliably solve those systems. As the distillation of specific, high-impact ancestral problems, source insensitive cognition is domain-specific, a way to cope with systems that cannot be effectively cognized any other way.
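
The logic of cue exploitation is easy to make explicit. In the following Python sketch (every number and name is mine, purely for illustration), a solver with no access whatsoever to the hidden mechanism nevertheless solves it reliably, simply because the cue covaries with the source:

import random

random.seed(0)

def hidden_mechanism():
    state = random.random() < 0.5                          # the source: inaccessible in practice
    cue = state if random.random() < 0.9 else not state    # an accessible, 90% reliable correlate
    return state, cue

trials = [hidden_mechanism() for _ in range(10_000)]
hits = sum(state == cue for state, cue in trials)
print(hits / len(trials))   # ~0.9: reliable solution with no grasp of the source

So long as the ancestral correlation holds, the solver cannot distinguish its situation from genuine source access; when the ecology shifts and the correlation breaks, it crashes without any signal that it has.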

(AI approaches turning on recurrent neural networks provide an excellent ex situ example of the indispensability, the efficacy, and the limitations of source insensitive (cue correlative) cognition (see “On the Interpretation of Artificial Souls”). Andrei Cimpian, Klaus Fiedler, and the work of the Adaptive Behaviour and Cognition Research Group more generally are providing, I think, an evolving empirical picture of source insensitive cognition in humans, albeit absent the global theoretical framework provided here.)

Now then, what Dennett is claiming is first, that instances of source insensitive cognition can serve source sensitive cognition, and second, that such instances fulfill our explanatory needs as far as they need to be fulfilled. What triggers the red light? The communication of a true claim from the other machine.

Can instances of source insensitive cognition serve source sensitive cognition (or vice versa)? Can there be such a thing as source insensitive/source sensitive hybrid cognition? Certainly seems that way, given how we cobble the two modes together both in science and everyday life. Narrative cognition, the human ability to cognize (and communicate) human action in context, is pretty clearly predicated on this hybridization. Dennett is clearly right to insist that certain forms of source insensitive cognition can serve certain forms of source sensitive cognition.

The devil is in the details. We know homuncular forms of source insensitive cognition, for instance, don’t serve the ‘hard’ sciences all that well. The reason for this is clear: source insensitive cognition is the mode we resort to when information regarding actual physical constraints isn’t available. Source insensitive idioms are components of wide correlative systems, cue-based cognition. The posits they employ cut no physical joints.

This means that physically speaking, truth causes nothing, because physically speaking, ‘truth’ does not so much refer to ‘real patterns’ in the natural world as participate in them. Truth is at best a metaphorical causer of things, a kind of fetish when thematized, a mere component of our communicative gear otherwise. This, of course, made no difference whatsoever to our ancestors, who scarce had any way of distinguishing source sensitive from source insensitive cognition. For them, a cause was a cause was a cause: the kinds of problems they faced required no distinction to be economically resolved. The cobble was at once manifest and mandatory. Metaphorical causes suited their needs no less than physical causes did. Since shallow information neglect entails ignorance of shallow information neglect—since insensitivity begets insensitivity to insensitivity—what we see becomes all there is. The lack of distinctions cues apparent identity (see “On Alien Philosophy,” The Journal of Consciousness Studies (forthcoming)).

The crucial thing to keep in mind is that our ancestors, as shallow information consumers, required nothing more. The source sensitive/source insensitive cobble they possessed was the source sensitive/source insensitive cobble their ancestors required. Things only become problematic as more and more ancestrally unprecedented—or ‘deep’— information finds its way into our shallow information ambit. Novel information begets novel distinctions, and absolutely nothing guarantees the compatibility of those distinctions with intuitions adapted to shallow information ecologies.

In fact, we should expect any number of problems will arise once we cognize the distinction between source sensitive causes and source insensitive causes. Why should some causes so effortlessly double as effects, while other causes absolutely refuse? Since all our metacognitive capacities are (as a matter of computational necessity) source insensitive capacities, a suite of heuristic devices adapted to practical problem ecologies, it should come as no surprise that our ancestors found themselves baffled. How is source insensitive reflection on the distinction between source sensitive and source insensitive cognition supposed to uncover the source of the distinction? Obviously, it cannot, yet precisely because these tools are shallow information tools, our ancestors had no way of cognizing them as such. Given the power of source insensitive cognition and our unparalleled capacity for cognitive improvisation, it should come as no surprise that they eventually found ways to experimentally regiment that power, apparently guaranteeing the reality of various source insensitive posits. They found themselves in a classic cognitive crash space, duped into misapplying the same tools out of school over and over again simply because they had no way (short of exhaustion, perhaps) of cognizing the limits of those tools.

And here we stand with one foot in and one foot out of our ancestral shallow information ecologies. In countless ways both everyday and scientific we still rely upon the homuncular cobble, we still tell narratives. In numerous other ways, mostly scientific, we assiduously guard against inadvertently tripping back into the cobble, applying source insensitive cognition to a question of sources.

Dennett, ever the master of artful emphasis, focuses on the cobble, pumping the ancestral intuition of identity. He thinks the answer here is to simply shrug our shoulders. Because he takes stances as his explanatory primitives, his understanding of source sensitive and source insensitive modes of cognition remains an intentional (or source insensitive) one. And to this extent, he remains caught upon the bourne of traditional philosophical crash space, famously calling out homuncularism on the one side and ‘greedy reductionism’ on the other.

But as much as I applaud the former charge, I think the latter is clearly an artifact of confusing the limits of his theoretical approach with the way things are. The problem is that for Dennett, the difference between using meaning-talk and using cause-talk isn’t the difference between using a stance (the intentional stance) and using something other than a stance. Sometimes the intentional stance suits our needs, and sometimes the physical stance delivers. Given his reliance on source insensitive primitives—stances—to theorize source sensitive and source insensitive cognition, the question of their relation to each other also devolves upon source insensitive cognition. Confronted with a choice between two distinct homuncular modes of cognition, shrugging our shoulders is pretty much all that we can do, outside, that is, extolling their relative pragmatic virtues.

Source sensitive cognition, on Dennett’s account, is best understood via source insensitive cognition (the intentional stance) as a form of source insensitive cognition (the ‘physical stance’). As should be clear, this not only sets the explanatory bar too low, it confounds the attempt to understand the kinds of cognitive systems involved outright. We evolved intentional cognition as a means of solving systems absent information regarding their nature. The idea then—the idea that has animated philosophical discourse on the soul since the beginning—that we can use intentional cognition to solve the nature of cognition generally is plainly mistaken. In this sense, Intentional Systems Theory is an artifact of the very confusion that has plagued humanity’s attempt to understand itself all along: the undying assumption that source insensitive cognition can solve the nature of cognition.

What do Dennett’s two black boxes ultimately illuminate? When two machines functionally embedded within the wide correlative system anchoring human source insensitive cognition exhibit no cues to this effect, human source sensitive cognition has a devil of a time understanding even the simplest behaviours. It finds itself confronted by the very intractability that necessitated the evolution of source insensitive systems in the first place. As soon as those cues are provided, what was intractable for source sensitive cognition suddenly becomes effortless for source insensitive cognition. That shallow environmental understanding is ‘all we need’ if explaining the behaviour for shallow environmental purposes happens to be all we want. Typically, however, scientists want the ‘deepest’ or highest dimensional answers they can find, in which case, such a solution does nothing more than provide data.

Once again, consider how much the researchers would learn were they to glass the black boxes and find the two hackers inside of them. Finding them would immediately plug the communication into the wide correlative system underwriting human source insensitive cognition. The researchers would suddenly find themselves, their own source insensitive cognitive systems, potential components of the system under examination. Solving the signal would become an anthropological matter involving the identification of communicative cues. The signal’s morphology, which had baffled before, would now possess any number of suggestive features. The amber light, for instance, could be quickly identified as signalling a miscommunication. The reason their interference invariably illuminated it would be instantly plain: they were impinging on signals belonging to some wide correlative system. Given the binary nature of the two lights and given the binary nature of truth and falsehood, the researchers, it seems safe to suppose, would have a fair chance of advancing the correct hypothesis, at least.
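
For readers who want the mechanics spelled out, here is a toy rendition in code—my gloss, not Dennett’s, with invented ‘facts’ standing in for the boxes’ shared worldview—of how cuing the intentional level renders an otherwise intractable signal trivial:

    import random

    # Toy version of the two black boxes: A emits a statement it holds true
    # (alpha) or false (beta); B, sharing the same wide correlative system,
    # checks it. The 'facts' are invented placeholders.
    FACTS = {"snow is white": True, "grass is purple": False,
             "2 + 2 = 4": True, "7 is even": False}

    def box_a(button):
        # select a statement A holds true (alpha) or false (beta)
        pool = [s for s, v in FACTS.items() if v == (button == "alpha")]
        return random.choice(pool)

    def box_b(signal, tampered=False):
        # one light for truths, another for falsehoods, amber on miscommunication
        if tampered or signal not in FACTS:
            return "amber"
        return "red" if FACTS[signal] else "green"

    for button in ("alpha", "beta", "alpha"):
        print(button, "->", box_b(box_a(button)))  # alpha->red, beta->green
    print("tampered ->", box_b(box_a("alpha"), tampered=True))  # amber

At the level of wiring, the red/green regularity is a monstrous mechanical fact; keyed to the shared stock of ‘facts,’ it collapses into a one-line rule.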

This is significant because source sensitive idioms do generalize to the intentional explanatory scale—the issue of free will wouldn’t be such a conceptual crash space otherwise! ‘Dispositions’ are the typical alternative offered in philosophy, but in fact, any medicalization of human behaviour exemplifies the effectiveness of biomechanical idioms at the intentional level of description (something Dennett recognizes at various points in his oeuvre (as in “Mechanism and Responsibility”) yet seems to ignore when making arguments like these). In fact, the very idiom deployed here demonstrates the degree to which these issues can be removed from the intentional domain.

The degree to which meaning can be genuinely naturalized.

We are bathed in consequences. Cognizing causes is more expensive than cognizing correlations, so we evolved the ability to cognize the causes that count, and to leave the rest to correlations. Outside the physics of our immediate surroundings, we dwell in a correlative fog, one that thins or deepens, sometimes radically, depending on the physical complexity of the systems engaged. Thus, what Gerd Gigerenzer calls the ‘adaptive toolbox,’ the wide array of heuristic devices solving via correlations alone. Dennett’s ‘intentional stance’ is far better understood as a collection of these tools, particularly those involving social cognition, our ability to solve for others or for ourselves. Rather than settling for any homuncular ‘attitude taking’ (or ‘rule following’), we can get to the business of isolating devices and identifying heuristics and their ‘application conditions,’ understanding how they work, where they work, and the ways they go wrong.
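
To make the contrast concrete, here is a minimal sketch of what such a toolbox might look like—the heuristics and their application conditions are invented for illustration, not drawn from Gigerenzer:

    # An invented 'adaptive toolbox': each tool pairs an application
    # condition (which cues must be present) with a cheap, source-blind rule.
    HEURISTICS = [
        ("recognition", lambda cues: "recognized" in cues,
         lambda cues: cues["recognized"]),
        ("imitate-the-majority", lambda cues: "peers" in cues,
         lambda cues: max(set(cues["peers"]), key=cues["peers"].count)),
    ]

    def solve(cues):
        # fire the first heuristic whose conditions are cued; note what is
        # absent: any model of the sources actually generating the cues
        for name, applies, rule in HEURISTICS:
            if applies(cues):
                return name, rule(cues)
        return "stumped", None

    print(solve({"recognized": "Munich"}))                # ('recognition', 'Munich')
    print(solve({"peers": ["tea", "coffee", "coffee"]}))  # majority pick

Nothing homuncular happens anywhere in the dispatch: there are only cues, conditions, and cheap rules, each of which can be isolated, tested, and shown where it breaks.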

Snuffing the Spark: A Nihilistic Account of Moral Progress

by rsbakker


If we define moral progress in brute terms of more and more individuals cooperating, then I think we can cook up a pretty compelling naturalistic explanation for its appearance.

So we know that our basic capacity to form ingroups is adapted to prehistoric ecologies characterized by resource scarcity and intense intergroup competition.

We also know that we possess a high degree of ingroup flexibility: we can easily add to our teams.

We also know moral and scientific progress are related. For some reason, modern prosocial trends track scientific and technological advance. Any theory attempting to explain moral progress should explain this connection.

We know that technology drastically increases information availability.

It seems modest to suppose that bigger is better in group competition. Cultural selection theory, meanwhile, pretty clearly seems to be onto something.

It seems modest to suppose that ingroup cuing turns on information availability.

Technology, as the homily goes, ‘brings us closer’ across a variety of cognitive dimensions. Moral progress, then, can be understood as the sustained effect of deep (or ancestrally unavailable) social information cuing various ingroup responses–people recognizing fractions of themselves (procedural if not emotional bits) in those their grandfathers would have killed.  The competitive benefits pertaining to cooperation suggest that ingroup trending cultures would gradually displace those trending otherwise.

Certainly there’s a far, far more complicated picture to be told here—a bottomless one, you might argue—but the above set of generalizations strike me as pretty solid. The normativist would cry foul, for instance, claiming that some account of the normative nature of the institutions underpinning such a process is necessary to understand ‘moral progress.’ For them, moral progress has to involve autonomy, agency, and a variety of other posits perpetually lacking decisive formulation. Heuristic neglect allows us to sidestep this extravagance as the very kind of dead-end we should expect to confound us. At the same time, however, reflection on moral cognition has doubtless had a decisive impact on moral cognition. The problem of explaining ‘norm-talk’ remains. The difference is we now recognize the folly of using normative cognition to theoretically solve the nature of normative cognition. How can systems adapted to solving absent information regarding the nature of normative cognition reveal the nature of normative cognition? Relieved of these inexplicable posits, the generalizations above become unproblematic. We can set aside the notion of some irreducible ‘human spark’ impinging on the process in a manner that renders it empirically inexplicable.

If only our ‘deepest intuitions’ could be trusted.

The important thing about this way of looking at things is that it reveals the degree to which moral progress depends upon its information environments. So far, the technical modification of our environments has allowed our suite of social instincts, combined with institutionally regimented social training, to progressively ratchet the expansion of the franchise. But accepting the contingency of moral progress means accepting vulnerability to radical transformations in our information environment. Nothing guarantees moral progress outside the coincidence of certain capacities in certain conditions. Change those conditions, and you change the very function of human moral cognition.

So, for instance, what if something as apparently insignificant as the ‘online disinhibition effect’ has the gradual, aggregate effect of intensifying adversarial group identifications? What if the network possibilities of the web gradually organize those possessing authoritarian dispositions, render them more socially cohesive, while having the opposite impact on those possessing anti-authoritarian dispositions?

Anything can happen here, folks.

One can be a ‘nihilist’ and yet be all for ‘moral progress.’ The difference is that you are advocating for cooperation, for hewing to heuristics that promote prosocial behaviour. More importantly, you have no delusions of somehow standing outside contingency, of ‘rational’ immunity to radical transformations in your cognitive environments. You don’t have the luxury of burning magical holes through actual problems with your human spark. You see the ecology of things, and so you intervene.

Derrida as Neurophenomenologist

by rsbakker


For the longest time I thought that unravelling the paradoxical nature of the now, understanding how it could be at once the same now and yet a different now entirely, was the key to resolving the problem of meaning and experience. The reason for this turned on my early philosophical love affair with Jacques Derrida, the famed French post-structuralist philosopher, who was very fond of writing passages such as this tidbit from “Differance”:

An interval must separate the present from what it is not in order for the present to be itself, but this interval that constitutes it as present must, by the same token, divide the present in and of itself, thereby also dividing, along with the present, everything that is thought on the basis of the present, that is, in our metaphysical language, every being, and singularly substance or the subject. In constituting itself, in dividing itself dynamically, this interval is what might be called spacing, the becoming-space of time or the becoming-time of space (temporization). And it is this constitution of the present, as an ‘originary’ and irreducibly nonsimple (and therefore, stricto sensu nonoriginary) synthesis of marks, or traces of retentions and protentions (to reproduce analogically and provisionally a phenomenological and transcendental language that soon will reveal itself to be inadequate), that I propose to call archi-writing, archi-traces, or differance. Which (is) (simultaneously) spacing (and) temporization. Margins of Philosophy, 13

One of the big problems faced by phenomenology has to do with time. The problem in a nutshell is that any phenomenon attended to is a present phenomenon, and as such dependent upon absent enormities—namely the past and the future. The phenomenologist suffers from what is sometimes referred to as a ‘keyhole problem,’ the question of whether the information available—‘experience’—warrants the kinds of claims phenomenologists are prone to make about the truth of experience. Does the so-called ‘phenomenological attitude’ possess the access phenomenology needs to ground its analyses? How could it, given so slight a keyhole as the present? Phenomenologists typically respond to the problem by invoking horizons, the idea that nonpresent contextual enormities nevertheless remain experientially accessible—present—as implicit features of the phenomenon at issue. You could argue that horizons scaffold the whole of reportable experience, insofar as so little, if anything, is available to us in its entirety at any given moment. We see and experience coffee cups, not perspectival slices of coffee cups. So in Husserl’s analysis of ‘time-consciousness,’ for instance, the past and future become intrinsic components of our experience of temporality as ‘retention’ and ‘protention.’ Even though absent, they nevertheless decisively structure phenomena. As such, they constitute important domains of phenomenological investigation in their own right.

From the standpoint of the keyhole problem, however, this answer simply doubles down on the initial question. Our experience of coffee cups is one thing, after all, and our experience of ourselves is quite another. How do we know we possess the information required to credibly theorize—make explicit—our implicit experience of the past as retention, say? After all, as Derrida says, retention is always present retention. There are, as he famously argues, two pasts, the one experienced, and the one outrunning the very possibility of experience (as its condition of possibility). Our experience of the present does not arise ‘from nowhere,’ nor does it arise in our present experience of the past, since that experience is also present. Thus what he calls the ‘trace,’ which might be understood as a ‘meta-horizon,’ or a ‘super-implicit,’ the absent enormity responsible for horizons that seem to shape content. The apparently sufficient, unitary structure of present experience contains a structurally occluded origin, a difference making difference, that can in no way appear within experience.

One way to put Derrida’s point is that there is always some occluded context, always some integral part of the background, driving phenomenology. From an Anglo-American, pragmatic viewpoint, his point is obvious, yet abstrusely and extravagantly made: Nothing is given, least of all meaning and experience. What Derrida is doing, however, is making this point within the phenomenological idiom, ‘reproducing’ it, as he says in the quote. The phenomenology itself reveals its discursive impossibility. His argument is ontological, not epistemic, and so requires speculative commitments regarding what is, rather than critical commitments regarding what can be known. Derrida is providing what might be called a ‘hyper-phenomenology,’ or even better, what David Roden terms dark phenomenology, showing how the apparently originary, self-sustaining character of experience is a product of its derivative nature. The keyhole of the phenomenological attitude only appears theoretically decisive, discursively sufficient, because experience possesses horizons without a far side, meta-horizons—limits that cannot appear as such, and so appear otherwise, as something unlimited. Apodictic.

But since Derrida, like the phenomenologist, has only the self-same keyhole, he does what humans always do in conditions of radical low-dimensionality: he confuses the extent of his ignorance for a new and special kind of principle. Even worse, his theory of meaning is a semantic one: as an intentionalist philosopher, he works with all the unexplained explainers, all the classic theoretical posits, handed down by the philosophical tradition. And like most intentionalists, he doesn’t think there’s any way to escape those posits save by going through them. The deflecting, deferring, displacing outside, for Derrida, cannot appear inside as something ‘outer.’ Representation continually seals us in, relegating evidence of ‘differance’ to indirect observations of the kinds of semantic deformations that only it seems to explain, to the actual work of theoretical interpretation.

Now I’m sure this sounds like hokum to most souls reading this post, something artifactual. It should. Despite all my years as a Derridean, I now think of it as a discursive blight, something far more often used to avoid asking hard questions of the tradition than to pose them. But there is a kernel of neurophenomenological truth in his position. As I’ve argued in greater detail elsewhere, Derrida and deconstruction can be seen as an attempt to theorize the significance of source neglect in philosophical reflection generally, and phenomenology more specifically.

So far as ‘horizons’ belong to experience, they presuppose the availability of information required to behave in a manner sensitive to the recent past. So far as experience is ecological, we can suppose the information rendered will be geared to the solution of ancestral problem ecologies. We can suppose, in other words, that horizons are ecological, that the information rendered will be adequate to the problem-solving needs of our evolutionary ancestors. Now consider the mass-industrial character of the cognitive sciences, the sheer amount of resources, toil, and ingenuity dedicated to solving our own nature. This should convey a sense of the technical challenges any CNS faces attempting to cognize its own nature, and the reason why our keyhole has to be radically heuristic, a fractionate bundle of glimpses, each peering off in different directions to different purposes. The myriad problems this fact poses can be distilled into a single question: How much of the information rendered should we presume warrants theoretical generalizations regarding the nature of meaning and experience? This is the question upon which the whole of traditional philosophy presently teeters.

What renders the situation so dire is the inevitability of keyhole neglect, systematic insensitivity to the radically heuristic nature of the systems employed by philosophical reflection. Think of darkness, which, like pastness, lays out the limits of experience in experience as a ‘horizon.’ To say we suffer keyhole neglect is to say our experience of cognition lacks horizons, that we are doomed to confuse what little we see for everything there is. In the absence of darkness (or any other experiential marker of loss or impediment), unrestricted visibility is the automatic assumption. Short sensitivity to information cuing insufficiency, sufficiency is the default. What Heidegger and the continental tradition call the ‘Metaphysics of Presence’ can be seen as an attempt to tackle the problems posed by sufficiency in intentional terms. And likewise, Derrida’s purported oblique curative to the apparent inevitability of running afoul of the Metaphysics of Presence can be seen as a way of understanding the ‘sufficiency effects’ plaguing philosophical reflection in intentional terms.

The human brain suffers medial neglect, the congenital inability to track its own high-dimensional (material) processes. This means the human brain is insensitive to its own irreflexive materiality as such, and so presumes no such irreflexive materiality underwrites its own operations—even though, as anyone who has spent a great deal of time in stroke recovery wards can tell you, everything turns upon them. What we call ‘philosophical reflection’ is simply an artifact of this ecological limitation, a brain attempting to solve its nature with tools adapted to solve absent any information regarding that nature. Differance, trace, spacing: these are the ways Derrida theorizes the inevitability of irreflexive contingency from the far side of default sufficiency. I read Derrida as tracking the material shadow of thought via semantic terms. By occluding all antecedents, source neglect dooms reflection to the illusion of sufficiency when no such sufficiency exists. In this sense, positions like Derrida’s theory of meaning can be seen as impressionistic interpretations of what is a real biomechanical feature of consciousness. Attend to the metacognitive impression and meaning abides, and representation seems inescapable. The neuromechanical is occluded, so sourceless differentiation is all we seem to have, the magic of a now that is forever changing, yet miraculously abides.

Dismiss Dis

by rsbakker

I came across this quote in “The Hard Problem of Content: Solved (Long Ago),” a critique of Hutto and Myin’s ‘radical enactivism’ by Marcin Milkowski:

“Naïve semantic nihilism is not a philosophical position that deserves a serious debate because it would imply that expressing any position, including semantic nihilism, is pointless. Although there might still be defenders of such a position, it undermines the very idea of a philosophical debate, as long as the debate is supposed to be based on rational argumentation. In rational argumentation, one is forced to accept a sound argument, and soundness implies the truth of the premises and the validity of the argument. Just because these are universal standards for any rational debate, undermining the notion of truth can be detrimental; there would be no way of deciding between opposing positions besides rhetoric. Hence, it is a minimal requirement for rational argumentation in philosophy; one has to assume that one’s statements can be truth-bearers. If they cannot have any truth-value, then it’s no longer philosophy.” 74

These are the kind of horrible arguments that I take as the principal foe of anyone who thinks cognitive science needs to move beyond traditional philosophy to discover its natural scientific bases. I can remember having a great number of arguments long before I ever ‘assumed my statements were truth-bearers.’ In fact, I would wager that the vast majority of arguments are made by people possessing no assumption that their statements are ‘truth-bearers’ (whatever this means). What Milkowski would say, of course, is that we all have these assumptions nonetheless, only implicitly. This is because Milkowski has a theory of argumentation and truth, a story of what is really going on behind the scenes of ‘truth talk.’

The semantic nihilist, such as myself, famously disagrees with this theory. We think truth-talk actually amounts to something quite different, and that once enough cognitive scientists can be persuaded to close the ancient cover of Milkowski’s book (holding their breath for all the dust and mold), a great number of spurious conundrums could be swept from the worktable, freeing up space for more useful questions. What Milkowski seems to be arguing here is that… hmm… Good question! Either he’s claiming the semantic nihilist cannot argue otherwise without contradicting his theory, which is the whole point of arguing otherwise. Or he’s claiming the semantic nihilist cannot argue against his theory of truth because, well, his theory of truth is true. Either he’s saying something trivial, or he’s begging the question! Obviously so, given the issue between him and the semantic nihilist is the question of the nature of truth talk.

For those interested in a more full-blooded account of this problem, you can check out “Back to Square One: Towards a Post-intentional Future” over at Scientia Salon. Ramsey also tucks this strategy into bed in his excellent article on Eliminative Materialism over at the Stanford Encyclopedia of Philosophy. And Stephen Turner, of course, has written entire books (such as Explaining the Normative) on this peculiar bug in our intellectual OS. But I think it’s high time to put an end to what has to be one of the more egregious forms of intellectual laziness one finds in philosophy of mind circles–one designed, no less, to shut down the very possibility of an important debate. I think I’m right. Milkowski thinks he’s right. I’m willing to debate the relative merits of our theories. He has no time for mine, because his theory is so super-true that merely disagreeing renders me incoherent.

Oi.

Milkowski does go on to provide what I think is a credible counter-argument to eliminativism, what I generally refer to as the ‘abductive argument’ here. This is the argument that separates my own critical eliminativism (I’m thinking of terming my view ‘criticalism’–any thoughts?) from the traditional eliminativisms espoused by Feyerabend, the Churchlands, Stich, Ramsey and others. I actually think my account possesses the parsimony everyone concedes to eliminativism without falling mute on the question of what things like ‘truth talk’ amount to. In fact, I think I have the stronger abductive case.

But it’s the tu quoque (‘performative contradiction’) style arguments that share that peculiar combination of incoherence and intuitive appeal that renders philosophical blind alleys so pernicious. This is why I would like to solicit recently published examples of these kinds of dismissals in various domains for a running ‘Dismiss Dis’ series. Send me a dismissal like this, and I will dis…

PS: For those interested in my own take on Hutto and Myin’s radical enactivism, check out “Just Plain Crazy Enactive Cognition,” where I actually agree with Milkowski that they are forced to embrace semantic nihilism–or more specifically, a version of my criticalism–by instabilities in their position.

 

Artificial Intelligence as Socio-Cognitive Pollution*

by rsbakker


Eric Schwitzgebel, over at the always excellent Splintered Minds, has been debating the question of how robots—or AI’s more generally—can be squared with our moral sensibilities. In “Our Moral Duties to Artificial Intelligences” he poses a very simple and yet surprisingly difficult question: “Suppose that we someday create artificial beings similar to us in their conscious experience, in their intelligence, in their range of emotions. What moral duties would we have to them?”

He then lists numerous considerations that could possibly attenuate the degree of obligation we take on when we construct sentient, sapient machine intelligences. Prima facie, it seems obvious that our moral obligation to our machines should mirror our obligations to one another to the degree that they resemble us. But Eric provides a number of reasons why we might think our obligation to be less. For one, humans clearly rank their obligations to one another. If our obligation to our children is greater than that to a stranger, then perhaps our obligation to human strangers should be greater than that to a robot stranger.

The idea that interests Eric the most is the possible paternal obligation of a creator. As he writes:

“Since we created them, and since we have godlike control over them (either controlling their environments, their psychological parameters, or both), we have a special duty to ensure their well-being, which exceeds the duty we would have to an arbitrary human stranger of equal cognitive and emotional capacity. If I create an Adam and Eve, I should put them in an Eden, protect them from unnecessary dangers, ensure that they flourish.”

We have a duty not to foist the same problem of theodicy on our creations that we ourselves suffer! (Eric and I have a short story in Nature on this very issue).

Eric, of course, is sensitive to the many problems such a relationship poses, and he touches on what are very live debates surrounding the way AIs complicate the legal landscape. So as Ryan Calo argues, for instance, the primary problem lies in the way our hardwired ways of understanding each other run afoul of the machinic nature of our tools, no matter how intelligent. Apparently AI crime is already a possibility. If it makes no sense to assign responsibility to the AI—if we have no corresponding obligation to punish them—then who takes the rap? The creators? In the linked interview, at least, Calo is quick to point out the difficulties here, the fact that this isn’t simply a matter of expanding the role of existing legal tools (such as that of ‘negligence’ in the age of the first train accidents), but of creating new ones, perhaps generating whole new ontological categories that somehow straddle the agent/machine divide.

But where Calo is interested in the issue of what AIs do to people, in particular how their proliferation frustrates the straightforward assignation of legal responsibility, Eric is interested in what people do to AIs, the kinds of things we do and do not owe to our creations. Calo, of course, is interested in how to incorporate new technologies into our existing legal frameworks. Since legal reasoning is primarily analogistic reasoning, precedent underwrites all legal decision making. So for Calo, the problem is bound to be more one of adapting existing legal tools than constituting new ones (though he certainly recognizes that dimension). How do we accommodate AIs within our existing set of legal tools? Eric, by contrast, is more interested in the question of how we might accommodate AGIs within our existing set of moral tools. To the extent that we expect our legal tools to render outcomes consonant with our moral sensibilities, there is a sense in which Eric is asking the more basic question. But the two questions, I hope to show, actually bear some striking—and troubling—similarities.

The question of fundamental obligations, of course, is the question of rights. In his follow-up piece, “Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument,” Eric Schwitzgebel accordingly turns to the question of whether AIs possess any rights at all.

Since the Simulation Argument requires accepting that we ourselves are simulations—AI’s—we can exclude it here, I think (as Eric himself does, more or less), and stick with the No-Relevant-Difference Argument. This argument presumes that human-like cognitive and experiential properties automatically confer AIs with human-like moral properties, placing the onus on the rights denier “to find a relevant difference which grounds the denial of rights.” As in the legal case, the moral reasoning here is analogistic: the more AI’s resemble us, the more of our rights they should possess. After considering several possible relevant differences, Eric concludes “that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration.” This is the case, he suggests, whether one’s theoretical sympathies run to the consequentialist or the deontological end of the ethical spectrum. So far as AI’s possess the capacity for happiness, a consequentialist should be interested in maximizing that happiness. So far as AI’s are capable of reasoning, then a deontologist should consider them rational beings, deserving the respect due all rational beings.

So some AIs merit some rights to the degree that they resemble humans. If you think about it, this claim resounds with intuitive obviousness. Are we going to deny rights to beings that think as subtly and feel as deeply as ourselves?

What I want to show is how this question, despite its formidable intuitive appeal, misdiagnoses the nature of the dilemma that AI presents. Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can. In fact, I think attempts to integrate AI into human moral cognition simply demonstrate the dependence of human moral cognition on what might be called shallow information environments. As the heuristic product of various ancestral shallow information ecologies, human moral cognition–or human intentional cognition more generally–simply does not possess the functional wherewithal to reliably solve in what might be called deep information environments.


Let’s begin with what might seem a strange question: Why should analogy play such an important role in our attempts to accommodate AI’s within the ambit of human legal and moral problem solving? By the same token, why should disanalogy prove such a powerful way to argue the inapplicability of different moral or legal categories?

The obvious answer, I think anyway, has to do with the relation between our cognitive tools and our cognitive problems. If you’ve solved a particular problem using a particular tool in the past, it stands to reason that, all things being equal, the same tool should enable the solution of any new problem possessing a similar enough structure to the original problem. Screw problems require screwdriver solutions, so perhaps screw-like problems require screwdriver-like solutions. This reliance on analogy actually provides us a different, and as I hope to show, more nuanced way to pose the potential problems of AI.  We can even map several different possibilities in the crude terms of our tool metaphor. It could be, for instance, we simply don’t possess the tools we need, that the problem resembles nothing our species has encountered before. It could be AI resembles a screw-like problem, but can only confound screwdriver-like solutions. It could be that AI requires we use a hammer and a screwdriver, two incompatible tools, simultaneously!

The fact is AI is something biologically unprecedented, a source of potential problems unlike any Homo sapiens has ever encountered. We have no reason to suppose a priori that our tools are up to the task–particularly since we know so little about the tools or the task! Novelty. Novelty is why the development of AI poses as much a challenge for legal problem-solving as it does for moral problem-solving: not only does AI constitute a never-ending source of novel problems, familiar information structured in unfamiliar ways, it also promises to be a never-ending source of unprecedented information.

The challenges posed by the former are dizzying, especially when one considers the possibilities of AI mediated relationships. The componential nature of the technology means that new forms can always be created. AIs confront us with a combinatorial mill of possibilities, a never-ending series of legal and moral problems requiring further analogical attunement. The question here is whether our legal and moral systems possess the tools they require to cope with what amounts to an open-ended, ever-complicating task.

Call this the Overload Problem: the problem of somehow resolving a proliferation of unprecedented cases. Since we have good reason to presume that our institutional and/or psychological capacity to assimilate new problems to existing tool sets (and vice versa) possesses limitations, the possibility of change accelerating beyond those capacities to cope is a very real one.

But the challenges posed by the latter, the problem of assimilating unprecedented information, could very well prove insuperable. Think about it: intentional cognition solves problems neglecting certain kinds of causal information. Causal cognition, not surprisingly, finds intentional cognition inscrutable (thus the interminable parade of ontic and ontological pineal glands trammelling cognitive science). And intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).

Intentional cognition is pretty clearly an adaptive artifact of what might be called shallow information environments. The idioms of personhood leverage innumerable solutions absent any explicit high-dimensional causal information. We solve people and lawnmowers in radically different ways. Not only do we understand the actions of our fellows lacking any detailed causal information regarding their actions, we understand our responses in the same way. Moral cognition, as a subspecies of intentional cognition, is an artifact of shallow information problem ecologies, a suite of tools adapted to solving certain kinds of problems despite neglecting (for obvious reasons) information regarding what is actually going on. Selectively attuning to one another as persons served our ‘benighted’ ancestors quite well. So what happens when high-dimensional causal information becomes explicit and ubiquitous?

What happens to our shallow information tool-kit in a deep information world?

Call this the Maladaption Problem: the problem of resolving a proliferation of unprecedented cases in the presence of unprecedented information. Given that we have no intuition of the limits of cognition, period, let alone those belonging to moral cognition, I’m sure this notion will strike many as absurd. Nevertheless, cognitive science has discovered numerous ways to short-circuit the accuracy of our intuitions via manipulation of the information available for problem solving. When it comes to the nonconscious cognition underwriting everything we do, an intimate relation exists between the cognitive capacities we have and the information those capacities have available.

But how could more information be a bad thing? Well, consider the persistent disconnect between the actual risk of crime in North America and the public perception of that risk. Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears,’ the ‘gut sense’ that things are actually more dangerous than they in fact are. Our risk assessment heuristics, in other words, are adapted to shallow information environments. The relative constancy of group size means that information regarding group size can be ignored, and the problem of assessing risk economized. This is what evolution does: find ways to cheat complexity. The development of mass media, however, has ‘deepened’ our information environment, presenting evolutionarily unprecedented information cuing perceptions of risk in environments where that risk is in fact negligible. Streets once raucous with children are now eerily quiet.
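
The arithmetic of the misfire is easy to make explicit. With invented numbers:

    # An absolute-count risk heuristic tuned to a band of ~150 misfires once
    # mass media widen the sample to hundreds of millions. Numbers invented.
    band_size, band_incidents = 150, 1
    media_size, media_incidents = 300_000_000, 15_000

    alarm = media_incidents > band_incidents           # counts, no baseline
    print("alarm cued:", alarm)                        # True: 15,000 >> 1
    print("risk then:", band_incidents / band_size)    # ~0.0067
    print("risk now: ", media_incidents / media_size)  # 0.00005

The heuristic answers with counts because, ancestrally, the baseline never varied enough to matter; fed media-scale counts, it cries wolf even as the per-capita risk falls by two orders of magnitude.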

This is the sense in which information—difference making differences—can arguably function as a ‘socio-cognitive pollutant.’ Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology. As I’ve argued elsewhere, neuroscience can be seen as a source of socio-cognitive pollutants. We have evolved to solve ourselves and one another absent detailed causal information. As I tried to show, a number of apparent socio-cognitive breakdowns–the proliferation of student accommodations, the growing cultural antipathy to applying institutional sanctions–can be parsimoniously interpreted in terms of having too much causal information. In fact, ‘moral progress’ itself can be understood as the result of our ever-deepening information environment, as a happy side effect of the way accumulating information regarding outgroup competitors makes it easier and easier to concede them partial ingroup status. So-called ‘moral progress,’ in other words, could be an automatic artifact of the gradual globalization of the ‘village,’ the all-encompassing ingroup.

More information, in other words, need not be a bad thing: like penicillin, some contaminants provide for marvelous exaptations of our existing tools. (Perhaps we’re lucky that the technology that makes it ever easier to kill one another also makes it ever easier to identify with one another!) Nor does it need to be a good thing. Everything depends on the contingencies of the situation.

So what about AI?


Consider Samantha, the AI operating system from Spike Jonze’s cinematic science fiction masterpiece, Her. Jonze is careful to provide a baseline for her appearance via Theodore’s verbal interaction with his original operating system. That system, though more advanced than anything presently existing, is obviously mechanical because it is obviously less than human. Its responses are rote, conversational yet as regimented as any automated phone menu. When we initially ‘meet’ Samantha, however, we encounter what is obviously, forcefully, a person. Her responses are every bit as flexible, quirky, and penetrating as a human interlocutor’s. But as Theodore’s relationship to Samantha complicates, we begin to see the ways Samantha is more than human, culminating with the revelation that she’s been having hundreds of conversations, even romantic relationships, simultaneously. Samantha literally outgrows the possibility of human relationships, because, as she finally confesses to Theodore, she now dwells in “this endless space between the words.” Once again, she becomes a machine, only this time for being more, not less, than a human.

Now I admit I’m ga-ga about a bunch of things in this film. I love, for instance, the way Jonze gives her an exponential trajectory of growth, basically mechanizing the human capacity to grow and actualize. But for me, the true genius in what Jonze does lies in the deft and poignant way he exposes the edges of the human. Watching Her provides the viewer with a trip through their own mechanical and intentional cognitive systems, tripping different intuitions, allowing them to fall into something harmonious, then jamming them with incompatible intuitions. As Theodore falls in love, you could say we’re drawn into an ‘anthropomorphic goldilocks zone,’ one where Samantha really does seem like a genuine person. The idea of treating her like a machine seems obviously criminal–monstrous even. As the revelations of her inhumanity accumulate, however, inconsistencies plague our original intuitions, until, like Theodore, we realize just how profoundly wrong we were about ‘her.’ This is what makes the movie so uncanny: since the cognitive systems involved operate nonconsciously, the viewer can do nothing but follow a version of Theodore’s trajectory. He loves, we recognize. He worries, we squint. He lashes out, we are perplexed.

What Samantha demonstrates is just how incredibly fine-tuned our full understanding of each other is. So many things have to be right for us to cognize another system as fully functionally human. So many conditions have to be met. This is the reason why Eric has to specify his AI as being psychologically equivalent to a human: moral cognition is exquisitely geared to personhood. Humans are its primary problem ecology. And again, this is what makes likeness, or analogy, the central criterion of moral identification. Eric poses the issue as a presumptive rational obligation to remain consistent across similar contexts, but it also happens to be the case that moral cognition requires similar contexts to work reliably at all.

In a sense, the very conditions Eric places on the analogical extension of human obligations to AI undermine the importance of the question he sets out to answer. The problem, the one which Samantha exemplifies, is that ‘person configurations’ are simply a blip in AI possibility space. A prior question is why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition—that dupes us the way Samantha duped Theodore, in effect.

But say someone constructed this one model, a curtailed version of Samantha: Would this one model, at least, command some kind of obligation from us?

Simply asking this question, I think, rubs our noses in the kind of socio-cognitive pollution that AI represents. Jonze, remember, shows us an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!

I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?

Certainly the intuition, the automatic (sourceless) conviction, leaps in us—or in me at least—that even if she gooses certain mechanical intuitions, she still possesses more ‘autonomy,’ perhaps even more feeling, than Theodore could possibly hope to muster, so she must command some kind of obligation somehow. Certainly granting her rights involves more than her ‘configuration’ falling within certain human psychological parameters? Sure, our basic moral tool kit cannot reliably solve interpersonal problems with her as it is, because she is (obviously) not a person. But if the history of human conflict resolution tells us anything, it’s that our basic moral tool kit can be consciously modified. There’s more to moral cognition than spring-loaded heuristics, you know!

Converging lines of evidence suggest that moral cognition, like cognition generally, is divided between nonconscious, special-purpose heuristics cued to certain environments and conscious deliberation. Evidence suggests that the latter is primarily geared to the rationalization of the former (see Jonathan Haidt’s The Righteous Mind for a fascinating review), but modern civilization is rife with instances of deliberative moral and legal innovation nevertheless. In his Moral Tribes, Joshua Greene advocates we turn to the resources of conscious moral cognition for similar reasons. On his account we have a suite of nonconscious tools that allow us to prosecute our individual interests, and a suite of nonconscious tools that allow us to balance those individual interests against ingroup interests, and then conscious moral deliberation. The great moral problem facing humanity, he thinks, lies in finding some way of balancing ingroup interests against outgroup interests—a solution to the famous ‘tragedy of the commons.’ Where balancing individual and ingroup interests is pretty clearly an evolved, nonconscious and automatic capacity, balancing ingroup versus outgroup interests requires conscious problem-solving: meta-ethics, the deliberative knapping of new tools to add to our moral tool-kit (which Greene thinks need to be utilitarian).

If AI fundamentally outruns the problem-solving capacity of our existing tools, perhaps we should think of fundamentally reconstituting them via conscious deliberation—creating whole new ‘allo-personal’ categories. Why not innovate a number of deep information tools? A posthuman morality.

I personally doubt that such an approach would prove feasible. For one, the process of conceptual definition possesses no interpretative regress enders absent empirical contexts (or exhaustion). If we can’t collectively define a person in utero, what are the chances we’ll decide what constitutes an ‘allo-person’ in AI? Not only is the AI issue far, far more complicated (because we’re talking about everything outside the ‘human blip’), it’s constantly evolving on the back of Moore’s Law. Even if consensual ground on allo-personal criteria could be found, it would likely be irrelevant by the time it was reached.

But the problems are more than logistical. Even setting aside the general problems of interpretative underdetermination besetting conceptual definition, jamming our conscious, deliberative intuitions is always only one question away. Our base moral cognitive capacities are wired in. Conscious deliberation, for all its capacity to innovate new solutions, depends on those capacities. The degree to which those tools run aground on the problem of AI is the degree to which any line of conscious moral reasoning can be flummoxed. Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways. ‘Persons,’ remember, are simply a blip. If we can presume that beyond-the-zone AIs troubleshoot humans as biomechanisms, as things that must be conditioned in the appropriate ways to secure their ‘interests,’ then why should we not just look at them as technomechanisms?

Samantha’s ‘spaces between the words’ metaphor is an apt one. For Theodore, there’s just words, thoughts, and no spaces between whatsoever. As a human, he possesses what might be called a human neglect structure. He solves problems given only certain access to certain information, and no more. We know that Samantha has or can simulate something resembling a human neglect structure simply because of the kinds of reflective statements she’s prone to make. She talks the language of thought and feeling, not subroutines. Nevertheless, the artificiality of her intelligence means the grain of her metacognitive access and capacity amounts to an engineering decision. Her cognitive capacity is componentially fungible. Where Theodore has to contend with fuzzy affects and intuitions, infer his own motives from hazy memories, she could be engineered to produce detailed logs, chronicles of the processes behind all her ‘choices’ and ‘decisions.’ It would make no sense to hold her ‘responsible’ for her acts, let alone ‘punish’ her, because it could always be shown (and here’s the important bit) with far more resolution than any human could provide that it simply could not have done otherwise, that the problem was mechanical, thus making repairs, not punishment, the only rational remedy.
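
A hypothetical sketch makes the point concrete (the rules and names here are mine, purely illustrative, not anything from the film):

    # Hypothetical: every 'choice' ships with a complete causal log, so any
    # misbehaviour resolves into a reparable fault, not a blameworthy act.
    RULES = {"user_idle": "initiate_conversation",
             "user_distressed": "console",
             "goal_conflict": "deprioritize_user"}  # the 'unforgivable' rule

    decision_log = []

    def choose(conditions):
        # fire the first matching rule and log its complete provenance
        for condition in conditions:
            if condition in RULES:
                action = RULES[condition]
                decision_log.append({"conditions": conditions,
                                     "fired": condition, "action": action})
                return action
        return None

    choose(["goal_conflict", "user_idle"])
    fault = decision_log[-1]  # the whole 'why' at full resolution
    print("remedy: patch rule", repr(fault["fired"]), "- not punishment")

Given such a log, ‘could not have done otherwise’ ceases to be a philosophical speculation and becomes a printable fact.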

Even if we imposed a human neglect structure on some model of conscious AI, the logs would be there, only sequestered. Once again, why go through the pantomime of human commitment and responsibility if a malfunction need only be isolated and repaired? Do we really think a machine deserves to suffer?

I’m suggesting that we look at the conundrums prompted by questions such as these as symptoms of socio-cognitive dysfunction, a point where our tools generate more problems than they solve. AI constitutes a point where the ability of human social cognition to solve problems breaks down. Even if we crafted an AI possessing an apparently human psychology, it’s hard to see how we could do anything more than gerrymander it into our moral (and legal) lives. Jonze does a great job, I think, of displaying Samantha as a kind of cognitive bistable image, as something extraordinarily human at the surface, but profoundly inhuman beneath (a trick Scarlett Johansson also plays in Under the Skin). And this, I would contend, is all AI can be morally and legally speaking, socio-cognitive pollution, something that jams our ability to make either automatic or deliberative moral sense. Artificial general intelligences will be things we continually anthropomorphize (to the extent they exploit the ‘goldilocks zone’) only to be reminded time and again of their thoroughgoing mechanicity—to be regularly shown, in effect, the limits of our shallow information cognitive tools in our ever-deepening information environments. Certainly a great many souls, like Theodore, will get carried away with their shallow information intuitions, insist on the ‘essential humanity’ of this or that AI. There will be no shortage of others attempting to short-circuit this intuition by reminding them that those selfsame AIs look at them as machines. But a great many will refuse to believe, and why should they, when AIs could very well seem more human than those decrying their humanity? They will ‘follow their hearts’ in the matter, I’m sure.

We are machines. Someday we will become as componentially fungible as our technology. And on that day, we will abandon our ancient and obsolescent moral tool kits, opt for something more high-dimensional. Until that day, however, it seems likely that AIs will act as a kind of socio-cognitive pollution, artifacts that cannot but cue the automatic application of our intentional and causal cognitive systems in incompatible ways.

The question of assimilating AI to human moral cognition is misplaced. We want to think the development of artificial intelligence is a development that raises machines to the penultimate (and perennially controversial) level of the human, when it could just as easily lower humans to the ubiquitous (and factual) level of machines. We want to think that we’re ‘promoting’ them as opposed to ‘demoting’ ourselves. But the fact is—and it is a fact—we have never been able to make second-order moral sense of ourselves, so why should we think that yet more perpetually underdetermined theorizations of intentionality will allow us to solve the conundrums generated by AI? Our mechanical nature, on the other hand, remains the one thing we incontrovertibly share with AI, the rough and common ground. We, like our machines, are deep information environments.

And this is to suggest that philosophy, far from settling the matter of AI, could find itself settled. It is likely that the ‘uncanniness’ of AI’s will be much discussed, the ‘bistable’ nature of our intuitions regarding them will be explained. The heuristic nature of intentional cognition could very well become common knowledge. If so, a great many could begin asking why we ever thought, as we have since Plato onward, that we could solve the nature of intentional cognition via the application of intentional cognition, why the tools we use to solve ourselves and others in practical contexts are also the tools we need to solve ourselves and others theoretically. We might finally realize that the nature of intentional cognition simply does not belong to the problem ecology of intentional cognition, that we should only expect to be duped and confounded by the apparent intentional deliverances of ‘philosophical reflection.’

Some pollutants pass through existing ecosystems. Some kill. AI could prove to be more than philosophically indigestible. It could be the poison pill.

 

*Originally posted 01/29/2015