Three Pound Brain

No bells, just whistling in the dark…

Tag: post-intentional philosophy

The Zombie Enlightenment

by rsbakker


Understanding what comes next depends on understanding what’s going on now, which is to say, cognizing modernity. The premise, recall, is that, due to metacognitive myopia, traditional intentional vocabularies lock us into perpetual conundrums. This means understanding modernity requires some kind of post-intentional explanatory framework—we need some way to understand it in naturalistic terms. Since cognizing modernity requires cognizing the Enlightenment, this puts us on the hook for an alternative, post-intentional explanation of the processes at work—a zombie Enlightenment story.

I say ‘zombie,’ of course, as much to keep the horror of the perspective in view as to underscore the naturalistic character of the explanations. What follows is a dry-run of sorts, an attempt to sketch what has brought about this extraordinary era of accelerating transformation. Keep in mind the ludicrous speculative altitudes involved, but also remember that all such attempts to theorize macrosocial phenomena suffer this liability. I don’t think it’s so important that the case be made as that some alternative be proposed at this point. For one, the mere existence of such an account, the bare fact of its plausibility, requires the intentionalist to account for the superiority of their approach, and this, as we shall see below, can have a transformative effect on cognitive ecologies.

In zombie terms, the Enlightenment, as we think we know it, had nothing to do with the ‘power of reason’ to ‘emancipate,’ to free us from the tyranny of Kant’s ‘tutelary natures.’ This is the Myth. Likewise, Nietzsche’s Gegenaufklärung had nothing to do with somehow emancipating us from the tyrannical consequences of this emancipation. The so-called Counter-Enlightenment, or ‘postmodernism’ as it has come to be called, was a completion, or a consummation, if you wish. The antagonism is merely a perspectival artifact. Postmodernism, if anything, represents the processes characteristic of the zombie Enlightenment colonizing and ultimately overcoming various specialized fields of cultural endeavour.

To understand this one needs to understand something crucial about human nature, namely, the way understanding, all understanding, is blind understanding. The eye cannot be seen. Olfaction has no smell, just as touch has no texture. To enable knowledge, in other words, is to stand outside the circuit of what is known. A great many thinkers have transformed this observation into something both extraordinary and occult, positing all manner of inexplicable things by way of explanation, everything from transparencies to transcendentals to trace structures. But the primary reason is almost painfully mundane: the seeing eye cannot be seen simply because it is mechanically indisposed.

Human beings suffer ‘cognitive indisposition,’ or, as I like to call it, medial neglect, a ‘brain blindness’ so profound as to escape them altogether, to convince them, at every stage of their ignorance, that they could see pretty much everything they needed to see.

Now according to the Myth, the hundred million odd souls populating Europe in the 18th century shuffled about in unconscious acquiescence to authority, each generation blindly repeating the chauvinisms of the generation prior. The Enlightenment institutionalized inquiry, the asking of questions, and the asking of questions, far from merely setting up ‘choice situations’ between assertions, makes cognitive incapacity explicit. The Enlightenment, in other words, institutionalized the erosion of traditional authority, thus ‘freeing’ individuals to pursue other possible answers. The great dividend of the Enlightenment was nothing less than autonomy, the personal, political, and material empowerment of the individual via knowledge. They were blind, but now they could see–or at least so they thought.

Postmodernism, on the other hand, arose out of the recognition that inquiry has no end, that the apparent rational verities of the Enlightenment were every bit as vulnerable to delegitimization (‘deconstruction’) as the verities of the tradition that it swept away. Enlightenment critique was universally applicable, every bit as toxic to successor as to traditional claims. Enlightenment reason, therefore, could not itself be the answer, a conviction that the increasingly profound technical rationalization of Western society only seemed to confirm. The cognitive autonomy promised by Kant and his contemporaries had proven too radical, missing the masses altogether, and stranding intellectuals in the humanities, at least, with relativistic guesses. The Enlightenment deconstruction of religious narrative—the ‘death of God’—was at once the deconstruction of all absolute narratives, all foundations. Autonomy had collapsed into anomie.

This is the Myth of the Enlightenment, at least in cartoon thumbnail.

But if we set aside our traditional fetish for ‘reason’ and think of post-Medieval European society as a kind of information processing system, a zombie society, the story actually looks quite different. Far from the death of authority and the concomitant birth of a frightening, ‘postmodern autonomy,’ the ‘death of God’ becomes the death of supervision. Supervised learning, of course, refers to one of the dominant learning paradigms in artificial neural networks, one where training converges on known targets, as opposed to unsupervised learning, where training converges on unknown targets. So long as supervised cognitive ecologies monopolized European society, European thinkers were bound to run afoul of the ‘only-game-in-town effect,’ the tendency to assume claims true for the simple want of alternatives. There were gains in cognitive efficiency, certainly, but they arose adventitiously, and had to brave selection in generally unforgiving social ecologies. Pockets of unsupervised learning appear in every supervised society, in fact, but in the European case, the economic and military largesse provided by these isolated pockets assured they would be reproduced across the continent. The process was gradual, of course. What we call the ‘Enlightenment’ doesn’t so much designate the process as the point when the only-game-in-town effect could no longer be sustained among the learned classes. In all corners of society, supervised optima found themselves competing more and more with unsupervised optima—and losing. What Kant and his contemporaries called ‘Enlightenment’ simply made explicit an ecology that European society had been incubating for centuries, one that rendered cognitive processes responsive to feedback via empirical and communicative selection.
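Since the supervised/unsupervised distinction is doing real work here, it may help to see it in miniature. The following is a toy sketch in Python (all data, dimensions, and parameters are invented for illustration): the supervised learner converges on targets given in advance, while the unsupervised learner converges on structure no one specified.

```python
# Toy contrast between the two learning paradigms invoked above.
# Everything here is illustrative; no claim about brains or societies.
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised learning: training converges on KNOWN targets. ---
# Fit weights w so that X @ w approximates supplied labels y.
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)  # targets handed to the learner

w = np.zeros(3)
for _ in range(500):                              # gradient descent on squared error
    w -= 0.1 * (X.T @ (X @ w - y) / len(y))
print("supervised estimate:", w.round(2))         # recovers roughly [1.0, -2.0, 0.5]

# --- Unsupervised learning: training converges on UNKNOWN targets. ---
# k-means discovers cluster structure with no labels whatsoever.
data = np.vstack([rng.normal(-3, 1, size=(50, 2)),
                  rng.normal(+3, 1, size=(50, 2))])
centroids = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(20):
    labels = np.argmin(((data[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])
print("unsupervised centroids:\n", centroids.round(2))  # finds the two clusters itself
```

The analogy, to be clear, turns on nothing more than this: where do the targets come from?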

On an information processing view, in other words, the European Enlightenment did not so much free up individuals as cognitive capacity. Once again, we need to appreciate the zombie nature of this view, how it elides ethical dimensions. On this view, traditional chauvinisms represent maladaptive optima, old fixes that now generate more problems than they solve. Groups were not so much oppressed, on this account, as underutilized. What we are prone to call ‘moral progress’ in folk political terms amounts to the optimization of collective neurocomputational resources. These problematic ethical and political consequences, of course, have no bearing on the accuracy of the view. Any cultural criticism that makes ideological orthodoxy a condition of theoretical veracity is nothing more than apologia in the worst sense, self-serving rationalization. In fact, since naturalistic theories are notorious for the ways they problematize our moral preconceptions, you might even say this kind of problematization is precisely what we should expect. Pursuing hard questions can only be tendentious if you cannot countenance hard answers.

The transition from a supervised to an unsupervised learning ecology was at once a transition from a slow selecting to a rapid selecting ecology. One of the great strengths of unsupervised learning, it turns out, is blind source separation, something your brain wonderfully illustrates for you every time you experience the famed ‘cocktail party effect.’ Artificial unsupervised learning algorithms, of course, allow for the causal sourcing of signals in a wide variety of scientific contexts. Causal sourcing, of course, amounts to identifying causes, which is to say, mechanical cognition, which in turn amounts to behavioural efficacy, the ability to remake environments. So far as behavioural efficacy cues selection, then, we suddenly find ourselves with a social ecology (‘science’) dedicated to the accumulation of ever more efficacies—ever more power over ourselves and our environments.
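Blind source separation admits of the same kind of toy demonstration. The sketch below (Python, using scikit-learn’s FastICA; the source signals and mixing matrix are invented for illustration) recovers two independent ‘voices’ from two mixed recordings without any supervision at all: a cartoon cocktail party.

```python
# Toy blind source separation: recover two independent source signals
# from two mixtures, knowing nothing about how they were mixed.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                        # source 1: a sinusoid
s2 = np.sign(np.sin(3 * t))               # source 2: a square wave
S = np.c_[s1, s2] + 0.05 * np.random.default_rng(0).normal(size=(2000, 2))

A = np.array([[1.0, 0.5],                 # the unknown 'room': how sources mix
              [0.5, 1.0]])
X = S @ A.T                               # the two 'microphone' recordings

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)              # estimated sources, up to scale and order

# Each recovered component correlates strongly with one original source.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.round(2))                      # near 1.0 on one entry per row
```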

Power begets power; efficiency, efficiency. Human ecologies were not only transformed, they were transformed in ways that facilitated transformation. Each new optimization, once selected and incorporated, generated ecological changes, social or otherwise, changes bearing on the efficiency of previous optimizations. And so the shadow of maladaptation, or obsolescence, fell across all existing adaptations, be they behavioural or technological.

The inevitability of maladaptation, of course, merely expresses the contingency of ecology, the fact that all ecologies change over time. In ancestral (slow selecting) ecologies, the information required to cognize this process was scarce to nonexistent: the only-game-in-town effect—the assumption of sufficiency in the absence of alternatives—was all but inevitable. Given the way cognitive invariance cues cognitive stability, the sense that we can trust our inheritance, the spectre of accelerating obsolescence could only represent a threat.

“Expect the unexpected,” a refrain that only modernity could abide, wonderfully recapitulates, I think, the inevitability of postmodernism. Cognitive instability became the only cognitive stability, the only humanistic ‘principle’ remaining. And thus the great (perhaps even perverse) irony of philosophical modernity: the search for stability in difference, and the development, across the humanities, of social behaviours (aesthetic or theoretical) bent on rendering prior forms obsolete.

Rather than wait for obsolescence to arise out of ecological transformation, many began forcing the issue, isolating instances of the only-game-in-town effect in various domains of aesthetic and theoretical behaviour, and adducing alternatives in an attempt to communicate their obsolescence. Supervised or ‘traditional’ ecologies readily broke down. Unsupervised learning ecologies quickly became synonymous with cognitive stability—and more attractive for it. The scientific fetish for innovation found itself replicated in humanistic guise. Despite the artificial nature of this process, the lack of any alternative account of semantic instability gave rise to a new series of only-game-in-town effects. What had begun as an unsupervised exploration of solution spaces quickly lapsed into another supervised ecology. Avant-garde and post-structuralist zombies adapted to exploit microsocial ecologies they themselves had fashioned.

The so-called ‘critique of Enlightenment reason,’ whether implicit in aesthetic behaviour or explicit in theoretical behaviour, demonstrates the profundity of medial neglect, the blindness of zombie components to the greater machinery compelling them. The Gegenaufklärung merely followed through on the actual processes of ‘ratcheting ecological innovation’ responsible, undermining, as it did, the myths that had been attached to those processes in lieu of actual understanding. In communicating the performative dimension of ‘reason’ and the irrationality of Enlightenment rationality, postmodernism cleared a certain space for post-intentional thinking, but little more. Otherwise it is best viewed as an inadvertent consummation of a logic it can only facilitate and never ‘deconstruct.’

Our fetish for knowledge and innovation remains. We have been trained to embrace an entirely unknown eventuality, and that training has been supervised.

Back to Square One: Toward a Post-intentional Future

by rsbakker


Can be found at the esteemed Scientia Salon. Spread the link far and wide. For those who follow the blog, the arguments will be familiar: what should be interesting is watching what a far different, and far less charitable, group of philosophers make of them.


Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation

by rsbakker

Davidson’s primary claim to philosophical fame lies in his replacement of the hoary question of meaning qua meaning with the more tractable question of what we need to know to understand others—the question of interpretation. Transforming the question of meaning into the question of interpretation forces considerations of meaning to account for the methodologies and kinds of evidence required to understand meaning. And this evidence happens to be empirical: the kinds of sounds actual speakers make in actual environments. Radical interpretation, you might say, is useful precisely because of the way the effortlessness of everyday interpretation obscures this fact. Starting from scratch allows our actual resources to come to the fore, as well as the need to continually test our formulations.

But it immediately confronts us with a conundrum. Radical Interpretation, as Davidson points out, requires some way of bootstrapping the interdependent roles played by belief and meaning. “Since we cannot hope to interpret linguistic activity without knowing what a speaker believes,” he writes, “and cannot found a theory of what he means on a prior discovery of his beliefs and intentions, I conclude that in interpreting utterances from scratch—in radical interpretation—we must somehow deliver simultaneously a theory of belief and a theory of meaning” (“Belief and the Basis of Meaning,” Inquiries into Truth and Interpretation, 144). The problem is that the interpretation of linguistic activity seems to require that we know what a speaker believes, knowledge that we can only secure if we already know what a speaker means.

The enormously influential solution Davidson gives the problem lies in the way certain primitive beliefs can be non-linguistically cognized on the assumption of the speaker’s rationality. If we assume that the speaker believes as he should, that he believes it is raining when it is raining, snowing when it is snowing, and so on, if we take interpretative Charity as our principle, we have a chance of gradually correlating various utterances with the various conditions that make them true, of constructing interpretations applicable in practice.

Since Charity seems to be a presupposition of any interpretation whatsoever, the question of what it consists in would seem to become a kind of transcendental battleground. This is what makes Davidson such an important fork in the philosophical road. If you think Charity involves something irreducibly normative, then you think Davidson has struck upon interpretation as the locus requiring theoretical intentional cognition to be solved, a truly transcendental domain. So Brandom, for instance, takes Dennett’s interpretation of Charity in the form of the Intentional Stance as the foundation of his grand normative metaphysics (see Making It Explicit, 55-62). What makes this such a slick move is the way it allows the Normativist to have things both ways, to remain an interpretativist about the reality of norms (though Brandom does ultimately subscribe to original intentionality in Making It Explicit), while nevertheless treating norms as entirely real. Charity, in other words, provides a way to at once deny the natural reality of norms, while insisting they are real properties. Fictions possessing teeth.

If, on the other hand, you think Charity is not something irreducibly normative, then you think Davidson has struck upon interpretation as the locus where the glaring shortcomings of the transcendental are made plain. The problem of Radical Interpretation is the problem of interpreting behaviour. This is the whole point of going back to translation or interpretation in the first place: to start ‘from scratch,’ asking what, at minimum, is required for successful linguistic communication. By revealing behaviour as the primary source of information, Radical Interpretation shows how the problem is wholly empirical, how observation is all we have to go on. The second-order realm postulated by the Normativist simply does not exist, and as such, has nothing useful to offer the actual, empirical problem of translation.

As Stephen Turner writes:

“For Davidson, this whole machinery of a fixed set of normative practices revealed in the enthymemes of ordinary justificatory usage is simply unnecessary. We have no privileged access to meaning which we can then expressivistically articulate, because there is nothing like this—no massive structure of normative practices—to access. Instead we try to follow our fellow beings and their reasoning and acting, including their speaking: We make them intelligible. And we have a tool other than the normal machinery of predictive science that makes this possible: our own rationality.” “Davidson’s Normativity,” 364

Certainly various normative regimes/artifacts are useful (like Decision Theory), and others indispensable (like some formulation of predicate logic), but indispensability is not necessity. And ‘following,’ as Turner calls it, requires only imagination, empathy, not the possession of some kind of concept (which is somehow efficacious even though it doesn’t exist in nature). It is an empirical matter for cognitive science, not armchair theorizing, to decide.

Turner has spent decades developing what is far and away the most comprehensive critique of what he terms Normativism that I’ve ever encountered. His most recent book, Explaining the Normative, is essential reading for anyone attempting to gain perspective on Sellarsian attempts to recoup some essential domain for philosophy. For those interested in post-intentional philosophy more generally, and of ways to recharacterize various domains without ontologizing (or ‘quasi-ontologizing’) intentionality in the form of ‘practices,’ ‘language games,’ ‘games of giving and asking for reasons,’ and so on, then Turner is the place to start.

I hope to post a review of Explaining the Normative and delve into Turner’s views in greater detail in the near future, but for the nonce, I want to stick with Davidson. Recently reading Turner’s account of Davidson’s attitude to intentionality (“Davidson’s Normativity”) was something of a revelation for me. For the first time, I think I can interpret Radical Interpretation in my own terms. Blind Brain Theory provides a way to read Davidson’s account as an early eliminativist approximation of a full-blown naturalistic theory of interpretation.

A quick way to grasp the kernel of Blind Brain Theory runs as follows (a more thorough pass can be found here). The cause of my belief in a blue sky outside today is, of course, the blue sky outside today. But it is not as though I experience the blue sky causing me to experience the blue sky—I simply experience the blue sky. The ‘externalist’ axis of causation—the medial, or enabling, axis—is entirely occluded. All the machinery responsible for conscious experience is neglected: causal provenance is a victim of what might be called medial neglect. Now the fact that we can metacognize experience means that we’ve evolved some kind of metacognitive capacity, machinery for solving problems that require the brain to interpret its own operations, problems such as, say, ‘holding your tongue at Thanksgiving dinner.’ Medial neglect, as one might imagine, imposes a profound constraint on metacognitive problem-solving: namely, that only those problems that can be solved absent causal information can be solved at all. Given the astronomical causal complexities underwriting experience, this makes metacognitive problem-solving heuristic in the extreme. Metacognition hangs sideways in a system it cannot possibly hope to cognize in anything remotely approaching a high-dimensional manner, the manner that our brain cognizes its environments more generally.

If one views philosophical reflection as an exaptation of our evolved metacognitive problem-solvers for the purposes of theorizing the nature of experience, one can assume it has inherited this constraint. If metacognition cannot access information regarding the actual processes responsible for experience for the solution of any problem, then neither can philosophical reflection on experience. And since nature is causal, this is tantamount to saying that, for the purposes of theoretical metacognition at least, experience has no nature to be solved. And this raises the question of just what—if anything—theoretical metacognition (philosophical reflection) is ‘solving.’

In essence, Blind Brain Theory provides an empirical account of the notorious intractability of those philosophical problems arising out of theoretical metacognition. Traditional philosophical reflection, it claims, trades in a variety of different metacognitive illusions—many of which can be diagnosed and explained away, given the conceptual resources Blind Brain Theory provides. On its terms, the traditional dichotomy between natural and intentional concepts/phenomena is entirely to be expected—in fact, we should expect sapient aliens possessing convergently evolved brains to suffer their own versions of the same dichotomy.

Intentionalism takes our blindness to first-person cognitive activity as a kind of ontological demarcation when it is just an artifact of the way the integrated, high-dimensional systems registering the external environment fracture into an assembly of low-dimensional hacks registering the ‘inner.’ There is no demarcation, no ‘subject/object’ dichotomy, just environmentally integrated systems that cannot automatically cognize themselves as such (and so resort to hacks). Neglect allows us to see this dichotomy as a metacognitive artifact, and to thus interpret the first-person in terms entirely continuous with the third-person. Blind Brain Theory, in other words, naturalizes the intentional. It ‘externalizes’ everything.

So how does this picture bear on the issue of Charity and Radical Interpretation? In numerous ways, I think, many of which Davidson would not approve, but which do have the virtue of making his central claims perhaps more naturalistically perspicuous.

From the standpoint of our brains linguistically solving other brains, we take it for granted that solving other organisms requires solving something in addition to the inorganic structure and dynamics of our environments. The behaviour taken as our evidential base in Radical Interpretation already requires a vast amount of machinery and work. So basically we’re talking about the machinery and work required over and above this baseline—the machinery and work required to make behaviour intentionally, as opposed to merely causally, intelligible.

The primary problem is that the activity of intentional interpretation, unlike the activity interpreted, almost escapes cognition altogether. To say, as so many philosophers so often do, that intentionality is ‘irreducible’ is to say that it is naturalistically occult. So any account of interpretation automatically trades in blind spots, in the concatenation of activities that we cannot cognize. In the terms of Blind Brain Theory, any account of interpretation has to come to grips with medial neglect.

From this perspective, one can see Davidson’s project as an attempt to bootstrap an account of interpretation that remains honest or sensitive to medial neglect, the fact that 1) our brain simply cannot immediately cognize itself as a brain, which is to say, in terms continuous with its cognition of nature; and 2) that our brain cannot immediately cognize this inability, and so assumes no such inability. Thanks to medial neglect, every act of interpretation is hopelessly obscure. And this places a profound constraint on our ability to theoretically explicate interpretation. Certainly we have a variety of medial posits drawn from the vocabulary of folk-psychology, but all of these are naturalistically obscure, and so function as unexplained explainers. So the challenge for Davidson, then, is to theorize interpretation in a manner that respects what can and cannot be cognized—to regiment our blind spots in a manner that generates real, practically applicable understanding.

In other words, Davidson begins by biting the medial inscrutability bullet. If medial neglect makes it impossible to theoretically explicate medial terms, then perhaps we can find a way to leverage what (causally inexplicable) understanding they do seem to provide into something more regimented, into an apparatus, you might say, that poses all the mysteries as effectively as possible (and in this sense, his project is a direct descendent of Quine’s).

This is the signature virtue of Tarski’s ‘Convention T.’ “[T]he striking thing about T-sentences,” Davidson writes, “is that whatever machinery must operate to produce them, and whatever ontological wheels must turn, in the end a T-sentence states the truth conditions of a sentence using resources no richer than, because the same as, those of the sentence itself” (“Radical Interpretation,” 132). By modifying Tarski’s formulation so that it takes truth instead of translation as basic, he can generate a theory based on an intentional, unexplained explainer—truth—that produces empirically testable results. Given that interpretation is the practical goal, the ontological status of the theory itself is moot: “All this apparatus is properly viewed as theoretical construction, beyond the reach of direct verification,” he writes. “It has done its work provided only it entails testable results in the form of T-sentences, and these make no mention of the machinery” (133).
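For reference, the schema at issue can be stated in a line (a minimal formulation; the German example anticipates the one used later in this post):

```latex
% Convention T: an adequate theory of truth for a language L must entail,
% for every sentence s of L, an instance of the schema
%
%     s is true-in-L if and only if p
%
% where p gives s's truth conditions in the metalanguage. For example
% (requires amsmath for \text):
\[
  \text{`Es regnet' is true (in German)} \iff \text{it is raining.}
\]
```

Davidson’s twist, again, is to run the schema with truth taken as basic, so that each T-sentence becomes an empirically testable output of the interpretative apparatus.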

The apparatus is warranted only to the extent that it enables further cognition. Indeed, given medial neglect, no further metacognitive explication of the apparatus is even possible. It may prove indispensable, but only empirically so, the way a hammer is to framing, and not as, say, the breath of God is to life, or more mysterious still, in some post facto ‘virtual yet efficacious’ sense. In fact, both of these latter characterizations betray the profundity of medial neglect, how readily we intuit the absence of various dimensions of information, say those of space and time, as a positive, as some kind of inexplicable something that, as Turner has been arguing for decades, begs far more questions than it pretends to solve.

The brain’s complexity is such, once again, that it cannot maintain with itself anything remotely approaching the high-dimensional, all-purpose covariational regime it maintains with its immediate environment. Only a variety of low-dimensional, special purpose cognitive tools are possible—an assemblage of ‘hacks.’ Thus the low-dimensional parade of inexplicables that constitute the ‘first-person.’ This is why complicating your intentional regimentations beyond what is practically needed simply makes no sense. Their status as specialized hacks means we have every reason to assume their misapplication in any given theoretical context. This isn’t to say that exaptation to other problems isn’t possible, only that efficacious problem-solving is our only guide to applicability. The normative proof is in the empirical pudding. Short of practical applications, high-dimensional solutions, the theoretician is simply stacking unexplained explainers into baroque piles. There’s a reason why second-order normative architectures rise and fall as fads. Their first-order moorings are the same, but as the only-game-in-town effect erodes beneath waves of alternative interpretation, they eventually break apart, often to be salvaged into some new account that feels so compelling for appearing, to some handful of souls at least, to be the only game in town at a later date.

So for Davidson, characterizing Radical Interpretation in terms of truth amounts to characterizing Radical Interpretation in terms of a genuine unexplained explainer, an activity that we can pragmatically decompose and rearticulate, and nothing more. The astonishing degree to which the behaviour itself underdetermines the interpretations made, simply speaks to the radically heuristic nature of the cognitive activities underwriting interpretation. It demonstrates, in other words, the incredibly domain specific nature of the cognitive tools used. A fortiori, it calls into question the assumption that whatever information metacognition can glean is remotely sufficient for theoretically cognizing the structure and dynamics of those tools.

From the standpoint of reflection, intentional cognition or ‘mindreading’ almost entirely amounts to simply ‘getting it’ (or as Turner says, ‘following’). Given the paucity of information over and above the sensory, our behaviour cognizing activity strikes us as non-dimensional in the course of that cognizing—medial neglect renders our ongoing cognitive activity invisible. The odd invisibility of our own communicative performances—the way, for instance, the telling (or listening) ‘disappears’ into the told—simply indicates the axis of medial neglect, the fact that we’re talking about activities the brain cannot identify or situate in the high-dimensional idiom of environmental cognition. At best, evolution has provided metacognitive access to various ‘flavours of activity,’ if you will, vague ways of ‘getting our getting’ or ‘following our following’ the behaviour of others, and not much more—as the history of philosophy should attest!

‘Linguistic understanding,’ on this account, amounts to standing in certain actual and potential systematic, causal relations with another speaker—of being a machine attuned to natural and social environments in some specific way. The great theoretical virtue of Blind Brain Theory is the way it allows us to reframe apparently essential semantic activities like interpretation in mechanical terms. When an anthropologist learns the language of another speaker nothing magical is imprinted or imbibed. The anthropologist ‘understands’ that the speaker is systematically interrelated to his environment the same as he, and so begins the painstaking process of mapping the other’s relations onto his own via observationally derived information regarding the speaker’s utterances in various circumstances. The behaviour-enabling covariational regime of one individual comes to systematically covary with that of another individual and thus form a circuit between them and the world. The ‘meaning’ now ‘shared’ consists in nothing more than this entirely mechanical ‘triangulation.’ Each stands in the relation of component to the other, forming a singular superordinate system possessing efficacies that did not previously exist. The possible advantages of ‘teamwork’ increase exponentially—which is arguably the primary reason our species evolved language at all.
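Here, for what it’s worth, is a deliberately cartoonish Python sketch of such ‘triangulation’ (the vocabulary, the circumstances, and the speaker’s reliability are all invented; the reliability assumption stands in for Charity): the interpreter never grasps a meaning, it merely stabilizes a covariational mapping.

```python
# Cartoon 'radical interpretation' as covariance-tracking: no meanings
# imprinted or imbibed, just a mapping stabilized by shared exposure.
from collections import Counter, defaultdict
import random

random.seed(0)
circumstances = ["rain", "snow", "sun"]
speaker_lexicon = {"rain": "es regnet",          # the speaker reliably tracks
                   "snow": "es schneit",         # his environment (the stand-in
                   "sun": "die sonne scheint"}   # for Charity in this cartoon)

counts = defaultdict(Counter)
for _ in range(200):                             # shared exposures to the world
    world = random.choice(circumstances)
    utterance = speaker_lexicon[world]
    counts[utterance][world] += 1                # interpreter logs covariance

# 'Understanding' = the stabilized mapping, and nothing more:
interpretation = {u: c.most_common(1)[0][0] for u, c in counts.items()}
print(interpretation)  # {'es regnet': 'rain', 'es schneit': 'snow', ...}
```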

The perplexities pile on when we begin demanding semantic answers to our semantic questions, when we ask, What is meaning? expecting an answer that accords with our experiences of meaning. Given that we possess nothing short of our experience of meaning with which to compare any theory of meaning, the demand that such a theory accord with that experience seems, on the face of things, to be eminently reasonable. But it still behooves us to interrogate the adequacy of that ‘experience as metacognized,’ especially now, given all that we have learned the past two decades. On a converging number of accounts, human consciousness is a mechanism for selecting, preserving, and broadcasting information for more general neural consumption. When we theoretically reflect on cognitive activity, such as ‘getting’ or ‘following,’ our best research tells us we are relying on the memory traces of previous broadcasts. The situation poses a metacognitive nightmare, to say the least. Even if we could trust those memory traces to provide some kind of all-purpose schema (and we can’t), we have no access to the larger neurofunctional context of the broadcast, what produced the information and what consumed it for what—all we have are low-dimensional fragments that appear to be ethereal wholes. It’s as if we’re attempting to solve for a car using only its fuse-panel diagram—worse!

Like Quine before him, Davidson has no way of getting around intentionality, and so, also like Quine, he attempts to pass through it with as much epistemic piety as possible. But his ‘intentional instrumentalism’ will only take him so far. Short of any means of naturalizing meaning, he regularly finds himself struggling to see his way clear. The problem of first-person authority provides an illustrative case in point. The assumption that some foreign language speaker ‘holds true’ making utterances the way you ‘hold true’ making utterances can only facilitate interpretation, assist in ‘following his meaning,’ if it is the case that you can follow your own meaning. A number of issues arise out of this, not least the suggestion that interpretation seems to require the very kind of metacognitive access that I have consistently been denying!

But following one’s own meaning is every bit as mysterious as following another’s. Ownership of utterances can be catastrophically misattributed in a number of brain pathologies. When it comes to self/other speech comprehension, we know the same machinery is involved, only yoked in different ways, and we know that machinery utterly eludes metacognition. To reiterate: the cryptic peculiarities of understanding meaning (and all other intentional phenomena) are largely the result of medial neglect, the point where human cognition, overmatched by its own complexity, divides to heuristically conquer. In a profound sense, metacognition finds itself in the same straits regarding the brain as social cognition does regarding other brains.

So what does the asymmetry of ‘first-person authority,’ the fact that meanings attributed to others can be wrong while meanings attributed to oneself cannot, amount to? Nothing more than the fact that the systematic integrity of you, as a blind system, is ‘dedicated’ in a way that the systematic integrity of our interpretative relations is not. ‘Teamwork machines’ are transitory couplings requiring real work to get off the ground, and then to maintain against slippages. The ‘asymmetry’ Davidson wants to explain consists in nothing more than this. No work is required to ‘follow oneself,’ whereas work is required to follow others.

For all the astronomical biological complexity involved, it really is as simple as this. The philosophical hairball presently suffocating the issue of first-person authority is an artifact of the way that theoretical metacognition, blinkered by medial neglect, retrospectively schematizes the issue in terms of meaning. The ontologization of meaning transforms the question of first-person authority into an epistemic question, a question of how one could know. This, of course, divides into the question of implicit versus explicit knowing. Since all these concepts (knowing, implicit, explicit) are naturalistically occult, interpretation can be gamed indefinitely. Despite his epistemic piety, Davidson’s attempt to solve for first-person authority using intentional idioms was doomed from the outset.

It’s worth noting an interesting connection to Heidegger in all this, a way, perhaps, to see the shadow of Blind Brain Theory operating in a quite different philosophical system. Heidegger, who harboured his own doubts regarding philosophical reflection, would see the philosophical hairball described above as yet another consequence of the ‘metaphysics of presence,’ the elision of the ‘ontological difference’ between being and beings. For him, the problem isn’t that meaning is being ontologized so much as it is being ontologized in the wrong way. His conflation of meaning with being essentially dissolves the epistemic problem the same way as my elimination of meaning, albeit in a manner that renders everything intentionally occult.

So what is meaning? A matter of intersystematic calibration. When we ask someone to ‘explain what they mean’ we are asking them to tweak our linguistic machinery so as to facilitate function. The details are, without a doubt, astronomically complex, and almost certain to surprise and trouble us. But one of the great virtues of mechanistic explanation lies in the nonmysterious way it can generalize over functions, move from proteins to organelles to cells to organs to organisms to collectives to ecologies to biospheres and so on. The ‘physical stance’ scales up with far more economy than some (like Dennett) would have you believe. And since it comprises our most reliable explanatory idiom, we should expect it to eventually yield the kind of clarity evinced above. Is it simply a coincidence that the interpretative asymmetry that Davidson and so many other philosophers have intentionally characterized directly corresponds with the kind of work required to maintain mechanical systematicity between two distinct systems? Do we just happen to ‘get the meaning wrong’ whenever covariant slippages occur, or is the former simply the latter glimpsed darkly?

Which takes us, at long last, to the issue of ‘Charity,’ the indispensability of taking others as reliably holding their utterances true to the process of interpretation. As should be clear by now, there is no such thing. We no more take Charity to the interpretation of behaviour than your wireless takes Charity to your ISP. There is no ‘attitude of holding true,’ no ‘intentional stance.’ Certainly, sometimes we ‘try’—or are at least conscious of making an effort. Otherwise understanding simply happens. The question is simply how we can fill in the blanks in a manner that converges on actual theoretical cognition, as opposed to endless regress. Behaviour is tracked, social heuristics are cued, an interpretation is neurally selected for conscious broadcasting, and we say, ‘Ah! “Es regnet” means “It is raining”!’

The Eliminativist renovation of Radical Interpretation makes plain everything that theoretical reflection has hitherto neglected. In other words, what it makes plain is the ‘pre-established harmony’ needed to follow another, the monstrous amount of evolutionary and cultural stage-setting required simply to get to interpretative scratch. The enormity of this stage setting is directly related to the heuristic specificity of the systems we’ve developed to manage them, the very specificity that renders second-order discourse on the nature of ‘intentional phenomena’ dubious in the extreme.

As the skeptics have been arguing since antiquity.

Cognition Obscura (Reprise)

by rsbakker

(Originally posted September 24, 2013… Wishing everyone a thoughtful holiday!)

The Amazing Complicating Grain

On July 4th, 1054, Chinese astronomers noticed the appearance of a ‘guest star’ in the proximity of Zeta Tauri lasting for nearly two years before becoming too faint to be detected by the naked eye. The Chaco Canyon Anasazi also witnessed the event, leaving behind this famous petroglyph:

[image: Chaco Canyon supernova petroglyph]

Centuries would pass before John Bevis would rediscover it in 1731, as would Charles Messier in 1758, who initially confused it with Halley’s Comet, and decided to begin cataloguing ‘cloudy’ celestial objects–or ‘nebulae’–to help astronomers avoid his mistake. In 1844, William Parsons, the Earl of Rosse, made the following drawing of the guest star become comet become cloudy celestial object:

[image: William Parsons’s 1844 drawing of the nebula]

It was on the basis of this diagram that he gave what has since become the most studied extra-solar object in astronomical history its contemporary name: the ‘Crab Nebula.’ When he revisited the object with his 72-inch reflector telescope in 1848, however, he saw something quite different:

[image: Parsons’s 1848 drawing made with the 72-inch reflector]

In 1921, John Charles Duncan was able to discern the expansion of the Crab Nebula using the revolutionary capacity of the Mount Wilson Observatory to produce images like this:

[image: Duncan’s 1921 Mount Wilson photograph]

And nowadays, of course, we are regularly dazzled not only by photographs like this:

[image: Hubble Space Telescope image of the Crab Nebula]

produced by Hubble, but those produced by a gallery of other observational platforms as well:

[image: the Crab Nebula across multiple observational platforms]

The tremendous amount of information produced has provided astronomers with an incredibly detailed understanding of supernovae and nebula formation.

What I find so interesting about this progression lies in what might be called the ‘amazing complicating grain.’ What do I mean by this? Well, there’s the myriad ways the accumulation of data feeds theory formation, of course, how scientific models tend to become progressively more accurate as the kinds and quantities of information accessed increases. But what I’m primarily interested in is what happens when you turn this structure upside down, when you look at the Chinese ‘guest star’ or Anasazi petroglyph against the baseline of what we presently know. What assumptions were made and why? How were those assumptions overthrown? Why were those assumptions almost certain to be wrong?

Why, for instance, did the Chinese assume that SN-1054 was simply another star, notable only for its ‘guest-like’ transience? I’m sure a good number of people might think this is a genuinely stupid question: the imperialistic nature of our preconceptions seems to go without saying. The medieval Chinese thought SN-1054 was another star rather than a supernova simply because points of light in the sky, stars, were pretty much all they knew. The old provides our only means of understanding the new. This is arguably why Messier first assumed the Crab Nebula was another comet in 1758: it was only when he obtained information distinguishing it (the lack of visible motion) from comets that he realized he was looking at something else, a cloudy celestial object.

But if you think about it, these ‘identification effects’–the way the absence of systematic differences making systematic differences (or information) underwrites assumptions of ‘default identity’–are profoundly mysterious. Our cosmological understanding has been nothing if not a process of continual systematic differentiation, of ever increasing resolution in the polydimensional sense of the natural. In a peculiar sense, our ignorance is our fundamental medium, the ‘stuff’ from which the distinctions pertaining to actual cognition are hewn.

.

The Superunknown

Another way to look at this transformation of detail and understanding is in terms of ‘unknown unknowns,’ or as I’ll refer to it here, the ‘superunknown’ (cue crashing guitars). The Hubble image and the Anasazi petroglyph not only provide drastically different quantities of information organized in drastically different ways, they anchor what might be called drastically different information ecologies. One might say that they are cognitive ‘tools,’ meaningful to the extent they organize interests and practices, which is to say, possess normative consequences. Or one might say they are ‘representations,’ meaningful insofar as they ‘correspond’ to what is the case. The perspective I want to take here, however, is natural, that of physical systems interacting with physical systems. On this perspective, information our brain cannot access makes no difference to cognition. All the information we presently possess regarding supernova and nebula formation simply was not accessible to the ancient Anasazi or Chinese. As a result, it simply could not impact their attempts to cognize SN-1054. More importantly, not only did they lack access to this information, they also lacked access to any information regarding this lack of information. Their understanding was their only understanding, hedged with portent and mystery, certainly, but sufficient for their practices nonetheless.

The bulk of SN-1054 as we know it, in other words, was superunknown to our ancestors. And, the same as the spark-plugs in your garage make no difference to the operation of your car, that information made no cognizable difference to the way they cognized the skies. The petroglyph understanding of the Anasazi, though doubtless hedged with mystery and curiosity, was for them the entirety of their understanding. It was, in a word, sufficient. Here we see the power–if it can be called such–exercised by the invisibility of ignorance. Who hasn’t read ancient myths or even contemporary religious claims and wondered how anyone could have possibly believed such ‘nonsense’? But the answer is quite simple: those lacking the information and/or capacity required to cognize that nonsense as nonsense! They left the spark-plugs in the garage.

Thus the explanatory ubiquity of ‘They didn’t know any better.’ We seem to implicitly understand, if not the tropistic or mechanistic nature of cognition, then at least the ironclad correlation between information availability and cognition. This is one of the cornerstones of what is called ‘mindreading,’ our ability to predict, explain, and manipulate our fellows. And this is how the superunknown, information that makes no cognizable difference, can be said to ‘make a difference’ after all–and a profound one at that. The car won’t run, we say, because the spark-plugs are in the garage. Likewise, medieval Chinese astronomers, we assume, believed SN-1054 was a novel star because telescopes, among other things, were in the future. In other words, making no difference makes a difference to the functioning of complex systems attuned to those differences.

This is the implicit foundational moral of Plato’s Allegory of the Cave: How can shadows come to seem real? Well, simply occlude any information telling you otherwise. Next to nothing, in other words, can strike us as everything there is, short of access to anything more–such as information pertaining to the possibility that there is something more. And this, I’m arguing, is the best way of looking at human metacognition at any given point in time, as a collection of prisoners chained inside the cave of our skull assuming they see everything there is to see for the simple want of information–that the answer lies in here somehow! On the one hand we have the question of just what neural processing gets ‘lit up’  in conscious experience (say, via information integration or EMF effects) given that an astronomical proportion of it remains ‘dark.’ What are the contingencies underwriting what accesses what for what function? How heuristically constrained are those processes? On the other hand we have the problem of metacognition, the question of the information and cognitive resources available for theoretical reflection on the so-called ‘first-person.’ And, once again, what are the contingencies underwriting what accesses what for what function? How heuristically constrained are those processes?

The longer one mulls these questions, the more the concepts of traditional philosophy of mind come to resemble Anasazi petroglyphs–which is to say, an enterprise requiring the superunknown. Placed on this continuum of availability, the assumption that introspection, despite all the constraints it faces, gets enough of the information it needs to at least roughly cognize mind and consciousness as they are becomes at best a claim crying out for justification, and at worst wildly implausible. To say philosophy lacks the information and/or cognitive resources it requires to resolve its debates is a platitude, one so worn as not to chafe any contemplative skin whatsoever. No enemy is safer or more convenient than an old enemy, and skepticism is as ancient as philosophy itself. But to say that science is showing that metacognition lacks the information and/or cognitive resources philosophy requires to resolve its debates is to say something quite a bit more prickly.

Cognitive science is revealing the superunknown of the soul as surely as astronomy and physics are revealing the superunknown of the sky. Whether we move inward or outward the process is pretty much the same, as we should suspect, given that the soul is simply more nature. The tie-dye knot of conscious experience has been drawn from the pot and is slowly being unravelled, and we’re only now discovering the fragmentary, arbitrary, even ornamental nature of what we once counted as our most powerful and obvious ‘intuitions.’

This is how the Blind Brain Theory treats the puzzles of the first-person: as artifacts of illusion and neglect. The informatic and heuristic resources available for cognition at any given moment constrain what can be cognized. We attribute subjectivity to ourselves as well as to others, not because we actually have subjectivity, but because it’s the best we can manage given the fragmentary information we’ve got. Just as the medieval Chinese and Anasazi were prisoners of their technical limitations, you and I are captives of our metacognitive neural limitations.

As straightforward as this might sound, however, it turns out to be far more difficult to conceptualize in the first-person than in astronomy. Where the incorporation of the astronomical superunknown into our understanding of SN-1054 seems relatively intuitive, the incorporation of the neural superunknown into our understanding of ourselves threatens to confound intelligibility altogether. So why the difference?

The answer lies in the relation between the information and the cognitive resources we have available. In the case of SN-1054, the information provided happens to be the very information that our cognitive systems have evolved to decipher, namely, environmental information. The information provided by Hubble, for instance, is continuous with the information our brain generally uses to mechanically navigate and exploit our environments—more of the same. In the case of the first-person, however, the information accessed in metacognition falls drastically short of what our cognitive systems require to conceive us in an environmentally continuous manner. And indeed, given the constraints pertaining to metacognition, the inefficiencies pertaining to evolutionary youth, the sheer complexity of its object, not to mention its structural complicity with its object, this is precisely what we should expect: selective blindness to whole dimensions of information.

So one might visualize the difference between the Anasazi and our contemporary astronomical understanding of SN-1054 as progressive turns of a screw:

[image: partial and full spiral]

where the contemporary understanding can be seen as adding more and more information, ‘twists,’ to the same set of dimensions. The difference between our intuitive and our contemporary neuroscientific understanding of ourselves, on the other hand, is more like:

[image: circle and spiral]

where our screw is viewed on end instead of from the side, occluding the dimensions constitutive of the screw. The ‘O’ is actually a screw, obviously so, but for the simple want of information appears to be something radically different, something self-continuous and curiously flat, completely lacking empirical depth. Since these dimensions remain superunknown, they quite simply make no metacognitive difference. In the same way additional environmental information generally complicates prior experiential and cognitive unities, the absence of information can be seen as simplifying experiential unities. Paint can become swarming ants, and swarming ants can look like paint. The primary difference with the first-person, once again, is that the experiential simplification you experience, say, watching a movie scene fade to white, is ‘bent’ across entire dimensions of missing information—as is the case with our ‘O’ and our screw. The empirical depth of the latter is folded into the flat continuity of the former. On this line of interpretation, the first-person is best understood as a metacognitive version of the ‘flicker fusion’ effect in psycho-physics, or the way sleep can consign an entire plane flight to oblivion. You might say that neglect is the sleep of identity.

As the only game in information town, the ‘O’ intuitively strikes us as essentially what we are, rather than a perspectival artifact of information scarcity and heuristic inapplicability. And since this ‘O’ seems to frame the possibility of the screw, things are apt to become more confusing still, with proponents of ‘O’-ism claiming the ontological priority of an impoverished cognitive perspective over ‘screwism’ and its embarrassment of informatic riches, and with proponents of screwism claiming the reverse, but lacking any means to forcefully extend and demonstrate their counterintuitive positions.

One can analogically visualize this competition of framing intuitions as the difference between,

[image: spiral in circle]

where the natural screw takes the ‘O’ as its condition, and,

[image: circle in spiral]

where the ‘O’ takes the natural screw as its condition, with the caveat that one understands the margins of the ‘O’ asymptotically, which is to say, as superunknown. A better visual analogy lies in the margins of your present visual field, which is somehow bounded without possessing any visible boundary. Since the limits of conscious cognition always outrun the possibility of conscious cognition, conscious cognition, or ‘thought,’ seems to hang ‘nowhere,’ or at the very least ‘beyond’ the empirical, rendering the notion of ‘transcendental constraint’ an easy-to-intuit metacognitive move.

In this way, one might diagnose the constitutive transcendental as a metacognitive artifact of neglect. A symptom of brain blindness.

This is essentially the ambit of the Blind Brain Theory: to explain the incompatibility of the intentional with the natural in terms of what information we should expect to be available to metacognition. Insofar as the whole of traditional philosophy turns on ‘reflection,’ BBT amounts to a wholesale reconceptualization of the philosophical tradition as well. It is, without any doubt, the most radical parade of possibilities to ever trammel my imagination—a truly post-intentional philosophy—and I feel as though I have just begun to chart the troubling extent of its implicature. The motive of this piece is to simply convey the gestalt of this undiscovered country with enough sideways clarity to convince a few daring souls to drag out their theoretical canoes.

To summarize then: Taking the mechanistic paradigm of the life sciences as our baseline for ontological accuracy (and what else would we take?), the mental can be reinterpreted in terms of various kinds of dimensional loss. What follows is a list of some of these peculiarities and a provisional sketch of their corresponding ‘blind brain’ explanation. I view each of these theoretical vignettes as nothing more than an inaugural attempt, pixelated petroglyphs that are bound to be complicated and refined should the above hunches find empirical confirmation. If you find yourself reading with a squint, I ask only that you ponder the extraordinary fact that all these puzzling phenomena are characterized by missing information. Given the relation between information availability and cognitive reliability, is it simply a coincidence that we find them so difficult to understand? I’ll attempt to provide ways to visualize these sketches to facilitate understanding where I can, keeping in mind the way diagrams both elide and add dimensions.

.

Concept/Intuition – Kind of Informatic Loss/Incapacity

Nowness – Insufficient temporal information regarding the time of information processing is integrated into conscious awareness. Metacognition, therefore, cannot make second-order before-and-after distinctions (or, put differently, is ‘laterally insensitive’ to the ‘time of timing’), leading to the faulty assumption of second-order temporal identity, and hence the ‘paradox of the now’ so famously described by Aristotle and Augustine.

[image: circle and spiral – now and environmental time]

So again, metacognitive neglect means our brains simply cannot track the time of their own operations the way they can track the time of the environment that systematically engages them. Since the absence of information is the absence of distinctions, our experience of time as metacognized ‘fuses’ into the paradoxical temporal identity in difference we term the now.

Reflexivity – Insufficient temporal information regarding the time of information processing is integrated into conscious awareness. Metacognition, therefore, can only make granular second-order sequential distinctions, leading to the faulty metacognitive assumption of mental reflexivity, or contemporaneous self-relatedness (either intentional as in the analytic tradition, or nonintentional as well, as posited in the continental tradition), the sense that cognition can be cognized as it cognizes, rather than always only post facto. Thus, once again, the mysterious (even miraculous) appearance of the mental, since mechanically, all the processes involved in the generation of consciousness are irreflexive. Resources engaged in tracking cannot themselves be tracked. In nature the loop can be tightened, but never cinched the way it appears to be in experience.

[Figure: Circle and Spiral - Untitled]

Once again, experience as metacognized fuses, consigning vast amounts of information to the superunknown, in this case, the dimension of irreflexivity. The mental is not only flattened into a mere informatic shadow, it becomes bizarrely self-continuous as well.
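(For those who prefer their philosophy executable: a minimal toy in Python, my own illustration rather than anything from the literature, with every name in it arbitrary. It shows only the structural point: a system that logs its own operations can log operation n only by performing operation n+1, so no entry ever records the act that records it.)

# Irreflexivity in miniature: tracking cannot track itself as it tracks.
class Monitor:
    def __init__(self):
        self.ops = []   # operations actually performed
        self.log = []   # what the system 'metacognizes' of its history

    def do(self, name):
        self.ops.append(name)

    def introspect(self):
        # Introspection is itself an operation, performed after whatever
        # it reports on, so it can never appear in its own report.
        self.do("introspect")
        self.log = list(self.ops[:-1])

m = Monitor()
m.do("see")
m.introspect()
print(m.log)    # ['see'] -- the introspecting is absent from its own log
m.introspect()
print(m.log)    # ['see', 'introspect'] -- always one operation behind

The loop tightens with every iteration, but the entry that would record its own recording never exists.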

Personal Identity – Insufficient information regarding the sequential or irreflexive processing of information integrated into conscious awareness, as per above. Metacognition attributes psychological continuity, even ontological simplicity, to ‘us’ simply because it neglects the information required to cognize myriad, and in many cases profound, discontinuities. The same way sleep elides travel, making it seem like you simply ‘awaken someplace else,’ so too does metacognitive neglect occlude any possible consciousness of moment-to-moment discontinuity.

Conscious Unity – Insufficient information regarding the disparate neural complexities responsible for consciousness. Metacognition, therefore, cannot make the relevant distinctions, and so assumes unity. Once again, the mundane assertion of identity in the absence of distinctions is the culprit. So the character below strikes us as continuous,

X

even though it is actually composite,

[Figure: X, pixelated]

simply for want of discriminations, or additional information.
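(The same point in executable form, a minimal sketch of my own devising: coarse-grain a composite signal below the resolution required to discriminate its parts and it reports back as a single uniform value. Nothing hangs on the numbers.)

# 'Identity in the absence of distinctions': a composite, sampled too
# coarsely to resolve its parts, is reported as a unity.
composite = [0.9, 1.1, 0.95, 1.05, 1.0, 1.0]   # six distinct components

def metacognize(signal, resolution):
    # Report the signal in bins of 'resolution' values apiece
    # (assumes resolution divides the signal length evenly).
    return [sum(signal[i:i + resolution]) / resolution
            for i in range(0, len(signal), resolution)]

print(metacognize(composite, 1))   # full access: six distinct values
print(metacognize(composite, 6))   # occluded access: [1.0] -- 'unity'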

Meaning Holism – Insufficient information regarding the disparate neural complexities responsible for conscious meaning. Metacognition, therefore, cannot make the high-dimensional distinctions required to track external relations, and so mistakes the mechanical systematicity of the pertinent neural structures and functions (such as the neural interconnectivity requisite for ‘winner take all’ systems) for a lower-dimensional ‘internal relationality.’ ‘Meaning,’ therefore, appears to be differential in some elusive formal, as opposed to merely mechanical, sense.

Volition – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Unable to track the neurofunctional provenance of behaviour, metacognition posits ‘choice,’ the determination of behaviour ex nihilo.

[Figure: Circle and Spiral - Volition]

Once again, the lack of access to a given dimension of information forces metacognition to rely on an ad hoc heuristic, ‘choice,’ which only becomes a problem when theoretical metacognition, blind to its heuristic occlusion of dimensionality, feeds it to cognitive systems primarily adapted to high-dimensional environmental information.

Purposiveness – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Cognition thus resorts to noncausal heuristics keyed to solving behaviours rather than those keyed to solving environmental regularities—or ‘mindreading.’ Blind to the heuristic nature of these systems, theoretical metacognition attributes efficacy to predicted outcomes. Constraint is intuited in terms of the predicted effect of a given behaviour as opposed to its causal matrix. What comes after appears to determine what comes before, or ‘cranes,’ to borrow Dennett’s metaphor, become ‘skyhooks.’ Situationally adapted behaviours become ‘goal-directed actions.’

Value – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Blind to the behavioural feedback dynamics that effect avoidance or engagement, metacognition resorts to heuristic attributions of ‘value’ to effect further avoidance or engagement (either socially or individually). Blind to the radically heuristic nature of these attributions, theoretical metacognition attributes environmental reality to these attributions.

Normativity – Insufficient information regarding neural/environmental production and attenuation of behaviour integrated into conscious awareness. Cognition thus resorts to noncausal heuristics geared to solving behaviours rather than those geared to solving environmental regularities. Blind to these heuristic systems, deliberative metacognition attributes efficacy or constraint to predicted outcomes. Constraint is intuited in terms of the predicted effect of a given behaviour as opposed to its causal matrix. What comes after appears to determine what comes before. Situationally adapted behaviours become ‘goal-directed actions.’ Blind to the dynamics of those behavioural patterns producing environmental effects that effect their extinction or reproduction (that generate attractors), metacognition resorts to drastically heuristic attributions of ‘rightness’ and ‘wrongness,’ further effecting the extinction or reproduction of behavioural patterns (either socially or individually). Blind to the heuristic nature of these attributions, theoretical metacognition attributes environmental reality to them. Behavioural patterns become ‘rules,’ apparent noncausal constraints.

Aboutness (or Intentionality Proper) – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Blind to the radically heuristic nature of aboutness, theoretical metacognition attributes environmental reality to the relation, even though it obviously neglects the convoluted circuit of causal feedback that actually characterizes the neural-environmental relation.

The easiest way to visualize this dynamic is to evince it as,

[Figure: Spiral - Untitled]

where the screw diagrams the dimensional complexity of the natural within an apparent frame that collapses many of those dimensions—your present ‘first person’ experience of seeing the figure above. This allows us to complicate the diagram thus,

[Figure: Spiral in Circle]

bearing in mind that the explicit limit of the ‘O’ diagramming your first-person experiential frame is actually implicit or asymptotic, which is to say, occluded from conscious experience as it was in the initial diagram. Since the actual relation between ‘you’ (or your ‘thought,’ or your ‘utterance,’ or your ‘belief,’ and ‘etc.’) and what is cognized/perceived—experienced—outruns experience, you find yourself stranded with the bald fact of a relation, an ineluctable coincidence of you and your object, or ‘aboutness,’

[Figure: Spiral as intentional object]

where ‘you’ simply are related to an independent object world. The collapse of the causal dimension of your environmental relatedness into the superunknown requires a variety of ‘heuristic fixes’ to adequately metacognize. This then provides the basis for the typically mysterious metacognitive intuitions that inform intentional concepts such as representation, reference, content, truth, and the like.

Representation – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Blind to the radically heuristic nature of aboutness, theoretical metacognition attributes environmental reality to the relation, even though it obviously neglects the convoluted circuit of causal feedback that actually characterizes the neural-environmental relation. Subsequent theoretical analysis of cognition, therefore, attributes aboutness to the various components apparently identified, producing the metacognitive illusion of representation.

Our implicit conscious experience of some natural phenomenon,

[Figure: Spiral - Untitled]

becomes explicit,

[Figure: Spiral - Representation]

replacing the simple unmediated (or ‘transparent’) intentionality intuited in the former with a more complex mediated intentionality that is more easily shoehorned into our natural understanding of cognition, given that the latter deals in complex mechanistic mediations of information.

Truth – Insufficient information regarding processing of environmental information integrated into conscious awareness. Even though we are mechanically embedded as a component of our environments, outside of certain brute interactions, information regarding this systematic causal interrelation is unavailable for cognition. Forced to cognize/communicate this relation absent this causal information, metacognition resorts to ‘aboutness’. Since the mechanical effectiveness of any specific conscious experience is a product of the very system occluded from metacognition, it is intuited as given in the absence of exceptions—which is to say, as ‘true.’ Truth is the radically heuristic way the brain metacognizes the effectiveness of its cognitive functions. Insofar as possible exceptions remain superunknown, the effectiveness of any relation metacognized as ‘true’ will remain apparently exceptionless, what obtains no matter how we find ourselves environmentally embedded—as a ‘view from nowhere.’ Thus your ongoing first-person experience of,

[Figure: Spiral - Untitled]

will be implicitly assumed true, period, or exceptionless (sufficient for effective problem solving in all ecologies), barring any quirk of information availability (‘perspective’) that flags potential problem solving limitations, such as a diagnosis of psychosis, awakening from a nap, the use of a microscope, etc. This allows us to conceive the natural basis for the antithesis between truth and context: as a heuristic artifact of neglect, truth literally requires the occlusion of information pertaining to cognitive function to be metacognitively intuited.

So in terms of our visual analogy, truth can be seen as the cognitive aspect of ‘O,’ how the screw of nature appears with most of its dimensions collapsed, as apparently ‘timeless and immutable,’

[Figure: Circle]

for simple want of information pertaining to its concrete contingencies. As more and more of the screw’s dimensions are revealed, however, the more temporal and mutable—contingent—it becomes. Truth evaporates… or so we intuit.

.

Lacuna Obligata

Given the facility with which Blind Brain Theory allows these concepts to be naturally reconceptualised, my hunch is that many others may be likewise demystified. Aprioricity, for instance, clearly turns on some kind of metacognitive ‘priority neglect,’ whereas abstraction clearly involves some kind of ‘grain neglect.’ It’s important to note that these diagnoses do not impeach the implicit effectiveness of many of these concepts so much as what theoretical metacognition, or ‘philosophical reflection,’ has generally made of them. It is precisely the neglect of information that allows our naive employment of these heuristics to be effective within the limited sphere of those problem ecologies they are adapted to solve. This is actually what I think the later Wittgenstein was after in his attempts to argue the matching of conceptual grammars with language-games: he simply lacked the conceptual resources to see that normativity and aboutness were of a piece. It is only when philosophers, as reliant upon deliberative theoretical metacognition as they are, mistake what are parochial problem solvers for more universal ones, that we find ourselves in the intractable morass that is traditional philosophy.

To understand the difference between the natural and the first-person we need a positive way to characterize that difference. We have to find a way to let that difference make a difference. Neglect is that way. The trick lies in conceiving the way the neglect of various dimensions of information dupes theoretical metacognition into intuiting the various structural peculiarities traditionally ascribed to the first-person. So once again, where running the clock of astronomical discovery backward merely subtracts information from a fixed dimensional frame,

[Figure: Photo to petroglyph]

explaining the first-person requires the subtraction of dimensions as well,

[Figure: Origami turtle]

that we engage in a kind of ‘conceptual origami,’ conceive the first-person, in spite of its intuitive immediacy, as what the brain looks like when whole dimensions of information are folded away.

And this, I realize, is not easy. Nevertheless, tracking environments requires resources which themselves cannot be tracked, thus occluding cognitive neurofunctionality from cognition—imposing ‘medial neglect.’ The brain thus becomes superunknown relative to itself. Its own astronomical complexity is the barricade that strands metacognitive intuitions with intentionality as traditionally conceived. Anyone disagreeing with this needs to explain how it is human metacognition overcomes this boggling complexity. Otherwise, all the provocative questions raised here remain: Is it simply a coincidence that intentional concepts exhibit such similar patterns of information privation? For instance, is it a coincidence that the curious causal bottomlessness that haunts normativity—the notion of ‘rules’ and ‘ends’ somehow ‘constraining’ via some kind of obscure relation to causal mechanism—also haunts the ‘aiming’ that informs conceptions of representation? Or volition? Or truth?

The Blind Brain Theory says, Nope. If the lack of information, the ‘superunknown,’ is what limits our ability to cognize nature, then it makes sense to assume that it also limits our ability to cognize ourselves. If the lack of information is what prevents us from seeing our way past traditional conceits regarding the world, it makes sense to think it also prevents us from seeing our way past cherished traditional conceits regarding ourselves. If information privation plays any role in ignorance or misconception at all, we should assume that the grandiose edifice of traditional human self-understanding is about to founder in the ongoing informatic Flood…

That what we have hitherto called ‘human’ has been raised upon shades of cognition obscura.

How to Build a First Person (Using only Natural Materials)

by rsbakker

Aphorism of the Day: Birth is the only surrender to fate possible.

.

In film you have the famous ‘establishing shot,’ a brief visual survey, usually a long or medium shot, of the space the ensuing sequence will analyze along more intimate angles. Space, you could say, is the conclusion that comes first, the register that always precedes its analysis. Some directors play with this, continually forcing their audience into the analysis absent any spatial analysand. The viewer is thrown, disoriented as a result. Sometimes directors build outward, using the lure of established space as a kind of narrative instrument. Sometimes they shackle the eye to detail, mechanically denying events their place, and so inciting claustrophobia in the airy void of the theatre. They use the space represented to wage war against the space of representing.

If the same has happened here, it’s been entirely inadvertent. I’m not sure how I’ll look back at this year–this attempt to sketch out ‘post-intentional philosophy.’ It’s been a tremendously creative time, to be sure. A hundred thousand words for the beast that is The Unholy Consult, and easily as much written here. I’m not sure I’ve ever enjoyed such a period of intense creativity. These posts have simply been dropping in my head, one after another, some as long as journal articles, most all of them bristling with detail, jargon, and counterintuitive complexities. When I think about it, I’m blown away that Three Pound Brain has grown the way it has, half-again over last year…

For I wanketh.

Large.

Now I want to think the explanation is simple, that against all reason, I’ve managed to climb into a new space, an undiscovered country. But all I know for sure is that I’m arguing something genuinely new–something genuinely radical. So folly or not, I pursue, run down what seem to be the never-ending permutations of this murderous take on the human soul. We have yet to see what science will make of us. And we have very little reason to believe our hearts won’t be broken the way human hearts are almost always broken when they pitch traditional hope against scientific indifference. Who knows? Three Pound Brain could be the place, the cradle where our most epic delusion dies.

Either way, the time has come to pan back, crank up the depth of field, and finally provide some kind of establishing shot. This ain’t going to be easy–for me or you. At a certain level the formulations are almost preposterously simplistic (a ‘machinology’ as noir-realism, I think, termed it). I’m talking about the brain in exceedingly general terms, after all. I could delve into the (of course stochastic) mechanics in more detail, I suppose, go ‘neuroanatomical’ in an effort to add more empirical plumage. I still intend to write about the elegant way the Blind Brain Theory falls out of Bayesian predictive-coding models of the brain.

But for the nonce, I don’t need to. The apparently insuperable conundrums of the first person, the consciousness we think we have, can be explained using some quite granular structural and developmental assumptions. We just need to turn our normal way of looking at things upside down–to stop viewing our metacognitive image of meaning and agency as some kind of stupendous achievement. Why? Because doing so takes theoretical metacognition at its word, something that cognitive science has shown–quite decisively–to be the province of fools. If anything, the ‘stupendous achievement’ is the one possessing far and away the greatest evolutionary pedigree and utilizing the most neural resources: environmental cognition. Taking this as our baseline, we can begin diagnosing the ancient perplexities of the metacognitive image as the result of informatic occlusion and cognitive overreach.

We could be a kind of dream, you and I, one that isn’t even useful in any recognizable manner. This is where the difficulty lies: the way BBT requires we contravene our most fundamental intuitions.

It’s all about the worst case scenario. Philosophy, to paraphrase Brassier, is no sop to desire. If science stands poised to break us, then thought must submit to this breaking in advance. The world never wants for apologists: there will always be an army of Rosenthals and Badious. Someone needs to think these things, no matter how dehumanizing or alienating they seem to be.  Besides, only those who dare thinking the post-intentional need fear ‘losing’ anything. If meaning and morality are the genuine emergent realities that the vast bulk of thinkers, analytic or continental, assume them to be, they should be able to withstand any sustained attempt to explain them away.

And if not? Well then, welcome to the future.

.

So, how do you build a first person?

Imagine the sum of information, understood in the deliberately vague sense of systematic differences making systematic differences, comprising you and your immediate environment. The holy grail of consciousness research is simply understanding how what you are experiencing this very moment fits into this ‘natural informatic field.’ The brass ring, in other words, is understanding how you qua person reside in you qua organism–that is, explaining how mechanism generates consciousness and intentionality.

Now until recently, science could only track natural processes up to your porch. You qua organism are a mansion of astronomical complexities, and even as modern medicine overran your outer defences, your brain remained an unconquerable citadel, the one place in nature where the old, prescientific games of giving-and-asking-for-reasons could flourish. This is why I continually talk about the ‘bonfire of the humanities,’ the impending collapse of the traditional discourses of the soul. This is why I continually speak of BBT in eschatological terms, pose it as a precursor of the posthuman: if scientifically confirmed, it means that Man-the-meaning-maker is of a piece with Man-the-image-of-God and Man-the-centre-of-the-universe, that noocentrism will join biocentrism and geocentrism in the reliquary of human intellectual conceit and folly. And this is why I mourn ‘Akratic Culture,’ society fissured by the scission of knowledge and experience, with managerial powers exploiting the mechanistic efficiencies of the former, and the client masses fleeing into the intentional opacities of the latter, seeking refuge in vacant affirmation and subreptive autonomy.

So how does the soul fit into the natural informatic field? BBT argues that the best way to conceive the difference between the first and third person is in terms of informatic neglect. Since the structure and function of the brain are dedicated to reliably modelling the structure and function of its environment, the brain remains that part of the environment that it cannot reliably model. BBT terms the modelling structure and function ‘medial’ and the modelled structure and function ‘lateral.’ The brain’s inability to model its modelling, it terms medial neglect. Medial neglect simply means the brain cannot cognize itself as a brain, and so must cognize itself otherwise. This ‘otherwise’ is what we call the soul, mind, consciousness, the first-person, being-in-the-world, etc.

So consider a perspective on a brain:

[Figure: Diagram of a brain]

Note that the target here is your perspective on the diagrammed brain, not the brain itself. Since the structure and function of your brain are dedicated to modelling the structure and function of your environment, the modelling nowhere appears within the modelled as anything resembling the modelled, even though we know the brain modelling is as much a brain as the brain modelled. The former, rather, provides the ‘occluded frame’ of the latter. At any given moment your perspective ‘hangs,’ as it were, outside of everything. You can pause and reflect on your perspective, of course, model your modelling, as say, something like this:

[Figure: Diagram of a brain, perspective 1]

but only from the standpoint of another ‘occluded frame,’ the oblivion of medial neglect. This second diagram, in other words, can only model the medial, neurofunctional information neglected in the first by once again neglecting that information. No matter how many times we stack these diagrams, how far we press the Rylean regress, we will still be stranded with medial neglect, the ‘unframed frame’ of the first person. The reason for this, it is important to note, is purely mechanical as opposed to semantic: the machinery of modelling simply cannot model itself as it models.

But even though medial neglect means thoroughgoing neurofunctional occlusion–brains only ever appear within the first person–these diagrams show the occlusion is by no means complete. As mentioned above, the brain’s inability to model itself as a brain (another natural mechanism in its environment) means it must model itself as a ‘perspective,’ something at once situated within its environment, and somehow mysteriously hanging outside of it–both local and nonlocal.

Many of the apparent peculiarities belonging to consciousness and intentionality as we intuit them, on the BBT account, turn on either medial neglect directly or one of a number of other structural and developmental confounds such as brain complexity, evolutionary caprice, and access invariance. The brain, unable to model itself as a brain, is forced to rely on what little metacognitive information its structure and evolutionary development afford.

This is where informatic neglect becomes a problem more generally, which is to say, over and above the problems posed by medial neglect in particular. We now know human cognition is fractionate, a collection of situation-specific problem-solving devices, and yet we have no direct awareness of relying on anything save a singular, universal capacity for problem-solving. We regularly rely on dubious information, resort to the wrong device on the wrong occasion, entirely convinced of the justness of our cause, the truth of our theory, or what have you.

Mistakes like these and others reveal the profound and peculiar structural role informatic neglect plays in conscious experience. In the absence of information pertaining to our (medial) causal relation to our environment, we experience aboutness. In the absence of discriminations (in the absence of information) we experience wholes. In the absence of information regarding the insufficiency of information, we presume sufficiency.

But the most difficult-to-grasp structural quirk of informatic neglect has to be the ‘local nonlocality’ we encountered above, what I’ve been calling asymptosis, the fact that the various limits of cognitive and perceptual modalities cannot figure within those cognitive and perceptual modalities. As mechanical, no neural subsystem can model its modelling as it models. This is why, for instance, you cannot see the limits of your visual field–or why, in other words, the boundary of your visual field is asymptotic.
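(A crude computational analogy, again only my own gloss: a sensor array whose representational vocabulary is exhausted by its own cells. The boundary of the array never shows up as a reading within the array.)

# Asymptosis in miniature: a 'visual field' that cannot see its own edge.
FIELD = {i: f"patch-{i}" for i in range(5)}   # cells 0..4

def see(i):
    # Inside the field: a content. Outside: not 'darkness', not 'edge',
    # simply no deliverance at all. No index picks out the boundary.
    return FIELD.get(i)

print(see(2))   # 'patch-2' -- a visible content
print(see(7))   # None -- but nothing in the field reports this absence;
                # the field contains patches, never its own limit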

So in the diagrams above, you see a brain and none of the neural machinery responsible for that seeing primarily because of informatic neglect. It is you, a whole (and autonomous) person, seeing that brain and not a fractionate conglomerate of subpersonal cognitive mechanisms because of informatic neglect. Likewise, this metacognitive appraisal that it is ‘you’ looking at a brain is self-evident because of informatic neglect: you have no information to the contrary. And lastly, the ‘frame’ (the medial neurofunctionality) of what you see constitutively outruns what you see because, once again, of informatic neglect.

This is all just to say that the intentional, holistic, sufficient, and asymptotic structure of the first person simply follows from the fact that the brain is biomechanical.

This claim may seem innocuous, but it is big, I assure you, monstrously big. Why? Because, aside from at long last providing a parsimonious theoretical means of naturalizing consciousness and intentionality, it also argues that they (as intuitively conceived) are largely cognitive illusions, kinds of ‘natural anosognosias’ that we cannot but suffer given the constraints and confounds facing neural metacognition. It means that the very form of ‘subjectivity’ (and not merely the ‘self’) actually is a kind of dream.

Make no mistake, if the Blind Brain Theory (or something like it) turns out to be correct, it will be the last theory in the history of philosophy as traditionally conceived. Why? Because BBT is as much a translation manual as a theory, a potential way to transform the great intentional problems of philosophy into the mechanical subject matter of cognitive neuroscience.

Trust me, I know how out-and-out preposterous this sounds… But as I said above, the gates of the soul have been battered down.

Since the devil is in the details, it might pay to finesse this sketch with more information. So, to return to what I termed the natural informatic field above: the sum of all the static and dynamic systematic differences that constitute you qua organism. How specifically does informatic neglect allow us to plug the phenomenal/intentional into the physical/mechanical?

From a life sciences perspective, the natural informatic field consists of externally-related structures and irreflexive processes. Our brain is that portion of the Field biologically adapted to model and interact with the rest of the Field (the environment) via information collected from the Field. The conscious subsystem of the brain is that portion of the Field biologically adapted to model and interact with the rest of the Field via information collected from the brain. All we need ask is what information is available to what cognitive resources as the conscious subsystem generates its model. In a sense, all we need do is subtract varieties and densities of information from the pot of overall information. I know the conceptual jargon makes this all seem dreadfully complicated, but it really is this simple.

So, what information can the conscious subsystem of the brain provide what cognitive resources in the course of generating its model? No causal information regarding its own neurofunctionality, as we have seen. The model, therefore, will have to be medially acausal. No temporal information regarding its own neurofunctionality either. The model, therefore, will have to be medially atemporal. Minimal information regarding its own structural complexity, given the constraints and confounds mentioned above. The model, therefore, will be structurally undifferentiated relative to environmental models. Minimal information regarding its own informatic and cognitive limitations, once again, given the aforementioned constraints and confounds. The model, therefore, will be both canonical (because of sufficiency) and intractable (because incompatible with existing, environmentally-oriented cognitive resources).
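(To make the subtraction literal, a toy sketch, illustrative only, with every label in it hypothetical: represent a total state with its medial dimensions included, then project them away and watch distinct mechanical states fuse into one apparently simple ‘experience.’)

# 'Subtracting information from the pot': total states carry content,
# causal provenance, processing time, and substructure; metacognition
# gets only the content.
states = [
    ("red apple", "cause-A", 0.134, ("edge", "hue", "binding")),
    ("red apple", "cause-A", 0.167, ("edge", "hue", "binding")),
    ("red apple", "cause-B", 0.139, ("edge", "hue", "binding")),
]

def metacognize(state):
    content, cause, time, parts = state
    # Medial neglect: the causal, temporal, and structural dimensions
    # are folded away; only the content survives the projection.
    return content

print({metacognize(s) for s in states})
# {'red apple'} -- three mechanically distinct states, metacognized as
# one acausal, atemporal, structurally simple experience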

Now the key principle that seems to make this work is the way neglect leverages varieties of identity. BBT, in effect, interprets the appearance of consciousness as a kind of ‘flicker fusion writ large.’ In the absence of distinctions, the brain (for reasons that will fall out of any successful scientific theory of consciousness proper) conjures experiential continuities. Occlusion equals identity, according to BBT.

What makes the first person as it appears so peculiar from the standpoint of environmental cognition has to do with ‘informatic captivity’ or access invariance, our brain’s inability to vary its informatic relationship to itself the way it can its environments. So, on the BBT account, the ‘unity of consciousness’ that so impressed Descartes is simply of a piece with the way, in the absence of information, we confuse aggregates for individuals more generally, as when we confuse ants on the sidewalk with spilled paint, for instance. But where cognition can vary its access and so accumulate the information required to revise ‘spilled paint’ into ‘swarming ants’ in our environment, metacognition is trapped with the spilled paint of the ‘soul.’ The first person appears to be an internally-related ‘whole,’ in other words, simply because we lack the information to cognize it otherwise. The holistic consciousness we think we enjoy, in other words, is a kind of cartoon.

(This underscores the way the external-relationality characteristic of our environment is an informatic and cognitive achievement, something the human brain has evolved to model and exploit. On the BBT account, internal-relationality is generally a symptom of missing information, a structurally and developmentally imposed loss of dimensionality.)

But what makes the first person so intractable, a hitherto inexhaustible source of perplexity, only becomes apparent when we consider the diachronic dimension of this ‘fusion in occlusion,’ the way neglect winnows the implacable irreflexivity of the natural into the labile reflexivity of the mental. The conscious system’s inability to model its modelling as it models applies to temporal modelling as well. The temporal system can no more ‘time its timing’ than the visual system can ‘see its seeing.’ This means that metacognition has no way to intuit the ‘time of timing,’ leading, once again, to default identity and all the paradoxes belonging to the ‘now.’ The temporal field is ‘locally nonlocal’ or asymptotic, muddy and fleeting yet apparently monolithic and self-identical.
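(In toy form, my illustration and nothing more: give a process a record of the world’s times but no timestamps for its own takings, and environmental succession resolves while the takings fuse into a single ‘now.’)

# Flicker fusion writ large: the world is timed, the timing is not.
frames = [("t0", "ball at x=0"), ("t1", "ball at x=1"), ("t2", "ball at x=2")]

def experience(frames):
    world = [content for _, content in frames]  # environmental times tracked
    # No second-order timestamps exist for the trackings themselves,
    # so every taking carries the same undifferentiated index:
    nows = {"now" for _ in frames}
    return world, nows

print(experience(frames))
# (['ball at x=0', 'ball at x=1', 'ball at x=2'], {'now'})
# The ball's succession is resolved; the experiencing of it is fused.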

So, in a manner similar to the way information privation collapses external-relationality into apparent internal-relationality, it also collapses irreflexivity into apparent reflexivity. Conscious cognition can track environmental irreflexivity readily enough, but it cannot track this tracking and so intuits otherwise. The first person cartoon suffers the diachronic hallucination of fundamental continuity in time. Once again metacognition mistakes oblivion (or less dramatically, incapacity) for identity.

To get a sense of how radical this is one need only consider the very paradigm of atemporal reflexivity in philosophy, the a priori. On the BBT account, what we call the a priori is what algorithmic nature looks like from the inside. No matter how much content you hollow out of your formalisms, you are still talking about something magical, still begging what Eugene Wigner famously called ‘the unreasonable effectiveness of mathematics,’ the question of why an externally-related, irreflexive nature should prove so amenable to an internally-related, reflexive mathematics. BBT answers: because mathematics is itself natural, its most systematically ‘viral’ expression. It collapses the disjunct, asserts continuity where the tradition perceives the inexplicable. Mathematics only seems ‘supra-natural’ because until recently it could only be explored performatively in the ‘laboratory’ of our own brains, and because of the way metacognition shears away its informatic dimensions. Given the illusion of sufficiency, the a priori cartoon strikes us as the efficacious source of a special, transcendental form of cognition. Only now, as computational complexities force mathematicians and physicists to rely more and more on machines, mechanical implementations that (by some cosmic coincidence) are entirely capable of performing ‘semantic’ operations without the least whiff of ‘understanding,’ are we in a position to entertain the possibility that ‘formal semantics’ are simply another ghost in the human machine.

And the list of radical reinterpretations goes on–after a year of manic exploration and elaboration I feel like I’ve scarcely scratched the surface. I could use some help, if anyone is so inclined!

So with that in ‘mind,’ I leave you with the following establishing shot: Consciousness as you conceive/perceive it this very moment now is the tissue of neglect, painted on the same informatic canvas with the same cognitive brushes as our environment, only blinkered and impressionistic in the extreme. Reflexivity, internal-relationality, sufficiency, and intentionality can all be seen as hallucinatory artifacts of informatic closure and scarcity, the result of a brain forced to make the most with the least using only the resources it has at hand. This is a picture of the first person as an informatically integrated series of scraps of access, forced by structural bottlenecks to profoundly misrecognize itself as something somehow hooked upon the transcendental, self-sufficient and whole….

To see you.