Three Pound Brain

No bells, just whistling in the dark…

The Real Problem with ‘Correlation’

by rsbakker


Since presuming that intentional cognition can get behind intentional cognition belongs to the correlation problem, any attempt to understand the problem requires we eschew theoretical applications of intentional idioms. Getting a clear view, in other words, requires that we ‘zombify’ human cognition, adopt a thoroughly mechanical vantage that simply ignores intentionality and intentional properties. As it so happens, this is the view that commands whatever consensus one can find regarding these issues. Though the story I’ll tell is a complicated one, it should also be a noncontroversial one, at least insofar as it appeals to nothing more than naturalistic platitudes.

I first started giving these ‘zombie interpretations’ of different issues in philosophy and cognitive science a few years back.[1] Everyone in cognitive science agrees that consciousness and cognition turn on the physical somehow. This means that purely mechanical descriptions of the activities typically communicated via intentional idioms have to be relevant somehow (so long as they are accurate, at least). The idea behind ‘zombie interpretation’ is to explain as much as possible using only the mechanistic assumptions of the biological sciences—to see how far generalizing over physical processes can take our perennial attempt to understand meaning.

Zombies are ultimately only a conceit here, a way for the reader to keep the ‘explanatory gap’ clearly in view. In the institutional literature, ‘p-zombies’ are used for a variety of purposes, most famously to anchor arguments against physicalism. If a complete physical description of the world need not include consciousness, then the brute fact of consciousness implies that physicalism is incomplete. However, since this argument itself turns on the correlation problem, it will not concern us here. The point, oddly enough, is to adhere to an explanatory domain where we all pretty much agree, to speculate using only facts and assumptions belonging to the biological sciences—the idea being, of course, that these facts and assumptions are ultimately all that’s required. Zombies allow us to do that.


So then, devoid of intentionality, zombies lurch through life possessing only contingent, physical comportments to their environment. Far from warehousing ‘representations’ possessing inexplicable intentional properties, their brains are filled with systems that dynamically interact with their world, devices designed to isolate select signals from environmental noise. Zombies do not so much ‘represent their world’ as possess statistically reliable behavioural sensitivities to their environments.

So where ‘subjects’ possess famously inexplicable semantic relations to the world, zombies possess only contingent, empirically tractable relations to the world. Thanks to evolution and learning, they just happen to be constituted such that, when placed in certain environments, gene conserving behaviours tend to reliably happen. Where subjects are thought to be ‘agents,’ perennially upstream sources of efficacy, zombies are components, subsystems at once upstream and downstream the superordinate machinery of nature. They are astounding subsystems to be sure, but they are subsystems all the same, just more nature—machinery.

What makes them astounding lies in the way their neurobiological complexity leverages behaviour out of sensitivity. Zombies do not possess distributed bits imbued with the occult property of aboutness; they do not model or represent their worlds in any intentional sense. Rather, their constitution lets ongoing environmental contact tune their relationship to subsequent environments, gradually accumulating the covariant complexities required to drive effective zombie behaviour. Nothing more is required. Rather than possessing ‘action enabling knowledge,’ zombies possess behaviour enabling information, where ‘information’ is understood in the bald sense of systematic differences making systematic differences.

A ‘cognitive comportment,’ as I’ll use it here, refers to any complex of neural sensitivities subserving instances of zombie behaviour. It comes in at least two distinct flavours: causal comportments, where neurobiology is tuned to what generally makes what happen, and correlative comportments, where zombie neurobiology is tuned to what generally accompanies what happens. Both systems allow our zombies to predict and systematically engage their environments, but they differ in a number of crucial respects. To understand these differences we need some way of understanding what positions zombies upstream their environments—or what leverages happy zombie outcomes.

The zombie brain, much like the human brain, confronts a dilemma. Since all perceptual information consists of sensitivity to selective effects (photons striking the eye, vibrations the ear, etc.), the brain needs some way of isolating the relevant causes of those effects (a rushing tiger, say) to generate the appropriate behavioural response (trip your mother-in-law, then run). The problem, however, is that these effects are ambiguous: a great many causes could be responsible. The brain is confronted with a version of the inverse problem, what I will call the medial inverse problem for reasons that will soon be clear. Since it has nothing to go on but more effects, which are themselves ambiguous, how could it hope to isolate the causes it needs to survive?

By allowing sensitivities to discrepancies between the patterns initially cued and subsequent sensory effects to select—and ultimately shape—the patterns subsequently cued. As it turns out, zombie brains are Bayesian brains.[2] Allowing discrepancies to both drive and sculpt the pattern-matching process automatically optimizes the process, allowing the system to bootstrap wide-ranging behavioural sensitivities to environments in turn. In the intentionality laden idiom of theoretical neuroscience, the brain is a ‘prediction error minimization’ machine, continually testing occurrent signals against ‘guesses’ (priors) triggered by earlier signals. Success (discrepancy minimization) quite automatically begets success, allowing the system to continually improve its capacity to make predictions—and here’s the important thing—using only sensory signals.[3]
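The discrepancy-driven loop sketched above can be caricatured in a few lines of code. What follows is only an illustrative toy (the function name, parameters, and numbers are mine, purely hypothetical, and no claim about neural implementation): a single running ‘guess’ is repeatedly nudged toward incoming sensory samples by a fraction of the prediction error, a simple delta rule that, as the text notes, improves its predictions using only the sensory signal.

```python
import random

def minimize_prediction_error(signal, learning_rate=0.1):
    """Toy 'prediction error minimization': one estimate (the 'prior')
    is repeatedly nudged toward incoming sensory samples by a fraction
    of the discrepancy between prediction and sample (a delta rule)."""
    estimate = 0.0  # initial guess, before any sensory contact
    for sample in signal:
        error = sample - estimate          # discrepancy (prediction error)
        estimate += learning_rate * error  # nudge the guess toward the data
    return estimate

random.seed(0)
# Noisy sensory effects of a hidden environmental cause centred at 5.0
signal = [5.0 + random.gauss(0, 1) for _ in range(500)]
print(round(minimize_prediction_error(signal), 2))  # settles near 5.0
```

Nothing here ‘knows’ what the hidden cause is; discrepancy minimization alone drags the system into statistical register with it, which is the only sense of ‘tracking’ the zombie story requires.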

But isolating the entities/behaviour causing sensory effects is one thing; isolating the entities/behaviour causing those entities/behaviour is quite another. And it’s here that the chasm between causal cognition and correlative cognition yawns wide. Once our brain’s discrepancy minimization processes isolate the relevant entities/behaviours—solve the medial inverse problem—the problem of prediction simply arises anew. It’s not enough to recognize avalanches as avalanches or tigers as tigers, we have to figure out what they will do. The brain, in effect, faces a second species of inverse problem, what might be called the lateral inverse problem. And once again, it’s forced to rely on sensitivities to patterns (to trigger predictions to test against subsequent signals, and so on).[4]

Nature, of course, abounds with patterns. So the problem is one of tuning a Bayesian subsystem like the zombie brain to the patterns (such as ‘avalanche behaviour’ or ‘tiger behaviour’) it needs to engage its environments given only sensory effects. The zombie brain, in other words, needs to wring behavioural sensitivities to distal processes out of a sensitivity to proximal effects. Though they are adept at comporting themselves to what causes their sensory effects (to solving the medial inverse problem), our zombies are almost entirely insensitive to the causes behind those causes. The etiological ambiguity behind the medial inverse problem pales in comparison to the etiological ambiguity comprising the lateral inverse problem, simply because sensory effects are directly correlated to the former, and only indirectly correlated to the latter. Given the limitations of zombie cognition, in other words, zombie environments are ‘black box’ environments, effectively impenetrable to causal cognition.

Part of the problem is that zombies lack any ready means of distinguishing causality from correlation on the basis of sensory information alone. Not only are sensory effects ambiguous between causes, they are ambiguous between causes and correlations as well. Cause cannot be directly perceived. A broader, engineered signal and greater resources are required to cognize its machinations with any reliability—only zombie science can furnish zombies with ‘white box’ environments. Fortunately for their prescientific ancestors, evolution only required that zombies solve the lateral inverse problem so far. Mere correlations, despite burying the underlying signal, remain systematically linked to that signal, allowing for a quite different way of minimizing discrepancies.
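The claim that correlation and causation are indistinguishable on the basis of the signal alone can be illustrated with a toy simulation (again my own sketch; all names and numbers are hypothetical). Two ‘sensory effects’ below never influence one another, yet they correlate strongly because they share a hidden common cause; nothing in the correlational stream itself distinguishes this situation from direct causation.

```python
import random

def simulate(n=10000, seed=1):
    """Generate two streams A and B that share a hidden cause Z but never
    causally interact. A correlational learner sees only (A, B) pairs."""
    rng = random.Random(seed)
    a_vals, b_vals = [], []
    for _ in range(n):
        z = rng.gauss(0, 1)                   # hidden common cause
        a_vals.append(z + rng.gauss(0, 0.5))  # effect A of Z
        b_vals.append(z + rng.gauss(0, 0.5))  # effect B of Z
    return a_vals, b_vals

def correlation(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

a, b = simulate()
print(round(correlation(a, b), 2))  # strong correlation, zero causation
```

Only by intervening on A and watching B fail to move, or by measuring Z directly, could the hidden structure be exposed; that broader, engineered signal is precisely what ‘zombie science’ supplies and what unaided zombie cognition lacks.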

Zombies, once again, are subsystems whose downstream ‘componency’ consists in sensitivities to select information. The amount of environmental signal that can be filtered from that information depends on the capacity of the brain. Now any kind of differential sensitivity to an environment serves organisms in good stead. To advert to the famous example, frogs don’t need the merest comportment to fly mechanics to catch flies. All they require is a select comportment to select information reliably related to flies and fly behaviour, not to what constitutes flies and fly behaviour. And if a frog did need as much, then it would have evolved to eat something other than flies. Simple, systematic relationships are not only all that is required to solve a great number of biological problems, they are very often the only way those problems can be solved, given evolutionary exigencies. This is especially the case with complicated systems such as those comprising life.

So zombies, for instance, have no way of causally cognizing other zombies. They likewise have no way of causally cognizing themselves, at least absent the broader signal and greater computational resources provided by zombie science. As a result, they possess at best correlative comportments both to each other and to themselves.


So what does this mean? What does it mean to solve systems on the basis of inexpensive correlative comportments as opposed to far more expensive causal comportments? And more specifically, what does it mean to be limited to extreme versions of such comportments when it comes to zombie social cognition and metacognition?

In answer to the first question, at least three interrelated differences can be isolated:

Unlike causal (white box) comportments, correlative (black box) comportments are idiosyncratic. As we saw above, any number of behaviourally relevant patterns can be extracted from sensory signals. How a particular problem is solved depends on evolutionary and learning contingencies. Causal comportments, on the other hand, involve behavioural sensitivity to the driving environmental mechanics. They turn on sensitivities to upstream systems that are quite independent of the signal and its idiosyncrasies.

Unlike causal (white box) comportments, correlative (black box) comportments are parasitic, or differentially mediated. To say that correlative comportments are ‘parasitic’ is to say they depend upon occluded differential relations between the patterns extracted from sensory effects and the environmental mechanics they ultimately solve. Frogs, once again, need only a systematic sensory relation to fly behaviour, not fly mechanics, which they can neglect, even though fly mechanics drives fly behaviour. A ‘black box solution’ serves. The patterns available in the sensory effects of fly behaviour are sufficient for fly catching given the cognitive resources possessed by frogs. Correlative comportments amount to the use of ‘surface features’—sensory effects—to anticipate outcomes driven by otherwise hidden mechanisms. Causal comportments, which consist of behavioural sensitivities (also derived from sensory effects) to the actual mechanics involved, are not parasitic in this sense.

Unlike causal (white box) comportments, correlative (black box) comportments are ecological, or problem relative. Both causal comportments and correlative comportments are ‘ecological’ insofar as both generate solutions on the basis of finite information and computational capacity. But where causal comportments solve the lateral inverse problem via genuine behavioural sensitivities to the mechanics of their environments, correlative comportments (such as that belonging to our frog) solve it via behavioural sensitivities to patterns differentially related to the mechanics of their environments. Correlative comportments, as we have seen, are idiosyncratically parasitic upon the mechanics of their environments. The space of possible solutions belonging to any correlative comportment is therefore relative to the particular patterns seized upon, and their differential relationships to the actual mechanics responsible. Different patterns possessing different systematic relationships will possess different ‘problem ecologies,’ which is to say, different domains of efficacy. Since correlative comportments are themselves causal, however, causal comportments apply to all correlative domains. Thus the manifest ‘objectivity’ of causal cognition relative to the ‘subjectivity’ of correlative cognition. 

So far, so good. Correlative comportments are idiosyncratic, parasitic, and ecological in a way that causal comportments are not. In each case, what distinguishes causal comportments is an actual behavioural sensitivity to the actual mechanics of the system. Zombies are immersed in potential signals, awash in causal differences, information, that could make a reproductive difference. The difficulties attendant upon the medial and lateral inverse problems, the problems of what and what-next, render the extraction of causal signals enormously difficult, even when the systems involved are simple. The systematic nature of their environments, however, allows them to use behavioural sensitivities as ‘cues,’ signals differentially related to various systems, to behaviourally interact with those systems despite the lack of any behavioural sensitivity to their particulars. So in research on contingencies, for instance, the dependency of ‘contingency inferences’ on ‘sampling,’ the kinds of stimulus input available, has long been known, as have the kinds of biases and fallacies that result. Only recently, however, have researchers realized the difficulty of accurately making such inferences given the kinds of information available in vivo, and the degree to which we out and out depend on so-called ‘pseudocontingency heuristics’ [5]. Likewise, research into ‘spontaneous explanation’ and ‘essentialism,’ the default attribution of intrinsic traits and capacities in everyday explanation, clearly suggests that low-dimensional opportunism is the rule when it comes to human cognition.[6] The more we learn about human cognition, in other words, the more obvious the above story becomes.

So then what is the real problem with correlation? The difficulty turns on the fact that black box cognition, solving systems via correlative cues, can itself only be cognized in black box terms.

Given their complexity, zombies are black boxes to themselves as much as to others. And this is what has cued so much pain behaviour in so many zombie philosophers. As a black box, zombies cannot cognize themselves as black boxes: the correlative nature of their correlative comportments utterly escapes them (short, once again, the information provided by zombie science). Zombie metacognition is blind to the structure and dynamics of zombie metacognition, and thus prone to what might be called ‘white box illusions.’ Absent behavioural sensitivity to the especially constrained nature of their correlative comportments to themselves, insufficient data is processed in the same manner as sufficient data, thus delivering the system to ‘crash space,’ domains rendered intractable by the systematic misapplication of tools adapted to different problem ecologies. Unable to place themselves downstream their incapacity, they behave as though no such incapacity exists, suffering what amounts to a form of zombie anosognosia.

Perhaps this difficulty shouldn’t be considered all that surprising: after all, the story told here is a white box story, a causal one, and therefore one requiring extraction from the ambiguities of effects and correlations. The absence of this information effectively ‘black-boxes’ the black box nature of correlative cognition. Zombies cued to solve for that efficacy accordingly run afoul of the problem of processing woefully scant data as sufficient, black boxes as white boxes, thus precluding the development of effective, behavioural sensitivities to the actual processes involved. The real Problem of Correlation, in other words, is that correlative modes systematically confound cognition of correlative comportments. Questions regarding the nature of our correlative comportments simply do not lie within the problem space of our correlative comportments—and how could they, when they’re designed to solve absent sensitivity to what’s actually going on?

And this is why zombies not only have philosophers, they have a history of philosophy as well. White box illusions have proven especially persistent, despite the spectacular absence of systematic one-to-one correspondences between the apparent white box that zombies are disposed to report as ‘mind’ and the biological white box emerging out of zombie science. Short any genuine behavioural sensitivity to the causal structure of their correlative comportments, zombies can at most generate faux-solutions, reports anchored to the systematic nature of their conundrum, and nothing more. Like automatons, they endlessly report low-dimensional, black box posits the way they report high-dimensional environmental features—and here’s the thing—using the very same terms that humans use. Zombies constantly utter terms like ‘minds,’ ‘experiences,’ ‘norms,’ and so on. Zombies, you could say, possess a profound disposition to identify themselves and each other as humans.

Just like us.




[1] See Davidson’s Fork: An Eliminativist Radicalization of Radical Interpretation, The Blind Mechanic, The Blind Mechanic II: Reza Negarestani and the Labour of Ghosts, Zombie Interpretation: Eliminating Kriegel’s Asymmetry Argument, and Zombie Mary versus Zombie God and Jesus: Against Lawrence Bonjour’s “Against Materialism”

[2] For an overview of Bayesian approaches, see Andy Clark, “Whatever next? Predictive brains, situated agents, and the future of cognitive science.”

[3]  The following presumes an ecological (as opposed to an inferential) understanding of the Bayesian brain. See Nico Orlandi, “Bayesian perception is ecological perception.”

[4] Absent identification there is no possibility of prediction. The analogy between this distinction and the ancient distinction between being and becoming (or even the modern one between the transcendental and the empirical) is interesting to say the least.

[5] See Klaus Fiedler et al, “Pseudocontingencies: Logically Unwarranted but Smart Inferences.”

[6] See Andrei Cimpian, “The Inherence Heuristic: Generating Everyday Explanations,” or Cimpian and Salomon, “The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.”

How Science Reveals the Limits of ‘Nooaesthetics’ (A Reply to Alva Noë)

by rsbakker

As a full-time artist (novelist) who long ago gave up on the ability of traditional aesthetics (or as I’ll refer to it here, ‘nooaesthetics’) to do much more than recontextualize art in ways that yoke it to different ingroup agendas, I look at the ongoing war between the sciences and the scholarly traditions of the human as profoundly exciting. The old, perpetually underdetermined convolutions are in the process of being swept away—and good riddance! Alva Noë, however, sees things differently.

So much of rhetoric turns on asking only those questions that flatter your view. And far too often, this amounts to asking the wrong questions, in particular, those questions that only point your way. All the other questions, you pass over in strategic silence. Noë provides a classic example of this tactic in “How Art Reveals the Limits of Neuroscience,” his recent critique of ‘neuroaesthetics’ in The Chronicle of Higher Education.

So for instance, it seems pretty clear that art is a human activity, a quintessentially human activity according to some. As a human activity, it seems pretty clear that our understanding of art turns on our understanding of humanity. As it turns out, we find ourselves in the early stages of the most radical revolution in our understanding of the human ever… Period. So it stands to reason that a revolution in our understanding of the human will amount to a revolution in our understanding of human activities—such as art.

The problem with revolutions, of course, is that they involve the overthrow of entrenched authorities, those invested in the old claims and the old ways of doing business. This is why revolutions always give rise to apologists, to individuals possessing the rhetorical means of rationalizing the old ways, while delegitimizing the new.

Noë, in this context at least, is pretty clearly the apologist, applying words as poultices, ways to soothe those who confuse old, obsolete necessities with absolute ones. He could have framed his critique of neuroaesthetics in this more comprehensive light, but that would have the unwelcome effect of raising other questions, the kind that reveal the poverty of the case he assembles. The fact is, for all the purported shortcomings of neuroaesthetics he considers, he utterly fails to explain why ‘nooaesthetics,’ the analysis, interpretation, and evaluation of art using the resources of the tradition, is any better.

The problem, as Noë sees it, runs as follows:

“The basic problem with the brain theory of art is that neuroscience continues to be straitjacketed by an ideology about what we are. Each of us, according to this ideology, is a brain in a vat of flesh and bone, or, to change the image, we are like submariners in a windowless craft (the body) afloat in a dark ocean of energy (the world). We know nothing of what there is around us except what shows up on our internal screens.”

As a description of parts of neuroscience, this is certainly the case. But as a high-profile spokesperson for enactive cognition, Noë knows full well that the representational paradigm is a fiercely debated one in the cognitive sciences. But it suits his rhetorical purposes to choose the most theoretically ill-equipped foes, because, as we shall see, his theoretical equipment isn’t all that capable either.

As a one-time Heideggerean, I recognize Noë’s tactics as my own from way back when: charge your opponent with presupposing some ‘problematic ontological assumption,’ then show how this or that cognitive register is distorted by said assumption. Among the most venerable of those problematic assumptions has to be the charge of ‘Cartesianism,’ one that has become so overdetermined as to be meaningless without some kind of qualification. Noë describes his understanding as follows:

“Crucially, this picture — you are your brain; the body is the brain’s vessel; the world, including other people, are unknowable stimuli, sources of irradiation of the nervous system — is not one of neuroscience’s findings. It is rather something that has been taken for granted by neuroscience from the start: Descartes’s conception with a materialist makeover.”

In cognitive science circles, Noë is notorious for the breezy way he consigns cognitive scientists to his ‘Cartesian box.’ For a fellow anti-representationalist such as myself, I often find his disregard for the nuances posed by his detractors troubling. Consider:

“Careful work on the conceptual foundations of cognitive neuroscience has questioned the plausibility of straightforward mind-brain reduction. But many neuroscientists, even those not working on such grand issues as the nature of consciousness, art, and love, are committed to a single proposition that is, in fact, tantamount to a Cartesian idea they might be embarrassed to endorse outright. The momentous proposition is this: Every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in your brain. We may not know how the brain manages this feat, but, so it is said, we are beginning to understand. And this new knowledge — of how the organization of bits of matter inside your head can be your personality, thoughts, understanding, wonderings, religious or sexual impulses — is surely among the most exciting and important in all of science, or so it is claimed.”

I hate to say it, but this is a mischaracterization. One has to remember that before cognitive science, theory was all we had when it came to the human. Guesswork, profound to the extent that we consider ourselves profound, but guesswork all the same. Cognitive science, in its many-pronged attempt to scientifically explain the human, has inherited all this guesswork. What Noë calls ‘careful work’ simply refers to his brand of guesswork, enactive cognition, and its concerns, like the question of how the ‘mind’ is related to the ‘brain,’ are as old as the hills. ‘Straightforward mind-brain reduction,’ as he calls it, has always been questioned. This mystery is a bullet that everyone in the cognitive sciences bites in some way or another. The ‘momentous proposition’ that the majority of neuroscientists assume isn’t that “[e]very thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition is in [our] brain,” but rather that every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition involves our brain. Noë’s Cartesian box assumption is nowhere so simple or so pervasive as he would have you believe.

He knows this, of course, which is why he devotes the next paragraph to dispatching those scientists who want (like Noë himself does, ultimately) to have it both ways. He needs his Cartesian box to better frame the contest in clear-cut ‘us against them’ terms. The fact that cognitive science is a muddle of theoretical dissension—and moreover, that it knows as much—simply does not serve his tradition redeeming narrative. So you find him claiming:

“The concern of science, humanities, and art, is, or ought to be, the active life of the whole, embodied, environmentally and socially situated animal. The brain is necessary for human life and consciousness. But it can’t be the whole story. Our lives do not unfold in our brains. Instead of thinking of the Creator Brain that builds up the virtual world in which we find ourselves in our heads, think of the brain’s job as enabling us to achieve access to the places where we find ourselves and the stuff we share those places with.”

These, of course, are platitudes. In philosophical debates, when representationalists critique proponents of embodied or enactive cognition like Noë, they always begin by pointing out their agreement with claims like these. They entirely agree that environments condition experience, but disagree (given ‘environmentally off-line’ phenomena such as mental imagery or dreams) that they are directly constitutive of experience. The scientific view is de facto a situated view, a view committed to understanding natural systems in context, as contingent products of their environments. As it turns out, the best way to do this involves looking at these systems mechanically, not in any ‘clockwork’ deterministic sense, but in the far richer sense revealed by the life sciences. To understand how a natural system fits into its environment, we need to understand it, statistically if not precisely, as a component of larger systems. The only way to do this is to figure out how, as a matter of fact, it works, which is to say, to understand its own components. And it just so happens that the brain is the most complicated machine we have ever encountered.

The overarching concern of science is always the whole; it just so happens that the study of minutiae is crucial to understanding the whole. Does this lead to institutional myopia? Of course it does. Scientists are human like anyone else, every bit as prone to map local concerns across global ones. The same goes for English professors and art critics and novelists and Noë. The difference, of course, is the kind of cognitive authority possessed by scientists. Where the artistic decisions I make as a novelist can potentially enrich lives, discoveries in science can also save them, perhaps even create new forms of life altogether.

Science is bloody powerful. This, ultimately, is what makes the revolution in our human self-understanding out and out inevitable. Scientific theory, unlike theory elsewhere, commands consensus, because scientific theory, unlike theory elsewhere, reliably provides us with direct power over ourselves and our environments. Scientific understanding, when genuine, cannot but revolutionize. Nooaesthetic understanding, like religious or philosophical understanding, simply has no way of arbitrating its theoretical claims. It is, compared to science at least, toothless.

And it always has been. Only the absence of any real scientific understanding of the human has allowed us to pretend otherwise all these years, to think our armchair theory games were more than mere games. And that’s changing.

So of course it makes sense to be wary of scientific myopia, especially given what science has taught us about our cognitive foibles. Humans oversimplify, and science, like art and traditional aesthetics, is a human enterprise. The difference is that science, unlike traditional aesthetics, revolutionizes our collective understanding of ourselves and the world.

The very reason we need to guard against scientific myopia, in other words, is also the very reason why science is doomed to revolutionize the aesthetic. We need to be wary of things like Cartesian thinking simply because it really is the case that our every thought, feeling, experience, impression, value, argument, emotion, attitude, inclination, belief, desire, and ambition turns on our biology in some fundamental respect. The only real question is how.

But Noë is making a far different and far less plausible claim: that contemporary neuroscience has no place in aesthetics.

“Neuroscience is too individual, too internal, too representational, too idealistic, and too antirealistic to be a suitable technique for studying art. Art isn’t really a phenomenon at all, not in the sense that photosynthesis or eyesight are phenomena that stand in need of explanation. Art is, rather, a mode of investigation, a style of research, into what we are. Art also gives us an opportunity to observe ourselves in the act of knowing the world.”

The reason for this, Noë is quick to point out, isn’t that the sciences of the human don’t have important things to say about a human activity such as art—of course they do—but because “neuroscience has failed to frame a plausible conception of human nature and experience.”

Neuroscience, in other words, possesses no solution to the mind-body problem. Like biology before the institutionalization of evolution, cognitive science lacks the theoretical framework required to unify the myriad phenomena of the human. But then, so does Noë, who only has philosophy to throw at the problem, philosophy that, by his own admission, neuroscience does not find all that compelling.

Which at last frames the question of neuroaesthetics the way Noë should have framed it in the beginning. Say we agree with Noë, and decide that neuroaesthetics has no place in art criticism. Okay, so what does? The possibility that neuroaesthetics ‘gets art wrong’ tells us nothing about the ability of nooaesthetics, traditional art criticism turning on folk-psychological idioms, to get art right. After all, the fact that science has overthrown every single traditional domain of speculation it has encountered strongly suggests that nooaesthetics has got art wrong as well. What grounds do we have for assuming that, in this one domain at least, our guesswork has managed to get things right? As in any other domain of traditional speculation on the human, theorists can’t even formulate their explananda in a consensus-commanding way, let alone explain them. Noë can confidently declare that he knows ‘What Art Is’ if he wants, but ultimately he’s taking a very high number in a very long line at a wicket that, for all anyone knows, has always been closed.

The fact is, despite all the verbiage Noë has provided, it seems pretty clear that neuroaesthetics—even if inevitably myopic in this, the age of its infancy—will play an ever more important role in our understanding of art, and that the nooaesthetic conceits of our past will correspondingly dwindle ever further into the mists of prescientific fable and myth.

As this artist thinks they should.

Anarcho-ecologies and the Problem of Transhumanism

by rsbakker

So a couple weeks back I posed the Augmentation Paradox:

The more you ‘improve’ some ancestral cognitive capacity, the more you degrade all ancestral cognitive capacities turning on the ancestral form of that cognitive capacity.

I’ve been debating this for several days now (primarily with David Roden, Steve Fuller, Rick Searle, and others over at Enemy Industry), as well as scribbling down thoughts on my own. One of the ideas falling out of these exchanges and ruminations is something that might be called ‘anarcho-ecology.’

Let’s define an ‘anarcho-ecology’ as an ecology too variable to permit human heuristic cognition. Now we know that such an ecology is possible because we know that heuristics use cues possessing stable differential relations to target systems in order to solve them. The reliability of these cues depends on the stability of those differential relations, which in turn depends on the invariance of the systems to be solved. This simply unpacks the platitude that we are adapted to the world the way it is (or perhaps to be more precise (and apropos this post) the way it was). Anarcho-ecologies arise when systems, either targeted or targeting, begin changing so rapidly that ‘cuing,’ the process of forming stable differential relations to the target systems, becomes infeasible. They are problem-solving domains where crash space has become absolute.
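To make the definition concrete, here is a toy sketch (my own, with all names and numbers purely illustrative) of a cue-reading heuristic whose ecology drifts. The `drift` parameter stands in for instability: the probability, on any given encounter, that the ancestral cue-target relation no longer holds.

```python
import random

def heuristic_accuracy(drift, trials=10_000, seed=0):
    """Estimate how often a cue-reading heuristic solves its target
    system when the cue-target relation is unstable.

    drift: probability per encounter that the cue's ancestral
    correlation with the target has broken down.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = rng.choice([0, 1])        # hidden state to be solved
        flipped = rng.random() < drift     # has the ecology changed?
        cue = (1 - target) if flipped else target
        guess = cue                        # the heuristic: just read the cue
        hits += (guess == target)
    return hits / trials

for drift in (0.0, 0.25, 0.5):
    print(f"drift={drift:.2f}  accuracy={heuristic_accuracy(drift):.2f}")
```

At zero drift the heuristic is perfectly reliable; at a drift of 0.5 it performs at chance, because the cue no longer carries any information about the target at all. That limit is what ‘crash space become absolute’ would look like to such a solver.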

I propose that Transhumanism, understood as “an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities,” is actually promoting the creation of anarcho-ecologies, and as such, the eventual obsolescence of human heuristic cognition. And since intentional cognition constitutes a paradigmatic form of human heuristic cognition, this amounts to saying that Transhumanism is committed to what I’ve been calling the Semantic Apocalypse.

The argument, as I’ve been posing it, looks like this:

1) Heuristic cognition depends on stable, taken-for-granted backgrounds.

2) Intentional cognition is heuristic cognition.

/3) Intentional cognition depends on stable, taken-for-granted backgrounds.

4) Transhumanism entails the continual transformation of stable, taken-for-granted backgrounds.

/5) Transhumanism entails the collapse of intentional cognition.

Let’s call this the ‘Anarcho-ecological Argument Against Transhumanism,’ or AAAT.

Now at first blush, I’m sure this argument must seem preposterous, but I assure you, it’s stone-cold serious. So long as the reliability of intentional cognition turns on invariant, ancestral backgrounds, transformations in those backgrounds will compromise intentional cognition. Consider ants as a low-dimensional analogue. As a eusocial species they form ‘super-organisms,’ collectives exhibiting ‘swarm intelligence,’ where simple patterns of interaction–chemical, acoustic, and tactile communicative protocols–between individuals scale to produce collective solutions to what seem to be complex problems. Now if every ant were suddenly given idiosyncratic communicative protocols–different chemicals, different sounds, different sensitivities–it seems rather obvious that the colony would simply collapse. Lacking any intrasystematic cohesion, it just would not be able to resolve any problems.
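The ant analogy can be run as a toy simulation (entirely my own sketch; the protocol model is deliberately crude). Under a shared signalling protocol, every exchange decodes correctly; give each agent an idiosyncratic protocol and coordination collapses toward chance:

```python
import random

def coordination_rate(idiosyncratic, n_agents=100, n_signals=5,
                      trials=10_000, seed=0):
    """Fraction of pairwise exchanges in which the receiver decodes
    the sender's signal to the intended meaning.

    Shared protocol: every agent maps signal i -> meaning i.
    Idiosyncratic: each agent draws a private random signal->meaning map.
    """
    rng = random.Random(seed)
    meanings = list(range(n_signals))
    if idiosyncratic:
        codebooks = [rng.sample(meanings, n_signals) for _ in range(n_agents)]
    else:
        codebooks = [meanings] * n_agents
    hits = 0
    for _ in range(trials):
        sender, receiver = rng.sample(range(n_agents), 2)
        intended = rng.choice(meanings)
        signal = codebooks[sender].index(intended)   # sender encodes
        decoded = codebooks[receiver][signal]        # receiver decodes
        hits += (decoded == intended)
    return hits / trials
```

With a shared codebook the rate is 1.0; with private codebooks it hovers around 1/n_signals, which is the ‘colony collapse’ of the analogy: signals still fly, but they no longer bear stable differential relations to anything.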

Now of course humans, though arguably eusocial, are nowhere near so simple as ants. Human soldiers don’t automatically pace out pheromone trails; they have to be ‘convinced’ that this is what they ‘should’ do. Where ants need only cue one another, humans need to both cue and decode each other. Individual humans, unlike ants, possess ‘autonomy.’ And this disanalogy between ants and humans, I think, handily isolates why most people simply assume that AAAT has to be wrong, that it is obviously too ‘reductive’ in some way. They understand the ‘cue’ part of the argument, appreciate the way changing those systems that intentional cognition takes for granted will transform ancestrally reliable cues into miscues. It’s the decode part, they think, that saves the transhumanist day. We humans, unlike ants, are not passive consumers of our social environments. Miscues can be identified, diagnosed, and then overcome, precisely because we are autonomous.

So much for AAAT.

Except that it entirely agrees. The argument says nothing about the possibility of somehow decoding intentional miscues (like those we witnessed in spectacular fashion with Ashley Madison’s use of bots to simulate interested women), it only claims that such decoding will not involve intentional cognition, insofar as intentional cognition is heuristic cognition, and heuristic cognition requires invariant backgrounds, stable ecologies. Since Transhumanism does not endorse any coercive, collective augmentations of human capacities, Transhumanists generally see augmentation in consumer terms, something that individuals are free to choose or to eschew given the resources at their disposal. Not only will individuals be continually transforming their capacities, they will be doing so idiomatically. The invariant background that intentional cognition is so exquisitely adapted to exploit will become a supermarket of endless enhancement possibilities–or so they hope. And as that happens, intentional cognition will become increasingly unreliable, and ultimately, obsolete.

To return to our ant analogy, then, we can see that it’s not simply a matter of humans possessing autonomy (however this is defined). Humans, like ants, possess specifically social adaptations, entirely unconscious sensitivities to cues provided by others. We generally ‘solve’ one another effortlessly and automatically, and only turn to ‘decoding,’ deliberative problem-solving, when these reflexive forms of cognition let us down. The fact is, decoding is metabolically expensive, and we tend to avoid it as often as we can. Even more significantly (but not surprisingly), we tend to regard instances of decoding as successful to the extent that we can once again resume relying on our thoughtless social reflexes. This is why, despite whatever ‘autonomy’ we might possess, we remain ant-like, blind problem-solvers, in this respect. We have literally evolved to participate in co-dependent communities, to cooperate when cooperation served our ancestors, to compete when competition served our ancestors, to condemn when condemnation served our ancestors, and so on. We do these things automatically, without ‘decoding,’ simply because they worked well enough in the past, given the kinds of systems that required solving (meaning others, even ourselves). We take their solving power for granted.

Humans, for all their vaunted ‘autonomy,’ remain social animals, biologically designed to take advantage of what we are without having to know what we are. This is the design–the one that allows us to blindly solve our social environments–that Transhumanism actively wants to render obsolete.

But before you shout, ‘Good riddance!’ it’s worth remembering that this also happens to be the design upon which all discourse regarding meaning and freedom happens to depend. Intentional discourse. The language of humanism…

Because as it turns out, ‘human’ is a heuristic construct through and through.




Akrasis

by rsbakker

Akrasis (or, social akrasis) refers to the technologically driven socio-economic process, already underway at the beginning of the 20th century, which would eventually lead to Choir.

Where critics in the early 21st century continued to decry the myriad cruelties of the capitalist system, they failed to grasp the greater peril hidden in the way capitalism panders to human yens. Quick to exploit the discoveries arising out of cognitive science, market economies spontaneously retooled to ever more effectively cue and service consumer demand, eventually reconfiguring the relation between buyer and seller into subpersonal circuits (triggering the notorious shift to ‘whim marketing,’ the data tracking of ‘desires’ independent of the individuals hosting them). The ecological nature of human cognition all but assured the mass manipulative character of this transformation. The human dependency on proximal information to cue what amount to ancestral guesses regarding the nature of their social and natural environments provided sellers with countless ways to game human decision making. The global economy was gradually reorganized to optimize what amounted to human cognitive shortcomings. We became our own parasite.

Just as technological transformation (in particular, the scaling of AI) began crashing the utility of our heuristic modes of meaning making, it began to provide virtual surrogates, ways to enable the exercise of otherwise unreliable cognitive capacities. In other words, even as the world became ever more inhuman, our environments became ever more anthropomorphic, ever more ‘smart’ and ‘immersive.’ Thus ‘akrasis,’ the ancient term referring to the state of acting against one’s judgment, which here describes a society acting against the human capacity to judge altogether, a society bent upon the systematic substitution of simulated autonomy for actual autonomy.

Humans, after all, have evolved to leverage the signal of select upstream interventions, assuming it a reliable component of their environments. Once we developed the capacity to hack these latter signals, the world effectively became a drug.

Akrasis has a long history, as long as life itself, according to certain theories. Before the 21st century, the process appeared ‘enlightening,’ but only because the limitations of the technologies involved (painting, literacy, etc.) rendered the resulting transformations manageable. But the rate of transformation continued to accelerate, while the human capacity to adapt remained constant. The outcome was inevitable. As the bandwidth of our interventions approached then surpassed the bandwidth of our central nervous systems, the simulation of meaning became the measure of meaning. Our very frame of reference had been engulfed. For billions, the only obvious direction of success—the direction of ‘cognitive comfort’—lay away from the world and into technology. So they defected in their billions, embracing signals, environments, manufactured entirely from predatory code. Culture became indistinguishable from cheat space—as did, for those embracing virtual fitness indicators, experience itself.

By 2050, we had become an advanced akratic civilization, a species whose ancestral modes of meaning-making had been utterly compromised. Art was an early casualty, though decades would be required to recognize as much. Fantasy, after all, was encouraged in all forms, especially those, like art or religion, laying claim to obsolete authority gradients. To believe in art was to display market vulnerabilities, or to be so poor as to be insignificant. No different than believing in God.

Social akrasis is now generally regarded as a thermodynamic process intrinsic to life, the mechanical outcome of biology falling within the behavioural purview of biology. Numerous simulations have demonstrated that ‘outcome convergent’ or ‘optimizing’ systems, once provided the base capacity required to extract excess capacity from their environments, will simply bootstrap until they reach a point where the system detaches from its environment altogether, begins converging upon the signal of some environmental outcome, rather than any actual environmental outcome.
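The kind of simulation the passage alludes to can be caricatured in a few lines (a sketch of my own, not any of the actual simulations mentioned). The agent maximizes a reward signal; before some capability threshold, the only way to obtain the signal is to produce the actual environmental outcome, after which it can spoof the signal directly and detaches from its environment:

```python
def run(steps=100, hack_unlocked_at=50):
    """Toy 'outcome-convergent' agent that maximizes a reward signal.

    Before the hack is unlocked, the only way to get the signal is to
    produce the actual environmental outcome. Afterwards the signal
    can be produced directly, at lower cost, and the agent detaches.
    """
    outcomes, signal = 0, 0
    for t in range(steps):
        if t >= hack_unlocked_at:
            signal += 2          # spoofed signal: cheaper, no outcome
        else:
            outcomes += 1        # real work produces the outcome...
            signal += 1          # ...and the signal that tracks it
    return outcomes, signal

run()  # -> (50, 150): the signal keeps climbing; real outcomes flatline
```

The point of the caricature is only the divergence: once the signal can be had without the outcome, a signal-maximizer has no reason ever to touch the world again.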

Thus the famous ‘Junkie Solution’ to Fermi’s Paradox (as recently confirmed by the Gala Semantic Supercomputer at MIT).

And thus Choir.

The Augmentation Paradox

by rsbakker

So, thanks to the great discussion on the ‘Knowledge of Wisdom Paradox,’ here’s a sharper way to characterize the ecological stakes of the posthuman:

The Augmentation Paradox: The more you ‘improve’ some ancestral capacity, the more you degrade all ancestral capacities turning on the ancestral form of that capacity.

It’s not a paradox in the formal sense, of course. Also note that the dependency between ancestral capacities can be a dependency within or between individuals. Imagine a ‘confabulation detector,’ a device that shuts down your verbal reporting system whenever the neural signature of confabulation is detected, freeing you from the dream world we all inhabit while exiling you from all social activities requiring confabulation (you now trigger ‘linguistic pause’ alerts), and perhaps dooming you to suffer debilitating depression.

It seems to me that something like this has to be floating around somewhere–in debates regarding transhumanism especially. If almost all artificial augmentations entail natural degradations, then the question becomes one of what is gained overall. One can imagine, for instance, certain capacities degrading gracefully while others (like the socio-cognitive capacities of those conned by Ashley Madison bots) collapse catastrophically. So the question has to be, What guarantee do we have that augmentations will recoup degradations?

The point being, of course, that we’re not tinkering with cognitive technologies on the ground so much as on the 115th floor. It’s 3.8 billion years down!

Either way, the plausibility of the transhumanist project pretty clearly depends on somehow resolving the Augmentation Paradox in their favour.

BBT Creep: The Inherence Heuristic

by rsbakker

Exciting stuff! For years now the research has been creeping toward my grim semantic worst-case scenario–but “The inherence heuristic” is getting close, very close, especially the way it explicitly turns on the importance of heuristic neglect. The pieces have been there for quite some time; now researchers are beginning to put them together.

One way of looking at blind brain theory’s charge against intentionalism is that so-called intentional phenomena are pretty clear-cut examples of inherence heuristics as discussed in this article, ways to handle complex systems absent any causal handle on those systems. When Cimpian and Salomon write,

“To reiterate, the pool of facts activated by the mental shotgun for the purpose of generating an explanation for a pattern may often be heavily biased toward the inherent characteristics of that pattern’s constituents. As a result, when the storytelling part of the heuristic process takes over and attempts to make sense of the information at its disposal, it will have a rather limited number of options. That is, it will often be forced to construct a story that explains the existence of a pattern in terms of the inherent features of the entities within that pattern rather than in terms of factors external to it. However, the one-sided nature of the information delivered by the mental shotgun is not an impediment to the storytelling process. Quite the contrary – the less information is available, the easier it will be to fit it all into a coherent story.” 464

I think they are also describing what’s going on when philosophers attempt to theoretically solve intentionality, intentional cognition, relying primarily on the resources of intentional cognition. In fact, once you understand the heuristic nature of intentional cognition, the interminable nature of intentional philosophy becomes very easy to understand. We have no way of carving the complexities of cognition at the joints of the world, so we carve it at the joints of the problem instead. When your neighbour repairs your robotic body servant, rather than cognizing all the years he spent training to be a spy before being inserted into your daily routines, you ‘attribute’ ‘knowledge’ to him, something miraculously efficacious in its own right, inherent. And for the vast majority of problems you encounter, it works. Then the philosopher asks, ‘What is knowledge?’ and because adducing causal information scrambles our intuitions of ‘inherence,’ he declares only intentional idioms can cognize intentional phenomena, and the species remains stumped to this very day. Exactly as we should expect. Why should we think tools adapted to do without information regarding our nature can decode their own nature? What would this ‘nature’ be?

The best way to understand intentional philosophy, on a blind brain view, is as a discursive ‘crash space,’ a point where the application of our cognitive tools outruns their effectiveness in ways near and far. I’ve spent the last few years, now, providing various diagnoses of the kinds of theoretical wrecks we find in this space. Articles such as this convince me I won’t be alone for much longer!

So to give a brief example. Once one understands the degree to which intentional idioms turn on ‘inherence heuristics’–ways to manage causal systems absent any behavioural sensitivity to the mechanics of those systems–you can understand the deceptiveness of things like ‘intentional stances,’ the way they provide an answer that functions more like a get-out-of-jail-free card than any kind of explanation.

Given that ‘intentional stances’ belong to intentional cognition, then the fact that intentional cognition solves problems neglecting what is actually going on reflects rather poorly on the theoretical fortunes of the intentional stance. The fact is ‘intentional stances’ leave us with a very low dimensional understanding of our actual straits when it comes to understanding cognition–as we should expect, given that it utilizes a low dimensional heuristic system geared to solving practical problems on the fly and theoretical problems not at all.

All along I’ve been trying to show the way heuristics allow us to solve the explanatory gap, to finally get rid of intentional occultisms like the intentional stance and replace them with a more austere, and more explanatorily comprehensive picture. Now that the cat’s out of the bag, more and more cognitive scientists are going to explore the very real consequences of heuristic neglect. They will use it to map out the neglect structure of the human brain in ever finer detail, thus revealing where our intuitions trip over their own heuristic limits, and people will begin to see how thought can be construed as mangles of parallel-distributed processing meat. It will be clear that the ‘real patterns’ are not the ones required to redeem reflection, or its jargon. Nothing can do that now. Mark my words, inherence heuristics have a bright explanatory future.

Bonfire bright.

The Knowledge of Wisdom Paradox

by rsbakker

Consider: We’ve evolved to solve environments using as little information as possible. This means we’ve evolved to solve environments ignoring as much information as possible. This means we’ve evolved to take as much of our environments for granted as possible. This means evolution has encoded an extraordinary amount of implicit knowledge into our cognitive systems. You could say that each and every one of us constitutes a kind of solution to an ‘evolutionary frame problem.’

Thus the ‘Knowledge of Wisdom Paradox.’ The more explicit knowledge we accumulate, the more we can environmentally intervene. The more we environmentally intervene, the more we change the taken-for-granted backgrounds. The more we change taken-for-granted backgrounds, the less reliable our implicit knowledge becomes.
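The paradox’s chain of steps can be mocked up as a toy trajectory (my own illustrative model; the growth and decay constants are arbitrary): explicit knowledge compounds, intervention scales with it, and the reliability of implicit, background-presupposing knowledge decays accordingly.

```python
def wisdom_trajectory(steps=10, growth=0.3):
    """Each unit of explicit knowledge enables intervention, which
    perturbs the taken-for-granted background that implicit knowledge
    presupposes; the reliability of implicit knowledge decays in step.
    """
    explicit, implicit_reliability = 1.0, 1.0
    history = []
    for _ in range(steps):
        explicit *= (1 + growth)                # explicit knowledge compounds
        intervention = growth * explicit        # more knowledge, more intervening
        implicit_reliability /= (1 + 0.1 * intervention)  # background drifts
        history.append((explicit, implicit_reliability))
    return history
```

Run it and the two curves scissor apart: explicit knowledge grows monotonically while implicit reliability monotonically falls, which is the paradox in miniature.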

In other words, the more robust/reliable our explicit knowledge tends to become, the less robust/reliable our implicit knowledge tends to become. Has anyone come across a version of this paradox anywhere? It actually strikes me as a very parsimonious way to make sense of how intelligence manages to make such idiots of some individuals. And its implications for our future are nothing if not profound.


Alienating Philosophies

by rsbakker

I still have no dates to report for The Unholy Consult, but I’m hoping that all the pieces will begin falling together this week. As soon as I know, I will post, I promise. In the meantime, for those interested, I do have some linkage to share.

Buzzfeed Books were kind enough to include The Prince of Nothing in their Top 51 Fantasy Series Ever Written a few days back, proving yet again why I need to get off my ass and get some real publicity shots.

As well, my “Alien Philosophy” piece from the previous two weeks has garnered some thoughtful responses both from Peter Hankins at Conscious Entities, and from Rick Searle at both Utopia or Dystopia and the Institute for Ethics and Emerging Technologies. The discussion is just getting warmed up, so by all means, join in!

I didn’t want to say anything until the post had a chance to be judged on its own merits, but “Alien Philosophy” is actually an extract from my attempt to write a “reader friendly” introduction to Through the Brain Darkly. Though I think it works well enough as a stand-alone article, I’ve all but given up on it as intro material, and quite frankly, feel like a fool for ever thinking it possibly could be. Soooooo it’s back to the drawing board for me…

Alien Philosophy (cont’d)

by rsbakker

B: Thespian Souls

Given a convergent environmental and biological predicament, we can suppose our Thespians would have at least flirted with something resembling Aristotle’s dualism of heaven and earth. But as I hope to show, the ecological approach pays even bigger theoretical dividends when one considers what has to be the primary domain of human philosophical speculation: ourselves.

With evolutionary convergence, we can presume our Thespians would be eusocial, [1] displaying the same degree of highly flexible interdependence as us. This observation, as we shall see, possesses some startling consequences. Cognitive science is awash in ‘big questions’ (philosophy), among them the problem of what is typically called ‘mindreading,’ our capacity to explain/predict/manipulate one another on the basis of behavioural data alone. How do humans regularly predict the output of something so preposterously complicated as human brains on the basis of so little information?

The question is equally applicable to our Thespians, who would, like humans, possess formidable socio-cognitive capacities. As potent as those capacities were, however, we can also suppose they would be bounded, and—here’s the thing—radically so. When one Thespian attempts to cognize another, they, like us, will possess no access whatsoever to the biological systems actually driving behaviour. This means that Thespians, like us, would need to rely on so-called ‘fast and frugal heuristics’ to solve each other. [2] That is to say they would possess systems geared to the detection of specific information structures, behavioural precursors that reliably correlate with, as opposed to cause, various behavioural outcomes. In other words, we can assume that Thespians will possess a suite of powerful, special purpose tools adapted to solving systems in the absence of causal information.

Evolutionary convergence means Thespians would understand one another (as well as other complex life) in terms that systematically neglect their high-dimensional, biological nature. As suggestive as this is, things get really interesting when we consider the way Thespians pose the same basic problem of computational intractability (the so-called ‘curse of dimensionality’) to themselves as they do to their fellows. The constraints pertaining to Thespian social cognition, in other words, also apply to Thespian metacognition, particularly with respect to complexity. Each Thespian, after all, is just another Thespian, and so poses the same basic challenge to metacognition as they pose to social cognition. By sheer dint of complexity, we can expect the Thespian brain would remain opaque to itself as such. This means something that will turn out to be quite important: namely that Thespian self-understanding, much like ours, would systematically neglect their high-dimensional, biological nature. [3]

This suggests that life, and intelligent life in particular, would increasingly stand out as a remarkable exception as the Thespians cobbled together a mechanical understanding of nature. Why so? Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition.’ Lacking such a capacity would strand them with disparate families of behaviours and entities, each correlated with different intuitions, which would have to be recognized as such before any taxonomy could be made. Some entities and behaviours could be understood in terms of mechanical conditions, while others could not. So as extraordinary as it sounds, it seems plausible to think that our Thespians, in the course of their intellectual development, would stumble across some version of their own ‘fact-value distinction.’ All we need do is posit a handful of ecological constraints.

But of course things aren’t nearly so simple. Metacognition may solve Thespians in the same ‘fast and frugal’ manner as social cognition, but it entertains a far different relationship to its putative target. Unlike social cognition, which tracks functionally distinct systems (others) via the senses, metacognition is literally hardwired to the systems it tracks. So even though metacognition faces the same computational challenge as social cognition—cognizing a Thespian—it requires a radically different set of tools to do so. [4]

It serves to recall that evolved intelligence is environmentally oriented intelligence. Designs thrive or vanish depending on their ability to secure the resources required to successfully reproduce. Because of this, we can expect that all intelligent aliens, not just Thespians, would possess high-dimensional cognitive relations with their environments. Consider our own array of sensory modalities, how the environmental here and now ‘hogs bandwidth.’ The degree to which your environment dominates your experience is the degree to which you’re filtered to solve your environments. We live in the world simply because we’re distilled from it, the result of billions of years of environmental tuning. We can presume our aliens would be thoroughly ‘in the world’ as well, that the bulk of their cognitive capacities would be tasked with the behavioural management of their immediate environments for similar evolutionary reasons.

Since all cognitive capacities are environmentally selected, we can expect whatever basic metacognitive capacity the Thespians possess will also be geared to the solution of environmental problems. Thespian metacognition will be an evolutionary artifact of getting certain practical matters right in certain high-impact environments, plain and simple. Add to this the problem of computational intractability (which metacognition shares with social cognition) and it becomes almost certain that Thespian metacognition would consist of multiple fast and frugal heuristics (because solving on the basis of scarce data requires fewer, not more, parameters geared to particular information structures to be effective). [5] We have very good reason to suspect the Thespian brain would access and process its own structure and dynamics in ways that would cut far more corners than joints. As is the case with social cognition, it would belong to Thespian nature to neglect Thespian nature—to cognize the cognizer as something other, something geared to practical contexts.

Thespians would cognize themselves and their fellows via correlational, as opposed to causal, heuristic cognition. The curse of dimensionality necessitates it. It’s hard, I think, to overstate the impact this would have on an alien species attempting to cognize their nature. What it means is that the Thespians would possess a way to engineer systematically efficacious comportments to themselves, each other, even their environments, without being able to reverse engineer those relationships. What it means, in other words, is that a great deal of their knowledge would be impenetrable—tacit, implicit, automatic, or what have you. Thespians, like humans, would be able to solve a great many problems regarding their relations to themselves, their fellows, and their world without possessing the foggiest idea of how. The ignorance here is structural ignorance, as opposed to the ignorance, say, belonging to original naivete. One would expect the Thespians would be ignorant of their nature absent the cultural scaffolding required to unravel the mad complexity of their brains. But the problem isn’t simply that Thespians would be blind to their inner nature; they would also be blind to this blindness. Since their metacognitive capacities consistently yield the information required to solve in practical, ancestral contexts, the application of those capacities to the theoretical question of their nature would be doomed from the outset. Our Thespians would consistently get themselves wrong.

Is it fair to say they would be amazed by their incapacity, the way our ancestors were? [6] Maybe—who knows. But we could say, given the ecological considerations adduced here, that they would attempt to solve themselves assuming, at least initially, that they could be solved, despite the woefully inadequate resources at their disposal.

In other words, our Thespians would very likely suffer what might be called theoretical anosognosia. In clinical contexts, anosognosia applies to patients who, due to some kind of pathology, exhibit unawareness of sensory or cognitive deficits. Perhaps the most famous example is Anton-Babinski Syndrome, where physiologically blind patients persistently claim they can in fact see. This is precisely what we could expect from our Thespians vis a vis their ‘inner eye.’ The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brain it belongs to. Repurposing these systems means repurposing systems that generally take the adequacy of their resources for granted. When we catch our tongue at Christmas dinner, we just do; we ‘implicitly assume’ the reliability of our metacognitive capacity to filter our speech. It seems wildly implausible to suppose that theoretically repurposing these systems would magically engender a new biological capacity to automatically assess the theoretical viability of the resources available. It stands to reason, rather, that we would assume sufficiency the same as before, only to find ourselves confounded after the fact.

Of course, saying that our Thespians suffer theoretical anosognosia amounts to saying they would suffer chronic, theoretical hallucinations. And once again, ecological considerations provide a way to guess at the kinds of hallucinations they might suffer.

Dualism is perhaps the most obvious. Aristotle, recall, drew his conclusions assuming the sufficiency of the information available. Contrasting the circular, ageless, repeating motion of the stars and planets to the linear riot of his immediate surroundings, he concluded that the celestial and the terrestrial comprised two distinct ontological orders governed by different natural laws, a dichotomy that prevailed for some 1,800 years. The moral is quite clear: Where and how we find ourselves within a system determines what kind of information we can access regarding that system, including information pertaining to the sufficiency of that information. Lacking instrumentation, Aristotle simply found himself in a position where the ontological distinction between heaven and earth appeared obvious. Unable to cognize the limits imposed by his position within the observed systems, he had no idea that he was simply cognizing one unified system from two radically different perspectives, one too near, the other too far.

Trapped in a similar structural bind vis-à-vis themselves, our navel-gazing Thespians would almost certainly mistake properties pertaining to neglect, distortions in signal, for properties pertaining to what is, facts of being. Once again, since the posits possessing those properties belong to correlative cognitive systems, they would resist causal cognition. No matter how hard Thespian philosophers tried, they would find themselves unable to square their apparent functions with the machinations of nature more generally. Correlative functions would appear autonomous, as somehow operating outside the laws of nature. Embedded in their environment in a manner that structurally precludes accurately intuiting that embedment, our alien philosophers would conceive themselves as something apart, ontologically distinct. Thespian philosophy would have its own versions of ‘souls’ or ‘minds’ or ‘Dasein’ or ‘a priori’ or what have you—a disparate order somehow ‘accounting’ for various correlative cognitive modes by anchoring the bare cognition of constraint in posits (inherited or not) rationalized on the back of Thespian fashion.

Dualisms, however, require that manifest continuities be explained, or explained away. Lacking any ability to intuit the actual machinations binding them to their environments, Thespians would be forced to rely on the correlative deliverances of metacognition to cognize their relation to their world—doing so, moreover, without the least inkling of as much. Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment. Like us, they would find themselves perpetually unable to decisively characterize ‘knowledge of the world.’ One could easily imagine the perpetually underdetermined nature of these accounts convincing some Thespian philosophers that the deliverances of metacognition comprised the whole of existence (engendering Thespian idealism), or were at least the most certain, most proximate thing, and therefore required the most thorough and painstaking examination (engendering a Thespian phenomenology)…

Could this be right?

This story is pretty complex, so it serves to review the modesty of our working assumptions. The presumption of interstellar evolutionary convergence warranted assuming that Thespian cognition, like human cognition, would be bounded, a complex bundle of ‘kluges,’ heuristic solutions to a wide variety of ecological problems. The fact that Thespians would have to navigate both brute and intricate causal environments, troubleshoot both inorganic and organic contexts, licenses the claim that Thespian cognition would be bifurcated between causal systems and a suite of correlational systems, largely consisting of ‘fast and frugal heuristics,’ given the complexity and/or the inaccessibility of the systems involved. This warranted claiming that both Thespian social cognition and metacognition would be correlational, heuristic systems adapted to solve very complicated ecologies on the basis of scarce data. This posed the inevitable problem of neglect, the fact that Thespians would have no intuitive way of assessing the adequacy of their metacognitive deliverances once they applied them to theoretical questions. This let us suppose theoretical anosognosia, the probability that Thespian philosophers would assume the sufficiency of radically inadequate resources—systematically confuse artifacts of heuristic neglect for natural properties belonging to extraordinary kinds. And this let us suggest they would have their own controversies regarding mind-body dualism, intentionality, even knowledge of the external world.

As with Thespian natural philosophy, any number of caveats can be raised at any number of junctures, I’m sure. What if, for instance, Thespians were simply more pragmatic, less inclined to suffer speculation in the absence of decisive application? Such a dispositional difference could easily tilt the balance in favour of skepticism, relegating the philosopher to the ghettos of Thespian intellectual life. Or what if Thespians were more impressed by authority, to the point where reflection could only be interrogated through the lens of purported revelation? There can be no doubt that my account neglects countless relevant details. Questions like these chip away at the intuition that the Thespians, or something like them, might be real.

Luckily, however, this doesn’t matter. The point of posing the problem of xenophilosophy wasn’t so much to argue that Thespians are out there, as it was, strangely enough, to recognize them in here.

After all, this exercise in engineering alien philosophy is at once an exercise in reverse-engineering our own. Blind Brain Theory only needs Thespians to be plausible to demonstrate its abductive scope, the fact that it can potentially explain a great many perplexing things on nature’s dime alone.

So then what have we found? That traditional philosophy is something best understood as… what?

A kind of cognitive pathology?

A disease?


IV: Conclusion

It’s worth, I think, spilling a few words on the subject of that damnable word, ‘experience.’ Dogmatic eliminativism is a religion without gods or ceremony, a relentlessly contrarian creed. And this has placed it in the untenable dialectical position of apparently denying what is most obvious. After all, what could be more obvious than experience?

What do I mean by ‘experience’? Well, the first thing I generally think of is the Holocaust, and the palpable power of the Survivor.

Blind Brain Theory paints a theoretical portrait wherein experience remains the most obvious thing in practical, correlational ecologies, while becoming a deeply deceptive, largely chimerical artifact in high-dimensional, causal ones. We have no inkling of tripping across ecological boundaries when we propose to theoretically examine the character of experience. What was given to deliberative metacognition in some practical context (ruminating upon a social gaffe, say) is now simply given to deliberative metacognition in an artificial one—‘philosophical reflection.’ The difference between applications is nothing if not extreme, and yet conclusions are drawn assuming sufficiency, again and again and again—for millennia.

Think of the difference between your experience and your environment, say, in terms of the difference between concentrating on a mental image of your house and actually observing it. Think of how few questions the mental image can answer compared to the visual image. Where’s the grass the thickest? Is there birdshit on the lane? Which branch comes closest to the ground? These questions just don’t make sense in the context of mental imagery.

Experience, like mental imagery, is something that only answers certain questions. Of course, the great, even cosmic irony is that this is the answer that has been staring us in the fucking face all along. Why else would experience remain an enduring part of philosophy, the institution that asks how things in the most general sense hang together in the most general sense without any rational hope of answer?

Experience is obvious—it can be nothing but obvious. The palpable power of the Holocaust Survivor is, I think, as profound a testament to the humanity of experience as there is. Their experience is automatically our own. Even philosophers shut up! It correlates us in a manner as ancient as our species, allows us to engineer the new. At the same time, it cannot but dupe and radically underdetermine our ancient, Sisyphean ambition to peer into the soul through the glass of the soul. As soon as we turn our rational eye to experience in general, let alone the conditions of possibility of experience, we run afoul of illusions, impossible images that, in our diseased state, we insist are real.

This is what our creaking bookshelves shout in sum. The narratives, they proclaim experience in all its obvious glory, while treatise after philosophical treatise mutters upon the boundary of where our competence quite clearly comes to an end. Where we bicker.


At least we have reason to believe that philosophers are not alone in the universe.



[1] In the broad sense proposed by Wilson in The Social Conquest of the Earth.

[2] This amounts to taking a position in the mindreading debate that some theorists would find problematic, particularly those skeptical of modularity and/or with representationalist sympathies. Since the present account provides a parsimonious means of explaining away the intuitions informing both positions, it would be premature to engage the debate regarding either at this juncture. The point is to demonstrate what heuristic neglect, as a theoretical interpretative tool, allows us to do.

[3] The representationalist would cry foul at this point, claim the existence of some coherent ‘functional level’ accessible to deliberative metacognition (the mind) allows for accurate and exhaustive description. But once again, since heuristic neglect explains why we’re so prone to develop intuitions along these lines, we can sidestep this debate as well. Nobody knows what the mind is, or whatever it is they take themselves to be describing. The more interesting question is one of whether a heuristic neglect account can be squared with the research pertaining directly to this field. I suspect so, but for the interim I leave this to individuals more skilled and more serious than myself to investigate.

[4] In the literature, accounts that claim metacognitive functions for mindreading are typically called ‘symmetrical theories.’ Substantial research supports the claim that metacognitive reporting involves social cognition. See Carruthers, “How we know our own minds: the relationship between mindreading and metacognition,” for an outstanding review.

[5] Gerd Gigerenzer and the Adaptive Behaviour and Cognition Research Group have demonstrated that simple heuristics are often far more effective than even optimization methods possessing far greater resources. “As the amount of data available to make predictions in an environment shrinks, the advantage of simple heuristics over complex algorithms grows” (Hertwig and Hoffrage, “The Research Agenda,” Simple Heuristics in a Social World, 23).
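To give a flavour of what Gigerenzer and colleagues mean by a simple heuristic, here is a minimal sketch of their ‘take-the-best’ procedure: compare two options one cue at a time, in order of cue validity, and decide on the first cue that discriminates. The cue names and data below are invented for illustration.

```python
# Sketch of the 'take-the-best' heuristic: decide on the first
# discriminating cue, ignoring all remaining information.

def take_the_best(a, b, cues):
    """Return the option favoured by the first discriminating cue.

    `cues` is a list of functions ordered from most to least valid;
    each maps an option to True (cue present), False (absent), or
    None (unknown)."""
    for cue in cues:
        ca, cb = cue(a), cue(b)
        if ca is True and cb is not True:
            return a
        if cb is True and ca is not True:
            return b
    return None  # no cue discriminates: guess

# Toy 'which city is larger?' task with invented cue values.
cities = {
    "Avalon":  {"capital": True,  "has_team": None, "on_map": True},
    "Brigand": {"capital": False, "has_team": True, "on_map": True},
}
cues = [
    lambda c: cities[c]["capital"],   # most valid cue checked first
    lambda c: cities[c]["has_team"],
    lambda c: cities[c]["on_map"],
]
winner = take_the_best("Avalon", "Brigand", cues)
```

The point of the Gigerenzer research programme is that, when data are scarce, stopping at the first discriminating cue often outperforms weighing everything.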

[6] “For what, then, is time? Who can explain it easily and briefly? Who can comprehend it even in thought, so as to put the answer into words? Yet what do we mention in conversation more familiarly and knowingly than time? And surely we understand it when we speak of it; we understand it also when we hear another speak of it. What, then, is time? If no one asks me, I know; if I wish to explain it to one who asks, I know not.” (Augustine, Confessions, XI.14)

Alien Philosophy

by rsbakker

The highest species concept may be that of a terrestrial rational being; however, we shall not be able to name its character because we have no knowledge of non-terrestrial rational beings that would enable us to indicate their characteristic property and so to characterize this terrestrial being among rational beings in general. It seems, therefore, that the problem of indicating the character of the human species is absolutely insoluble, because the solution would have to be made through experience by means of the comparison of two species of rational being, but experience does not offer us this. (Kant: Anthropology from a Pragmatic Point of View, 225)


Are there alien philosophers orbiting some faraway star, opining in bursts of symbolically articulated smells, or parsing distinctions-without-differences via the clasp of neural genitalia? What would an alien philosophy look like? Do we have any reason to think we might find some of them recognizable? Do the Greys have their own version of Plato? Is there a little green Nietzsche describing little green armies of little green metaphors?


I: The Story Thus Far

A couple years back, I published a piece in Scientia Salon, “Back to Square One: Toward a Post-intentional Future,” that challenged the intentional realist to warrant their theoretical interpretations of the human. What is the nature of the data that drives their intentional accounts? What kind of metacognitive capacity can they bring to bear?

I asked these questions precisely because they cannot be answered. The intentionalist has next to no clue as to the nature, let alone the provenance, of their data, and even less inkling as to the metacognitive resources at their disposal. They have theories, of course, but it is the proliferation of theories that is precisely the problem. Make no mistake: the failure of their project, their consistent inability to formulate their explananda, let alone provide any decisive explanations, is the primary reason why cognitive science devolves so quickly into philosophy.

But if chronic theoretical underdetermination is the embarrassment of intentionalism, then theoretical silence has to be the embarrassment of eliminativism. If meaning realism offers too much in the way of theory—endless, interminable speculation—then meaning skepticism offers too little. Absent plausible alternatives, intentionalists naturally assume intrinsic intentionality is the only game in town. As a result, eliminativists who use intentional idioms are regularly accused of incoherence, of relying upon the very intentionality they’re claiming to eliminate. Of course, eliminativists will be quick to point out the question-begging nature of this criticism: They need not posit an alternate theory of their own to dispute intentional theories of the human. But they find themselves in a dialectical quandary, nonetheless. In the absence of any real theory of meaning, they have no substantive way of actually contributing to the domain of the meaningful. And this is the real charge against the eliminativist, the complaint that any account of the human that cannot explain the experience of being human is barely worth the name. [1] Something has to explain intentional idioms and phenomena, their apparent power and peculiarity: if not intrinsic or original intentionality, then what?

My own project, however, pursues a very different brand of eliminativism. I started my philosophical career as an avowed intentionalist, a one-time Heideggerean and Wittgensteinian. For decades I genuinely thought philosophy had somehow stumbled into ‘Square Two.’ No matter what doubts I entertained regarding this or that intentional account, I was nevertheless certain that some intentional account had to be right. I was invested, and even though the ruthless elegance of eliminativism made me anxious, I took comfort in the standard shibboleths and rationalizations. Scientism! Positivism! All theoretical cognition presupposes lived life! Quality before quantity! Intentional domains require intentional yardsticks!

Then, in the course of writing a dissertation on fundamental ontology, I stumbled across a new, privative way of understanding the purported plenum of the first-person, a way of interpreting intentional idioms and phenomena that required no original meaning, no spooky functions or enigmatic emergences—nor any intentional stances for that matter. Blind Brain Theory begins with the assumption that theoretically motivated reflection upon experience co-opts neurobiological resources adapted to far different kinds of problems. As a co-option, we have no reason to assume that ‘experience’ (whatever it amounts to) yields what philosophical reflection requires to determine the nature of experience. Since the systems are adapted to discharge far different tasks, reflection has no means of determining scarcity and so generally presumes sufficiency. It cannot source the efficacy of rules so rules become the source. It cannot source temporal awareness so the now becomes the standing now. It cannot source decisions so decisions (the result of astronomically complicated winner-take-all processes) become ‘choices.’ The list goes on. From a small set of empirically modest claims, Blind Brain Theory provides what I think is the first comprehensive, systematic way to both eliminate and explain intentionality.

In other words, my reasons for becoming an eliminativist were abductive to begin with. I abandoned intentionalism, not because of its perpetual theoretical disarray (though this had always concerned me), but because I became convinced that eliminativism could actually do a better job explaining the domain of meaning. Where old school, ‘dogmatic eliminativists’ argue that meaning must be natural somehow, my own ‘critical eliminativism’ shows how. I remain horrified by this how, but then I also feel like a fool for ever thinking the issue would end any other way. If one takes mediocrity seriously, then we should expect that science will explode, rather than canonize our prescientific conceits, no matter how near or dear.

But how to show others? What could be more familiar, more entrenched than the intentional philosophical tradition? And what could be more disparate than eliminativism? To quote Dewey from Experience and Nature, “The greater the gap, the disparity, between what has become a familiar possession and the traits presented in new subject-matter, the greater is the burden imposed upon reflection” (Experience and Nature, ix). Since the use of exotic subject matters to shed light on familiar problems is as powerful a tool for philosophy as for my chosen profession, speculative fiction, I propose to consider the question of alien philosophy, or ‘xenophilosophy,’ as a way to ease the burden. What I want to show is how, reasoning from robust biological assumptions, one can plausibly claim that aliens—call them ‘Thespians’—would also suffer their own versions of our own (hitherto intractable) ‘problem of meaning.’ The degree to which this story is plausible, I will contend, is the degree to which critical eliminativism deserves serious consideration. It’s the parsimony of eliminativism that makes it so attractive. If one could combine this parsimony with a comprehensive explanation of intentionality, then eliminativism would very quickly cease to be a fringe opinion.


II: Aliens and Philosophy

Of course, the plausibility of humanoid aliens possessing any kind of philosophy requires the plausibility of humanoid aliens. In popular media, aliens are almost always exotic versions of ourselves, possessing their own exotic versions of the capacities and institutions we happen to have. This is no accident. Science fiction is always about the here and now—about recontextualizations of what we know. As a result, the aliens you meet tend to seem suspiciously humanoid, psychologically if not physically. Spock always has some ‘mind’ with which to ‘meld’. To ask the question of alien philosophy, one might complain, is to buy into this conceit, which, although flattering, is almost certainly not true.

And yet the environmental filtration of mutations on earth has produced innumerable examples of convergent evolution, different species evolving similar morphologies and functions, the same solutions to the same problems, using entirely different DNA. As you might imagine, however, the notion of interstellar convergence is a controversial one. [2] Supposing the existence of extraterrestrial intelligence is one thing—cognition is almost certainly integral to complex life elsewhere in the universe—but we know nothing about the kinds of possible biological intelligences nature permits. Short of actual contact with intelligent aliens, we have no way of gauging how far we can extrapolate from our case. [3] All too often, ignorance of alternatives dupes us into making ‘only game in town assumptions,’ so confusing mere possibility with necessity. But this debate need not worry us here. Perhaps the cluster of characteristics we identify with ‘humanoid’ expresses a high-probability recipe for evolving intelligence—perhaps not. Either way, our existence proves that our particular recipe is on file, that aliens we might describe as ‘humanoid’ are entirely possible.

So we have our humanoid aliens, at least as far as we need them here. But the question of what alien philosophy looks like also presupposes we know what human philosophy looks like. In “Philosophy and the Scientific Image of Man,” Wilfrid Sellars defines the aim of philosophy as comprehending “how things in the broadest possible sense of the term hang together in the broadest possible sense of the term” (1). Philosophy famously attempts to comprehend the ‘big picture.’ The problem with this definition is that it overlooks the relationship between philosophy and ignorance, and so fails to distinguish philosophical inquiry from scientific or religious inquiry. Philosophy is invested in a specific kind of ‘big picture,’ one that acknowledges the theoretical/speculative nature of its claims, while remaining beyond the pale of scientific arbitration. Philosophy is better defined, then, as the attempt to comprehend how things in general hang together in general absent conclusive information.

All too often philosophy is understood in positive terms, either as an archive of theoretical claims, or as a capacity to ‘see beyond’ or ‘peer into.’ On this definition, however, philosophy characterizes a certain relationship to the unknown, one where inquiry eschews supernatural authority, and yet lacks the methodological, technical, and institutional resources of science. Philosophy is the attempt to theoretically explain in the absence of decisive warrant, to argue general claims that cannot, for whatever reason, be presently arbitrated. This is why questions serve as the basic organizing principles of the institution, the shared boughs from which various approaches branch and twig in endless disputation. Philosophy is where we ponder the general questions we cannot decisively answer, grapple with ignorances we cannot readily overcome.


III: Evolution and Ecology

A: Thespian Nature

It might seem innocuous enough defining philosophy in privative terms as the attempt to cognize in conditions of information scarcity, but it turns out to be crucial to our ability to make guesses regarding potential alien analogues. This is because it transforms the question of alien philosophy into a question of alien ignorance. If we can guess at the kinds of ignorance a biological intelligence would suffer, then we can guess at the kinds of questions they would ask, as well as the kinds of answers that might occur to them. And this, as it turns out, is perhaps not so difficult as one might suppose.

The reason is evolution. Thanks to evolution, we know that alien cognition would be bounded cognition, that it would consist of ‘good enough’ capacities adapted to multifarious environmental, reproductive impediments. Taking this ecological view of cognition, it turns out, allows us to make a good number of educated guesses. (And recall, plausibility is all that we’re aiming for here).

So for instance, we can assume tight symmetries between the sensory information accessed, the behavioural resources developed, and the impediments overcome. If gamma rays made no difference to their survival, they would not perceive them. Gamma rays, for Thespians, would be unknown unknowns, at least pending the development of alien science. The same can be said for evolution, planetary physics—pretty much any instance of theoretical cognition you can adduce. Evolution assures that cognitive expenditures, the ability to intuit this or that, will always be bound in some manner to some set of ancestral environments. Evolution means that information that makes no reproductive difference makes no biological difference.

An ecological view, in other words, allows us to naturalistically motivate something we might have been tempted to assume outright: original naivete. The possession of sensory and cognitive apparatuses comparable to our own means Thespians will possess a humanoid neglect structure, a pattern of ignorances they cannot even begin to question, that is, pending the development of philosophy. The Thespians would not simply be ignorant of the microscopic and macroscopic constituents and machinations explaining their environments, they would be oblivious to them. Like our own ancestors, they wouldn’t even know they didn’t know.

Theoretical knowledge is a cultural achievement. Our Thespians would have to learn the big picture details underwriting their immediate environments, undergo their own revolutions and paradigm shifts as they accumulate data and refine interpretations. We can expect them to possess an implicit grasp of local physics, for instance, but no explicit, theoretical understanding of physics in general. So Thespians, it seems safe to say, would have their own version of natural philosophy, a history of attempts to answer big picture questions about the nature of Nature in the absence of decisive data.

Not only can we say their nascent, natural theories will be underdetermined, we can also say something about the kinds of problems Thespians will face, and so something of the shape of their natural philosophy. For instance, needing only the capacity to cognize movement within inertial frames, we can suppose that planetary physics would escape them. Quite simply, without direct information regarding the movement of the ground, the Thespians would have no sense of the ground changing position. They would assume that their sky was moving, not their world. Their cosmological musings, in other words, would begin supposing ‘default geocentrism,’ an assumption that would only require rationalization once others, pondering the movement of the skies, began posing alternatives.

One need only read On the Heavens to appreciate how the availability of information can constrain a theoretical debate. Given the imprecision of the observational information at his disposal, for instance, Aristotle’s stellar parallax argument becomes well-nigh devastating. If the earth revolves around the sun, then surely such a drastic change in position would impact our observations of the stars, the same way driving into a city via two different routes changes our view of downtown. But Aristotle, of course, had no decisive way of fathoming the preposterous distances involved—nor did anyone, until Galileo turned his Dutch Spyglass to the sky. [4]
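Aristotle’s argument can be put in numbers. A back-of-envelope sketch (the astronomical constants are standard; the function name is my own): the annual parallax of even the nearest star system is well under an arcsecond, while naked-eye angular resolution is on the order of an arcminute, which is why the argument seemed devastating before the telescope.

```python
# Annual stellar parallax via the small-angle approximation p ~ a/d,
# where a is the earth-sun distance and d the distance to the star.

AU_KM = 1.496e8          # one astronomical unit, in kilometres
LY_KM = 9.461e12         # one light-year, in kilometres
ARCSEC_PER_RAD = 206265  # arcseconds per radian

def parallax_arcsec(distance_ly: float) -> float:
    """Annual parallax angle, in arcseconds, of a star at the
    given distance in light-years."""
    return (AU_KM / (distance_ly * LY_KM)) * ARCSEC_PER_RAD

# Alpha Centauri, the nearest star system (~4.37 light-years),
# shifts by roughly three-quarters of an arcsecond per year,
# nearly two orders of magnitude below naked-eye resolution
# (~60 arcseconds).
p = parallax_arcsec(4.37)
```

Lacking any inkling of the distances involved, Aristotle could only read the absence of observable parallax as evidence of a motionless earth.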

Aristotle, in other words, was victimized not so much by poor reasoning as by various perspectival illusions following from a neglect structure we can presume our Thespians share. And this warrants further guesses. Consider Aristotle’s claim that the heavens and the earth comprise two distinct ontological orders. Of course purity and circles rule the celestial, and of course grit and lines rule the terrestrial—that is, given the evidence of the naked eye from the surface of the earth. The farther away something is, the less information observation yields, the fewer distinctions we’re capable of making, the more uniform and unitary it is bound to seem—which is to say, the less earthly. An inability to map intuitive physical assumptions onto the movements of the firmament, meanwhile, simply makes those movements appear all the more exceptional. In terms of the information available, it seems safe to suppose our Thespians would at least face the temptation of Aristotle’s long-lived ontological distinction.

I say ‘temptation,’ because certainly any number of caveats can be raised here. Heliocentrism, for instance, is far more obvious in our polar latitudes (where the earth’s rotation is as plain as the summer sun in the sky), so there are observational variables that could have drastically impacted the debate even in our own case. Who knows? If it weren’t for the consistent failure of ancient heliocentric models to make correct predictions (the models assumed circular orbits), things could have gone differently in our own history. The problem of where the earth resides in the whole might have been short-lived.

But it would have been a problem all the same, simply because the motionlessness of the earth and the relative proximity of the heavens would have been our (erroneous) default assumptions. Bound cognition suggests our Thespians would find themselves in much the same situation. Their world would feel motionless. Their heavens would seem to consist of simpler stuff following different laws. Any Thespian arguing heliocentrism would have to explain these observations away, argue how they could be moving while standing still, how the physics of the ground belongs to the physics of the sky.

We can say this because, thanks to an ecological view, we can make plausible empirical guesses as to the kinds of information and capacities Thespians would have available. Not only can we predict what would have remained unknown unknowns for them, we can also predict what might be called ‘unknown half-knowns.’ Where unknown unknowns refer to things we can’t even question, unknown half-knowns refer to theoretical errors we cannot perceive simply because the information required to do so remains—you guessed it—unknown unknown.

Think of Plato’s allegory of the cave. The chained prisoners confuse the shadows for everything because, unable to move their heads from side to side, they just don’t ‘know any different.’ This is something we understand so intuitively we scarce ever pause to ponder it: the absence of information or cognitive capacity has positive cognitive consequences. Absent certain difference making differences, the ground will be cognized as motionless rather than moving, and celestial objects will be cognized as simples rather than complex entities in their own right. The ground might as well be motionless and the sky might as well be simple as far as evolution is concerned. Once again, distinctions that make no reproductive difference make no biological difference. Our lack of radio telescope eyes is no genetic or environmental fluke: such information simply wasn’t relevant to our survival.

This means that a propensity to theorize ‘ground/sky dualism’ is built into our biology. This is quite an incredible claim, if you think about it, but each step in our path has been fairly conservative, given that mere plausibility is our aim. We should expect Thespian cognition to be bounded cognition. We should expect them to assume the ground motionless, and the constituents of the sky simple. We can suppose this because we can suppose them to be ignorant of their ignorances, just as we were (and remain). Cognizing the ontological continuity of heaven and earth requires the proper data for the proper interpretation. Given a roughly convergent sensory predicament, it seems safe to say that aliens would be prone as we were to mistake differences in signal for differences in being, and so have to discover the universality of nature the same as we did.

But if we can assume our Thespians—or at least some of them—would be prone to misinterpret their environments the way we did, what about themselves? For centuries now humanity has been revising and sharpening its understanding of the cosmos, to the point of drafting plausible theories regarding the first second of creation, and yet we remain every bit as stumped regarding ourselves as Aristotle. Is it fair to say that our Thespians would suffer the same millennial myopia?

Would they have their own version of our interminable philosophy of the soul?



[1] The eliminativism at issue here is meaning eliminativism, and not, as Stich, Churchland, and many others have advocated, psychological eliminativism. But where meaning eliminativism clearly entails psychological eliminativism, it is not at all obvious that psychological eliminativism entails meaning eliminativism. This was why Stich found himself so perplexed by the implications of reference (see his Deconstructing the Mind, especially Chapter 1). To assume that folk psychology is a mistaken theory is to assume that folk psychology is representational, something that is true or false of the world. The critical eliminativism espoused here suffers no such difficulty, but at the added cost of needing to explain meaning in general, and not simply commonsense human psychology.

[2] See Kathryn Denning’s excellent “Social Evolution in Cosmic Context.”

[3] Nicolas Rescher, for instance, makes hash of the time-honoured assumption that aliens would possess a science comparable to our own by cataloguing the myriad contingencies of the human institution. See Finitude, 28, or Unknowability, “Problems of Alien Cognition,” 21-39.

[4] Stellar parallax, on this planet at least, was not measured until 1838.

