Three Pound Brain

No bells, just whistling in the dark…

Month: August, 2013

Teenage Incompatibility Intuitions

by rsbakker

I’m not sure what to make of this argument, or why it strikes me as powerful. I’m hoping people will take me to task on Conscious Entities, where I pulled it out of my ass. But I thought I would post it here as well. I suppose I’ve been making (pulling?) it for years, but it seems to have more (heft?) bite than usual…

At the root of the problem of free will is what might be called an ‘incompatibility intuition.’ The biomechanical nature of the brain seems to contradict our metacognitive sense of ‘free will.’

Here’s an observation you don’t see that often: The incompatibility intuition is so direct that teenagers regularly grasp it without a single philosophy class, and yet it takes years of specialized training to follow, let alone make, a case for discounting that intuition.

Here’s another observation: The human capacity to rationalize what we cherish is well-nigh bottomless in fuzzy conceptual contexts.

Here’s my theory: The human brain cannot solve the inverse problem of itself, and so must rely on heuristics, ways to solve issues of behavioural provenance in a manner that neglects the natural facts of that provenance. ‘Free will’ is one of those heuristics. Since it constitutes a way to understand behavioural provenance absent information regarding its biomechanistic provenance, it is, not surprisingly, incompatible with reflection on that information–thus the incompatibility intuition. It was nothing but a rule of thumb to begin with. Why should we expect it to apply to empirical contexts?

But here’s the rub: It takes years of specialized training to understand this as well! But it does have the virtue of explaining… teenage incompatibility angst.


Attention All Attention Skeptics

by rsbakker

Wayne Wu, this month’s featured scholar on Brains, has another fascinating round of posts up on the topic of attention. In “What is Attention?” he mentions how he seems to be bumping into more and more attention skeptics, people who either doubt the cognitive science research community’s ability to find any consensus definition of attention, or doubt that attention exists at all.

“How in the world could that possibly be?” he asks.

As it turns out, this question is largely rhetorical for Wu, so I thought I would take a preliminary run at an answer. I should note that the research and literature on this topic runs very deep, and that I’m only conversant with the broad issues. But Wu’s argument amounts to what Anthony Chemero calls a ‘Hegelian argument’ in cognitive science: a nonempirical attempt to regiment empirical priorities. It is philosophical, through and through.

Wu begins with what he takes to be a consensus starting point: the tasks performed by attention in various modalities of perception. These tasks, he points out, provide the experimental paradigms generally used to research attention, be it visual search or spatial cuing or object tracking and so on. As he writes, “for each of these attention paradigms, there is a fundamental assumption, namely that the task defines some relevant targets, and that where the subject selects those targets to perform the task, the subject is attending to the target.” What he wants to argue is that an implicit definition of attention is built into the very structure of these experimental paradigms. Adopt any one of them, Wu argues, and you are tacitly endorsing a sufficient condition of the form, If S selects X for task T, then S attends to X.
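
Rendered in bare first-order terms (my own gloss, not Wu’s notation), the condition is simply:

```latex
\forall S \,\forall X \,\forall T \;\bigl(\mathrm{Selects}(S, X, T) \rightarrow \mathrm{Attends}(S, X)\bigr)
```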

This, Wu thinks, warrants prioritizing task-based approaches to attention over neural-based approaches, simply because, as he puts it, “to investigate each of these phenomena, neuroscientists must use task-based attention paradigms where by defining a task and a target, they can carefully control their subjects’ behavior and, where the subject correctly performs the task, they can infer that their subjects are attending as they should.” In other words, they must endorse his sufficient condition to even get their research off the ground.

I admit, it sounds pretty ironclad, but only, as I hope to show, because of the role played by neglect.

What fascinated me reading this particular argument for the first time was the way it recapitulates the structure of so many arguments against so-called ‘eliminativist’ approaches in cognitive science. Wu’s terms obscure this fact, but one need only review the concepts he employs to describe his ‘task-based’ approach to see that he is giving us a broadly intentional construal of the ‘general experimental paradigm,’ one where experimenters ‘control’ behaviour such that subjects perform as they ‘should.’ What he calls the ‘neural-based’ approach to attention, on the contrary, is primarily mechanical in emphasis, bent on describing and explaining what is actually happening in our brains when we ‘attend to X.’ What Wu is arguing, in effect, is that any mechanical approach to the question of attention conceptually and operationally presupposes the intentional structure of his general experimental paradigm–and obviously so.

But this claim is far from obvious. Given Blind Brain Theory (BBT), for example, the ‘obvious’ way to approach the question of attention is directly opposite Wu’s. On BBT, our intuitive conception of attention is blinkered to the degree it turns on metacognition. Our brains are all but opaque to our brains, thanks to their astronomical complexity, among other things. Given considerations such as these, it is almost certainly the case that the intentional characterization employed by Wu in arguing his task-based approach involves drastic heuristic simplifications of what is actually going on. Sure, these intentional characterizations of the ‘general experimental paradigm’ suit the needs of scientists and subjects alike in any number of communicative contexts, but their applicability to questions like ‘What is attention?’ is by no means clear. What if it’s the case that the very information neglected to facilitate Wu’s ‘task stance’ is the very information required to answer the question of attention?

The heuristic nature of the task stance means neural-based approaches do not presuppose task-based approaches either conceptually or operationally–any more than this blog post presupposes stenographer’s shorthand. Sure, shorthand efficiently discharges a number of functions within a comparatively restricted domain, namely, those problem ecologies (such as court proceedings) it is adapted to solve. This blog post, however, is not one of those ecologies. Precisely the same, I think, can be said of what I’m calling Wu’s ‘task stance,’ the big difference being that its ‘applicability conditions’ are nowhere near so clear. The task stance is the most economical way to conceive the experimental scene because it is the most economical way to conceive human action. But why should either of those economies apply to the empirical question of attention?

This is where neglect comes in. You see, what makes Wu’s task-based interpretation of the ‘general experimental scene’ seem the obvious ‘only game in town’–what makes it paradigmatic–is simply the fact that the neural-based description of that same experimental scene remains unknown. As the only way Wu can think of to describe the scene, it seems to become the only way for the scene to be conceived, or ‘paradigmatic.’ The issue of its heuristic applicability to the question, What is attention? is accordingly lost. As the operational kernel of every attempt to understand attention, applicability seems to be implicitly given.

So the reason Wu’s argument leapt out for me, why the task stance, far from seeming the only game in town, struck me as a parochial means of understanding the experimental scene, is simply because BBT has, for quite some time now, had me looking at psychological experiments mechanistically, as information-extracting meta-systems consisting of the regimented interactions of various subsystems, what we intuitively think of as ‘researchers’ and ‘subjects’ and ‘experimental apparatuses.’ As a result, I now generally look at intentional characterizations like Wu’s against this baseline, as information-neglecting heuristics, not so much accurate descriptions of what is going on as economical ways to navigate what is going on given certain problem contexts.

From this biomechanical standpoint, Wu is attempting to tackle the question of attention ‘on the cheap.’ But unless one wants to argue that intentional characterizations are not heuristic, or that they are heuristic but somehow remain applicable to the theoretical question of what attention is (despite neglecting, as intentional heuristics seem to do, the very causal information so much scientific explanation requires), then it would seem that the biomechanical way of understanding attention is the only way of knowing what it is apart from our nebulous experiences. And since knowing what it is within our experience turns on what attention is apart from that experience, it would seem that even this ‘intuitive,’ or ‘phenomenological’ aspect of attention, requires the priority of neural-based approaches to be understood.

If it exists at all.

My guess is that like other folk faculties, attention will be progressively revealed to be more fractionate, that its intuitive lack of internal structure–its simplicity–will be shown to be a byproduct of neglect. Attention, as we think we presently know it, is actually quite easy to doubt when you look at research into other faculties like memory and reason. The more we learn, the more complicated attention becomes, and the more informatically impoverished intuition is shown to be. Memory isn’t an aviary. Reason isn’t a charioteer battling unruly moral and immoral horses. Odds are, attention isn’t a selective spotlight. We should expect fractionation, surprises–continuous complication. And even if you have faith in the theoretical accuracy of metacognition, the bottom line is you simply don’t know where those intuitions sit on the information food chain. Nothing need be accurate about our intuitions of brain function for the brain to function. Given this, using them to conceptually and operationally anchor an empirical research program smacks less of necessity than a leap of faith.

The Decline and Fall of the Noocentric Empire

by rsbakker

The Semantic Apocalypse debate winds on, with Ben Cain over at Rants Within the Undead God, and Stephen Craig Hickman over at noir-realism. The irony is that although we three actually don’t disagree about that much, the disputed remainder is nothing less than the whole of human aspiration since the Enlightenment.

Philosophically wounded souls disputing existential salvage rights? Or narcissistic dogs fighting over hyperintellectualized scraps?

One way to look at what I’m arguing is in terms of the ‘third variable problem’ in psychology. When presented with a statistical correlation, say between the availability of contraception and a high rate of teen promiscuity, the impulse is to assume some causal connection between the two, even though any number of third variables–‘unknown unknowns‘–could be responsible, say, the ubiquity of pornography or what have you. Once again, it comes down to the invisibility of ignorance, the way the availability of information constrains cognition. Absent information pertaining to third variables, cognition generally operates as if no such information existed, not even as an absence–precisely as we should expect, given that we are biomechanisms.
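
To make the point concrete, here’s a minimal simulation (a toy of my own devising, with arbitrary numbers): a hidden third variable z drives two quantities that have no direct causal connection whatsoever, yet their correlation looks decisive until z is explicitly accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The hidden third variable: it drives both observed quantities.
z = rng.normal(size=n)

# x and y share no direct causal link; each is just z plus independent noise.
x = z + rng.normal(scale=0.5, size=n)
y = z + rng.normal(scale=0.5, size=n)

# The naive correlation looks like evidence of a causal connection...
print(np.corrcoef(x, y)[0, 1])  # ~0.8

# ...but regressing z out of each variable leaves essentially nothing.
x_resid = x - z * (np.cov(x, z)[0, 1] / np.var(z))
y_resid = y - z * (np.cov(y, z)[0, 1] / np.var(z))
print(np.corrcoef(x_resid, y_resid)[0, 1])  # ~0.0
```

Absent any measurement of z, nothing in the data so much as hints that the correlation is spurious, which is precisely the point about the invisibility of ignorance.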

So I think we all agree on the following three premises:

1) Our traditional notion of the human (the ‘manifest image’) substantially turns on the information available to metacognition.

2) Historically, information regarding the human available to metacognition has been dramatically constrained.

3) The sciences of the brain are presently generating immense quantities of hitherto unavailable information regarding the human.

The dispute lies in our respective assessments of this situation. For my money, the most crucial claim is the following:

4) Absent information pertaining to the absence of information, cognition assumes the adequacy of the information available, no matter how inadequate it may be.

This is what I generally call ‘sufficiency’ (or elsewhere, the ‘Principle of Informatic Adumbration’ (PIA)). What sufficiency essentially means is that metacognition is very nearly theoretically useless as a mode for cognizing what we are. Certainly it discharges a myriad of functions–it is a metabolically expensive adaptation after all–but the provision of accurate theoretical cognition of ‘subjectivity’ is almost certainly not among them. Given the informatically impoverished status of metacognition, or (2), sufficiency means that the flood of information asserted by (3) could reveal a potentially bottomless parade of third-variable confounds despite any intuitions to the contrary, that the first person could genuinely feel like the most certain, indubitable thing in the world, and still be utterly illusory. This means that metacognition, contrary to the assumption of the tradition, is no better placed than cognition more generally when it comes to theoretically modelling nature absent the institutional prostheses of science.

And this suggests that the flood of scientific information in the domain of the human is going to do what such floods have done in every other domain of human inquiry: wash everything away, and reveal something utterly indifferent to our cherished traditional conceits. Something inhuman.

Mental Content R.I.P. (1977-2013)

by rsbakker

Fred Dretske’s recent death has got me rereading his Knowledge and the Flow of Information, and thinking how strange projects like his, projects that take the ‘mental’ at face value, will likely seem in the near future. The thought I want to consider here is the possibility that as soon as the question of aboutness is tied to the question of information, information is no longer understood (which isn’t to say that I understand it!).

The argument is pretty straightforward:

1) Aboutness is a cognitive heuristic, a highly schematic way to conceive the relation between ourselves and our environments.

2) Heuristics are domain specific cognitive devices that provide computational efficiencies by exploiting specific information structures in our environments.

3) Aboutness is a domain specific cognitive device.

So when Fred Dretske defines information as information about, the first question he needs to ask is whether aboutness is applicable to what he takes to be his problem ecology: basically the naturalization of ‘mental content.’ The only way to answer this question is to inquire into what the primary problem ecology of aboutness could be. And one way to determine this is to simply look at the information we neglect when we think in terms of aboutness. As it turns out, the information neglected happens to be the grist of naturalistic explanation: causal information. The brain’s own complexity renders the causal cognition of its environmental comportments–‘beliefs’–impossible. Thus the heuristic convenience of aboutness.

So essentially Dretske is trying to naturalize mental content using a heuristic that systematically neglects all the information relevant to the naturalization of mental content.

Not surprisingly, he runs into problems.

Now we know for a fact that we are causally related to our environment, but this issue of being intentionally related, of being a subject in a world of objects, has proven to be a tough nut to crack, philosophically and scientifically. The problem, as I’ve suggested elsewhere, is that since we have no metacognitive access to the ecological constraints pertaining to traditionally fundamental heuristics such as aboutness, we assume their universality, and so continually misapply what are parochial cognitive devices to ecologies they are simply not adapted to. This continual misapplication forms the discursive bulk of philosophy.

Dretske’s explanandum, mental content, is a metacognitive posit. Given medial neglect (the brain’s abject inability to accurately cognize its neuromechanical functions) metacognition must simply make do, or ‘go heuristic.’ Mental content, you could say, is simply the best the brain can make of its environmental comportments given the information and resources available. Any natural explanation of mental content, therefore, will require some account of this information and resources–the very thing provided by BBT. We must, in other words, understand just what it is we are trying to explain before we can have any hope of explaining it.

So with reference to causal theories of mental content like Dretske’s, the problem always comes back to relevance, the question of sorting content-determining from non-content-determining causes, for a reason. What fixes the about relation, such that it makes sense to say that X represents a dog, as opposed to a dog-or-a-fox-in-the-dark, and so on? If we look at mental content as a mere metacognitive posit, a schematic way to grasp an astronomically complex causal process after the fact, then this question is moot. Aboutness and mental content are simply kinds of metacognitive shorthand having everything to do with the problematic way the brain relates to itself and very little to do with the way the brain relates to the world. Given the post hoc, heuristic status of aboutness, all that causal complexity is simply ‘taken for granted.’ From our informatically impoverished metacognitive standpoint, in other words, X just represents a dog, not a fox in the dark, or anything else. As soon as we try to nail down the sufficient conditions for X representing dogs and dogs only we’re trying to explain a heuristic circumvention of information–aboutness–in terms of the information it circumvents–causal mechanism–as if no information were circumvented at all.

Content determination is the primary problem afflicting causal theories of mental content because ‘mental content’ is literally a metacognitive tool for understanding our environmental comportments in the absence of information pertaining to content determination!

The Political Event Horizon

by rsbakker

Cognition Obscura (II) is still up on the blocks. I’m literally on the final stretch of TUC – I’m guessing I’ll have the (monstrously huge!) first draft completed in two weeks’ time – and it has been tyrannizing my output. But still, I’ve resolved to post something on TPB at least once a week, no matter how slim or anodyne.

A few months back I had an opportunity to talk to Paul Glimcher at the Centre for Theoretical Neuroscience at the University of Waterloo about the prospect of his work being used by corporations and other large institutions to more effectively steer consumer (or voter) decision-making. He actually never answered the question, electing instead to critique Neurofocus, the marketing giant I had raised as an example, and whose work I have cited several times here on TPB. (I raised this same question to Dennett a couple of years ago, and strangely enough, he elected to do the same thing, which was to imply that the methods and technologies employed by Neurofocus, if they work at all, are about to be superseded by the real thing). Glimcher had presented a paper reviewing the way his lab had demystified the so-called ‘choice paradox’ (popularized by Barry Schwartz’s The Paradox of Choice: Why More is Less), effectively explaining in neuromechanical terms why a surfeit of alternatives, or greater degrees of consumer freedom, tends to make us more miserable. The full story is complicated, but it basically boils down to the neural architecture of the brain and the way choice relevant contextual clutter actually dims the clarity of desired options. But I urge everyone to take a close look at what Glimcher is up to – his thumbnail sketch of neuroeconomics is outstanding – and to consider what the science might look like in fifty years’ time, given the kinds of resources this field of research commands.

I mention this because I seem to be bumping into Nick Land and ‘Accelerationism‘ all over the web, this notion that arcane philosophical bickering – of the kind found here – will actually have any role in the social and economic upheavals to come. The big problem I have with the debate – as far as I understand it, at least – is that it remains mired in what might be called ‘continuity bias,’ and so has no real grasp on the nature of our collective dilemma. Everyone seems to think that we are dealing with a technologically mediated social and economic transformation, when it seems clear to me, at least, that we are actually witnessing the beginning of a biological revolution, the next great twist in Evolution itself. So I thought it worthwhile to repost, “The Posthuman as Evolution 3.0”:

So for years now I’ve had this pet way of understanding evolution in terms of effect feedback (EF) mechanisms, structures whose functions produce effects that alter the original structure. Morphological effect feedback mechanisms started the show: DNA and reproductive mutation (and other mechanisms) allowed adaptive, informatic reorganization according to the environmental effectiveness of various morphological outputs. Life’s great invention, as they say, was death.

This original EF process was slow, and adaptive reorganization was transgenerational. At a certain point, however, morphological outputs became sophisticated enough to enable a secondary, intragenerational EF process, what might be called behavioural effect feedback. At this level, the central nervous system, rather than DNA, was the site of adaptive reorganization, producing behavioural outputs that are selected or extinguished according to their effectiveness in situ.

For whatever reason, I decided to plug the notion of the posthuman into this framework the other day. The idea was that the evolution from Morphological EF to Behavioural EF follows a predictable course, one that, given the proper analysis, could possibly tell us what to expect from the posthuman. The question I had in my head when I began this was whether we were groping our way to some entirely new EF platform, something that could effect adaptive, informatic reorganization beyond morphology and behaviour.

First, consider some of the key differences between the processes:

Morphological EF is transgenerational, whereas Behavioural EF is circumstantial – as I mentioned above. Adaptive informatic reorganization is therefore periodic and inflexible in the former case, and relatively continuous and flexible in the latter. In other words, morphology is circumstantially static, while behaviour is circumstantially plastic.

Morphological EF operates as a fundamental physiological generative (in the case of the brain) and performative (in the case of the body) constraint on Behavioural EF. Our brains limit the behaviours we can conceive, and our bodies limit the behaviours we can perform.

Morphologies and their generators (genetic codes) are functionally inseparable, while behaviours and their generators (brains) are functionally separable. Behaviours are disposable.
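
For what it’s worth, here’s a toy sketch of the contrast (entirely my own illustration, with arbitrary parameters): one process can only reorganize between generations, via mutation and selection over a population; the other reorganizes within a single ‘lifetime,’ keeping outputs that work and extinguishing those that don’t.

```python
import random

random.seed(1)
TARGET = 0.8  # the environment's 'optimal' trait/behaviour value

def fitness(value):
    return -abs(value - TARGET)

# Morphological EF: the trait is fixed for a lifetime; adaptive
# reorganization happens only between generations, via mutation + selection.
def morphological_ef(generations=200, pop_size=50):
    population = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Each survivor leaves two mutated offspring; the rest die off.
        population = [v + random.gauss(0, 0.02) for v in survivors for _ in (0, 1)]
    return max(population, key=fitness)

# Behavioural EF: a single organism adjusts its output within one lifetime,
# keeping behaviours that work and extinguishing those that don't.
def behavioural_ef(trials=200):
    behaviour = random.uniform(0, 1)
    for _ in range(trials):
        candidate = behaviour + random.gauss(0, 0.05)
        if fitness(candidate) > fitness(behaviour):  # selected in situ
            behaviour = candidate
    return behaviour

print(morphological_ef())  # converges near TARGET across many generations
print(behavioural_ef())    # converges near TARGET within one 'lifetime'
```

Both processes find the same environmental target; the difference the framework trades on is simply where and when the adaptive reorganization happens: across deaths in the genome, or across trials in the nervous system.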

Defined in these terms, the posthuman is simply the point where neural adaptive reorganization generates behaviours (in this case, tool-making) such that morphological EF ceases to be a periodic and inflexible physiological generative and performative constraint on behavioural EF. Put differently, the posthuman is the point where morphology becomes circumstantially plastic. You could say tools, which allow us to circumvent morphological constraints on behaviour, have already accomplished this. Spades make for deeper ditches. Writing makes for bottomless memories. But tool-use is clearly a transitional step, a way to accessorize a morphology that itself remains circumstantially static. The posthuman is the point where we put our body on the lathe (with the rest of our tools).

In a strange, teleonomic sense, you could say that the process is one of effect feedback bootstrapping, where behaviour revolutionizes morphology, which revolutionizes behaviour, which revolutionizes morphology, and so on. We are not so much witnessing the collapse of morphology into behaviour as the acceleration of the circuit between the two approaching some kind of asymptotic limit that we cannot imagine. What happens when the mouth of behaviour, after digesting the tail and spine of morphology, finally consumes the head?

What’s at stake, in other words, is nothing other than the fundamental EF structure of life itself. It makes my head spin, trying to fathom what might arise in its place.

Some more crazy thoughts falling out of this:

1) The posthuman is clearly an evolutionary event. We just need to switch to the register of information to see this. We’re accustomed to being told that dramatic evolutionary changes outrun our human frame of reference, which is just another way of saying that we generally think of evolution as something that doesn’t touch us. This was why, I think, I’ve been thinking the posthuman by analogy to the Enlightenment, which is to say, as primarily a cultural event distinguished by a certain breakdown in material constraints. No longer. Now I see it as an evolutionary event literally on par with the development of Morphological and Behavioural EF. As perhaps I should have all along, given that posthuman enthusiasts like Kurzweil go on and on about the death of death, which is to say, the obsolescence of a fundamental evolutionary invention.

2) The posthuman is not a human event. We may be the thin edge of the wedge, but every great transformation in evolution drags the whole biosphere in tow. The posthuman is arguably more profound than the development of multicellular life.

3) The posthuman, therefore, need not directly involve us. AI could be the primary vehicle.

4) Calling our descendants ‘transhuman’ makes even less sense than calling birds ‘transdinosaurs.’

5) It reveals posthuman optimism for the wishful thinking it is. If this transformation doesn’t warrant existential alarm, what on earth does?

Okay, so that might not be my most lucid post. But the idea is pretty straightforward: physiology, the primary enabling constraint of behaviour, is about to fall into the clutches of behaviour. The primary enabling constraint of behaviour is in the process of becoming a product of behaviour. Think of the combinatorial explosion brought about by our increasing ability to overcome environmental constraints on our behaviour. The combinatorial explosion to come, I think it’s fair to say, handily lies beyond our ability to cognize. There simply is no horizon of expectation that we can depend on…

So what does this mean for politics? What does ‘politics’ mean for that matter? Is a ‘post-posterity politics’ – a politics shorn of expectation – even possible? Or are all politics simply palliative at this point? Are we marooned with a negative or apophatic politics, a kind of quietistic intellectual exercise where we inventory all the things that politics can no longer be? Or is there time yet for the kind of cultural and political ‘triage’ I endorse, the demand that one immediately engage outgroup interests in their own cultural idioms, because today is likely already too late?

Cognition Obscura (I)

by rsbakker

 

On July 4th, 1054, Chinese astronomers noticed the appearance of a ‘guest star’ in the proximity of Zeta Tauri, one that remained visible for nearly two years before becoming too faint to be detected by the naked eye. The Chaco Canyon Anasazi apparently also witnessed the event, leaving behind this famous petroglyph:

[Image: Chaco Canyon petroglyph thought to depict the 1054 supernova]

Centuries would pass before John Bevis would rediscover it in 1731, as would Charles Messier in 1758, who initially confused it with Halley’s Comet and decided to begin cataloguing ‘cloudy’ celestial objects–or ‘nebulae’–to help astronomers avoid his mistake. In 1844, William Parsons, the Earl of Rosse, made the following drawing of the guest-star-become-comet-become-cloudy-celestial-object:

[Image: William Parsons’ 1844 drawing of the object]

It was on the basis of this diagram that he gave the Chinese guest star–what has since become the most studied extra-solar object in astronomical history–its contemporary name: the ‘Crab Nebula.’ When he revisited the object with his 72-inch reflector telescope in 1848, however, he saw something quite different:

[Image: Parsons’ 1848 observation through his 72-inch reflector]

Then in 1921, John Charles Duncan was able to discern the expansion of the Crab Nebula using the revolutionary capacity of the Mount Wilson Observatory to produce images like this:

[Image: John Charles Duncan’s 1921 Mount Wilson photograph of the Crab Nebula]

And nowadays, of course, we are regularly dazzled not only by photographs like this:

[Image: Hubble Space Telescope image of the Crab Nebula]

generated by Hubble, but those produced by a gamut of other observational platforms as well:

[Image: composite views of the Crab Nebula from multiple observational platforms]

The tremendous amount of information produced has provided astronomers with an incredibly detailed understanding of supernovae and nebula formation.

What I find so interesting about this progression lies in what might be called the ‘structure of informatic disclosure.’ What do I mean by this? Well, there are the myriad ways the accumulation of data feeds theory formation, of course, how scientific models tend to become progressively more accurate as the kinds and quantities of information accessed increase. But what I’m primarily interested in is what happens when you turn this structure upside down, when you look at the Chinese ‘guest star’ or Anasazi petroglyph against the baseline of what we presently know. What assumptions were made and why? How were those assumptions overthrown? Why were those assumptions almost certain to be wrong?

Why, for instance, did the Chinese assume that SN1054 was simply another star, notable only for its ‘guest-like’ transience? I’m sure a good number of people might think this is a genuinely stupid question: the imperialistic nature of our preconceptions seems to go without saying. The medieval Chinese thought SN1054 was another star rather than a supernova simply because points of light in the sky, stars, were all they knew. The old provides our only means of understanding the new. This is arguably why Messier first assumed the Crab Nebula was another comet in 1758: it was only when he obtained information distinguishing it (the lack of visible motion) from comets that he realized he was looking at something else, a cloudy celestial object.

But if you think about it, these ‘identification effects’–the ways the absence of systematic differences making systematic differences (or information) underwrites assumptions of ‘default identity’–are profoundly mysterious. We’ve gone from an enigmatic prick of light to an intimate understanding of nebula dynamics and structure. Our cosmological understanding has been nothing if not a process of continual systematic differentiation, or ever-increasing resolution in the polydimensional sense of the natural. In a peculiar sense, our ignorance is the fundamental medium here, the ‘stuff’ from which the distinctions pertaining to actual cognition are hewn.