Three Pound Brain

No bells, just whistling in the dark…

Month: July, 2013

Cognitive Deficits Predicted by the Blind Brain Theory

by rsbakker

One of the reasons that BBT has such a fierce hold on my imagination despite the anomie it so regularly inspires has to do with the way it allows for the identification and parsimonious explanation of a number of cognitive deficits that have plagued humanity since the beginning of recorded history. The fact that no one has been able to tackle them in a comprehensive manner despite their manifest nature (the fact that we encounter these effects all the time), combined with the way they simply fall out of the explanatory resources provided by BBT, makes me fear/hope I’m onto something quite profound over and above the way it seems to make hash of a broad range of more abstract perplexities.

Of course, all these cognitive deficits need to be ecologically qualified: mistaking the fact of the matter in a manner that economizes neurocomputational loads can generate cognitive efficiencies as well. The very neglect that renders heuristics inapplicable to the bulk of problem ecologies renders them that much more effective when it comes to the subset of problem ecologies they are adapted to. It’s not an all or nothing affair, and as I have found discussing the philosophical implications of these ‘cognitive illusions,’ the question of applicability is where the primary battleground lies.

All of these effects are closely related, and I cannot help but think that any dedicated research project would lead to substantial conceptual calving as various ‘problem-ecologies’ are identified, if not the discovery of some more comprehensive way to regiment the domain altogether. If you happen to be a psychology graduate student, then I invite you to begin brainstorming possible experimental designs–or failing that, redefinitions that would make experimental operationalization more feasible.

.

ORIGINATION EFFECTS: Informatic neglect leads metacognition to intuit causal discontinuities between complex systems and their environments. The behaviour of the resulting systems seems to arise ‘ex nihilo’ and thus to be noncausally constrained, leading to metacognitive posits such as ‘rules,’ ‘reasons,’ ‘goals,’ ‘desires,’ and so forth.

OUT-OF-PLAY EFFECTS: Informatic neglect leads us to intuit the absence of some dimension of information as immunity to that dimension. So what makes geocentrism so compelling, for instance, is the absence of any information pertaining to the earth moving in space. The earth simply lies outside the realm of movement, or ‘out of celestial play.’ The same might be said regarding that philosophical darling of darlings, the ‘a priori’–not to mention ‘transcendentalism’ more generally. Since metacognition cannot access any information pertaining to neurofunctional precursors of, say, mathematical, logical, or transcendental operators, and since, just as importantly, metacognition has no access to information pertaining to this lack, the default assumption is that these operators somehow ‘lie outside’ the natural.

ONLY-GAME-IN-TOWN EFFECTS: Since information inaccessible to our brain simply makes no difference to neurofunctionality, the lack of information pertaining to the insufficiency of information generates the illusion of implicit sufficiency, the default assumption that the information available is all the information required. What Daniel Kahneman (2012) rather cumbersomely calls WYSIATI (or ‘What-You-See-Is-All-There-Is’) in his work represents a special case of this effect. As do what might be called IGNORANCE-ANCHORED CERTAINTY EFFECTS, the way humans seem to be so prone to think a given interpretation is the one and only true interpretation when they are ignorant of alternative interpretations. So this would be why, for instance, Kant was convinced of the apodictic nature of his transcendental deductions, even though hindsight has rendered his conviction more than a little preposterous. ‘Hindsight is 20/20,’ as they say, because assessing sufficiency becomes progressively more viable after the informatic dust has settled.

SIMPLICITY/IDENTIFICATION EFFECTS: Informatic neglect leads us to intuit complexes as simples. The most basic experimental example of this is found in the psychophysical phenomenon of ‘flicker fusion,’ the way oscillating lights and sounds will be perceived as continuous when the frequency passes beyond certain thresholds. The fact is, all of experience, cognitive or perceptual, is characterized by such ‘fusion illusions.’ In the absence of information–or difference making differences–we consciously experience and/or cognize identities, which is to say, mistake matter of fact heterogeneities for homogeneities. Still frames become ‘moving pictures.’ Ants on the sidewalk become spilled paint. A bottomless universe becomes a local celestial sphere. Whole cultures become cartoon caricatures. Brains become minds. And so on.


The Difference between Silence and Lies

by rsbakker

So Lisa Bortolotti has been posting on Brains on the issue of delusion, self-deception, and confabulation for the past couple weeks, and this has got me thinking about some old TPB themes in light of some of my more recent mechanistic speculations vis-à-vis BBT. What follows is just a thumbnail sketch of how I see the issue that Bortolotti is presently researching: the question of whether confabulation can actually occasion epistemic benefits. She cites the famous Nisbett and Wilson experiment where individuals were first asked to assess the quality of ‘different’ pairs of socks and then to explain their evaluations, which they did, even though the socks were in fact identical. To date, a veritable mountain of evidence supports the claim that our cognitive processes cannot be consciously cognized, that our brains are blind to themselves at least in this one important respect.

So what do I think is going on? Human reproductive success turns on status. Our status substantially turns on our reliability, which in turn frees up the cognitive resources of others. It’s no accident that trust and taking-for-granted are so closely linked: a crucial part of trusting someone is never having to burn calories thinking about them.

This suggests substantial evolutionary pressure for tools dedicated to assessing other-reliability and promoting self-reliability. Why are we so prone to rationalize? To impress other brains with our reliability, and so cue them to pursue other problems–to trust.

I find this interesting because it provides a roughly mechanistic way to characterize ‘reasons’ as ‘reliability indicators,’ as a means to redirect the computational resources of other brains away from the problem of what is the case (when it comes to shared environments) as well as away from the problem of our brain, the potential threat that our own brain poses to the reproductive success of other brains. Our gift for confabulatory rationalization is the result of evolutionary dividends accruing to those brains that could spare other brains the trouble of assessing our reliability. But why is it confabulatory? In other words, if we’re prone to lie all the goddamn time to better profit from our ingroup compatriots, why should we be clueless about it? Robert Trivers has recently proposed a ‘cognitive load thesis’: we evolved confabulation because it’s less work, and less work makes for better deception. “We hide reality from our conscious minds,” he writes, “the better to hide it from onlookers” (The Folly of Fools, 9). But this presumes that the ‘reality’ was ever available–or ‘unhidden’–in the first place, when this is almost certainly not the case. Why evolve the computationally exorbitant capacity to track ‘motives’ in our brain when simply making up even better motives is so much easier?

So the provision of reliability indicators (reasons) provides the informatic basis for managing the reliability estimates made by others. The brain is ‘black-boxed,’ introducing what might be called ‘dark reliability,’ a suite of dispositional tendencies whose reliability can only be assessed post-behaviourally. The provision of ad hoc reliability indicators (confabulated reasons) provides accessible post-behavioural information that the brain’s supervisory systems can then use to better manage the assessments of others. So cowards will brag about courage, increasing the overall tendency to do courageous things, given the influence of supervisory systems devoted to maximizing status. Thus one can talk about the role confabulation plays in ‘reliability bootstrapping.’ Some information, whether accurate or not, is more valuable than no information because no information has no mechanical impact whatsoever, and so is useless for reliability bootstrapping. The primary ‘epistemic’ role of confabulation, on an account like this, simply would be to give metacognition something to be accurate ‘about.’

You might expect a correlation between unreliability and the tendency to provide reliability indicators. The more unreliable an individual is, the more prone they will be to rationalize. It certainly seems that we’re inclined to paper over breaches of dark reliability with excessive verbiage: Shakespeare’s Falstaff is a type for good reason. Is there any empirical evidence of this?

Russell Smith Shrugged

by rsbakker

Holidays are upon me. But an old time foil o’ mine, Russell Smith, has managed to put me into an old time mood with a piece in today’s Globe and Mail.

The topic, predictably, is genre versus literature. And the argument, predictably, is the standard ‘argument’ given by those at the high end of any cultural authority gradient: Those on the bottom have no reason to bellyache because there is no bottom, the implication being, of course, that really, when all is said and done, ‘they’re just jealous.’

Smith is confused by what, for him, amounts to a mythical injustice. “Every day,” he writes, “I read angry emails and posts from sci-fi writers complaining about the terrible snobbery and irrelevance of the literary establishment which still doesn’t give major awards to the speculative or fantastical, or give it enough review space in the books pages of newspapers.” Now group-specific dissatisfaction of any sort always begs for some kind of consideration of motivations. But Smith glides over this question, perhaps realizing the trickiness that awaits. Implying ‘They’re just jealous!’ is one thing, but actually writing as much would place him in some uncomfortable company. So he simply declares that he has never heard anyone in his ingroup explicitly dismiss genre–as if only those who explicitly embrace bigotry can be bigots. And as if he and his cohort don’t regularly deride the ignorant masses via their ignorant tastes. The guy doubles as a fashion columnist, after all.

Because make no mistake, Russell Smith is a cultural bigot through and through–and of the worst kind, in fact. He is a status quo apologist convinced he has nothing to apologize for, who feels hurt and bewildered and, quite frankly, annoyed by the deluge of small-minded belly-aching he has to listen to. And since he belongs to an ingroup that identifies itself as ‘critical’ and ‘open’–namely, as all those things each and every ingroup is not–he simply assumes that he has to be right. His is the enlightened institution. There’s no need to ask the motivation question, no need to consider the possibility that the perception of cultural inequity is all that cultural inequity amounts to (even though only writers who primarily identify as ‘literary’ win the awards and the funding).

To get a sense of how bad his argument is, consider:

There is a paradox at the heart of these complaints: They proclaim the artificiality of genre divisions while simultaneously demanding respect for a specific one. Are we to abolish genres or privilege one? Either you want a level playing field or you don’t.

Sound appealing? Sensible? Well, let’s spice up the stakes a bit, see if it doesn’t sound more familiar:

There is a paradox at the heart of these complaints: They proclaim the artificiality of racial divisions while simultaneously demanding respect for a specific one. Are we to abolish races or privilege one? Either you want a level playing field or you don’t.

He doesn’t get it because he has no need to get it, because he belongs to what remains, in far too many cultural corners, the privileged ingroup. “Why does this argument even need to be made?” he writes. After all, so very many literary novels contain magical or surreal or futuristic elements, such as “Invisible Cities by Italo Calvino, The Tin Drum by Günter Grass, and Beloved by Toni Morrison.”

Apparently his people love our stuff when his people write about it.

Wake up. It’s about power, idiot, not the statistical distribution of tropes. It’s about who has it, and who don’t.

Unfortunately for us, mainstream literature is not nearly as irrelevant as it should be. It remains a fat, greasy parasite that continues to feed on far too much talent, continues to convince far too many bright and sensitive souls to turn their backs on their greater culture (in what is, without any doubt, the most momentous epoch in human history) all in the name of accumulating ingroup prestige within a socially obsolescent institution. All writers are post-posterity writers, nowadays, and if they truly want to walk their egalitarian, prosocial talk, then they need to reach out with their writing, self-consciously game the preference parsing algorithms that increasingly command our horizons of cultural exposure. In other words, they need to do the very opposite of what conservative apologists like Smith continually urge, which is to bury their heads in sand at the bottom of the hourglass.

“These category questions,” Smith writes, “are marketing ones, not literary ones.” Once upon a time, maybe, but certainly not anymore. If literature is as literature does, then what we call ‘literary’ today does precious little that can be called literary–thanks to marketing. The outgroup philistines that literary writers pretend to ‘challenge,’ let alone ‘illuminate,’ no longer stumble into their books, leaving only classroom captives to complete the literary circuit (with dead or moribund authors, no less). Literature describes a certain, transformative connection between writers and readers, and marketing just happens to be all about connecting buyers with sellers. Given that confirmation is the primary engine of consumer demand, literature is simply writing that jumps the tracks, that somehow, someway, finds itself in the wrong hands. The rest, as DFW would say, is fucking Entertainment. More apology.

The future of literature in the age of information technology lies with genre, plain and simple, with writers possessing the wherewithal to turn their backs on apologetic apparatchiks like Smith, and actually contribute to building the critical popular culture we will need to survive the strange, strange days ahead. The alternative–Smith’s alternative–is to preach to the choir, apologize and reinforce, cater to expectations–do all the things that ‘sellouts’ do–then to endlessly declare yourself a missionary of transformation. Quite a scam, I would say.