The Difference between Silence and Lies

by rsbakker

So Lisa Bortolotti has been posting on Brains on the issue of delusion, self-deception, and confabulation for the past couple of weeks, and this has got me thinking about some old TPB themes in light of some of my more recent mechanistic speculations vis-à-vis BBT. What follows is just a thumbnail sketch of how I see the issue that Bortolotti is presently researching: the question of whether confabulation can actually occasion epistemic benefits. She cites the famous Nisbett and Wilson experiment where individuals were first asked to assess the quality of ‘different’ pairs of nylon stockings and then to explain their evaluations, which they did, even though the stockings were in fact identical. To date, a veritable mountain of evidence supports the claim that our cognitive processes cannot be consciously cognized, that our brains are blind to themselves at least in this one important respect.

So what do I think is going on? Human reproductive success turns on status. Our status substantially turns on our reliability, which in turn frees up the cognitive resources of those who rely on us. It’s no accident that trust and taking-for-granted are so closely linked: a crucial part of trusting someone is never having to burn calories thinking about them.

This suggests substantial evolutionary pressure for tools dedicated to assessing other-reliability and promoting self-reliability. Why are we so prone to rationalize? To impress other brains with our reliability, and so cue them to pursue other problems, to trust.

I find this interesting because it provides a roughly mechanistic way to characterize ‘reasons’ as ‘reliability indicators,’ as a means to redirect the computational resources of other brains away from the problem of what is the case (when it comes to shared environments) as well as away from the problem of our brain, the potential threat that our own brain poses to the reproductive success of other brains. Our gift for confabulatory rationalization is the result of evolutionary dividends accruing to those brains that could spare other brains the trouble of assessing our reliability. But why is it confabulatory? In other words, if we’re prone to lie all the goddamn time to better profit from our ingroup compatriots, why should we be clueless about it? Robert Trivers has recently proposed a ‘cognitive load thesis’: we evolved confabulation because it’s less work, and less work makes for better deception. “We hide reality from our conscious minds,” he writes, “the better to hide it from onlookers” (The Folly of Fools, 9). But this presumes that the ‘reality’ was ever available, or ‘unhidden,’ in the first place, when this is almost certainly not the case. Why evolve the computationally exorbitant capacity to track ‘motives’ in our brain when simply making up even better motives is so much easier?

So the provision of reliability indicators (reasons) provides the informatic basis for managing the reliability estimates made by others. The brain is ‘black-boxed,’ introducing what might be called ‘dark reliability,’ a suite of dispositional tendencies whose reliability can only be assessed post-behaviourally. The provision of ad hoc reliability indicators (confabulated reasons) provides accessible post-behavioural information that the brain’s supervisory systems can then use to better manage the assessments of others. So cowards will brag about courage, and since supervisory systems devoted to maximizing status now have an explicit standard on record to enforce (getting caught out is costly), the brag increases the overall tendency to do courageous things. Thus one can talk about the role confabulation plays in ‘reliability bootstrapping.’ Some information, whether accurate or not, is more valuable than no information, because no information has no mechanical impact whatsoever, and so is useless for reliability bootstrapping. The primary ‘epistemic’ role of confabulation, on an account like this, simply would be to give metacognition something to be accurate ‘about.’
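
To make the mechanics concrete, here’s a minimal toy simulation in Python. Nothing in it comes from the literature: the Agent class, the 0.3 ‘flattery’ overshoot, and the 0.5 supervision weight are all arbitrary assumptions, chosen only to show the direction of the effect, namely that a confabulated indicator, once on record, can raise the rate of ‘reliable’ behaviour even though it tracks nothing.

```python
import random

random.seed(1)

class Agent:
    """Toy agent: a hidden disposition ('dark reliability') drives behaviour;
    a confabulated indicator, once stated, gives a supervisory process an
    explicit standard to pull behaviour toward."""

    def __init__(self):
        self.disposition = random.uniform(0, 1)  # hidden 'dark reliability'
        self.indicator = None                    # confabulated reason, once stated

    def confabulate(self):
        # The stated indicator flatters: it overshoots the hidden disposition.
        self.indicator = min(1.0, self.disposition + 0.3)

    def act(self, supervision=0.5):
        # Probability of a 'reliable' act; with an indicator on record,
        # supervision mixes the stated value into actual behaviour.
        p = self.disposition
        if self.indicator is not None:
            p = (1 - supervision) * p + supervision * self.indicator
        return random.random() < p

def reliable_rate(agents, trials=200):
    acts = [a.act() for a in agents for _ in range(trials)]
    return sum(acts) / len(acts)

agents = [Agent() for _ in range(500)]
before = reliable_rate(agents)
for a in agents:
    a.confabulate()  # 'cowards brag about courage'
after = reliable_rate(agents)
print(f"reliable acts before indicators: {before:.3f}, after: {after:.3f}")
```

Run it and the post-bragging rate comes out higher. The point of the sketch is only that made-up information can still have mechanical impact, which is precisely what no information lacks.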

You might expect a correlation between unreliability and the tendency to provide reliability indicators. The more unreliable an individual is, the more prone they will be to rationalize. It certainly seems that we’re inclined to paper over breaches of dark reliability with excessive verbiage: Shakespeare’s Falstaff is a type for good reason. Is there any empirical evidence of this?
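
Pending such evidence, it’s at least easy to say what the prediction amounts to in data. Here’s a toy sketch of the pattern to look for; the generative story it assumes (rationalization deployed in proportion to the shortfall from full reliability), along with the 0.5 weight and 0.1 noise scale, are illustrative guesses, not anything empirical.

```python
import random

random.seed(2)

# Hidden dispositions ('dark reliability') for a population of toy agents.
dispositions = [random.uniform(0, 1) for _ in range(1000)]

# Assumed generative story: rationalization papers over the shortfall from
# full reliability, plus noise. Weights are illustrative, not empirical.
rationalizing = [0.5 * (1 - d) + random.gauss(0, 0.1) for d in dispositions]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly negative: the Falstaffs (low disposition) rationalize the most.
print(f"disposition vs. rationalization: r = {pearson(dispositions, rationalizing):.2f}")
```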