The Point Being…
by rsbakker
Louie Savva has our podcast interview up over at Everything is Pointless. It was fun stuff, despite the fact that this one-time farm boy has devolved into a complete technical bumbleclad.
It also really got me thinking about the most challenging whirlpool at the heart of my theory, and how to best pilot understanding around it. Say the human brain possessed two cognitive systems A and X, the one dedicated to prediction absent access to sources, the other dedicated to prediction via access to sources. And say the brain had various devious ways of combining these systems to solve even more problems. Now imagine the conscious subsystem mediating these systems is entirely insensitive to this structure, so that toggling between them leaves no trace in experience.
Now consider the manifest absurdity:
It is true that there is no such thing as truth.
If truth talk belonged to system A, and such thing talk belonged to system X, then it really could be true that there’s no such thing as truth. But given conscious insensitivity to this, we would have no way of discerning the distinct cognitive ecologies involved, and so presume One Big Happy Cognition by default. If there is no such thing as truth, we would cry, then no statement could be true.
How does one argue against that, short of knowledge of the heuristic, fractionate structure of human cognition? Small wonder we’ve been so baffled by our attempts to make sense of ourselves! Our intuitions walk us into the same traps over and over.
Just a few clarifications to make sure I understand your position:
Which system am I predominantly using when I perform the following tasks?
1. Solving an algebraic equation.
2. Tying my shoe.
3. Having a conversation with a work colleague about a practical problem.
4. Flirting.
5. Having a philosophical discussion.
Also, if “truth” is just a heuristic of some kind, why does it seem to have such strong predictive value across domains? Aren’t heuristics supposed to be domain-specific?
I think most human cognition is source neglect, that we’re trick accumulators first and reverse-engineers second (which is why science had to be discovered). (2) engages folk physical assumptions, but everything else is largely source neglect (unless the practical problem in (3) was mechanical). Of course things are way more complicated than this: I’m just convinced I’ve nailed the complication required to find our way out of the labyrinth. The key, which I think you astutely pointed out in my “Real Systems” piece, lies in understanding the variety of source neglect cognitions.
Truth-talk is very useful in linguistic contexts, and language is very useful across domains. The notion that truth has predictive value begs the question of what truth is, and so tosses us headlong into the traditional miasma.
“everything else is largely source neglect”
So, mechanical problems are system X (access to source; “such thing” talk), but not system A (source neglect; “truth”)? I think this is where I’m stumbling, because even in mechanical contexts where we have (some) access to sources of information, truth propositions (equalities) are still extremely useful and predictive.
“Truth-talk is very useful in linguistic contexts, and language is very useful across domains.”
Alright, I have no issues with this.
It’s all predictive of the same environment, source neglect or source access, so I guess I’m not sure what’s causing you the problem. When they complement each other, we exploit the dividends. When they compete, they generally recommend incompatible solutions. Is it the hybridizations, the fact that source neglect cognition (where we predict on the basis of correlations (associations)) can be intimately entangled with source access cognition?
Or are you sensing the abysmally clumsy, low res medium that conscious cognition is?
So, definitely yes on that last part.
Here’s my difficulty: let’s take the following three statements.
1. It is true that Bill believes that the fridge has cheese in it. (“intentional” folk psychology)
2. It is true that the sky is blue. (trivial observable, but contingent on shared understanding of “blue”)
3. It is true that a Lorentz transformation is necessary to preserve the spacetime interval in a Minkowski space. (mechanical, non-contingent on any private subjective data)
So, I have no problem seeing how one might claim that “there is no such thing as truth” for #1, since it’s plausible to reject folk psychology and intentional states. I can also maybe accept that “there is no such thing as truth” for #2, for similar reasons. But #3 is a problem, since the truth just *seems* (there’s that word again; if there were ever an indicator of the occluded frame it’s the word “seems”) to “be there” regardless of any cognitive or neuronal idiosyncrasy.
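The invariance claimed in #3 can even be checked numerically. A minimal sketch (purely illustrative, in 1+1 dimensions with units where c = 1 by default; the function names are my own, not from the discussion):

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz boost along x with velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval(t, x, c=1.0):
    """Spacetime interval s^2 = -(ct)^2 + x^2 in 1+1 Minkowski space."""
    return -(c * t) ** 2 + x ** 2

t, x = 2.0, 3.0
tp, xp = boost(t, x, v=0.6)
# The interval comes out the same in both frames (up to floating-point error).
assert abs(interval(t, x) - interval(tp, xp)) < 1e-9
```

The point of the check is only that the regularity holds whatever coordinates (or observers) you pick, which is what makes #3 feel observer-independent in a way #1 and #2 don’t.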
I think a Truth Table would be an even better example of what I think you’re getting at… How about this: The regularities are there regardless, no matter what the context. ‘Is true’ belongs to the communicative machinery we use to exploit those regularities. Capital ‘T’ Truth belongs to our history of attempts to understand how ‘is true’ works. As you point out, ‘is true’ works differently in different contexts, typically because the regularities involved work differently. In many contexts, especially those involving the tools/cogs (like logic and mathematics) we’ve knapped to exploit/conserve systematicity, the word ‘truth’ has taken on technical connotations that seem to cut against my claim here, but where and how? What does the ‘truth’ preserved in logical formulations have to do with Truth? What if what’s preserved is the applicability of ‘is true’? Exploiting regularities requires systematic sensitivity to regularities. Mathematics and logic regiment our orientation (the sum of our behavioural dispositions) relative to our environments in ever more negentropic ways. We would need ways to track/communicate optimizations despite global insensitivity to the physical systems involved. ‘Is true’ is simply a way to track optimization (or better, happy iteration) while blind to optimization.
Yes, a truth table would be an even better example of the problems I think #3 causes. In many ways, almost all communication and discourse has a ton of implicit truth tables hidden in the background.
Let’s say I say to Bill “there’s cheese in the refrigerator”. In the background of my subconscious, there’s neuronal activity corresponding to an unfathomable magnitude of truth tables:
Is.Bill(Person)? -> T
Is.Bill(In Kitchen)? -> T
Is.Cheese(In Fridge)? -> T
etc…
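The background bookkeeping sketched above can be caricatured in code. A toy illustration only (all the names and the world model are hypothetical, and this is no claim about what neurons actually do):

```python
# Toy model: the implicit 'truth tables' as boolean predicates
# evaluated against a tiny world state.
world = {
    "Bill": {"kind": "person", "location": "kitchen"},
    "cheese": {"location": "fridge"},
}

checks = {
    "Is.Bill(Person)?": lambda w: w["Bill"]["kind"] == "person",
    "Is.Bill(In Kitchen)?": lambda w: w["Bill"]["location"] == "kitchen",
    "Is.Cheese(In Fridge)?": lambda w: w["cheese"]["location"] == "fridge",
}

results = {q: check(world) for q, check in checks.items()}
# "There's cheese in the refrigerator" is assertable only if the
# background checks all come out T.
assertable = all(results.values())
```

The question the thread raises is whether anything like this explicit tabulation is what the brain does, or whether the tables are a post hoc regimentation of processes that never traffic in truth values at all.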
If truth is non-existent, then what exactly is that machinery doing? “Tracking regularities”? The word regularity implies some sort of consistency or truth-like concept.
I don’t know, maybe I’m recalcitrant because to me the only reason science has the power it does is that it hones itself unerringly closer to Truth with every happy iteration, even if it can’t ever completely get there.
I actually think this way of looking at things is more scientific. Just think of it in gross high dimensional terms, the muckety muck of the physical, buzzing into quantum indeterminacy. All we are is another worm in the gut of this beast, entertaining countless points of physical contact. Now just what is ‘objective Truth’ supposed to be? It’s a hypostatization of neglect, a way to report the illusion of uttering something unconditioned.
Otherwise, I’ve yet to meet a natural regularity that cared a whit for truth! But there’s all kinds of unexplained explainers here, just minus all the intentional ones.
How do you know the truth tables aren’t post hoc, simply a way for us to regiment what’s going on behind the scenes in order to solve further, specific sets of problems, and no more? Neural processing can duplicate digital computation, but it certainly isn’t engaged in any! Just because we break certain cognitive functions into truth tables doesn’t mean our brain is doing them! But even if our brains had happened on the syntax of what we call truth tables, why would you think that syntax was anything other than merely mechanical? Why insist on the added, inexplicable property, truth?
Jorge, your example 3 seems like a case of
“And say the brain had various devious ways of combining these systems to solve even more problems. Now imagine the conscious subsystem mediating these systems is entirely insensitive to this structure, so that toggling between them leaves no trace in experience.”
When Thomas Jefferson wrote “we hold these truths to be self evident, that all men are created equal…” he meant truth in the same way that I think you mean it in your example 3. The use of ‘truth’ in these different contexts makes the differences between different kinds of truth difficult to perceive. Using the same language for scientific and rhetorical purposes elides the difference between science and rhetoric, so at least some people who read the Declaration of Independence think the evidence for the claim that all men are created equal is of the same kind as the evidence for Einstein’s claims regarding Special Relativity. In the Jefferson case the misunderstanding can be especially pernicious because he is claiming to have a source, self-evidence. Perhaps in general truth talk misleads by implying the existence of sources where none exist, or at least none exist of the kind referred to by you and Einstein.
And once you try to figure out the difference between Einstein’s truth and Jefferson’s truth you find yourself trying to define ‘truth’ and we know that “walks us into the same traps…”
Well, if X deals in predictions/bets and truth is a social commitment to a bet (with commitment leveraging various survival benefits), then to say ‘It is true that there is no such thing as truth’ is to make a commitment about an inability to commit. Something to consider: think of it as a system A exaptation that manages to deal in system X’s terms. If system A thinking is stuck talking to other system As, then it has to basically excuse itself from commitments to talk in something that is more like X thinking. All commitments are off, all bets are on. Like an action movie.
Except system A tends to think it is, ahem, the truth of the matter. And so has real trouble conceding to the gamble world as X conceives it, all the more so in that it deals in commitments, not gambles. Marriages, not partnerships (despite the divorce rate statistics).
So part of the trap is a mechanical propensity to deal in commitments, with only thin methods of working outside that mechanically imposed frame of thought, e.g. ‘It is true that there is no such thing as truth.’
Or so I write, off the cuff (wait, do I describe a gamble…?)
is bumbleclad Canadian for bumbaclot?
Rorty taught me that the only way to avoid getting trapped in certain philosophical flyjars is to refuse to enter them.
His goal should have been to reveal their nature and so convince the world to abandon them! But then he was trapped in his own normative flyjar.
Like many such folks he couldn’t overcome his old habits even when he saw the limits of them; lord knows I can relate.
One future thing you might want to spell out more is the so-what implications for folks who aren’t philosophers and so aren’t stuck in this particular dead-end.
Ayuh. I especially need to map out potential research projects, experimental and observational. Good reminder…
dmf
One of my favorite phrases from this blog is ‘epistemic humility.’ I think the more we learn not to trust hunches, intuition, instincts etc. the better off we’ll be. If there is one great takeaway from this whole BBT experience it has been ‘suspect your own mind.’
I think you were quite clear (with a helpful question or two to help you define terms) but I may not be a good measure of the audience at large.
Scratching sound in podcast is god trying to claw its way out of fantasy
Or claw its way back in!
It’s the “perplexities of consciousness”…
I was hoping they would clip that bit!
If they had done that, this private conversation (and all its meaning) amongst the listeners would not exist.
> It is true that there is no such thing as truth.
Sounds like the beginning of an aporetic cant, doesn’t it?
One of the things I’ve been meaning to do is spend some time with Zen koans, looking at them as ways of toggling between incompatible systems… It’s just a suspicion, but…
Scott, are you familiar with the Madhyamaka Buddhist concept of two truths? When I first read your post, that was what came to mind.
Oxford University Press published a collection of essays a few years ago titled Moonshadows: Conventional Truth in Buddhist Philosophy that might be of interest to you.
Also, in the SEP entry on Nagarjuna, there is a section on Language & Truth that might be of interest. https://plato.stanford.edu/entries/nagarjuna/#LanTru
Two posts ago Stephen provided this link:
https://blogs.scientificamerican.com/guest-blog/transcending-the-brain/
I suggested that we should not uncritically accept that drug-fueled hallucinations are richer experiences than the experiences we have of the world through our ordinary senses. We do not have access to the neurological machinery that enables either hallucination or ordinary perception, so we have no way of knowing which kind of experience is “richer.” We guess, and apparently conclude that the more novel experience is richer. If we measure the metabolic activity associated with hallucination and the metabolic activity associated with ordinary perception and find that ordinary perception requires more resources, we hypothesize that “consciousness is fundamental and spatially unbound.” I’m not sure what that means, but it sounds as though the article means to imply that consciousness is magical. When you don’t have direct perceptual access to the machinery that makes conscious experience possible, your theories about that experience have to tend to the magical. Perhaps humans have a general tendency to offer magical explanations when our perceptual apparatus does not provide the opportunity to offer mechanical explanations.
It makes sense that when we don’t have the access needed to offer mechanical explanations we offer magical explanations. The alternative would be to refrain from offering explanations at all, and humans have a powerful preference for fake knowledge over honest ignorance.
Yeah, I thought it was a pretty naïve guest blog, assuming as it did that everything about the experiences reported was artifactual, EXCEPT the feeling of unity, transcendence, etc., which was perceptual. ‘Magical’ can be seen as a synonym for shallow information in a sense. We make fetishes out of cues: this is why I think Cimpian’s ‘inherence heuristics’ represent a big step in my crazy direction! Truth is nothing if not a fetish for some.
Might be interesting to do a 5-MEO-DMT correlates of consciousness study. Perhaps it’s already been done.
Great podcast! The technical issues weren’t the big deal you made them out to be. Trust me, TSACast has suffered far worse.
Making the ground fertile. Keep on, buddy.
Thank you, Mike. I think the How Stuff Works interview went even better, but only because Robert was well versed in both my fiction and my philosophy! First interview where I had to wear both hats.
Article reminded me of Neuropath…
https://blogs.scientificamerican.com/cross-check/will-neuroweapons-micro-drones-and-other-killer-apps-really-make-us-safer/
Think I’m going the way of the dodo…
With the phrase ‘Fiction is a fiction’ are you trying to get at something like this: while the objects and people referred to might not exist, the processing of the text or other medium involved certainly does? It’s kind of like, in comparison, if someone were lifting a weight, we’d see the muscles straining; if they lifted a non-existent weight, we wouldn’t see any strain. But a brain scan of someone dealing with complex fiction might show a large amount of mental strain, as if dealing with an actual situation.
Would someone be kind enough to help me understand Blind Brain. Below, written in my stupid little way, is what I think Blind Brain is getting at, at least in some general way.
Cheers.
Say we’ve determined that a single person, based on metrics that we all agree upon, has the most accurate appraisal of one’s own subjectivity that does or could possibly exist. That we all agree this person represents the peak of human introspection capabilities. So what Blind Brain Theory would claim is that the findings of this appraisal must be an illusion because the brain is intrinsically unable to perform this function.
Hi Sam. You have the gist. Check out, “The Dime Spared.” The core of the theory is that the apparent peculiarities of the first person POV (subjectivity) redound upon confusing artifacts of neglect for positive, apparently inexplicable properties belonging to consciousness.
I imagine this is something you’ve probably touched on at some point here over the years, but I was curious about your opinions on the ideas put forth by Donald Hoffman and his notion of “conscious realism”? I recently re-engaged with some of his material (I’d checked it out before but it was a bit over my head at the time, though certainly intriguing enough to stick in my brain) and it seems as though there’s a good deal of overlap between his theory and your own BBT. Particularly fascinating – and chilling – are the results of simulations that appear to show just how much evolution favors Fitness over Truth. I also found his “user interface” analogy of consciousness very cool (and terrifying). I’m just curious about your perspective on these ideas, and where it dovetails/diverges with BBT as you understand it? What about his idea of “conscious agents” as constituents of consciousness?
Much thanks!
He’s a fascinating character, I agree. I actually quote his lab’s fitness vs. truth simulations in my JCS piece. But like Dennett and all the others who’ve offered HCIs as a metaphor for experience as metacognized, he actually has no real theory of cognition to make good on the metaphor. Philosophical naiveté leads him to draw some ontologically extravagant conclusions. Check out “To Ping or not to Ping.”