Necessary Magic: A Reply to Ben Cain
by rsbakker
First, I wanted to mention some excellent BBT related reflections that I think are worth linking: “The Blind Mind-unmaker” at Speculum Criticum, and “Speculative Posthumanism” at Steven Craig Hickman’s noir-realism (for my money the best post-postie site on the web).
Before beginning this Reply, I need to thank Ben, not only for the uniformly wonderful posts he’s afforded us all, but also for the tremendous amount of work he’s put into critiquing BBT. As much as I disagree with him, he has helped me clarify a myriad of issues, as well as show me where I’m most apt to be troubled in the future. If it weren’t for him throwing tomatoes at the pulpit, I’m sure I would have starved!
My first and most obvious complaint, of course, turns on his use of loaded terminology. ‘Scientism’ is far and away the reflex complaint I receive when discussing BBT, a way to pass intellectual judgment without doing any intellectual work. Ben certainly does some argumentative work here, but labelling the position with a term taken to be a pejorative by the vast majority of readers is to actively court rhetorical short-circuits, to invite readers to skip any critical consideration of the arguments and leap straight to the judgment. ‘Absolutist’ is even more loaded in this respect.
So to be clear, BBT is neither ‘scientistic’ nor ‘absolutist’ as either of these terms is commonly understood. It is, rather, naturalistic and skeptical…
This is an important distinction. To give an example, one of the reasons I think epic fantasy possesses the ideological significance it does, turns on the fetishization of prescientific historical contexts. One of the reasons it does this, I’ve argued, is not simply to make room for magic, but to recover the cognitive legitimacy of traditional forms of theoretical claim-making. Not only is magic possible, gods are real, and philosophy matters.
My position begins with the problem of theoretical cognition. What is the problem? Namely, that we now know, as a matter of empirical fact, far, far too much about human cognition to trust any traditional, prescientific theorization. Our intuition of correctness is far too unreliable to warrant much in the way of commitment to traditional theoretical discourses. This is probably why we all fucking disagree all the fucking time, why discourses in philosophy largely peter out for want of interest rather than finding any decisive arbitration: our own functioning provides the bottommost baseline for any and all estimates, and it is pretty clearly systematically skewed to deliver the beliefs we want or need to be true.
Philosophers simply possess no reliable means of seeing their way through their myriad blind-spots and biases. I understand why people hate this possibility, but you gotta admit, it seems a pretty damn good bet. Something has to explain the crazy cognitive differences one finds between scientific theorization (with consequences like thermonuclear apocalypse, extended life-expectancy, this web-page, etc.) and other forms of theorization – the difference I call ‘accuracy’ but you can call anything you like. Science is an institutional prosthetic, a great shambling mechanism allowing the successful arbitration of theoretical claims in spite of individual human theoretical incompetence.
We’re a bunch of fucking dummies, and we have the abattoir of history to prove it. Whatever the evolutionary impetus ‘to theorize’ was, it certainly had precious little to do with ‘getting things right.’ Fact is, across all traditional cultures theoretical capacity is devoted to cognitive activities that are in no way connected with accuracy. One can make any number of guesses as to the actual functions, but we can be pretty certain ‘truth’ was not on the menu. So when we theorize, in other words, we’re yoking systems that simply were not designed for accuracy.
And so, we cross our fingers. Because as the past has shown, the weight of tradition counts for nothing. And since the happy picture is founded on a conspiracy of intuition and tradition, there’s really nothing the noocentrist can do except hope, despite longer and longer odds, that their horse will somehow pull through.
The point is, I don’t claim that all and only scientific theoretical claims are ‘true.’ Not at all. I’m pretty confident there are plenty of false claims floating around under the guise of ‘scientific fact’ and plenty of true ones drifting about concealed as ‘philosophical wanking.’ What I claim is that humans are theoretically incompetent, and that science is the one institutional prosthetic that clearly affords them some competence. Science is a vast, shambling wreck that nevertheless works miracles. ‘Ceteris paribus’ does all the heavy lifting from this point. All things being equal, when a traditional domain falls within the purview of science, science wins. Astrology may remain an ongoing concern, but it pulls no real institutional levers. Secular society is scientistic society: it provides the space allowing us to make these very claims, to debate its own legitimacy… And for good reason.
So for those of you following this running debate, take note of what really is a curious fact: Not one of the philosophers taking me to task for ‘scientism’ has actually addressed the main, motivating argument. Not one. Over the years I’ve been accused of scientism more times than I can count, and I’m still waiting for someone to tackle the bloody rub! Given what we have learned from cognitive psychology and cognitive neuroscience, why should we trust (as opposed to consider or entertain) theoretical claims outside the sciences?
Because it feels right? Because it’s what we’ve always believed? Because social order depends upon believing it?
Everyone steers clear of the theoretical competence argument. The tactic, rather, has been to isolate what seem to be rhetorically vulnerable claims/implications of mine and attack those, or to frame the debate in a very general way on the back of the very assumptions under question. I see this position of mine as a trap – genuinely. I would love to find a way out, find my way back to the kinds of conclusions Ben attempts to draw here, but I just don’t see how anything more than ignorance and hope holds positions like his together, at least not anymore.
So Ben simply frames BBT as a scientistic absolutist position. Because of this unfortunate rhetorical posturing, I’ll simply reply in kind, and call any position that denies the kind of radical and exhaustive revisionism suggested by BBT a form of atavism, one that becomes progressively more pollyanna to the degree that it asserts the immunity of traditional, prescientific discourses to this process. Against the scientistic absolutism of BBT, then, we can say Ben is posing a form of pollyanna atavism.
Now Ben and I have been debating these issues back and forth for quite some time, and the primary issue between us, bar none, turns on what might be called the ‘Presupposition Problem,’ which is perhaps most economically and eloquently expressed in the following passage drawn from his piece:
As I said, the scientific picture includes the content of scientific theories but also the practice of science itself that produces them. After all, the point of scientism isn’t just that people will possibly have a complete understanding of nature, but that science alone makes that understanding likely. But at least as understood intuitively, scientific methods involve epistemic, aesthetic, and pragmatic standards that scientists want their theories to meet. So while we presently indulge in the prescientific talk of normativity, the suspicion is that science tends to conflict with our intuitions. And yet if science is the only kind of knowledge, how will scientists understand their scientific practice scientifically, if such methods appear normative? For the statement of scientism to be coherent, that appearance of how science itself works would likewise have to be illusory and so science would have to be part of a natural process that can be understood in purely causal, value-neutral terms.
Now the first thing to note is the way he rhetorically postures the possibility that science is our reliable source of theoretically accurate knowledge into something that ‘just has to be wrong.’ He considers none of the scientific evidence for why this might be so–or as I would argue, why this is obviously so. He simply relies on the likely fact that the majority of readers want or assume that theoretical cognition is possible outside the sciences. One of the reasons we are so theoretically incompetent left to our own intuitive and traditional devices is that we have a genius for believing those things that confirm our preexisting assumptions. But he hasn’t actually given any evidence for supposing theoretical competence, so much as pandered to our assumption that this must be so.
The second thing to note is the way this paragraph actually assumes the very claim that the so-called ‘scientistic absolutist’ is calling into question: namely that our second-order characterizations of so-called intentional terminology are true. Perhaps the terms will be replaced. Perhaps they won’t. What will happen at the very least, however, is that they will be incrementally redefined in light of new scientific information. This is what arguably makes atavistic positions like Ben’s so pollyanna. Does he literally think this process won’t happen, that the traditional speculative discourses that have provided us with our present understanding of terms like ‘right,’ ‘rule,’ ‘aboutness’ and so on enjoy a kind of special immunity to revisionary scientific ‘disenchantment’ that no other traditional speculative discourse has in the course of history?
The last sentence of the passage should read, “For the statement of scientism to be coherent, our present, prescientific understanding of how science itself works will have to turn out to be as wrong as our past prescientific understandings of every other complicated process.” My argument simply asks, What are the chances? What are the chances that we got this one enormously complicated phenomenon right?
Ben never tackles this question. BBT represents a theoretical worst-case scenario, one where the evolutionary serendipities of human cognition have rendered us incoherent. It is a viable empirical possibility that we evolved in such a way that we cannot function short of any number of systematic deceptions. Subreption, or the control of behaviour via deception, is rife throughout the natural world. Nothing exempts us, least of all our intuitions to the contrary. By continually implying the extreme difficulty if not the out-and-out impossibility of living life according to a causal-mechanical theoretical self-understanding, Ben is merely outlining the shape of the dilemma predicted by BBT. As soon as he takes the further step of using these implications to argue the falsehood or ‘incoherence’ of BBT, he is at best missing the point and at worst begging the question. What he sees as conceptual disqualification, I see as exemplification of our very real straits.
Since the dilemma is a very real empirical possibility, one that Ben himself admits, it becomes difficult to understand what he thinks he has accomplished. Is his argument empirical? Is he adducing scientific evidence against BBT, showing us how, contrary to my claims, accurate metacognition is not only computationally possible, it’s also probable? Or is he, rather, arguing from a certain abstract altitude, looking for ways to make BBT look bad from a rational or transcendental standpoint, which is to say, the very standpoint it threatens to empirically undermine?
I fear I can see no way in which the latter approach fails to beg the question!
He writes, “we’ve evolved mental modules that compel us to read psychological and social patterns into data, thus compelling us to survive by working in groups.” I’m not sure what he means by ‘mental modules,’ so I’ll replace this with ‘neural mechanisms’–something we have evolved as a matter of empirical fact. The question then becomes one of how this counts against the meaning skepticism evinced by BBT. It’s not as though these neural mechanisms are themselves ‘psychological’ or ‘social’. Or put differently, it’s not as though the work they do is anything other than mechanistic, or merely natural.
So when he continues to say, “[t]his means the absolutist must show that there’s currently no benefit to thinking of science in normative terms, that this way of thinking really is just an idle, illusory byproduct,” his problem becomes quite stark. BBT isn’t saying that we don’t possess compulsory neural mechanisms geared to troubleshooting other brains. BBT agrees that, given our existing neurobiology, we have to rely on this ‘psychosocial toolbox’ to mechanistically resolve psychosocial problems. The machinery of the brain does all the work–after all, what else is there? What he calls ‘thinking of science in normative terms’ is a mechanistic enterprise, something our brains do. Since metacognition is all but blind to the mechanistic nature of the brain, it cognizes cognition otherwise, in nonmechanical, acausal, magical terms. Normative judgements, intentional relations, and so on: these are simply ways our brain naturally mischaracterizes its own activity.
Again, statements like the above either miss the point or beg the question. Ben is banking on your default assumptions here, relying on the fact that your immersion in noocentric culture will incline you to assent to his arguments and criticisms. And he skates over the rather important question of what is doing all the work, if not assemblages of neuromechanisms. And if it’s mechanisms doing all the work, then what work, if any, does normativity qua normativity do?
Or consider his critique of ‘function talk,’ and the perplexing insistence that ‘function’ must mean what he thinks it means, namely something that necessarily (?) involves teleology. Again, why? Because it ‘just seems that way’? I define ‘neural functions’ in terms of structurally fixed patterns of neural activity. Where’s the telos in this? In fact, the whole literature of biosemantics, a philosophical domain as beset with controversy and discursive deadlock as any other, arose as an attempt to resolve the inability of previous philosophical positions to naturally square the circle of normativity! Does it succeed? How could it, when it provides no criteria by which success could be adjudicated?
Ben goes even further out on his precarious philosophical limb when he begins mulling the metaphoric nature of language, and the philosophical mysteries pertaining to causality. This is actually a common strategy, one that skeptics (like TPB’s other regular guest-blogger, Roger) are all too familiar with. Problematize philosophical speculation on what seem to be fairly direct, platitudinal grounds, and the philosopher is bound to throw more speculation at you, telling you What Skepticism Really Is and why therefore, it can be ignored. Just as the skeptic need only shrug and say, How do you know? BBT need only shrug and say, Why should anyone care? Cause is an unexplained explainer, sure. It’s not clear how this impugns the theoretical power of science in any way whatsoever. Should we say, ‘Shit. No wonder my lawnmower doesn’t work!’ Of course not. It’s even less clear how this tack bears on BBT in particular. If taking down science as a whole is a precondition for taking down BBT, then that would actually seem to redound in the theory’s favour.
As for the problems posed by the metaphorics of language, it seems pretty clear that Ben has wandered into the very self-undermining mire that he wants to foist on BBT–a position that actually provides a way of empirically understanding why such issues are so baffling! Whose metaphors are problematic from the standpoint of cognition? The ones arising in BBT, which admits theoretical adjudication, or the ones arising in his argument? Are metaphors somehow antithetical to mechanistic explanation, but amenable to intentional speculation?
Perhaps the issue is neither here nor there regarding the dispute between us.
Or consider: “But now we arrive at a mere definitional matter, because this so-called illusion is the way that mammals like us tend to perceive things as a basis for understanding them.” No. The cognitive illusions isolated by BBT are not the way ‘mammals like us’ tend to perceive things as a basis for understanding. The understanding comes first, I fear, and the philosopher and his myriad confusions (arising from the aforementioned cognitive illusions) come next, attempting, and notoriously failing, to ‘understand’ this understanding. Our brains are remarkably efficacious mechanisms, as their evolutionary pedigree suggests they would have to be. What we call ‘understanding’ is as much a product of their activity as anything else in the ambit of experience. And when all is said and done, that understanding will be understood in mechanical, not normative or intentional, terms. Fact is, we’ve already travelled quite some distance down this road. Suffering a sudden cognitive impairment? Dollars to doughnuts the doctor is going to give you a mechanical explanation.
“Recall,” Ben writes toward his conclusion, “that scientism is the prediction that science will eclipse the arts when it comes to telling us about the real world.” Prediction? More like observation! Find a funny growth on your skin? A lump on your breast and/or testicle? Car won’t start? Computer won’t connect to the internet? Mechanism and more mechanism. And as the sciences grow in power and intricacy, this list continues to grow. A kid in your class has difficulty with impulse control? Can’t call them ‘lazy’ any more. A parent starts behaving bizarrely? Can’t call them ‘crazy’ anymore. A commercial for product X is incredibly successful? A politician is swept into office? You turn to your right instead of your left at the mall?
Mechanistic explanations are–quite obviously, I think–the rising tide. Art? Machines are already writing novels and articles, painting pictures. Cognitive neuroscience has already made explanatory inroads into issues of composition and reception. Is this a trend that is set to retreat, or continue?
The myth that Ben would have you buy into is nothing other than the myth I would dearly love to be able to affirm: the notion that our metacognitive sense of self and beauty and morality and meaning (talk about undefined terms!) is not only ‘more than enough,’ it is somehow magically immune to the slow onslaught of accumulating mechanical information. All BBT does is place these notions on an informatic gradient, high dimensional at one end, and low dimensional on the other. It denies them their atavistic claim to autonomous adequacy across domains, and shows how they are continuous with the rest of the natural world. And guess what? In doing so it makes us very small and painfully contingent, and in a manner eerily consistent with the overthrow of geocentrism and biocentrism. Like the earth or Homo sapiens, the brain only looks special from a certain, parochial perspective, the very kind of limited perspective that the sciences enable us to overcome.
The myth of noocentrism.
Which brings us to the most telling shortcoming of Ben’s critique. I’ve mentioned the way he fails to consider any of the evidence of human theoretical incompetence, and really only assumes the opposite. I’ve mentioned the way he repeatedly begs the question, arguing the incoherence of BBT by supposing it must rely on the very intentionality it explains away. I’ve called attention to the vacuousness of problematizing science’s unexplained explainers, and how it’s not clear that the problems pertaining to metaphor aren’t even more debilitating to his position. But far and away, the biggest weakness lies in his failure to provide any positive account of just what it is he’s defending. In a sense, he’s simply relying on exhaustion to do his work for him, the fact that the signature failure of philosophy to ‘clarify’ any intentional term has become such old news it is scarcely worth mentioning. The fact that BBT has a very parsimonious strategy for explaining these failures he passes over in silence. In fact, he’s careful to cast his net just wide enough to catch BBT in his implicature without having to consider its theoretical virtues in any detailed manner – and without, he thinks, obligating himself to provide a positive account of his own. He wants intentionality to be both necessary and magic, to belong to this family of things that for reasons never made clear simply cannot be mechanically explained–or in other words, natural.
A great many of our intuitions lead us astray. What we need to do is gerrymander those that don’t in a way that allows us to avoid running afoul of those that do. This is what science does: allows us to sort the intuitive wheat from the intuitive chaff. Does Ben really think that intuition can theoretically bootstrap itself absent science, that one can transcendentally guarantee the autonomy and the adequacy of the intentional? Does he really believe the flood of neuroscientific information is going to leave this one family of things untouched, that, despite staring at ourselves through informatic peepholes, we nevertheless somehow managed to get ourselves right?
This is a tall, tall order. As I think he’s beginning to realize…
I’m certainly at a loss.
re: “philosophical mysteries pertaining to causality”, I’ve always been a fan of the philosophical stance on causality that comes closest to representing realities in opposition to common sense notions of time and space, such as quantum superposition, which is the Buddhist idea of Pratītyasamutpāda, which is sort of like causality without determinism.
The intersection between Pratītyasamutpāda and Śūnyatā reminds me a lot of meshing scientific ideas of quantum superposition and the arising out of “nothingness” (for lack of a better heuristic) of the universe, a la Lawrence Krauss’s amazing lecture on the subject. Of course, Śūnyatā also reminds me a lot of the Blind Brain Theory’s account of the “self”/”mind”/”soul”/”I”, especially as expounded by Nāgārjuna…which, to bring things full circle, is a poetic cry which maintains itself as an echo of the aesthetic’s role in contemporary knowledge, even after most contemporary Western strands of poetics have failed to keep pace with the scientific image (aside from the potentialities inherent in “new aesthetics”, albeit largely unexplored potentialities due to a nostalgia saturation that borders on the pubescent…we can make art installations that make you think you see 1980s 8-bit visuals in the everyday, but no one in the “new aesthetics” wants to make drone vision glitch-art to expand our understanding of what military surveillance does beyond the bandwidth of the furthest borders of homo sapiens visual perception).
I always feel, by the way, that where you and Benjamin Cain seem to disagree are not just the places where each of you misapprehends the other in some fashion, but an exponential zone of mutual misapprehension in some kind of Venn diagram constructed out of Calabi-Yau manifolds instead of circles. It’s simultaneously a joy and a frustration to read, but I know the two of you drink from the same Klein bottle, so when the Semantic Post-Apocalypse reaches critical mass I will hold both of your bodies of cheeky theory close to my informatic glitch of “observation” of “experience” with arms of abstraction as NeuroFocus 2.0’s Board of Directors beams a final series of “adjustments” at our subpersonal assemblages.
Also, where’s Roger Eichorn to act as adjudicator and mediator in this marriage counselling session we call guest philoso-blogging? I trust his partiality to impartiality.
I don’t know if our talking past each other on certain issues is quite as complicated as the math of string theory. The thing is that the prevalence of the manifest image (the intuitive picture of the self) is ambiguous, so it can be explained in different ways and we need to decide which explanation is best or whether different explanations are useful for different purposes. (This latter point raises the absolutism vs pluralism distinction which came up in RSB’s recent back-and-forth with Terence Blake. That’s where the word “scientism” came from in this context.)
RSB explains the manifest image as an artifact of limitations of neural mechanisms. In particular, these mechanisms are blind to themselves; after all, our five senses are pointed outward, not inward. A transcendentalist says that however the manifest image is caused, it’s epistemically foundational and so that image gives us what Kant called synthetic a priori knowledge. RSB says it’s foolish to bet against science and in favour of intuitions; indeed, this is just the god of the gaps strategy.
The transcendentalist tries to undercut reductive explanations of the mind by pointing out that they presuppose parts of the manifest image (e.g. intentionality or normativity). For example, if we say the mechanistic picture of the mind is *better than* the manifest image, we’re presupposing a concept of normativity that derives from the latter, and if we say that meanings and values are only illusions or mischaracterizations, we’re presupposing the distinction between reality and appearance, and thus between true and false. So mechanistic scientism self-destructs. This is why, contra RSB, the transcendentalist doesn’t merely pander to people’s hope for the autonomy of folk psychology, without offering any evidence of the presuppositions; the transcendentalist lets the mechanist hang himself by his own monologue, as it were.
RSB says in response that the assertion that there’s such a presupposition begs the question, since the alleged presuppositions can be interpreted mechanistically rather than intuitively; moreover, BBT explains why we’re tempted always to fall back on the manifest image. Thus, the manifest image is explained away even if the data can always be (mis)interpreted in the traditional terms, or as he puts it, if the ambiguities can always be gamed. The transcendentalist says that if we can always fall back on that intuitive viewpoint, it serves as a window onto the self that may have its advantages as well as its disadvantages. The window may be dirty, cracked, and warped, but as Dennett said, we can use the short forms from the intentional stance to predict our behaviour.
Back and forth it goes. I push the transcendentalist line here, partly to learn about BBT by testing it against a devil’s advocate position. I’m not committed to Kant’s transcendentalism, to thinking that the manifest image is *necessarily* foundational. I’m open to the possibility of a posthuman revolution in which we’ll be able to stop thinking of the self in any of the traditional terms. But I think that precisely because the triumph of the mechanistic view of the self would be so revolutionary, we have here a concession to the transcendentalist: we wouldn’t be human anymore if we lost our intuitive self-image. That’s why it’s so hard to think of the self in strictly causal, mechanistic terms, because we’re not yet posthuman.
Incidentally, I’m writing an article in response to “Necessary Magic,” to clarify the issues that are relevant to the disagreement (mechanism vs transcendentalism, etc).
Admittedly, I’ve arrived here too recently to have read the full history of your debate.
Yes, you can be a skeptic about science itself. Yes, you can question the objectivity of the community that pursues it.
But it is THE tool we have for modeling nature. We have no other, nor will we ever have another. That is a fact that is not refutable.
The fact remains that science is a quantitative discipline seeking to describe nature in as objective a fashion as possible (disregarding the possibility that everything may be an illusion, as in the Matrix movies, but that line of thought will never get us anywhere). Everything else, including language and debate as a form (even in a strict sense), is purely qualitative.
Even if you coin a firm definition of something, it remains at a descriptive and qualitative level, which means it goes no further than how our brain operates in the first place.
I think you summed it rather nicely with:
“This is what science does: allows us to sort the intuitive wheat from the intuitive chaff.”
Exactly. I admit (I commented on Ben’s post as much) that we are ill-equipped to be totally objective, or 100% sure that we are on the right path, or even sure that math follows nature as a precise, even if abstract, rendering of the real. Furthermore, we may even come to the natural limit of our power to cope with these complexities.
But I don’t agree that this will make the arts something pejorative, unworthy, or archaic. Our sense of beauty persists through every discovery made, and, furthermore, scientific discoveries evoke a similar sense. We just might come to an abrupt end of major discoveries because of our own limitations.
I’m glad you’re admitting BBT is (or should be) connected to the mechanistic principles “down in the attic”. There’s no escaping it. From what I’ve read, it is certainly very interesting, but still qualitative (words and analogies vs math). It will need to be related to the micro/neuro biology down below at some point or the other.
Healthy (vs paralyzing) skepticism is one of science’s first hallmarks. Point being – you may be a skeptic about science. But – in order to get anything done at all – you must embrace it. No other way.
If I heard the word “scientism” on a bus or in a coffee shop without any additional context given, I swear I’d think it was related to the blind faith people put in “people in white coats” on TV repeating those magic words “scientists said” without any understanding of what the hell science is in the first place. We have some slick hi-tech, but only a fragment of people understand it. Only the “science entity” prevents it from appearing magical and supernatural, so it is perhaps not that strange that people simply shift their religious fervor from God to Science without any net gain in actual knowledge.
Sorry if I derailed from your original thoughts too much.
Now the first thing to note is the way he rhetorically postures the possibility that science is our reliable source of theoretically accurate knowledge into something that ‘just has to be wrong.’
Is that really what Ben is saying in the quoted text?
I could just as easily interpret ‘and so science would have to be part of a natural process that can be understood in purely causal, value-neutral terms.’ in a way where I’d argue to Ben that A: scientific terms actually are already purely causal and value-neutral, or B: If the terms aren’t so, then Ben himself is arguing for scientific wording to be created and used. I.e., he wouldn’t be dismissing science with such a call, only refining its practice.
How is he saying it purely ‘just has to be wrong,’ when he even seems to be giving a conditional for how it would need to be in order to be right (i.e., the purely causal and value-neutral treatment)? A conditional that seems very approachable, even?
A great many of our intuitions lead us astray. What we need to do is gerrymander those that don’t in a way that allows us to avoid running afoul of those that do. This is what science does: allows us to sort the intuitive wheat from the intuitive chaff.
I don’t understand the grounding of this statement – it basically requires someone to already gerrymander the intuitions collectively called ‘science’ in order to know that science allows us to avoid running afoul of those other intuitions that lead us astray. It hints of believer talk, because its phrasing shows no sense of the circular logic problem there – i.e., that ‘science’ being the superior intuition is taken as obvious, so you get a description which literally tells us one intuition misleads us because…another intuition says so. Science comes out ahead amongst equals because…science already came out ahead amongst equals.
I say this having a lot of my ‘chips’ placed on the science side of the roulette table myself.
Years back someone literally shocked me in a conversation by saying they didn’t really buy into the empirical. What was odd in that conversation is that they were the one who introduced the word ’empirical’, while it wasn’t a word I was familiar with at the time. All in all, I reeled at that point – how do I prove the superiority of the empirical/science? With science??? I was screwed!
So it makes me wonder instead: are these posts a heartfelt appeal? Heh, if so, I’m probably too heartless to be qualified to comment on it – so I dunno, more digital ink spilt on the intar webs!
Theoretical incompetence applies as much to my second-order discourse about science as any other speculation. There’s just something about thermonuclear explosions, dagnabbit! So this is why I always appeal to what I call the ‘cognitive difference,’ particularly when debating continental types. A claim-making institution that can erase cities in a twinkling is doing something different – so what is it? I call it accuracy, here, whereas others would be inclined to call it ‘truth.’ But like I say, I don’t care what the term is, so long as the difference isn’t swept under the rug the way it used to be in Continental circles.
Never see much validation of nukes. At best you get Vox, who tries to say nukes are engineers all the way down.
Hold on – is ‘theoretical incompetence’ a way of nearly saying one is wrong, without actually saying that? Or am I wro… theoretically incompetent on that?
Callan,
Science is not an intuition, although some of the greatest leaps in the field have been sparked by intuition. It is quantitative, ergo incompatible with intuition.
Still, intuition has played a great part in the speed of research.
For example, much of the string theory research frenzy is driven in great part by intuition. People feel it is right.
So we might either thank that intuition later for giving us the necessary perseverance, or curse it for derailing us for solid decades.
‘Quantitative’ itself isn’t an intuition? It’s somehow escaped being a mere estimate in one’s skull and is beyond that?
Take your example of string theory – what happens if it’s eventually proven ‘true’? Then we’ll talk so much about truth and we won’t mention intuition anymore? Until we mention the I-word so little that it’ll become as disassociated from intuition as ‘quantitative’ is?
Would you argue that constructing a CPU, for instance, is based on mere estimates? Or that a CPU somehow never escaped my skull? Right now, it enables me to process and post this reply to you.
Quantitative is a way of describing things in as detailed and universal a way as possible. Qualitative is more loosely descriptive and really only suited for humans. Describing an acoustic tone by its wavelength and amplitude is far more precise than saying “it’s loud”.
You can do a lot with quantitative description. You can directly control the world outside yourself – construct, modify, peruse.
With qualitative description you can pretty much only influence what’s inside yourself. It’s useless on the outside world. Thus, we can immediately disregard the qualitative as a purely “magic show” trait, one that exclusively exists inside our heads. Now, quantitative is tricky because it is also a trait of our brain, but it is supposed to be a model, an approximation, a layer of abstraction that is analogous to the outside. Change one and you know how to change the other. You cannot do that with qualitative descriptors.
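To make that asymmetry concrete, here is a toy sketch of my own (not anything from the thread – the 85 dB “loud” cutoff is an arbitrary assumption): a quantitative description of a tone keeps values you can compute with, while a qualitative label like “loud” is a lossy judgment that discards the numbers.

```python
# Toy illustration: quantitative vs qualitative descriptions of a tone.
# The 85 dB "loud" cutoff is an arbitrary assumption for the sketch.

def tone_quantitative(frequency_hz: float, amplitude_db: float) -> dict:
    """Quantitative: exact values you can transform and reason with."""
    return {"frequency_hz": frequency_hz, "amplitude_db": amplitude_db}

def tone_qualitative(amplitude_db: float) -> str:
    """Qualitative: a lossy, human-relative label."""
    return "loud" if amplitude_db >= 85.0 else "quiet"

tone = tone_quantitative(440.0, 90.0)          # concert A at 90 dB
print(tone_qualitative(tone["amplitude_db"]))  # -> loud

# The quantitative form survives precise manipulation; the label does not:
quieter = tone["amplitude_db"] - 10.0          # exactly 10 dB quieter
print(tone_qualitative(quieter))               # -> quiet ("how much quieter?" is lost)
```

The point of the sketch is only that the arrow runs one way: you can always derive the label from the numbers, but never recover the numbers from the label.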
What really bugs me is that there’s no function that transforms quantitative descriptors into qualitative ones. Not in the quantitative approach, even less in the qualitative.
Science and quantification do not equal the real. They are a formalized description. But math is a wonderful tool. Its very purpose is to prevent us from lying to ourselves. But then again, intuition is a beautiful tool as well.
Would you argue that constructing a CPU, for instance, is based on mere estimates?
That’s my estimate.
Or that a CPU somehow never escaped my skull?
I guess that to be the case.
Right now, it enables me to process and post this reply to you.
Well it is true that by configuration of fire, wind, water and earth, the four elements of the universe, we have a…oh wait, that quantitative got thrown out of general usage.
Kind of makes me loath to say anything is definite – just the latest, seemingly most efficacious (curse this blog’s words!) trend going on at the time of writing. As one example, quantum physics and such seem to make a joke of anything that appears solid actually being solid, at the very least. Lots of quantitatives seem to get undermined.
The use of the word ‘quantitative’ seems to be a legitimising move that attempts to shift away from bets placed at the table of life and toward ‘it’s known’.
@Callan
For your estimates, I beg for an explanation. If I’m reading you correctly, and I’m not sure that I am, are you leaving room for the possibility that there’s actually no “outside” world at all?
Now, on to fire, water, wind and earth. None of what I’m about to write is a quantitative descriptor, because you can’t get quantities and relations out of it, but nevertheless:
Fire is a qualitative term.
Quantitatively it is described by chemistry (an abstract layer or two above particle physics) as rapid oxidation – combustion, the gain of oxygen atom(s) by a molecule.
The precise mechanism is defined by quantitative formulas from which you can see “how much” and in “what way”, and which depict every possible outcome given the input variables (which you can also quantify).
Water is a qualitative term.
It describes a union of two atoms of hydrogen and one of oxygen. It can be quantitatively depicted in various relations and variables, like its density, state-change temperatures, electron configuration, etc. It is also depicted by various quantitative formulas in both macroscopic and microscopic domains (note that we use macroscopic formulas just to simplify things, not to say we have two separate sets depending on the scope needed).
And wind is a flow of fluid governed by fluid dynamics and thermodynamics. I’m not about to start writing about THAT (besides, I always hated it). 🙂
You see, we know nothing about water, fire, earth and wind beyond what we’ve experienced with our own senses until we start to describe them quantitatively.
About quantum physics… I’m not sure anything at all got undermined really, except our intuitive expectations of a simple and elegant universe.
The whole of quantum physics is intuitively askew, wrong and downright disturbing. It is also counter-intuitive in the way that the absolute chaos and probability down below suddenly collapses into macro-scale determinism and simplicity.
Solidity was an error of our perception – it is a matter of resolution and scope. Similar to the colour dots on a TV screen combining into an RGB composite colour, or the perceptibly fluid animation that is really only a 24-frames-per-second trick. But even if something got undermined, I find that a good thing. It means we have at least some tools to be properly skeptical.
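That collapse of micro-scale randomness into macro-scale regularity can itself be given a hedged toy sketch (my own, not from the thread – it is just the law of large numbers, not actual quantum mechanics): average enough random micro events and the macro-scale result looks deterministic.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def macro_average(n_particles: int) -> float:
    """Mean of n random micro 'fluctuations' of +1 or -1 (true mean: 0)."""
    return sum(random.choice((-1, 1)) for _ in range(n_particles)) / n_particles

# Micro scale: a handful of events, wildly variable from run to run.
print(abs(macro_average(10)))

# Macro scale: a million events collapse toward a stable, "deterministic" 0.0
# (the spread shrinks like 1/sqrt(n)).
print(abs(macro_average(1_000_000)))
```

The same averaging trick is the resolution point about solidity: zoom out far enough and the jitter underneath becomes invisible.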
I think we’ve got a bit of an ‘end of history’ thing going on here, MS – you’re not following how ‘water’ was treated as quantitative – until it started to be taken as two hydrogen atoms and one oxygen atom. You’re not considering that what you now know as quantitative could just as much be transformed into the qualitative at any point in the future, just as water was. Instead you’re at the end of history – nothing will change your quantitatives into qualitatives. Thus your sense of ‘quantitative’ seems more than mere intuition – something grounded and never open to change.
Also, the way I read your phrasing, you seem to try and have it both ways a bit:
Would you argue that constructing a CPU, for instance, is based on mere estimates?
Science and quantification does not equal real.
How did we construct the CPU if quantification isn’t real? Maybe it was just an estimate of what seemed likely to work out, and it seemingly did? But that is arguing the construction of a CPU was based on mere estimates?
I’d argue that water could never have been taken as quantitative. Quantitative deals in quantities. Numbers, if you will. How much, when, in what order, and “what happens if”?
We seem to be arguing about the nature of math itself. Is it something real? Are the laws of nature math? Are they the same? Is there 1:1 relation in respect to results? Or N:1 (I mean N mathematical formulations yielding the same result)?
I can give no absolute answers, Callan, I’m sorry. I’m a healthy skeptic about anything, especially the state of today’s science. But the point I’m trying to make is that we can actually work with and analyze quantitative relations (math models), and furthermore they yield very satisfying results. Nature comes in quantities, and the physics laws simply depict how a quantity x depends on a quantity y in any given situation. We could possibly analyze the human brain in a bottom-up fashion with math. But qualitative descriptions really play on shared human experience and emotion.
You could never explain the emotion “sad” to a person that was never sad before (or the effects of ecstasy to a person that never tried it 😛).
If you could somehow drill down to the exact numerical equations producing this state – well, then you could. Like with water.
Let’s say you’re Captain Kirk on a deep space mission (where else? 🙂) on an alien planet. They’ve never seen water, so the word “water” does not cut it. However, you can use your quantitative knowledge (2 hydrogen atoms + 1 oxygen atom) to construct it and show it to them (if they don’t dig math equations).
But… the CPU is not based on estimates, but rather on mathematical modeling. For instance, it sprang from the totally abstract (mind-induced) models of the Turing machine and the finite state machine. We constructed it from whatever was available to us (semiconductors) – we took whatever we could build that followed the model’s tenets. It is not optimal in any way compared to the vast complexities of the human body, but it’s completely exact.
On paper. The “real stuff” it is composed of (semiconductors) was tweaked to the point that nothing unexpected happened on a regular basis.
So, we employed “reality stuff” in order for our mathematical model to become real. The requirements on this “reality stuff”? Very simple.
If you have no voltage on the input (0), produce voltage on the output (1). A basic transistor inverter. Our model is simple, but the requirements – macroscopic – stable voltage, heating concerns, electron leakage, clock limits, etc. – were the ones giving us real problems. So you can say our ideal model of the CPU is exact; however, the real stuff making it tick is a macroscopic estimate of what’s going on beneath. Exactly how many electrons on the output? How many doped molecules in the semiconductor? The answer? Enough. 🙂
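As a toy sketch of that “exact on paper” layer (my own, not from the thread): take the transistor rule above as a boolean primitive – here NAND, which a pair of such transistors realizes and which suffices to build every other gate – and compose it exactly.

```python
# Toy sketch: the ideal, "exact on paper" logic a CPU is specified in.
# NAND (realizable with two transistors of the kind described above) is
# functionally complete: every other gate is a composition of it.

def nand(a: int, b: int) -> int:
    """Output drops to 0 only when both inputs carry voltage (1)."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple:
    """Add two bits: returns (sum, carry)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # -> (0, 1): one plus one is binary 10, exactly
```

Everything above is exact by construction; the engineering problem is only whether “enough” electrons behave like these 0s and 1s often enough.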