Man the Meaning-Faker
by rsbakker
Ben has posted an excellent piece on Brassier’s Nihil Unbound and his position on nihilism more generally over at RWUG. “Nihilism,” Ben writes in a pithy summary of Ray’s view, “is the philosophy needed for living with intellectual integrity as one of the living dead.”
I remember when I first read Nihil Unbound what hooked me was Ray’s refusal to buy into any of the traditional Continental prophylactic moves, his insistence that truth trumps meaning no matter how cherished that meaning might be. The wont of traditional Continental philosophy has been to adopt various preemptive theoretical attitudes vis-à-vis science, to insist that science presupposes some kind of x, whether it be an existential interpretation of the Lebenswelt, where experience is asserted as the ontological condition of possibility of science (understood as a mere ‘ontic’ discourse), or some normative interpretation of the institutional context of science, where thought is asserted as the practical condition of possibility of science (understood as one language game among others). I have espoused both of these positions in my day, and no longer find either even remotely convincing, simply because I finally realized that posing a mysterious, never-to-be-arbitrated speculative diagnosis of What Science Is as the grounds for appraising the status of scientific theoretical claims is to simply get things backward in a suspiciously self-serving way. It struck me as using Ted Bundy’s testimony to convict Mother Teresa, and to sentence her to never wave her empirical yardsticks anywhere near my oh-so grandiose and yet fantastically fragile speculative claims. Obviously so.
Nihil Unbound excited me so much because I had thought that Ray had actually managed to move past these prophylactic gestures. The biggest shortcoming of the book, I had thought, was simply the problem faced by all projects that attempt to move past meaning, all attempts at post-intentional philosophy: namely, the inability to account for meaning. It’s one thing to say meaning is bunk, but short of explaining why we find it so compelling, the best one can do is hang upon the perennial incompatibilities between science and meaning, knowledge and experience. Meaning either has to be explained or explained away before anyone can attempt to move on in any remotely convincing fashion. Otherwise, all the old and powerful arguments securing the apparent ineliminability of the semantic remain unanswered.
I was so excited by Nihil Unbound, you could say, because I thought I had the very thing it was missing: a parsimonious and comprehensive way to explain meaning away–the Blind Brain Theory. As it turns out, Ray himself came to the same conclusion regarding the book’s main shortcoming; the problem was (from my perspective at least) that he felt the need to turn backward to address it: to seize on a positive account of meaning deflationary enough to seem consistent with disenchantment, but ultimately recuperative all the same–inferentialism. As he explains in his After Nature interview:
[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian—everything has meaning because everything exists for a reason—nor phenomenological—consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness (it has many others, but this is perhaps the most serious). It wasn’t until after its completion that I realized Sellars’ account of thought and meaning offered precisely what I needed. To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals. This distinction between semantic rules and physical regularities is dialectical, not metaphysical.
And so, like a scorned theoretical lover, I find myself writing the odd letter–or post–bent on showing him why his recuperative inferentialism simply will not work.
The irony is that this pretty accurately summarizes my long-standing debate with Ben as well! They both take themselves to be staring the Beast of abject meaninglessness in the eye, but they succumb to their own noocentric intuitions in the end–or so my desolate view has it. Both raise conceptual barricades against the terrifying prospect that they themselves are merely more nature, not nature + x, that the boundary between them and the bottomless universe they both acknowledge is meaningless is simply technical.
What I would like to show is how easily those conceptual barricades can be torn down.
.
“We should avoid scientism and nihilism, on the one hand,” Ben writes, “and delusion and irresponsible faith, on the other.” He wants our dilemma to be a false one, pines for some third way that is not scientific, but remains rational in some respect. Everything, however, hangs upon this ‘some respect.’ He thinks reason understood as instrument of truth is unworkable, because such reason collapses into scientific reason, which inevitably leads to nihilism. He thinks reason as instrument of interest is also unworkable, because he seems to recognize, as did Adorno, that instrumental rationality is incapable of providing meaning. It can only deliver the goods, never the Good–the how and not the why. You could say the whole of contemporary consumer society attests to the paradox of a rationality that can only serve appetite. Reason, as Ben likes to say, is ‘accursed.’
In this sense, he’s actually working through the classic Continental problematic in the classic Continental way: by positing a variant discursive mode while problematizing the ‘presuppositions’ of science. He’s at pains, for instance, to continually contextualize science, to emphasize the fact that it’s just one set of human practices out of many, then to assert that, as such, it’s adapted to its own institutional ecology. Thus, having characterized What Science Is to this minimal extent, he can then point to all the other ecologies out there, and it seems to follow that science simply isn’t applicable. With this picture in place, he can then lay the charge of ‘scientism’ any time anyone applies scientific cognitive standards outside what he deems the proper discursive ecology of science.
I can remember when I thought all this was just a no-brainer! As clear as yesterday…
Where he differs from most historical Continental approaches to this problem is that he maintains, as most Analytically trained thinkers do, a wary respect for the Cognitive Difference, the fact that science isn’t just another discursive institution, it is the objective discursive institution. This is what forces him to the brink of nihilism with Brassier: the fact that he must concede all of the natural world to science. This is what he means by ‘delusion and irresponsible faith’ above: those forms of theoretical claim-making that refuse to concede this ecology–one might say the ‘ecology ecology’–to science.
Now, back in the old days, it was easy for Continental thinkers to believe science to be ecologically constrained, to be necessarily limited to its domain, and to thus secure the cognitive legitimacy of their discourses against its boggling power. The days of that profound theoretical sleep, I fear, are over. As I said above, the hard fact is that science was really only ever technically constrained, that the complexities of the human–particularly those belonging to the brain–allowed the discourses of the human to carry on with business as usual. As cognitive science develops, however, the technical obstructions fall–it really is only a question of how far this process will go. I personally think ‘all the way’ is far and away the most probable answer.
Both Ben and Ray, however, want to draw two different types of lines in the sand. For Ray, the line lies in the Sellarsian notion of ‘parity’ between the conceptual level of giving and asking for reasons and the ontological level of scientific explanation. Insofar as he recognizes the Cognitive Difference, he concedes the ontological priority of science. The possibility of parity lies in
the recognition that the manifest image furnishes us with the fundamental framework in terms of which we understand ourselves as ‘concept mongers,’ creatures continually engaged in giving and asking for reasons. But we are able to do things with concepts precisely insofar as concepts are able to do things to us. It is this capacity to be gripped by concepts that makes us answerable to conceptual norms. And it is this susceptibility to norms that makes us subjects. (“The View from Nowhere”)
The ontological priority of science over meaning flips into conceptual parity simply because meaning provides the condition of science understood as a self-correcting practice. Short of meaning, Ray contends, we can neither motivate nor make sense of our scientific practice. What prevents this account from lapsing into the traditional Continental mould is the refusal to give the conceptual superordinance of meaning an ontological interpretation. Meaning, on Ray’s Sellarsian account, is made. Science monopolizes cognition of the natural, and the natural exhausts ontology–the devil is given its due. Meaning arises out of practical necessity as an invented how that is conceptually incompatible with the natural what, but indispensable for the cognition of that what all the same.
Essentially, this is the great trick of pragmatic naturalism. And like many such tricks it unravels quickly if you simply ask the right questions. Since the vast majority of scientists don’t know what inferentialism is, we have to assume this inventing is implicit, that we play ‘the game of giving and asking for reasons’ without knowing. But why don’t we know? And if we don’t know, who’s to say that we’re ‘playing’ any sort of ‘game’ at all, let alone the one posited by Sellars and refined and utilized by the likes of Ray? Perhaps we’re doing something radically different that only resembles a ‘game’ for want of any substantive information. This has certainly been the case with the vast majority of our nonscientific theoretical claims.
This certainly provides ample ground to be skeptical of inferentialism. But how are we to know one way or another for sure?
This is where the wave flops up and washes Ray’s particular line in the sand away. The only way to know is to gather information and test our various interpretations–to do the science. Given that Ray has already conceded the incompatibility between the conceptual regimes of science and meaning, the prospects don’t look all that good. Science has a pesky tendency to revolutionize.
For Ben, on the other hand, the line in the sand lies more in the possibility of subjective capacity than in the necessity of normative constraint. Indeed, his primary issue with Nihil Unbound lies with how Ray, as he sees it, systematically denigrates this capacity. As he writes:
I agree with Brassier that rationality by itself leads to nihilism, disenchantment, angst, and so forth. Reason is accursed. But I don’t think the two perspectives are incommensurable so that the choice between them must be arbitrary. On the contrary, the perspectives are themselves naturally interrelated. We can speak of objective and subjective truth. The former is the trauma of learning that nature is fundamentally physical, that in itself, prior to our transformation of it, the universe is a harsh, mostly barren wasteland that’s doomed to destruction. By contrast, subjective truth is the feeling of rightness that results when instead of keeling over in horror after the world’s physicality slaps us in the face, we creatively undo that loathsome undeadness and surround ourselves with a more palatable version of the world that’s full of concrete vessels of purpose and ideality. So subjective truth is a salve for the trauma of objective truth, even as objective truth is a check on the vices of irrationality brought on by a wholesale escape into our fantasy worlds. The fact is we must live with both inclinations and we should avoid their opposite pitfalls.
Ben also thinks that science is inescapably wedded to meaning. Like Ray, he believes that its origins in human practice are important, but more as proof against lapsing into naive scientism than as the ‘fundamental (but fictional) frame’ that Ray makes of it. He realizes the difficulty of preempting the cognitive authority of science on speculative grounds in a way that Ray does not. For Ben, the key relation between science and meaning isn’t preemptive and authoritarian, it is consequential and creative. The important fiction, for him, lies in our response to the scientific monopolization of the natural–the Undead God, as he puts it.
Since the creativity simply follows from the straits imposed by the scientific monopolization of the natural, it’s the consequence that becomes the most crucial. Whimsy is creative, as is madness. Bigotry can be creative as well. Ben, in a sense, reverses the authority gradient posited by Ray, arguing that science needs to be the constraint on meaning, what prevents human meaning creation from lapsing into ‘delusion and irresponsible faith.’ Meaning, in other words, requires science to be rational.
But again, we bump into a simple question that seems to unravel the whole. The problem of meaning is primarily the problem of the incompatibility of meaning and science. Given this incompatibility, what kind of constraint is science supposed to provide? How can it constrain something it simply cannot cognize as real in any manner we find intuitively recognizable? The tempting answer, the one that certainly seems to accord with the way science is actually used in debates regarding meaning, is that such constraints are opportunistic at best.
For Ray, embracing meaning in this sense amounts to embracing irrationalism, and the corresponding inability to sort outright delusion from ‘meaning proper.’ But Ben can bite this bullet, I think, and acknowledge that it’s simply part and parcel of the collective debate on which meanings our society should aspire to. The fact that this debate is open-ended in no way impugns the subjective truth of any given meaning, the fact that, as unreal as it may be for the universe, it remains ‘true for me.’ He can, in other words, continue to claim that “[i]f nihilism is the view that the universe is absolutely meaningless, nihilism is false because there is plenty of meaning on our planet.”
Can’t he? Not at all, really.
The first thing to note is that simply positing subjective truth as a solution to the problem of meaning is question-begging. The question of whether there is meaning in the universe is also the question of whether there is any such thing as ‘subjective truth.’ The only real warrant he could have for resorting to it is the notion that it is conceptually primitive, somehow, that it poses an inescapable boundary condition of intelligible thought.
But if it seems this way–and I appreciate that it does for a great number of thinkers–then it is for the simple want of alternatives. On the Blind Brain Theory, for instance, meaning as both Ray and Ben theorize it is a metacognitive illusion through and through–which means that Ben’s subjective truth is also the product of our metacognitive incapacity. The argument for why this is the case is quite direct, no matter how counter-intuitive the conclusions may seem. Science tells us that human cognition is heuristic all the way down. This means that the subject-object dyad is also heuristic, which is to say, a way to make sense in the absence of certain kinds of information. As such, it necessarily relies on the information structure of a given problem ecology to effectively resolve problems. So the question immediately becomes: is the subject-object dyad applicable to the problem of meaning?
Well, as the problem of circularity I adduced above might suggest, we have good reason to think not. Once you appreciate the heuristic peculiarities of meaning concepts, the explanation for the prevailing incompatibility between science and meaning that both Ray and Ben acknowledge becomes quite clear, in naturalistic outline at least. Where science conceives the human as organic subsystems within larger environmental systems, the subject-object dyad conceives the human as a subject set over and against a world of objects. It occludes, and therefore problem-solves, without the benefit of the very mechanical systematicity that science has revealed. Small wonder it suffers compatibility issues! The subject-object dyad elides the mechanistic facts of perception (the role played by sensory media), provides us with gross mechanical information regarding the ‘object,’ and yields next to no mechanical information about its own operations–we have to rely on metacognition for that! Both thoroughly occlude what we are in fact–which I fear is far more akin to the red spot on Jupiter than any notional ‘subject.’ If science is to exercise any substantive constraint, both subject and object have to be seen as cross-sections, lower dimensional projections, of something far more complicated than any Lebenswelt. Applying them as conceptual boundary conditions the way Ben does is not so different from using naive physics to argue quantum field theory.
The thing is, once you realize that the subject-object paradigm is heuristic, then it simply isn’t a matter of subjectivity versus objectivity, so much as systems which are neither. There is no ‘objective subjective,’ for instance: the binary simplicity of the formulation should tip us to the fact that something’s fishy. ‘Subjective truth’ is a heuristic misapplied twice. Now this is an admittedly difficult way to think: the problem-ecologies of our metacognitive heuristics are not intuitively available to us, let alone the fact that we swap between numerous varieties of heuristic tools whenever we tackle questions such as Ben’s and Ray’s. Only neglect makes our dim inklings seem ‘obvious.’ Only neglect makes ‘subjective truth’ seem universal and self-evident. Only neglect lends normative contexts like ‘the game of giving and asking for reasons’ their veneer of preemptive necessity.
But as I keep saying: all of this is about to be revolutionized. The apparent universal applicability of these ways of thinking will be relegated to the scholastic dustbin soon enough.
The thing to realize about my argument is that it doesn’t need to be scientifically vindicated to have a powerful impact on Ben’s position. The subject-object paradigm is either heuristic, or… If it is heuristic it has an effective ecology. The onus accordingly falls on him to argue the applicability of his boundary conditions. Given the abject inability of philosophy to resolve any of its issues, something has to be holding things up. Could it be that traditional philosophy of meaning is planked with serial misapplications?
Well, it’s very possible! That’s the problem, the fact that this is so very possible. This is where reason bottoms out, consumes its own tail, and is remade as something alien to the metacognitive intuitions both Ray and Ben are seeking to preserve, even if in attenuated, deflationary forms.
And really, why should we think these particular prescientific inklings would end any other way? That Man the Meaning-Maker, the human we concocted in the absence of any substantial scientific information about ourselves, would be the one blinkered posit to be vindicated?
.
“Science tells us that human cognition is heuristic all the way down.”
A simple, maybe stupid question, but this is THE point I want to explore, since it is central to your thesis. What books/articles do you recommend related to the scientific work that tells us this? I’d like to see the actual work being done that you’re reading.
You’ve mentioned Daniel Kahneman, for instance.
Oh, just two more things: First, looking forward to your book, as are many of us.
Second—this is for haig: Let us know, somehow, when your blog is up and running. 🙂
I’m pretty sure I owe haig an email. I’ll let him know!
Dan has the manuscript for Through the Brain Darkly. Hopefully I’ll have some concrete details soon.
Good to see you poking around, Joseph!
This is the research gold-mine, as far as I’m concerned. Heuristics are invoked everywhere in the literature but there has been very, very little work explicitly dedicated to them. It seems to be one of those things most cogsci researchers have assumed. My friend Sheldon Chow has his dissertation on heuristics up, which he’s presently reworking into what I think will be a watershed monograph on the topic (incorporating, he’s told me, my mechanical emphasis). In addition to Kahneman and Tversky, there’s Polya, and Herbert Simon, of course. I found my way primarily via Todd, Gigerenzer and the ABC Research Group – and it’s primarily their operationalization that I rely on (but Chow has some problems with this). Sheldon has searched high and low for work by others, but bupkis.
Meanwhile the basic understanding I present has proven remarkably ironclad. It really seems like it hasn’t occurred to anyone to use them the way I’m using them.
Scott, this blog has been on fire of late. I, for one, will be fascinated to see how BBT can be further elaborated. I think it’s basically on the right track. But am I right in suggesting that it needs something along the lines of a theory of content to unpack claims about the heuristic nature of our self-understanding? BBT seems to imply that meta-cognitive heuristics reduce complex computation or physical process to lower dimensional models that misrepresent their nature. But misrepresentation presupposes representation (just as representation presupposes the possibility of misrepresentation). Can BBT remain agnostic about the nature of the misrepresentation it posits?
Writing this from Rome, where I’m attending the Posthuman Human conference. More about that when I get back. Hope all is well, David
I’ve been asking RSB this same question. What is it, mechanistically, for the scientific image to have a grip on reality and for the manifest image to be merely an illusion? If there’s no presupposition of semantics here, BBT needs another account of the difference. If all there are are mechanisms, systems, and natural processes, there’s really only causal relations to work with. So scientific methods are “effective problem-solvers,” whereas intuitions shoot blanks. But this kind of pragmatism or instrumental rationality looks like it presupposes our interests. So what is it to have an interest or a goal? Maybe all of this reduces to biological functions (naturally selected results of certain processes). At any rate, I too am interested in this issue you raise.
I’ve answered this question several times now I thought. Directly or indirectly, it’s been the subject of many a post, anyway. To have a ‘goal’ is to be causally related to the environment in a certain way, a relation that entirely outruns our brain’s ability to metacognize, leaving us with the radically heuristic conceptual peculiarities of ‘purposiveness.’ As we learn more and more about what actually drives the production of behaviour, and as the operational utility of ‘goal talk’ dwindles, we’ll come to see the apparent necessity that impels you to assert a definite function to ‘purpose’ as we intuit it is simply an artifact of the nascency of the science – the lack of information. It will turn out to be surprising and fractionate and mechanistic–something that we can actually tinker with, transform. How else will it turn out? is my question.
To say the manifest image is skewed by several profound ‘illusions’ is no more necessarily semantic than it is to say that the Müller-Lyer illusion is an illusion. It’s just cognitive systems working with maladapted information. There’s no correspondence implied. This is one possible flat, minimalist ontology that falls out of the way BBT levels the first person, one where all you have are systems nested within systems, where some sub-sub-systems, like the apparently indubitable theoretical metacognitive appraisals of the ‘human’ (as purposive, meaningful etc.) amount to dead end vectors of function, problems posed that admit no solution. Of course we keep uttering the terms and they seem functional in some way. I’m not debating that. It’s important to remember that what you intuit of those instances when you reflect on your utterance of those terms is in no way a reliable indication of what ‘you’ are doing in any grand theoretical sense. It’s also important to remember that what your second-order deliberations access isn’t the ‘skin’ of what’s going on, it’s more like shrapnel, strewn throughout the astronomically complicated functional webs of the human brain.
In a real way, BBT flips the onus. It says, hey, here’s a bunch of damn good reasons and a growing pile of empirical evidence that noocentrism is an illusion. In the face of this, why should anyone believe otherwise?
Hi Benjamin. I think you’ve asked the question that the whole thing pivots around. My position is that the ‘mechanism’ that places scientific claims to objectivity beyond question is that of starvation, coercion and murder – violence generally, and its social equivalents – exile, isolation and rejection. Killing is ‘where the buck stops’, epistemologically speaking – survival moment-to-moment is the necessary grounds of making any truth claim. This, after all, is the difference between fact and opinion – fact gets you killed if you ignore it, opinion doesn’t. (And the inevitable corollary – opinion backed by sufficient force IS simply fact. Ask Stalin.)
Anyway, I believe that’s the point where science escapes the suspicion that it’s just another worldview among many; just another ‘way of knowing’. Turns out that science is really, really good at killing – as the saying goes, if prayer could boil cities we’d all be at Jesus Camp. The value of any methodology is simply its contribution to the combat power of its practitioners. So far, science carries the biggest stick.
The ‘biggest stick’ really is the best way to describe it, I think. ‘Objectivity,’ like ‘subjectivity,’ is heuristic, and insofar as it elides the systematic nature of the brain/environment it is a problematic boundary concept.
And I like your appraisal of machinery as violence as well. It’s one of the reasons I see BBT as a worst-case scenario, and why I have such misapprehensions about the posthuman.
So what is it to have an interest or a goal?
Overly trite response: What is it to have a coin pulled from behind one’s ear?
Only the dismantling of the ‘situation’ answers it, and the dismantling does not answer the question. Only, at best, renders the question moot.
Scott, you certainly have addressed this question before. I see your point about different causal relations, one of which is hidden while the other leads us down the garden path. I’m not sure if the distinction between “high” and “low resolution” can be decisive, though, since that seems to presuppose the representational aspect of an image. What’s an image besides being a representation? Yes, it’s an effect lying in the middle of some mechanism, but how else to distinguish that effect without positing the semantic relation?
It’s the same with respect to the idea of a mere “artifact” of some process. We can say one of the two causal relations is naturally selected and functional, while the other is maladapted, but what happens when the maladaptation becomes an exaptation? What happens if, following up on David Clark’s point about death, both causal relations help us survive? So the hidden mechanisms in our brain are obviously needed for our survival, but so too the “illusion” has salutary effects; the fictions we tell to fill the inner blankness with which introspection leaves us provide necessary distractions so we don’t play God with too much technoscience and so we keep the peace by respecting each other as special creatures rather than merely as intrinsically worthless machines. There are many just-so stories we could tell to award the manifest image a biological, life-supporting function. I don’t know if any of them is true, but they strike me as plausible.
Moreover, I think we’re assuming that the mechanisms uncovered by cognitive science are adapted and functional, whereas the mechanisms unfolding from the manifest image are mere artifacts, illusions, distractions, confusions, and so on. But what if it’s cognitive science that’s maladaptive? What if godlike knowledge of our true inner nature is grotesquely maladaptive, which is to say hazardous to our survival and to the reproduction of our genes? In that case, does the superficial semantic distinction get flipped around, so that the causal relation producing the manifest image becomes “factual” (via its adaptational advantage), whereas the inner mechanisms discovered by science become mere artifacts and distractions, because knowing about those mechanisms is bad for our health? That seems pretty counterintuitive to me, which means there’s more to say about the difference between the two images of ourselves (the factual vs the illusory ones) than just that one is better for our survival.
It only ‘seems’ to presuppose. An image is a component of a larger mechanistic system, one that we heuristically take as a ‘representation’ because the occlusion of that system forces our metacognitive systems to posit the relation in some other noncausal way – thus aboutness and its occult (naturally inexplicable) properties. Like I’ve said many times regarding other preemptive intentional moves you’ve made, all I need is a plausible account of why it strikes us as ‘necessarily semantic’ to put this strategy of yours to bed. It’s this account that you need to problematize if you want to make these preemptive moves without begging the question against me. I think by now the ball is clearly in your court: If representation isn’t heuristic in the manner BBT describes then what is it?
Regarding exaptation, I’m not sure what I can do other than reiterate what I’ve posed to you before: BBT suggests that metacognition hosts any number of ‘exaptations,’ sure. Please give me your theory of what these exaptations are and how they function. Given that theoretical metacognition possesses no information regarding its neurofunctional context, it seems to me that the most you could do is speculate on the things it can’t do, such as cognize its own exaptational functions. It can’t even tell us whether metacognition is a unitary faculty as opposed to a fractionate collection of heuristic kluges! How is it supposed to intuit the kinds of happy problem-solving it is capable of?
So, although I agree with you that intentional theoretical metacognition, though providing nothing in the way of ‘accuracy,’ simply has to have had some role in driving cultural evolution, I think it’s a damn difficult thing (but nevertheless very interesting) to say how it has done so, aside from, ‘In ways we presently have no way of knowing.’ This is what I’ve been urging you to explore over our last couple of exchanges: to speculate on the possible ways the theoretical metacognition specific to intentional speculation has discharged your ‘noble lie’ function, just for instance, in a manner consistent with its low-dimensional heuristic status.
This strikes me as a damn interesting question.
If by ‘factual’ you mean what makes you prosperous and happy, then you mean something different than what I mean. For me, it means any high-dimensional environmental relation that reliably enables effective mechanical interventions. I actually take the spectre of unlivability as an important moral of BBT, and my suspicion is that cognition via the high-dimensional systems we’ve developed through science will ultimately cut our throat. But the question I’m asking is simply the question that cognitive science is asking: what, to the best of our knowledge, is actually the case. Since high-dimensional cognition facilitates environmental intervention, it will be pursued no matter what. Too many competitive advantages to be had!
David Clark,
Hmm, are we asking whether the scientific image is more powerful than the manifest one? I’m not sure the answer’s so obvious. The manifest image has greatly impacted our history in countless ways. Think of the religious codifications that fed into wars, the politically correct delusions that keep the masses sane and comfortable in their skin. Meanwhile, under the hood, as it were, the hidden neural processes chug along and we’re certainly empowered by our knowledge of nature.
But the point is that if it’s destructive power that makes science special, I’m not sure that that value bottoms out in a biological function, since that power could just as easily be interpreted as maladaptive, as something that makes us less fit in the evolutionary sense, given that technoscience threatens the continuation of our species and cognitive science threatens us with a social apocalypse. So if it’s not the biological function you’re talking about, I think you might be presupposing some other ideal, such as Nietzsche’s will to power. But normativity is a no-no in this context. We don’t want to say that power (the big stick) is good unless that goodness is reducible to some aspect of a mechanism, such as the enhanced capacity of a mechanism to reproduce genes.
Scott,
You do indeed have an account of how we would succumb to the illusion of the semantic, ethereal quality of images. But I don’t think that account answers the question that was left on the table. What *are* images, if we forget about their semantic aspect? You’ve said they’re things we’re forced to cognize as semantic because we’re blind to the neural system that’s really producing the mental state in question. Are you saying an image is really just some unknown mental processing, so that what it is to feel pain is just, roughly, the firing of neurons X, while what it is to have an image in mind is nothing more than the firing of neurons Y? Do you really expect that the neural distinctions will suffice to explain the difference between feeling pain and thinking of what you call a low or a high res image of something?
Regarding exaptations, I will indeed think more about how the noble lie function would work. The post on mythopoeic vs scientific thought I’d planned to put up in a week or two bears on this, I think. I’m not sure I understand how you’re framing the problem, though. You seem to be assuming that the exaptation would have to be cognitive, so that the blindness issue would still prevent the thing from taking off. Who says a noble lie has to be thought of in cognitive terms? The evolutionary advantage might be put in strictly causal terms of a mechanism for crowd control, so the blindness to the brain is irrelevant. Noble lies could work as means of distraction. We’re curious about some things, so there’s a niche there that can be exploited. We find some myths more attractive than others, and those myths have social functions that have nothing to do with getting the facts right. Still, in a broad sense, the myths would be cognitive in that they’re part of a coherent worldview.
If you’re still asking how the exaptation could arise psychologically, in terms of the functions of our mental systems, again why can’t I just rely on BBT to provide that answer? You already explain how the noble lies arise: we bumble and stumble toward glorifying certain mental states and my point is that once we glorify them, evolutionary benefits ensue so that we wind up with an exaptation. Why doesn’t that suffice for philosophical purposes?
Now you’re actually asking the question of what consciousness is: BBT is just a theory explaining why consciousness perplexes us the way it does, why it has to, in fact, given the informatic straits of metacognition. As such, it’s actually consistent with a number of different theories on the table. The long time hunch I have is that it implies something like McFadden’s CEMI theory, but it’s just a hunch. The big thing is that it allows consciousness researchers to set aside most of the hitherto baffling ‘properties’ they don’t need to explain, since these are far better understood as artifacts of our metacognitive limitations.
So the answer to your question (which is the classic one), “Do you really expect that the neural distinctions will suffice to explain the difference between feeling pain and thinking of what you call a low or a high res image of something?” is yes, insofar as there is anything to be explained. The feeling of pain is simply a very low-dimensional way to cognize what our brains, bodies, and environments are doing. What else would it be? That’s the hard question, and it’s yours to answer if you think something else is going on, not mine. BBT unravels the quale knot rather handily, in fact. It even explains why it’s so hard to get past the intuition that something ‘special’ has to be going on!
The reason I keep scare-quoting ‘exaptation’ is that we’re talking two levels here, the biological and the cultural. Deliberative theoretical metacognition is a historical achievement: the ‘folk theories’ we told before seem to have primarily discharged nonepistemic functions such as signalling group identity and interpersonal reliability, facilitating ingroup cohesion, motivating outgroup competition, and so on. The problem you face, I think, is that you want to be able to sort folk theorization into good and bad (or adaptive and maladaptive), to stake out some kind of authority for your own brand of intentional theorization, ‘philosophy,’ over and above the folk theorization of, say, fundamentalist Christianity or New Age chicanery. On BBT, both are nonepistemic (efficacy as opposed to accuracy oriented) and both are cognitive (in the most general sense of solving environments). This is why it causes you more grief than otherwise, I think. It seems to provide an explanatory basis for what would be Ray’s charge of irrationalism. Perhaps it has the resources to provide you with the criteria you need – I really don’t know. But at this level of analysis, at least, it poses what seems to be a pressing problem. If anything, the religions you excoriate seem to have a greater claim to exaptational cognitive efficacy.
This is how I pitch your dilemma in the post anyway!
Scott,
I take your point about BBT’s limited scope. That’s fair enough, of course.
I’m not sure I see the difference between the cultural and biological levels of explanation, on your view, since what would distinguish the cultural level would seem to be that it’s full of normative prescriptions. If you’re talking about the difference between the individual and the group, I think biology covers both, no?
Anyway, yeah, once we’re talking about how best to manage groups for evolutionary purposes, a whole lot of space opens up for the possible utility not just of noble lies like those in the traditional religions, but of subversive esoteric worldviews for the elites who are in management positions and whose job may be to serve as mutations to keep the group flexible, and so on and so forth. These are mostly just-so stories anyway, which means that we can judge their plausibility but we can’t really falsify them. So they’re a dime a dozen.
Remember that my aim here has been to show only that the core of BBT is *consistent* with my existential philosophy. I think biological just-so stories about exaptations go a long way to demonstrating that consistency. Whether my philosophical claims are actually TRUE is another question. I think of philosophies more as works of art than as scientific theories, so the question of truth here sounds out of place to me. Indeed, the mechanistic viewpoint should have no place for truth either. Again, this is why postmodern antirealism and relativism are arguably better at honouring the scientific worldview than is ultrarational analytic philosophy. The latter is a little more like dogmatic theology in the seriousness with which it continues to take modern Enlightenment myths (the myths that Nietzsche said wouldn’t withstand the call for scientific nihilism).
You’re right: I should have said evolutionary and cultural levels. It’s the difference between heuristics we’re stranded with at birth versus those we come by subsequently.
‘Truth’ understood semantically is out-of-place, but understood as a (rather draconian) heuristic it shouldn’t be a problem at all. I would say the same of ‘Art,’ which is so often cast as the antithesis of Truth. One of the theoretical strengths of BBT lies in the way it dissolves these procrustean antitheses and leaves us with mechanisms/systems possessing different ‘vectors of effectiveness.’ (I’m starting to think we both have a lot of work to do in this regard, figuring out how to reconceptualize this terrain naturalistically! What I want to resist is the intuitive lapse into pragmatism, which has the tendency to preempt the natural, transforming the world into an arena of competitive and cooperative interests.) Some heuristics, such as the complex involved in causal reasoning, solve (effect outcomes that facilitate the system’s capacity to effect outcomes) across a vast domain of problems, others only a small set of problems. Surely you would agree that interpreting your doctor’s mechanistic advice aesthetically is simply to misunderstand the problem-ecologies involved, and so to solve nothing. The question is whether things are different in the philosopher’s case. Advancing ‘the world is filled with meaning that we make’ as an aesthetic (as opposed to epistemic) claim meant to solve the practical problems of living, you have to admit, grinds quite a few gears! You’re not saying that there’s any such thing as ‘meaning,’ but that we’re better off talking as though there were. Or put differently, you’re saying we need to hijack the epistemic problem-solving vector to effect solutions on various existential vectors. I don’t disagree with this, if this is indeed what you mean. But it certainly comes across as a good old-fashioned knowledge claim in the Brassier piece, you have to admit! And I would add that it actually exemplifies the upshot of BBT, which is that intentional ways of thinking require ignorance to function effectively.
Yes, Scott, I’ve noticed that you resist a pragmatic construal of BBT’s upshot. I wonder how the heuristic view of mental processes differs from, say, game theory, in which our behaviour is egoistic and chosen on the basis of instrumental rationality. We maximize our utility, and so forth, where the question of which goals is best is left up to the individual or is otherwise naturalized by appealing to evolutionary functions and cultural norms. Is this a species of the pragmatism that doesn’t appeal to you?
As to aesthetics in a medical context, I agree that that would be out of place, but the reason why is straightforward: aesthetics applies to an evaluation of ends, whereas the goal of health is presupposed in a medical context, and rationality suffices as the method for evaluating the means “prescribed” by the doctor. The doctor deals only with hypothetical imperatives, which aren’t normative, so there’s no need to haul out the big gun of aesthetics or some religious feeling about something held to be sacred.
Likewise, the “practical problems of living” are about the means of achieving certain goals, where the goals are presupposed. Only when the goals/priorities/desires/ends are up for grabs do we face an existential crisis, a view from nowhere that requires a leap of faith, aesthetic taste, or a religious tradition.
Am I saying there’s no such thing as meaning? If by “thing,” we mean something made up of ingredients found entirely in a physicalistic ontology, I don’t suppose there’s any meaning. However, I’m open to the emergence of properties and processes that don’t reduce to physics (to “things” in the literal sense). Nature strikes me as a very creative place. We were created and we in turn create many peculiar *things* that aren’t entirely thing-like. Our artifacts are carriers of value and purpose that come into being especially when they’re used by creatures that stumble onto this exapted way of living (i.e. this niche). We don’t live entirely for our genes, unlike most other animals. We live for various unrealistic goals, including our moral ideals. Our artifacts help us achieve them, so we’re all caught up in an uncanny process, which seems to simulate our fantasies about the supernatural. For example, we wanted to be one with the gods and our machines make us godlike.
Is simulated meaning the same as real meaning? Well, taking the computational view of the mind, all our minds are simulated anyway, since we’re machines. But a machine that does what a mind can do is a mind. Likewise, artifacts that are intelligently designed to fulfill, in effect, our dream of handling our existential worry, by negating nature’s meaninglessness, carry the meaning we invest in that dream. What I’m trying to think through is the sort of purely natural process that might be involved here. Why would nature simulate something that’s superficially supernatural (i.e. something apparently normative and that otherwise corresponds to the manifest image)? Apparently, something altogether novel is being created in our corner of the galaxy. (“And what rough beast, its hour come round at last, //Slouches towards Bethlehem to be born?”) This is why the thought of posthumanism intrigues me, especially since the invention of culture thousands of years ago was already sufficiently revolutionary.
There’s no ‘mind’ on my account, just the brain as seen via two radically different perspectives, one utilizing all the ancient and powerful machinery of environmental cognition, the other a far younger metacognitive subsystem gulled, via philosophical reflection, into feeding its diverse and informatically impoverished wiretaps to our environmental cognitive systems. This is the thing I keep saying, over and over and over: in a first-order sense, ‘goal talk’ simply yokes heuristics we use to cognize other minds. There’s no problem. As soon as you ‘reflect,’ assert the ‘reality’ of goals, speak of them as belonging to the furniture of what is, as ‘simulation’ even, you’re treating metacognitive information as if it were environmental information, when it’s plainly not, cannot be, in fact, short of some kind of magically powerful metacognitive faculty we plainly don’t possess. There literally is no ‘meaning as simulation.’ It’s simply a misfire like any other, like a kind of visual illusion where the cunning deployment of visual cues triggers the application of visual-perceptual heuristics ‘out of school.’
But I understand how hard it will be for me to convince people of this: as hard, I suspect, as convincing someone with Anton’s Syndrome that they really are blind, even though they are convinced they can see it all right there, as plain as day. It really is a form of ‘natural anosognosia’ I’m talking about.
But nevertheless, you don’t have ‘goals’ any more than evolution has (or simulates) ‘designs.’ Since metacognition lacks any access to the systems actually driving your behaviour, it assigns ‘constraint’ to whatever it can access – in this case, sketchy information regarding the system charged with mediating behaviour. THIS is all that any of us are talking about when we use ‘goal’ in first-order discourse. The systematicity you find in things like game theory actually pertains to the systematicity of the neural systems involved. As our understanding of those systems grows, the more dots we’ll be able to connect, and the more quaint and prescientific our existing understanding will seem.
“Information” can be treated in a number of ways, but I take it you’re using the word to refer to signals of some source, or effects that inform us about their cause. But why couldn’t metacognitive systems deal directly with causes rather than just effects? For example, when someone thousands of years ago was very hungry, he felt grumbling in his stomach and that pain caused him, say, to hunt down an animal and cook it over a fire, thus turning something in his environment into a more useful form (from his perspective). More specifically, the pain of hunger caused associated mental states, such as memories of what he did the last time he was hungry, or the pain fired up his imagination and caused him to plan some new method of attack or of preparing the meal, and those mental states were intermediary causes of his actions. So if the person had to interpret what’s going on in his mind when he’s hungry, he’d say he wants to eat.
To say that there’s no such desire is to treat all of his known mental states as effects (signals), because all the true causes of his action are the neural ones and he has no internal access to them. But he does have access to his feelings of pain which seem to trigger those associated mental states. You surely grant that the folk interpretation of this hunger isn’t entirely useless. Maybe it’s not as useful as a fine-grained analysis of his neural transactions, but this doesn’t imply there’s no such thing as desires. On a pragmatic view of science, theories are just models and you should use the ones that give you the best results under certain conditions. Folk psychology is a model in that sense. All models simplify, including cognitive scientific models of “systems” in the brain.
Anyway, the teleological interpretation of desires comes about because certain mental states seem to cause us to modify the world in a way that matches more closely the way we prefer things to be. That teleological model of human activity has been used to predict perhaps ten trillion instances of such behaviour over a period of tens of thousands of years. I’d agree that desires probably aren’t what the folk think they are, but I’d bet that instead of eliminating desires completely from our ontology, cognitive scientists will redescribe them in some more objective language. That bet would rest on the strong induction that because folk psychology has been a very successful model, it’s likely not that far off from the truth.
Now you’ll say that folk psychology has been useful only because it’s piggybacked on the brain in which the true causes of our behaviour lie. But this would concede that the manifest image isn’t wholly wrongheaded, because it’s had a line on the facts all along–even if we’ve been calling them by other names, abstracting from most of the details which we couldn’t do anything with anyway, because we lacked the technology to deal with such an information glut. Likewise, the cognitive science of mental functions could be undermined by the deeper science that gets into the brain’s chemistry. But don’t you want to say that the larger mental systems and mechanisms exist, that they emerge from the chemistry, that the universe doesn’t consist only of subatomic particles and their quantum interactions? Why can’t the folk use the same reasoning to give some credit to their teleological talk of desires? Isn’t your theoretical elimination of ordinary mental states inconsistent?
I mean illusion in a nonintentional sense, as ‘cognitive systems working maladapted information’ as I put it to Ben below.
The need for content simply evaporates. The conceptual peculiarities of content that have driven so much philosophy of mind are explained via neglect: aboutness and the normativity belonging to its implicature are distinctive for heuristics because the information neglected is the very information required for a high-dimensional, high-res, understanding of our subordinate and superordinate systematic relations – namely causal information. They are attempts to solve the question of our systematic relation to our environments absent almost all causal information regarding that relation, so small surprise they are incompatible with that information. The explanatory role of neglect is what’s all important in my account in that it defuses all the intentional bombs belonging to the first person, thus allowing mechanical barbarisms to sack the human soul.
Once normativity and aboutness are explained away, the human system can be treated as any other natural system. So for instance, all you have to do is feed mechanically effective structural isomorphisms – ‘replications’ – to Clark’s ‘representation hungry’ cognitive systems and nothing jumps out as especially problematic: you can remain within the mechanistic register and still explain the metacognitive intuitions that most representationalists take as criterial.
BBT actually provides the very thing radical embodied theorists are after, though at the cost of all intentionality whatsoever. I don’t think Gibson would have liked it!
Checking blogs in Rome, David? That’s hardcore dude! Vita bella!
Sorry, “fact gets you killed if you ignore it, opinion doesn’t”. <– edit button 🙂
And what exactly do you mean by “meaning” in this context?
The whole grab-bag of intentional concepts.
And how do you do that? I mean, how do you exercise a capacity (to mean something) which, according to what you mean with your sentences above (although I found it somewhat difficult to find meaning in some of them), you do not have?
Actually, you’re exercising capacities you don’t even know you have – that’s the better way to put it. Someone could damage any number of neural circuits and destroy your ability to write. All I’m saying is that you have only the most impoverished access to these things via reflection, leading you to posit the existence of things that actually don’t exist. So I can flip the question around: How can you be so certain that what you think intuitively self-evident isn’t deceptive through and through?
(And how can you be certain of anything?) But (seriously now!) could you please reformulate your view – or, for a start, your last comment – without using the same intentional vocabulary you want to abolish? Because if you do not avoid using expressions like “to know that”, “to say that”, “reflection”, “to be certain that”, “to think”, “deceptive” while hypothesizing they are “deceptive through and through”, you will continue to fail to express yourself consistently. And if you should, while trying to rid your BBT of those words (best to start with the first B, I think), happen to realize that you actually cannot do without them, you should think about what that could mean.
What’s wrong with the vocabulary? It’s your definitions I dispute.
Isn’t it kind of…not very functional to use intentional language to ask someone not to use intentional language, Copper? To do so begs a response based on the intentional – it certainly doesn’t ask for a machine-code response. Like asking for ice cubes by activating a flame thrower.
At least indulge the reply as being rather like the comments programmers put in programming code, to explain to themselves what the code they wrote is doing (yes, this is common practice amongst coders – or at least the good ones). The comments explain to the programmer what his code does, and yet the actual code is a far more accurate description of what it does. A description so accurate, it’s hard to understand at a glance – thus the use of comments by programmers.
At least don’t pick at people for speaking in comments only.
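The comments-vs-code analogy can be made concrete with a toy sketch (Python chosen arbitrarily; the function and values are purely illustrative). The comment is the low-resolution gloss; the code is the exact, harder-to-scan description of what actually happens:

```python
# Gloss (the 'comment' level): compute the average of a list of numbers.
# The code below is the precise description -- exact, but less readable at a glance.
def average(numbers):
    # Sum every element, then divide by the count.
    total = 0.0
    for n in numbers:
        total += n
    return total / len(numbers)

print(average([1, 2, 3, 4]))  # 2.5
```

The gloss suffices for everyday purposes even though only the code says what the machine really does – which is roughly the relation being claimed here between folk talk and neural mechanism.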
And – my definitions? I cannot remember having defined anything. Nor do I think anything is wrong with intentional concepts. I just asked kindly for a formulation of your view that does not contradict itself by expressing its condemnation of “the whole grab-bag of intentional concepts” as “deceptive through and through” while relying on the same concepts for its formulation. Also: perhaps it is not me who needs to challenge his intuitions about the meaning of the word “meaning”. I am quite sure one need not be quite so sure as you seem to be that our intentional vocabulary refers to something we “metacognize” with some inner sense, if there is such a thing (I doubt it). Maybe goals and reference are things to be experienced by plain eyesight; maybe their existence is as compatible with the existence of electrons and leptons etc. as is the existence of chairs and tables.
You think my use of that vocabulary entails intentional commitments you have that are incompatible with my claims regarding mechanism, so it seems fair to assume that we are using the words in different ways. If that’s not the problem then what is? I appreciate the easy route is to simply assume I HAVE to be using them the way you think you use them. If you think as much, then you need to make an argument to this effect.
I don’t recall referring to any ‘inner sense.’ But I’m curious: if my uses contradict your notion of the intentional, then what informs your notions, if not metacognitive intuitions on what it is you’re doing when you employ them?
And how do you do that: use the vocabulary in an unproblematic way when everybody else (so you seem to think; it’s not my opinion) employs it in a way you criticise so desperately? Do you think of different things while using the words? Or does your use of them follow different rules?
And is it possible that in fact everybody (except for a few confused philosophers who mistake themselves for scientists) uses that vocabulary in an unproblematic way, so that you, so to speak, are preaching to the choir?
As for my metacognitive intuitions: I have none. It’s a genetic defect and it’s making my life horribly complicated, so please do not make fun of it.
I just describe them in nonintentional theoretical terms after I use them, similar to how you describe them in intentional theoretical terms. And yes, it is primarily philosophers who are confused.
Ben,
Maybe it’s not as useful as a fine-grained analysis of his neural transactions, but this doesn’t imply there’s no such thing as desires.
Picture a multiple number of domino lines. Lots and lots of them, some tumbling.
Where do you distinguish desire, or any particular desire, in there? Why are you drawing any distinction – it’s dominoes and more dominoes, all the way down.
Sure one can damn the torpedoes and draw a line in the uniform, undifferentiated sand. But can one honestly treat that line as really there?
Reminds me of the Dr Who episode where they lose a loved one to cyberman conversion – they find her, but then she walks away into a crowd of cyberman. Indistinguishable.
Gotta love a blog where you can use a pop culture ref in regards to philosophy and actually get sombre appraisal…
This same reductive line of reasoning (just lines of dominoes) leads to the end of all the special sciences, including cognitive science and BBT. There are physicists who think the only thing that’s real is the entire universe taken as a whole, so that everything that goes on inside at any particular place or time is an illusion. But this mystical view of reality is impractical. For practical purposes, we use language to refer to patterns that impact our lives, such as the patterns involved in having beliefs and desires.
One of the issues here that hasn’t gotten much discussion, I think, is the debate between metaphysical realists and pragmatists. How realistic or pragmatic should we be in interpreting the ontological implications of scientific theories?
I’ve mulled over the idea that there is no number higher than 1 in the universe. It gets no higher – there is just 1. Interesting to have developed that idea in parallel with the physicists.
I’m not asking you to adopt the following idea wholesale, Ben, but more of a side option to mull: The idea is that the extinction of all life isn’t, looking at things currently known, a big deal at all. In regards to such an idea, the notion that the view is ‘impractical’ doesn’t hold any implication. So what if it’s impractical, in regard to this idea?
You seem to have dismissed me by saying it’s impractical – as if it being impractical somehow matters?
At least in the idea I’m pitching (to just mull over as a side idea), you’ve got no ground beneath you at all, let alone a position from which to dismiss. In regard to the idea I’m pitching, there ain’t nothing holy about practicality.
Perhaps it sounds really nihilistic, but I remind you I have not asked you to adopt it wholesale. Simply as a side idea to mull over. It’s like asking someone to think about things from a Nazi’s point of view – that’s not asking someone to be a Nazi, despite how repulsive the thought experiment might be. Or did I just invoke Godwin’s law – dang! 😛
I’m not dismissing anyone, Callan. Certainly, mystics don’t care if mysticism is impractical, since they think our highest goal should be withdrawal from the world. My point is about the nature of science and rationality. How pragmatic are they? Critics of string theory say it’s untestable theology, not science. The reason special sciences emerge is that we want to generalize about sub-patterns. Just as most theists have to live in the real world, as opposed to spending all their time contemplating God’s transcendence, naturalists can’t spend all their time thinking of everything’s oneness in some invisible dimension. For one thing, I doubt the human brain has the cognitive capacity to encompass that mystical thought, without implicitly breaking things into parts or levels. But as soon as we make distinctions and employ limited concepts, we divide the world into parts and thus posit emergent properties, such as those that correspond to the manifest image.
Scott,
But I understand how hard it will be for me to convince people of this: as hard, I suspect, as convincing someone with Anton’s Syndrome that they really are blind, even though they are convinced they can see it all right there, as plain as day. It really is a form of ‘natural anosognosia’ I’m talking about.
It’s a side point, but what’s the metric for determining if someone understands this? What if you are the Anton’s patient in terms of being convinced someone is convinced?
One of the problems we seem to be having is about the difference between science as a body of knowledge about the universe and science as a body of practice performed by scientists. Science as a body of knowledge about the universe is embodied in the universe itself whether any scientists perform science on the universe or not. The claims philosophers make about science might apply to science as a body of practice but they don’t apply to science as a body of knowledge. The universe, and the knowledge about the nature of the universe embodied therein, exist whether scientists (or philosophers or theologians) exist or not.
This distinction between practice and content also presumes the universality of what are also radically heuristic ways of conceptualizing the problematic. What’s actually going on (however this is eventually understood) has got to be a whole lot more difficult to engage. Moving past these dichotomies, which simply send you twirling back into all the old debates, is one of the key advantages provided by BBT. There is no ‘universe independent of us’ any more than there is any ‘universe independent of itself.’ There is no subject or object, no practice or content, outside a certain procrustean projection of the problematic. This is basically what I’m trying to sketch out here on the blog over and above the analysis and critique of intentionality: ways to conceptualize a post-intentional understanding of our place in the universe. The question is one of how, given the apparently mandatory nature of these ‘frame’ heuristics, we can see our way past our intuitions and arrive at some reliable way of engaging these questions.
I have a bunch of stuff in the oven, but nothing close to satisfactory. But once you appreciate the heuristic nature of these traditional boundary concepts, then you have a way to begin speculating as to how they might be superseded, and the more creative beans that pile in the better!
You can tell they understand because they are swinging slowly back and forth, or their blind brains are on the wall behind them, and their bodies are cooling to the ambient temperature.
So many words… let me summarize. Your brain is like a square. You couldn’t possibly understand why you’re wrong, but I can, because my brain is like a cube. I see everything you see times infinity. It’s because my Metencephalon has an extra line segment.
“I just describe them in nonintentional theoretical terms after I use them, similar to how you describe them in intentional theoretical terms.”
1. I do not see that Copper describes any of the nonintentional concepts in any way whatsoever. But perhaps I have not read all of his comments.
2. No, you do not. Example:
“Blind Brain Theory (of Conscious Structuration) – Proposal that the central, most perplexing features of consciousness are the result of thalamocortical ‘information horizons,’ in effect, the ways the CONSCIOUS portions of the brain are BLIND to the complexities of their immediate neural environment. BBT HYPOTHESIZES that various PHENOMENAL structural peculiarities such as presence, self-identity, and intentionality, are simply a consequence of INFORMATIC asymmetry, the fact that the thalamocortical system can only ACCESS a small fraction of the greater brain’s overall processing load.”
So Copper is absolutely right to ask for a formulation of your theory that does not performatively contradict itself by relying on intentional concepts.
Copper maintained that I HAD to be using certain concepts a certain way – did he not? Namely, he maintained what you’re maintaining, that I have to be using certain concepts the way you take yourself to be using them. My theory is that what you (and pretty much every philosopher) theorize on the basis of scant metacognitive access as ‘intentional concepts’ are best understood as mechanical heuristics. So again, I appreciate that you have theories as to what all these capitalized concepts mean, and I understand that you think them self-evident, and obviously ‘intentional.’ I see them as heuristics, cognitive short-cuts that are – like everything else living – biomechanical. There’s nothing wrong with the heuristics in and of themselves: the problem is the theoretical metacognition – ‘philosophical reflection’ – that claims to describe or explain them.
So when you use these capitalized terms your brain is doing certain things – something mechanical is going on. All I’m saying is that the second-order theorizations of normativity and intentionality that philosophy heaps on this brute doing is largely specious, the result of various kinds of information privation. So tell me why I’m doomed to buy into your second-order theorizations as first-order facts of the matter? Because if you can’t make a compelling case for this, then charges of performative contradiction obviously beg the question, don’t they?
It’s a pretty simple question. And I hate my theory more than enough to want to be wrong.
“Copper maintained that I HAD to be using certain concepts a certain way – did he not?”
I do not see where he does that.
“Namely, he maintained what you’re maintaining, that I have to be using certain concepts the way you take yourself to be using them.”
1. Of course you can define the words you use as you wish to. But if the rules of your usage of them (your definitions) differ fundamentally from the rules according to which they are normally used, you should not pretend that you are talking about the same things everybody else is talking about, and that you are telling them surprising new things about those things. I could also invent a theory according to which, for example, bricks are edible, and tell people, when they tell me that it is false, that it is not, because they cannot assume that I have to be using certain concepts in the way they take themselves to be using them; that I use the word “brick” synonymously with the word “bread”.
2. Your formulation is quite interesting, because it acknowledges the difference between the meaning of your words – the way you use them – and what you take to be their meaning. So you mean (although you do not realize it, I think) that the meaning of a sentence is nothing mental, and so – according to your identification of mind and brain – nothing neural. Problem of meaning solved.
“My theory is that what you (and pretty much every philosopher) theorize on the basis of scant metacognitive access as ‘intentional concepts’ are best understood as mechanical heuristics. So again, I appreciate that you have theories as to what all these capitalized concepts mean, and I understand that you think them self-evident, and obviously ‘intentional.’ I see them as heuristics, cognitive short-cuts that are – like everything else living – biomechanical. There’s nothing wrong with the heuristics in and of themselves: the problem is the theoretical metacognition – ‘philosophical reflection’ – that claims to describe or explain them.”
So all philosophers agree on something? Incredible. I can be no philosopher then, because my intentional concepts – the sets of rules according to which I use words like “meaning” – are not informed by introspection (if you mean something else by “metacognition”, tell me; I am not sure what else to make of that word). I learned my language and its rules by imitating people talking. I did not see their brains or minds, only their sensible behaviour. That is the source of my concepts. The criteria for the application of intentional vocabulary are behavioural. By the word “intention”, no inner episode is meant. And that is no theory, it’s plain fact: look at how people use the words I just used.
“So when you use these capitalized terms your brain is doing certain things – something mechanical is going on.”
It’s quite reasonable to assume so. I do too. But when I talk about meaning, I do not talk about something in my head. I talk about my use of those words, and its rules. It’s also quite reasonable to believe scientists when they tell me that my behaviour is causally connected to proceedings in my brain.
“All I’m saying is that the second-order theorizations of normativity and intentionality that philosophy heaps on this brute doing is largely specious, the result of various kinds of information privation. So tell me why I’m doomed to buy into your second-order theorizations as first-order facts of the matter? Because if you can’t make a compelling case for this, then charges of performative contradiction obviously beg the question, don’t they? It’s a pretty simple question.”
Yes it is. You buy into your language, and with it its rules, because you use it. And as long as you violate those rules the way you do, you may think that you understand what you are saying, but really you do not. YOU beg the question by defining, without any argument, intentionality as something we metacognize.
“And I hate my theory more than enough to want to be wrong.”
Your emotional concern with your theory is quite obvious: you repeat it endlessly, find it confirmed everywhere, and are immune to counterarguments. In short: you are religious. Or, maybe, paranoid. Maybe you should leave your theory for a while (you could use the enormous load of free time to finish your book, for example).
What is it with the animus? Life advice? Really?
I’m asking for counterarguments, but literally, all I’m given are accusations of performative contradiction. Over and over and over. I keep asking the same questions, and nobody answers. Is attitude the best you can muster?
So once again: Where do you derive your assumptions regarding normativity? What is it that evidences your conception of ‘rule’? What’s it based on?
This looks like the Churchlands’ creepy cyberpunk bullshit…
Sounds like a potty mouth fallacy!
Brassier’s stuff seems to teem with anxiety about sex, screaming for pop psychoanalysis so obviously that it is perhaps a joke.