The Decline and Fall of the Noocentric Empire
by rsbakker
The Semantic Apocalypse debate winds on, with Ben Cain over at Rants Within the Undead God, and Stephen Craig Hickman over at noir-realism. The irony is that although we three actually don’t disagree about that much, the disputed remainder is nothing less than the whole of human aspiration since the Enlightenment.
Philosophically wounded souls disputing existential salvage rights? Or narcissistic dogs fighting over hyperintellectualized scraps?
One way to look at what I’m arguing is in terms of the ‘third variable problem’ in psychology. When presented with a statistical correlation, say between the availability of contraception and a high rate of teen promiscuity, the impulse is to assume some causal connection between the two, even though any number of third variables–‘unknown unknowns’–could be responsible, say, the ubiquity of pornography or what have you. Once again, it comes down to the invisibility of ignorance, the way the availability of information constrains cognition. Absent information pertaining to third variables, cognition generally operates as if no such information existed, not even as an absence–precisely as we should expect, given that we are biomechanisms.
So I think we all agree on the following three premises:
1) Our traditional notion of the human (the ‘manifest image’) substantially turns on the information available to metacognition.
2) Historically, information regarding the human available to metacognition has been dramatically constrained.
3) The sciences of the brain are presently generating immense quantities of hitherto unavailable information regarding the human.
The dispute lies in our respective assessments of this situation. For my money, the most crucial claim is the following:
4) Absent information pertaining to the absence of information, cognition assumes the adequacy of the information available, no matter how inadequate it may be.
This is what I generally call ‘sufficiency’ (or elsewhere, the ‘Principle of Informatic Adumbration’ (PIA)). What sufficiency essentially means is that metacognition is very nearly theoretically useless as a mode for cognizing what we are. Certainly it discharges a myriad of functions–it is a metabolically expensive adaptation after all–but the provision of accurate theoretical cognition of ‘subjectivity’ is almost certainly not among them. Given the informatically impoverished status of metacognition, or (2), sufficiency means that the flood of information asserted by (3) could reveal a potentially bottomless parade of third variable confounds despite any intuitions to the contrary, that the first person could genuinely feel like the most certain, indubitable thing in the world, and still be utterly illusory. This means that metacognition, contrary to the assumption of the tradition, is no better placed than cognition more generally when it comes to theoretically modelling nature absent the institutional prostheses of science.
And this suggests that the flood of scientific information in the domain of the human is going to do what such floods have done in every other domain of human inquiry: wash everything away, and reveal something utterly indifferent to our cherished traditional conceits. Something inhuman…
The third variable confounds formulation is interesting. But when you speak of “the adequacy of the information available,” in (4), you’re assuming the metacognitive purpose of scientifically (theoretically) getting at the facts of our nature. You’re saying that given the brain’s lack of natively-obtained information about itself, the brain can’t do science (know itself with a theory of the facts) without doing science (turning to the scientific methods rather than relying merely on intuition or philosophical speculation). This becomes almost a tautology, doesn’t it?
My point is that the folk attempt to know ourselves isn’t really any such thing, just as religion isn’t really about telling us the bare facts. Intuitions may have adequate information to perform a nonscientific function, such as the function of deceiving ourselves for our own good, so that we don’t get carried away with cognitive purposes and go insane due to an over-abundance of information. If that’s how the mechanism of intuition or philosophical speculation works, there’s not much of a purely mechanistic objection to intuition or speculation, is there? We can say the folk methods fail, given scientific criteria of success, but it’s scientism to assume that all epistemic business is scientific business. Many people may require nonrational or pragmatic justifications of some of their beliefs.
I’m saying the brain can’t theoretically know itself without doing science, just like it can’t know plate tectonics. I’m not sure where the tautology comes in. The vast bulk of reliable theoretical cognition happens to be scientific cognition – as I’m sure you agree. Dem just the cognitive breaks. If it’s scientism to acknowledge as much, then you’re as guilty of it as I am, Ben! The difference is that you want to cut theoretical metacognition some kind of a break, acknowledge that it nevertheless does valuable theoretically mediated work despite its inapplicability to theoretical problem-ecologies. I don’t disagree that this is a possibility, I just don’t know what that ‘valuable work’ might be, or how it could be remotely reliably determined short of scientific investigation. Do you? In the most general terms you can cite heuristic simplification, and I would agree. But for me the next question stops everyone in their tracks: Heuristic simplification how, adapted to what? What information is neglected and accessed to accomplish what domain-specific task?
If you look at humanistic discourses as a whole, you have to admit it looks frightfully willy-nilly, with nary a regress-ender in sight save institutional exhaustion. I have my ‘normative’ chits here and there on the table, but when I’m honest, I really don’t have a fucking clue what I’m talking about. It just ‘feels right.’ Thus my perpetual akrasia.
Another way to phrase your dilemma is: How can any lie be noble if nobility itself is a lie? It seems to me that you cannot do away with some kind of commitment to speculative metacognitive theoretical accuracy to get this ‘noble lie’ tack off the ground…
The tautology would be that, because “theoretically know” means “scientifically know,” you’re saying “the brain can’t scientifically know itself without doing science.” That is, the brain can’t fulfill the function best carried out by the mechanisms/methods of science (including the social mechanisms), which function BBT speaks of in terms of accuracy and the manifest image calls getting at the truth or the facts, without using those mechanisms. This isn’t quite a tautology because maybe there are other mechanisms in the universe that can carry out that function. As far as we know, though, science is the best way of doing that part of our epistemic business, which is to give us an accurate take on the world. So I agree that if that estimation of science amounts to scientism, I’m guilty of scientism too.
But that’s not how I’m using “scientism” in this discussion. I think the bad kind of scientism comes out in your question, “How can any lie be noble if nobility itself is a lie?” As far as I can see, the job of one mechanism needn’t dictate how other mechanisms should perform if those other mechanisms have their own jobs to do. Imagine a scientist walking into the White House and demanding that the President stop trying to govern the country and start conducting experiments, as though science (theoretical accuracy) were the only worthwhile job in the world.
So when you say that nobility is a lie, you’re using the scientific standard/function of accuracy to judge intuition and speculation even though the latter may have been adapted or exapted to perform some nonscientific task, such as the task of keeping most of us in the dark about the facts of our nature. Intuitions and speculations aren’t “noble” or worthy *from the scientific perspective,* since from that perspective, accuracy is the main goal and intuitions and speculations can’t achieve that goal. But “nobility” may be defined by some nonscientific job requirements.
Whether these nonscientific jobs (self-deception for our health, etc) count as cognitive or epistemic is a matter of definition, but I think they’re cognitive if we consider that epistemic justification may be partly pragmatic.
All I’m doing is describing your problem, which happens to be all of our problems. What on earth is wrong with my question? Accusing me of ‘bad scientism’ isn’t an answer, you know that. All I’m using is the only actual yardstick for reliable theoretical cognition we seem to have. What you’re doing is shouting what I once shouted all the time: Keep your yardstick away from my claims! It’s not applicable! To which I need only ask, Why so? Sure, you can adduce innumerable theoretical arguments, but what you can’t do is give any reason why I should take any of them seriously, given what we now know as a matter of fact about human cognition. Should I just pretend not to know about the myriad biases that afflict human reasoning? Of course not. And neither should you. I concede that your point is more ‘appealing’: I just wanna know how it works! And at the same time I’m pointing out that no matter how many verbal walls you raise, how many inferential citadels you fashion, ALL the things you’re attempting to sequester from natural scientific scrutiny are going to find themselves objects of natural scientific scrutiny.
Is there such a thing as a ‘noble lie’? Do you seriously think you can exempt this question from the canon of natural scientific inquiry? Of course not. Do you seriously think that what is discovered is not going to somehow revolutionize our prescientific theoretical intuitions? Short of some kind of response to these questions I just don’t know how your position amounts to much more than foot-stomping, Ben.
What you need is some kind of ‘normative autonomy’ argument – the very argument I’ve since given up trying to find by looking backwards. Otherwise, to use Dennett’s characterization of the problem, ‘creeping depersonalization’ will continue apace, and yes, Presidents will rely ever more on mechanical understandings of the human. They already do to what I think is a horrific extent!
I agree with much of what you say here, including your point that merely calling something scientism doesn’t prove anything. As you say, for every charge of scientism, there’s the possible counter-charge of over-defensiveness due to a turf incursion. But I think my point is more straightforward than this. All I’m saying is that there are different aspects of cognition, and that knowledge isn’t, or needn’t be, only about getting the facts straight (there’s also epistemic justification to consider, the giving of reasons, which may be pragmatic). So when we criticize intuitions for failing on scientific grounds, we beg that question about the nature of knowledge.
So take the point about our cognitive biases. Science explains how they work and those explanations conflict with our naive interpretation of ourselves. But again, my point is just that maybe those biases have their own job to do. Maybe they’re supposed to mislead us, for good, objective evolutionary reasons. Maybe those biases keep us alive and sane, and maybe that kind of emotional health is needed for knowledge. Were a sociopathic computer to record all the facts in the universe, would we say that computer knows what it’s talking about? Instrumental knowledge of how things work needn’t be the only kind of knowledge.
The semantic issue of what we should count as knowledge isn’t terribly interesting, I’m sure you’ll agree. Maybe we’re talking about the difference between knowledge and wisdom, and you’re saying that progress in scientific (“theoretical”) knowledge seems to entail that there’s no such thing as wisdom, since normativity turns out to be an illusion.
This is the way we typically talk past each other! I agree with pretty much all that you say, but still think you have a problem – or better, that you are underestimating the severity of the problem. So I’m totally in the Gigerenzer camp, for instance, when it comes to the question of biases: what skews in one context enables in another. This is simply the issue of seeing heuristics as ecologically particular or domain specific. But what cognitive psychology reveals is the role played by neglect: we have no metacognitive awareness that we are using specific cognitive heuristics, let alone whether we are using them adaptively, let alone whether the adaptive application (which could be ‘peer intimidation’) is one we would want to sign off on. So we have an array of first-order problems.
On top of this we have the second-order problem: whether our intuitive second-order (deliberative, theoretical) understanding of ‘purposiveness,’ say, is applicable. This is Dennett’s big mistake: it’s one thing to say we possess a variety of socially adaptive heuristics, that these are necessarily adapted to ‘real patterns,’ but it is quite another to say that our deliberative metacognitive intuitions tell us anything of the structure of those particular tools. It’s this philosophical reflective level that I’m pressing you on. My question is the skeptic’s question, at this point. I’m saying that although it makes eminent sense to employ ‘purpose’ in everyday first-order contexts, that has very little to do with our second-order theoretical deliberations on the ‘nature of purpose.’
The challenge BBT raises is that there’s just no ‘knowledge’ here of any kind, mechanical, instrumental, or what have you. Intentional conceptuality, on this account, is chimerical in all contexts save the philosophical, where it provides a living of sorts for a few befuddled souls, and – generally speaking – only serves to confuse and obfuscate under the banner of profundity in all those contexts you would want to call everyday or ‘existential.’ I sometimes think this is what people are picking up on when they excoriate philosophers, that they somehow sense that cognitive tools are being applied out of school.
Anyway, it’s this level I’m pressing you on. You keep talking about these functions intentional-concepts-as-theorized possibly serve, but you end there. When I try to think of examples, I think things like democracy, human rights, etc., but then I consider just how messy the process of coming to these institutions was, and it seems to me that whatever the role intentional-concepts-as-theorized played, it was orthogonal to the assumptions of those employing them, that murder and exhaustion and emulation did all the heavy lifting. There’s a case to be made here, to be sure. What I’m saying is that you need to make it to make your argument credible!
So it looks like we’re disagreeing about what counts as knowledge. I agree that by themselves, intuitive snap-judgments aren’t so reliable if we want to know the natural facts. This is because our system of intuitions is thoroughly biased and blind to what’s really going on when we think.
But I think intuitions, feelings, and normative judgments may be necessary to knowledge even if they’re not sufficient. They prepare us to attend to certain kinds of evidence, to brave the unknown, and to distract and delude us so that we can foolishly continue to investigate matters with much more rigour even unto our self-destruction.
This is part of the pragmatic aspect of cognition that I think you’re not appreciating. It’s like saying that training to be a boxer doesn’t suffice to win you a fight as a boxer. If you step into the ring against a fighter and pretend you’re still just skipping rope and going up against a mere punching bag, you’ll get knocked out. But the training is necessary to being a good boxer.
Also, there’s Mikkel’s way of putting it: “I think the downstream effects of the existential heuristic on the cognitive ones is interesting: i.e. that metacognition itself is affected by an explicit choice in believing one existential heuristic over another.” I think this gets at the point about the needed leap of faith to sustain any worldview since all worldviews bottom-out on our cognitive biases which we rationalize with myths to keep us chugging along, to avoid embarrassment and terror due to our startling cognitive deficiencies.
Now if you want to talk specifically about philosophy, I don’t think philosophers are befuddled just because they rely too much on their biased nonscientific modes of inquiry. Philosophers are befuddled because they’re too skeptical for their own good. To put it in terms of the boxer, their training has gone wrong in that they no longer take things at face value. Unlike nonphilosophers who don’t engage in so much meta-reflection (except in an inherited way, as when the masses subscribe to their priest’s theology), philosophical worldviews are much more crackpot than intuitive. Philosophers are forced to wildly speculate, because they’re existentially homeless, set adrift from their inclinations and bewildered by their assimilation of the latest objective, dehumanizing theories.
Philosophers are confused not because intuitions lead them astray, but because they understand the limits of intuitions and are forced to flail in that greater freedom. Thus, philosophers don’t produce knowledge so much as potential ways of being people again–even after science shows that in nature we’re really inhuman and after philosophers lose faith in our intuitions because they come to see them as prejudices. Philosophers thus try to re-establish the pragmatic preconditions of knowledge, as modern-day prophets.
So I see the lack of progress in Western philosophy as evidence for this synthesis of BBT and existentialism.
I’ve been pretty consistent all the way through specifying theoretical cognition as my target. I entirely agree that practical in situ knowledge is knowledge. I just don’t think it’s ‘normative’ or ‘intentional’ in any of the puzzling ways that philosophers have described it! This is what I keep saying: There’s an immense difference between saying, ‘You broke the rules!’ and an account of the ‘game of giving and asking for reasons’ a la Brandom or Sellars, say. The first doesn’t require the second because the second simply isn’t describing anything aside from a murky set of shared intuitions based on information too low-dimensional to be of any use. This is what I thought you were arguing for: the ‘utility’ of these second-order theoretical characterizations as ‘noble lies.’
I’ve been arguing for the utility of intuitive myths as well, in so far as their effects have fitness value. More specifically, I’ve been arguing that this point about their advantage is consistent with BBT. They’re not useful in terms of providing us a theory of the facts, so to that extent I agree with BBT’s conclusion. But intuitions and the manifest image might be preconditions of all human theorizing. We have no idea if we’d still have the stomach to investigate nature, without the delusions that keep us happy and sane. So given a wider, pragmatic view of cognition, the mechanist should give the manifest image its due.
This is where my view lines up best with the ancient skeptics: the question of whether our metacognitive intuitions “might be preconditions of all human theorizing” about our own experience. Why not cease and desist? If it becomes clear that gross cognitive distortions prevail when we speak of experience in an ontological sense, why not wholly embrace the implicit attitude and abandon second-order intentional theorizing altogether? If Roger’s following this, I hope he decides to pile in, because it seems some account of ataraxia might come in handy here… I dunno.
And yet our everyday ‘implicit attitude’ is doubtless shot through with the results of intentional theorizing: the informational posture that enables discursive problem-solving that we presently possess has been conditioned by intentional theorizing in incalculable ways. It would be ludicrous to claim that it does not enable something, somehow…
If you think about it, the question I’ve been asking you is pretty damn interesting: What do these fictions accomplish, and how? Is there anything they uniquely provide? Answering them would take you into entirely new territory, I think.
I’m certainly interested in that question, Scott. I see it in terms of existential inauthenticity and the matrix of politically correct delusions that keep us mentally healthy and happy in spite of the growing understanding of the dark facts of naturalism. I think the question you’re getting at is whether there’s a more dignified way of reaping the benefits of the manifest image, without entertaining that image. I see this as the question of how to go on living without deceiving yourself, knowing the harsh facts of life, including the fact that we’re not even people in the conventional sense. This is effectively the question of how to be posthuman, isn’t it? My blog is centered around these questions, although I think you’re taking them to be more empirical than philosophical and religious.
I think with the tautology Ben is tackling what he perceives is the claim that we can’t take on any theoretical work and get it right – the claim he perceives is that we need science…and his point is that science is the practice of getting theoretical work right*.
Implicit in this notion of his is the question: if we are getting some theory right there, can you argue we aren’t getting theory right elsewhere, outside of the sciences?
It’s probably worth conceding to some degree, because the early development of using, maintaining and igniting fire could even to some degree be called science. Science could be, suitably enough, called a Frankenstein of practices cherry-picked in fragmentary form from many diverse cultural practices.
But I’d say to Ben, the less closely related they are to the practice of science, the less theoretically right (‘right’ as in the right that makes our computers or dialysis machines work) they are. And a lot of these practices are distant, distant relations. Like chimpanzees to us sapiens.
Some chimps can use a stick as a tool to get grubs out of a dead log. But that hardly cuts it in comparison to us. Some cultures do have nodes of theoretical correctness. But it’s just not gunna cut it in comparison to science – they aren’t going to trump scientific practice, any more than a chimp can out-engineer you.
“just as religion isn’t really about telling us the bare facts.”
Get one of them to say that and you have a point (in regard to that individual and any others who would repeat their statement). Otherwise, actually no, it’s not what they’d say they do. That’s not a fair evaluation. Even though, from our evaluative perspective, it is a fair evaluation.
* But that’s with much observation…which is basically the pivotal difference, given that inherent to this notion of observation is the potential for disconfirmation of claims.
This isn’t quite my point about scientism, although I think “scientism” often means what you say it does here. Scientism is generally supposed to be some sort of overreach by proponents of science. They say science is the only possible kind of knowledge or something like that. BBT seems to be judging all mental mechanisms by the scientific standard of theoretical accuracy, whereas I’m saying some of those mechanisms may have a nonscientific job to do, such as the job of keeping us ignorant about the facts of our nature. Their job is to keep us in the matrix, as it were, or in Plato’s Cave in which we’re mesmerized by illusions or superficial appearances.
From a scientific perspective, this latter job is perverse, because scientists want to discover all the facts. But this is the scientistic overstretch. Given BBT’s mechanistic metaphysics, everything is made up of mechanisms (systems of causality), so why shouldn’t we just appreciate the different jobs that various mechanisms perform? In fact, strictly speaking, that’s all we can do from this mechanistic perspective, unless we’re presupposing the value of some ultimate mechanism (and this could be scientism).
The point of the philosophy/religion I’m exploring on my blog is that we can see our way out of the matrix, when the mechanisms that support our intuitive self-image break down and we get carried away with science and reason. In that case, we suffer the curse of reason, as both Scott and I say. But I add that we then have a new game to play, the existential one of making the best of our malfunctioning minds. We can build new mechanisms and give them functions by means of exaptation. Another way to put this is to say that the manifest image may be needed only by the masses, whereas the cognitive elites may be differently wired, so that they have ways of coping with excessive theoretical accuracy (the curse of reason).
I’m not sure if I understand you fully, Ben, but I’m up for a kind of game design/life support design for human notions. I’m up for the magic circle of game theory, where inside the circle, certain things ‘are’ and we don’t just let any old fact penetrate that circle and burst the notion (we don’t cry ‘But all these chess pieces mean nothing’ – we play, whilst in the magic circle of chess). But this ‘carried away with science and reason’ and these accusations of scientism – this isn’t game design/life support design! It’s just attempting to cut off a certain direction by bludgeoning in, IMO, ignorance! Sure, I’m not up for any borg-like conversion to utter fact (heck, I think I’ve had nightmares about that as a child), but I would have exits from the magic circle that people could possibly consent to taking. I’m not up for any elites and muggles. The exit avenues (which also act as re-entry avenues) are not only made available, but the knowledge is gently but consistently pressed, on an ongoing basis, to ensure a chance of dissenting mofos armed with that outer-circle knowledge. Because fuck elites and muggles!
I really don’t dig this scientism-accusing thing you’ve got going on, Ben! I suspect it’s a primal attempt to build or replace a magic circle outer ring. I’m up for life support systems, as I say, but this scientism thing isn’t the only way to build a life support perimeter. That, or you are going with the elites/muggles and trying to teach folk to accuse others of scientism, while you don’t genuinely practice that yourself?
Of course, for some who were in the Canadian Weapon X program, when their life support is removed for periods of time, they can just regenerate afterward (humour my humourous analogy!). Some just want to try and embrace the pure flood of all things at all times in how they investigate and write. No dike. No magic circle. But not all of us can do that. All the same, I’m interested in any scouring critique in regard to the magic circle/game design/life support notion.
Yes, Callan, I can see that you don’t care for this talk of scientism. I know the word is overused, and I know it’s often used defensively to protect something cherished from scientific demystification. But my limited use of “scientism” in this discussion about BBT is coupled with my agreement with BBT’s main points, for the sake of argument. So I’m conceding at the outset that intuitions and values don’t do what the folk say they do, that there are no such things in the folk sense, that the intuitive self-image isn’t factually correct. Thus, I’m not trying to stop scientists from further investigating the mind. I’m agreeing that cognitive science will likely continue to undermine our cherished notions of the self. There is no supernatural self and we’re not what we seem to ourselves when we fall back on intuitions and feelings.
Where I part from BBT is that while I agree that intuitions, values, and speculations don’t do what we naively assume they do, they nevertheless may do something important, from my blog’s perspective. They do just what BBT says they do: they deceive us with illusions. My point is that this BBT conclusion has existential significance! Scientism comes in here only when someone says, roughly, that because something fails to be scientific, therefore it’s no longer worth thinking about. That’s an overstretch, because some of our goals may be nonscientifically achieved. To stay alive, despite the horrors that science brings to our attention, we may have to deceive ourselves to some extent and maybe that’s a function performed by intuitions. So intuitions by themselves are no substitute for science, if our goal is to discover the facts, but this leaves open the possibility that intuitions have some interesting nonscientific jobs to do.
Ben (should I use Ben or Benjamin? I’ve been worrying about that), it sounds like you’re taking it that your distinction is concrete and grounded – like when you say ‘That’s an overstretch, because some of our goals may be nonscientifically achieved’ and ‘but this leaves open the possibility that intuitions have some interesting nonscientific jobs to do.’
You seem to be talking about a line in the sand as much as I do, but you’d have it known yours is grounded and fixed. Whereas my line in the sand, I refer to as a magic circle: no more existent or grounded than the magic circle used in chess play.
I just want to establish whether you’d agree I’m understanding you (rather than assuming I do and continuing to argue based on assumption)?
I kind of feel you have at some point taken it to be a magic circle, then gone and tried to find and fill it in with grounded elements. If this is the case, it causes a cognitive dissonance with me, where it seems to be treated by you as both magic circle AND grounded.
Ben is fine. I had to look up the idea of a magic circle in chess and other games. As I understand it now, the circle is an imaginary, stipulated border between some conventional pastime and the rest of the world. So when you distinguish between the magic circle and something more grounded, I think you’re saying that the function of intuitions and values is a social construct within a magic circle as opposed to having a mind-independent basis.
But my point in the discussion with Scott is that their function can be just as real as any other evolutionary function, such as the function of the hair on your head. A biological function is a naturally selected effect of a trait or mechanism. I’m saying that our intuitions may have an exapted function, which is one we stumble on and modify to suit our newer purposes (the purposes that emerge within the Age of Reason, for example). This exapted function helps us survive in evolutionary terms, and so it’s a natural, grounded function, not something we imagine. Now, we rationalize that function by telling ourselves all sorts of myths, such as the theistic ones, and those myths are fictions and so they occupy that magic circle. But the underlying mechanism, including the brain’s blindness to itself, has real causal power.
And what’s the function of that power? Well, I think it’s to counteract our rationality, to keep us in the dark so we don’t die so quickly as a result of our having eaten the apple from the Tree of Knowledge. We have a biological matrix, a fantasy world in which we live to occupy our narrow attention span. Maybe our brain’s blindness has that survival value, as opposed to being merely a scientific deficit. In scientific terms, our intuitions stink; they mislead us. Nevertheless, they may have kept us alive precisely by forcing us to dwell in our fantasy world, because the real world is too horrible to contemplate and requires existential heroism of its full-time, posthuman inhabitants.
I should have given a link in regards to magic circles. My lazy! 😦 Thanks for looking it up, Ben! 🙂
I think you’re arguing an evolutionary niche – much like birds take advantage of a niche that physics happens to provide.
I could buy that to some degree. But you seem to describe it with an optimism, as a solution for all people – when some people aren’t interested in such a solution, but instead want to hijack such a system. Something like a cuckoo effect – exploiting the myths of nurturing to nurture something that has nothing even vaguely to do with those myths. Sure, such exploitative people might tell themselves other myths, but at the same time they are quite happy to use scientific insight to exploit the myths others engage in (we’ve had some posts here on various advertising companies in this regard). You seem to be saying we have an evolutionary niche – and so everything’s fine. But that would leave you open to anyone who doesn’t want that to be fine, but instead wants it to be the way to their dinner/jetski/new mansion! Let alone hijacking political spectrums (and through that, the martial forces they command). Hawks also use the same evolutionary niche as birds – but do they fly amongst them, or prey upon them?
IF you were to argue an evolutionary niche AND if everyone just wanted to fly together, I would largely concede your point.
You’re saying I shouldn’t defend intuitions and the manifest image so much, because those who do are hypocritical, since they also rely on science when it suits them and they’re in danger of being exploited by predators (ignorance makes us vulnerable).
But when I speak of a transcendental defense, we shouldn’t see this as a normative defense. I’m in agreement with Scott that the scientific image may soon make the manifest one untenable, so that even though our cognitive hardware forces the latter on us, we’ll suffer angst and horror because we’ll be caught between what makes us comfortable and what puts us in contact with reality.
Also, I don’t say the defense is for everyone. I distinguish between the exoteric and the esoteric, the masses and the outsiders. I agree that ignorance makes us vulnerable to predation, but the scientific image makes us vulnerable to angst and horror, depression and suicide, and thus makes us liable to destroy our species. The manifest image also has its dangers, such as religious wars about nonsense. The point is that I don’t say intuitions and values are good. I say the illusions Scott talks about may be efficacious and functional in an evolutionary sense. That’s as far as the transcendental defense goes. As to the advantages and disadvantages of that function or of some compensatory function for the outsiders (existentialists, etc), that’s another question.
It sounds like you say the defence is for everyone, Ben? That could just be my reading, or a slip of the quill on your part or a bit of both, I grant.
I don’t think I said hypocritical – I did say hierarchical, though. It’d only be hypocritical if one didn’t admit openly to the hierarchy of preaching one thing but doing another.
> I agree that ignorance makes us vulnerable to predation, but the scientific image makes us vulnerable to angst and horror, depression and suicide, and thus makes us liable to destroy our species.
To me, I see a spectrum between the two. It seems like you’ve decided on a particular point on that spectrum (and your talk is in regard to that particular point). I think the talk should involve people choosing their own point on that spectrum (and various folks’ notions on the wider consequences of choosing various points). Of course it’s hard to talk about being closer to the scientific-image end without actually going there by talking/thinking about it (shows how close it all is, really). But still, I think some hints towards that end should be there. Probably shows the use of fantasy as a sort of waystation point, instead of force-dumping someone way towards the scientific end of the spectrum. The fact that the SA glossary will have a scientific version released later seems an act of genius to me (I think I read somewhere that it would – I can’t find it now).
With saying that, perhaps we’re down to arguing the fiddly details? Which I would count as solid progress in the discussion! Internet beers for the both of us! 🙂
The defense of intuitions is for everyone in the sense that their evolutionary function would be universal and everyone with the needed hardware would face the causal power of our cognitive biases, instincts, and so on. But I don’t defend the rationalizations of those functions for everyone. Like Nietzsche, I’m interested in the prospect of a higher form of human being. We’ve got our great intelligence which points in the direction of science-centered naturalism, and then we’ve got our irrational side which points most people in the direction of politically correct delusions, such as the manifest image and exoteric theism. Is there some better way of dealing with these two factors, one which salvages our dignity while also keeping us alive and sane? I call this the need for an unembarrassing naturalistic religion, for some way of being rational and irrational at the same time, without losing our integrity.
> The defense of intuitions is for everyone
Read literally, that seems to say it’s for everyone and for 100% of the time. Even as cognitive scientists are not putting in an absolute 100%, clearly.
I’d propose that a less-than-100% defence can itself be utilised as a defence for (what I call) the human continuum. Somewhat like how having your child stabbed with a thin spear of steel and feel sick for a day breaks the defence of the child, but in doing so supports a grander defence (the whole thing being called inoculation).
I’m guessing what I’m having trouble with in regard to your approach, Ben, is that you are gunning for 100% of the time. No inoculations?
Your fourth point seems dubious at best: “Absent information pertaining to the absence of information, cognition assumes the adequacy of the information available, no matter how inadequate it may be.”
Where do you get this reasoning? Take, as an example, physics, where much of this kind of debate was carried out over the previous century concerning descriptions of quantum events, in which both Heisenberg (the “Principle of Indeterminacy” … contrary to Popper’s attack) and Bohr, with his conception of complementary descriptions, investigated the inadequacy of knowledge, of epistemological frames of reference or impositions. Yet both knew that to get on with our work it “forces us to adopt a new mode of description designated as complementary in the sense that any given application of classical concepts precludes the simultaneous use of other classical concepts which in a different connection are equally necessary for the elucidation of the phenomena.” More a fine movement between modes of knowledge operating amid unknown unknowns, in your terminology.
The most important example of complementary descriptions is provided by the measurements of the position and momentum of an object. If one wants to measure the position of the object relative to a given spatial frame of reference, the measuring instrument must be rigidly fixed to the bodies which define the frame of reference. But this implies the impossibility of investigating the exchange of momentum between the object and the instrument and we are cut off from obtaining any information about the momentum of the object. If, on the other hand, one wants to measure the momentum of an object the measuring instrument must be able to move relative to the spatial reference frame. Bohr here assumes that a momentum measurement involves the registration of the recoil of some movable part of the instrument and the use of the law of momentum conservation. The looseness of the part of the instrument with which the object interacts entails that the instrument cannot serve to accurately determine the position of the object. Since a measuring instrument cannot be rigidly fixed to the spatial reference frame and, at the same time, be movable relative to it, the experiments which serve to precisely determine the position and the momentum of an object are mutually exclusive. Of course, in itself, this is not at all typical for quantum mechanics. But, because the interaction between object and instrument during the measurement can neither be neglected nor determined the two measurements cannot be combined. This means that in the description of the object one must choose between the assignment of a precise position or of a precise momentum.
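The mutual exclusivity of position and momentum measurements described above is usually stated quantitatively as the Heisenberg uncertainty relation (added here only as the standard textbook bound, not as part of the original comment):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the standard deviations of repeated position and momentum measurements on identically prepared systems: sharpening one necessarily loosens the other.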
So even from this example we discover nothing is precise, everything is fuzzy as in set theory. Fuzzy logic is a form of many-valued logic or probabilistic logic; it deals with reasoning that is approximate rather than fixed and exact. Compared to traditional binary sets (where variables may take on true or false values) fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false. Furthermore, when linguistic variables are used, these degrees may be managed by specific functions.
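The “partial truth” idea described above can be sketched concretely. A minimal illustration, assuming Zadeh’s standard min/max operators (all names here are illustrative, not from the original comment):

```python
# Fuzzy-logic connectives over truth degrees in [0.0, 1.0],
# rather than the binary {True, False} of classical logic.

def f_not(a: float) -> float:
    """Fuzzy negation: the complement of the truth degree."""
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    """Fuzzy conjunction (Zadeh's min t-norm)."""
    return min(a, b)

def f_or(a: float, b: float) -> float:
    """Fuzzy disjunction (Zadeh's max s-norm)."""
    return max(a, b)

# A proposition can be 0.7 true while its negation is 0.3 true --
# "partial truth" in the comment's sense.
tall = 0.7
strong = 0.4
print(f_and(tall, strong))         # conjunction limited by the weaker claim
print(f_or(tall, f_not(tall)))     # note: excluded middle no longer yields 1.0
```

The point of the sketch is just that classical tautologies like “A or not-A is fully true” fail once truth comes in degrees, which is what distinguishes fuzzy logic from ordinary two-valued reasoning.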
From such things as fuzzy logic we in computer architecture use an outgrowth in Inference Engines: In computer science, and specifically the branches of knowledge engineering and artificial intelligence, an inference engine is a computer program that tries to derive answers from a knowledge base. It is the “brain” that expert systems use to reason about the information in the knowledge base for the ultimate purpose of formulating new conclusions. Inference engines are considered to be a special case of reasoning engines, which can use more general methods of reasoning.
The whole idea here being of applying a pattern recognition and data-driven approach: The computation is often qualified as data-driven or pattern-directed in contrast to the more traditional procedural control. Rules can communicate with one another only by way of the data, whereas in traditional programming languages procedures and functions explicitly call one another. Unlike instructions, rules are not executed sequentially and it is not always possible to determine through inspection of a set of rules which rule will be executed first or cause the inference engine to terminate.
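The data-driven behaviour described above — rules communicating only through shared data, with firing order determined by what the data enables rather than by explicit calls — can be sketched as a toy forward-chaining engine (a minimal illustration; all rule and fact names are invented for the example):

```python
# A toy forward-chaining inference engine. Rules never call one another;
# they communicate only through the shared fact base, and the order in
# which they fire is driven by the data, not by procedural sequencing.

def forward_chain(facts, rules):
    """Fire every rule whose premises hold, until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # a new datum may enable further rules
                changed = True
    return facts

# Rules as (premises, conclusion) pairs -- a tiny "knowledge base".
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "occupies_aerial_niche"),
]
derived = forward_chain({"has_feathers", "can_fly"}, rules)
print(sorted(derived))
```

Note that the second rule only becomes applicable after the first has deposited `is_bird` into the fact base — the rules “communicate by way of the data,” exactly the contrast with procedure calls that the passage draws.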
The idea that it is rules (algorithms) that communicate with each other takes it out of the human domain (one could even think in the brain’s metaphors of neurons communicating with each other, etc.). The inference-engine model allows a more complete separation of the knowledge (in the rules) from the control (the inference engine). This is analogous to physics’ separation of position and momentum, which Heisenberg and Bohr clarified.
The point of all this is that, as you’ve said already in other terms, there is more than one way to skin a cat. In other words, we can no longer have some intentional, totalistic frame of reference to guide either knowledge or ontology, etc., ever again. We deal in double-vision, in unknown unknowns, in every area of knowledge and ontology, and must use our tools in more inventive ways than we have hitherto.
The nihilist implosion is over, now everything truly is possible. Nothing is final, everything is permitted. As Nietzsche said long ago ‘truth is a fiction we all believe in’. And, if it helps us get on with our work so be it.
When you say that “We deal in double-vision,” I think Scott would say the brain is blind to itself, so when it comes to self-knowledge science supplies all the vision we have available. The threat from science is that our nonscientific discourses are wholly illusory, that the natural facts are entirely unexpected because in our nonscientific capacities we’ve been wearing blinders.
But when you say, with Nietzsche, that we all believe in fictions that help us get by, this is the main point I make in my recent discussion with Scott. I think Scott’s concerned that the so-called coping mechanisms found in the humanities departments are unreliable and doomed to fall by the wayside after the technoscientific apocalypse, when there will be nowhere to hide from the horror of our inner reality. Will we always have fictions to distract us? Will artistic fantasies always inspire us even after we’ve looked too long into the abyss? This is what I’m working on at my blog, Rants Within the Undead God. What sort of philosophy and religion can face head-on the technoscientific threat to the manifest image, or can at least help us pick up the pieces when that image is shattered? What should posthumans be doing after they learn the whole miserable truth of our existential predicament?
>”Your fourth point seems dubious at best: “Absent information pertaining to the absence of information, cognition assumes the adequacy of the information available, no matter how inadequate it may be.””
This is, actually, how the brain seems to function, as discovered through neuroscience. The brain works with what it has at the moment; it can deduce information it doesn’t have, but those are ‘known unknowns’. The brain is helpless with regard to information it has no clue exists or should exist, the ‘unknown unknowns’. Studies on brain injuries have borne this out: the brain will outright deny the reality that is apparent to normal brains, and the damaged brain will confabulate rationalizations for even the most bizarre errors. Scott uses this to assume, if I have understood him correctly, that normally functioning brains are confabulating our metacognitive experiences in a similar way that damaged brains do, but our shared social experiences reinforce these ‘illusions’. I don’t agree, but explaining why would take too long here.
>”The nihilist implosion is over, now everything truly is possible. Nothing is final, everything is permitted. As Nietzsche said long ago ‘truth is a fiction we all believe in’.”
I’m guessing you mean ‘everything is permitted’ as long as it conforms to the laws of physics. This is true in a sense, but the fundamental laws of physics aren’t the only laws governing nature; there are also complexity laws that arise out of those fundamental axioms, and by which we must abide as well. Physics gives us the chessboard, the pieces, and the rules of the game, but there are computational and organizational laws that arise out of complex systems which restrict, say, how many moves we can predict in advance, or with how much fidelity we can predict our opponent’s future moves, etc. This last point is crucial because it isn’t acknowledged by many scientists and philosophers, or at least isn’t given the attention it deserves.
And as to whether everything being possible implies an end to nihilism, I don’t see it. One can argue that if everything is possible then no choice one way or the other matters, so nihilism rears its head once more. I don’t believe that, but only because I’ve incorporated my previously stated complexity thinking into my worldview.
I have a review coming up of Jesse Butler’s Rethinking Introspection which brings all these issues up, and I’ve been wondering how you would respond to the critique I give, since his account turns on a similar spontaneous cognition idea to the one I’ve been (rightly or wrongly) attributing to you, haig. I’ll be sure to drop you a note when I finally post it. Otherwise, you’re right: I explain intentional phenomena as a kind of ‘natural anosognosia.’
If you would like to run your ideas by us with a guest-post some day haig, lemme know! Like I say, the teleonomy stuff is the only stuff that prickles my skeptical hide.
Replying to Scott’s comment (aug.19@3:21pm)
I’m in the process of setting up my site w/ blogs devoted to these issues, but I’d be happy to guest post here any time. Look forward to reading your review of Butler’s book to see if it does align with some of my thoughts.
And yes, the teleonomic view that I’ve developed was the ‘eureka’ moment for me which pretty much reformed my fundamental assumptions regarding brain-cognitive science, without which I would probably be pursuing lines similar to yours.
I take it that you’re not taking issue with the content of (4) so much as with my interpretation of its significance. (4), after all, is just a way of saying there are always unknown unknowns. Deployed in BBT it explains ‘default self-transparency’: why we took introspection to be an infallible source of ‘knowledge’ regarding the ‘mind’, short of appealing to explicit ‘infallibility representations’ a la Carruthers, for instance. Its importance is that it explains why so much neuroscience and cognitive science is so ‘counter-intuitive’ on the one hand, and why the intuitions we do have exercise very little constraint on what the science will find – why, for instance, all the traditional cognitive categories, be it memory or introspection or concepts, are suffering the radical fractionalization they are at this moment.
The quote you give is nothing short of fantastic, but I think it might actually serve my pessimism more than your guarded optimism regarding complementarity or double-vision vis-a-vis the mechanical and the intentional. The reason I say this is that, unlike the situation with the quantum, where the heuristic limitations of human cognition run into a brick wall tout court, in the case of intentionality (the analogue to the classical understanding of particle and field) we in fact do have cognitive systems capable of taking the intentional ball and running with it nonintentionally: namely, those underpinning mechanical cognition. Given their greater domain generality, they will in all likelihood ultimately prove the more efficacious. Thus the creeping depersonalization you see in so many aspects of contemporary life: I’m predicting that this process will simply continue and continue, until the posthuman allows us to leave the intentional entirely behind. So one way of looking at the narrative is as one of largely intentional ‘single-vision’ becoming more and more an intentional-and-mechanical double-vision, ultimately becoming mechanical ‘single-vision’ – leading to who knows what else down the road, which need not be recognizably human or intentional in any way.
Reblogged this on noir realism and commented:
R. Scott Bakker with another in our continuing dialogues. I had not read Ben Cain’s blog before, so will work on reading his Rants Within the Undead God today! Either way, as Scott says, and I would agree, “The irony is that although we three actually don’t disagree about that much, the disputed remainder is nothing less than the whole of human aspiration since the Enlightenment.” Bull’s-eye! He’s onto something… take a gander… whether you agree or disagree with Scott, he’s worth an effort and engagement as a singular voice outside the academic circle, questioning, questioning, questioning…
I’ve just found your blog too. Judging from your About page, I think we’re on the same wavelength, although you seem to have read more of the recent French philosophers than I have. You say you’re interested in “a revolutionary materialism that seeks the emancipatory vision of human and non-human alike.” Does this mean you’re optimistic about naturalistic philosophy and the prospect of a posthuman condition to match the postmodern one in which science has made our intuitive self-image untenable? Or do you think this self-image is, rather, transcendentally necessary, an indispensable precondition of some project, so that we’ve got to be metaphysical dualists of some kind?
I agree with Badiou and Zizek that we need a new theory of subjectivity. I agree with Scott that brain science is closer to providing a way forward in that direction than any of our known philosophical paradigms, current or past. For me, I want to retain the metaphors of the old literature rather than the less-than-adequate and ingrown scientific terminology that Scott and the brain scientists use. That’s about my only dispute. The aridity of the scientific terminology will keep it closed off from mainstream society, locked away in a circle of academic and scientific journalism and a culturally jailed expert society. The whole point of the humanistic enterprise was to find viable ways of enlightening the public mind about the new world of science in its midst. This is my only caveat… we need the ‘manifest image’ for the world at large, which will never become immersed in the stringent strictures of the codified world of science. Scott seems to want to get rid of the manifest image and replace it with a world of pure scientific image in the Sellarsian sense. Yet this is a contradiction that he happily admits to, since he spends his major time writing fantasy novels about culture, philosophy, science, et al…. Scott just wants us to open our eyes and give up the appearances…. contrary to Barfield… there is no “saving the appearances” ever again. We are no longer battling in a postmodern or posthuman zone of nihilism… we are already beyond such categories. We are the new without a concept. The past is nothing more than a critic’s paradise of touchstones… moments of insight that led to this world we are in now. Truth is not a resting place; it is a movement… a happening, an event.
Well, when you say the manifest image may be needed for the world at large, for exoteric purposes, this is the point I was driving at in my transcendental defense of intuitions, against BBT. I don’t think Scott would concede that he contradicts himself when he writes fiction, since he’d say art isn’t cognitive. The question for him, I think, is whether the humanities produce any kind of knowledge. Are they “theoretically competent”?
When you say the whole point of the humanities was to enlighten the public about modern scientific knowledge, I wonder whether there was also an esoteric purpose of preserving the ancient, perennial wisdom that was rediscovered during the Renaissance, from the apocalyptic implications of the new materialism. That was what all of those modern secret societies were about, I think.
But for me the next question stops everyone in their tracks: heuristic simplification adapted how, and to what?
I’d still like to see a better answer from either interlocutor. I also don’t understand how we can assert cognitive efficacy if we don’t have an adequate fiction enabling us to identify heuristic ecologies…
Wicked discussion between the three blogs though. Cheers.
There’s some disagreement here about what counts as cognition. Is knowledge just about getting the facts right, or about providing us with an accurate picture of the world? If so, our native thought processes will fail to be cognitively useful, because they’ll pale next to science. But I say this is a scientistic (science-centered) conception of knowledge.
We know what we’re talking about when we have a worldview that’s pragmatically as well as scientifically justified. So while intuitions and values can’t compete with science when it comes to telling us the natural facts, including the facts of how our heuristics work, intuitions and values may have altogether nonscientific functions which are nevertheless relevant to cognition. Put bluntly, those supporters of the manifest image may function to keep us in the game so that scientific revelations don’t trigger the very apocalypse that Scott sees on the horizon. Our intuitive self-image may be our matrix, as it were, and maybe that’s the exapted function of the intuitions that Scott belittles on scientific grounds.
Thanks for replying. I still think we’re missing each other here though, Cain. I hazard I’m after something more practical.
Take confirmation bias, for instance. Bakker’s argument would frame this within a (possibly) discernible ecology, wherein confirmation bias is the most efficient biopsychological expenditure of energy for sensorimotor function within the given circumstances of one, or a few, specific schemata constituted of cumulative endogenous/exogenous cues. However, our evolutionary pedigree often clashes with modern human civilization, to our reward and detriment. Which seems to explain the availability of refined sugars, sexual objectification/gratification in advertising, the success of bloated Hollywood blockbusters, the drug culture: hijacking heuristic functionality for gratification, without necessarily understanding that heuristic functionality.
There’s also the selectivity of brain function to consider, which seems to be explained nicely by BBT, even if it still requires evidence. You have a visual system involving over 30% of brain function, yet with unaccountable specificity of function in regard to the individual neurons within that system, and which nonetheless persists as one enduring experience.
If our experience of heuristic and bias is illusory, insomuch as their biomechanical functions are efficacious in ways that differ from our traditional conceptions, then we need a credible fiction to discern which ecologies they are really born from before we can maintain a credible fiction arguing for their conscious exploitation in our favour after the singularity of apocalypse.
I think you’re saying there are two fictions to consider, the natural and the artificial ones, that is, the automatic, innate one which is the appearance that our inner self is immaterial, free, and fully rational, and the fiction we might resort to to replace that one.
You speak of exploitations of our system, as in the selling of sugary foods and so on, but I’d put this in terms of exoteric and esoteric myths. The masses are content with the innate fictions that emerge from our cognitive biases and blindness, as BBT says. Then there are those who think too much, who see past those illusions, at least in the abstract, and who then have to think of some other way of getting by in the world. They’re existentially homeless, as I put it, cursed by reason to no longer feel satisfied with the myths that comfort most people. These elite few need new myths that are more consistent with the scientific picture of our inner reality.
So if you’re asking whether any of these fictions is efficacious as an exapted function of a cognitive mechanism, I think the innate fictions are clearly so. In my discussion with Scott, I give the example of religions which pile onto the manifest image. As for the posthuman fictions, it’s hard to say since these haven’t fully emerged. My blog just explores some options.
My point is that if we think of illusions as having evolutionary or some other social or existential functions, and if we take a broad, non-scientistic view of cognition, a mechanist can have only a pragmatic objection to those illusions (maybe they’re better performed by some other mechanism). As long as we take those illusions with a grain of salt, as fictions rather than revelations of transcendent, supernatural facts, we should be OK on naturalistic grounds. Naturalists can enjoy fictions, after all. Scott even writes them.
I may have misunderstood what you were driving at, though.
Wicked indeed! And you’re right: this is the million dollar question. It’s probably worth a post!
On Ben Cain’s site you say: “The problem, quite simply, is that we have no fucking idea what we’re talking about when it comes to intentional theoretical discourse. We have no object to be known, which you are willing to admit, but we likewise have no function to be understood, aside from, perhaps, some vague and anodyne notion of upsetting received views–the rationale I use for my fiction, in fact!” Exactly! Kant’s fictions are dead, and have been dead for a long while… we’ve been governed by a fiction that purported to be real, but now the truth is out… all interpretations are subject to interpretation without end, except as they become machines of inference that guide us heuristically to get on with our work. Period! In that sense we are supreme nihilists who have moved beyond chaos into the land of plenty, where every frame of reference is nothing more than a helpmeet, a tool, a guide along the heuristic gambit; rather than a garden to cultivate, we have a multiverse full of wild plants that will never again belong to the farmers of knowledge or ontology. The Open Universe is free of us…. at last. Yet we remain the ineluctably questioning creature in the midst of this strangeness, without a hook or sinker. Neither trapped nor bound by mundane laws, we travel along the shores of the natural curve, speculating on the formidable ignorance we have become. We’ve returned by way of circuitous wisdom to Socrates’s delight: we remain lovers of Wisdom rather than its jailers.
Some great images here. I wonder whether Scott would agree that we’re “Neither trapped nor bound by mundane laws.” Are you saying we’re free to create heuristics, that is, rules for games to play, so that the beauty of postmodern relativism is that we’re faced with “a land of plenty” and a “multiverse full of wild plants” to explore rather than with a “garden to cultivate” (a hegemonic worldview to defend as fundamentalists)?
Well, if he’s against the ‘principle of sufficiency’, which has been the cornerstone of philosophy since Hume, then yes, we’re no longer bound by causality as it was once known and constrained within Newtonian mechanics. We use diverse heuristics and frameworks for differing and multifarious questionings of this strange realm around us. Whether we agree or disagree with Meillassoux, his main argument in After Finitude is just that: to take Hume to the ultimate nth degree, in which we are no longer bound by causal laws and a principle of sufficiency concerning logic and truth. As for the wildness and garden metaphor, I was thinking of my own Lucretian swerve: for too long we’ve lived comfortably in the fictions of philosophy and pruned its vines, when science left the garden long ago for the great unknown unknowns….
You almost make me want to give up my akratic torment when you frame it in these terms, noir! There is, I admit, something to be said for escaping the penitentiary walls of all the old oppositions, for not having to worry about ‘subjects’ and ‘objects’ and simply speaking of systems nested in systems, with each and every demarcation the function of yet another system, to be either dissolved or embraced depending on how it carries you. But it remains theoretical terra incognita, and until we map out some subset of its permutations, we really don’t have anything to affirm – and much to fear.
You should turn this into a post…
That italicized “inhuman” makes me crave your Aspect-Emperor conclusion all the more.
Just wait… Man, I’ve been having fun… Dark fun, but fun all the same!
Don’t we all.
What is your take, Jorge, now that you’ve completed your PhD and find yourself peering across the desert of the academic real? You’ve been up to your eyeballs in the very thick of it, whilst following all the hand-wringing here.
What do you want to know? From a pragmatic perspective, funding is getting really tight even for the hard sciences. It seems most programs are readjusting their numbers to admit fewer candidates, and things are going to get tougher. But it didn’t take me too long to find a postdoctoral position in a field I’m very new to (I had a grad student lecture me on new techniques to fluorescently tag potassium channels to monitor action potentials).
From the theoretical “end of humanity” side… yeah, things are getting scary and no one seems to care. Well, that’s not entirely true. I think some people are starting to wake up that we are building a machine, and it is a meat grinder, and we’re going to have to ask how far we want to go sooner or later:
http://www.nature.com/news/us-brain-project-puts-focus-on-ethics-1.13549
I can’t help thinking though, that there really is an opportunity here. If you turn out to be correct, and the sciences finally do start to cut through the Gordian knot of the old philosophical problems then it’s a fundamental shift (possibly *the first* fundamental shift) in humanity’s capacity to “do metaphysics”.
There’s no stopping it though, so might as well cross our fingers.
What do you mean, Jorge?
> “And this suggests that the flood of scientific information in the domain of the human is going to do what such floods have done in every other domain of human inquiry: wash every thing away, and reveal something utterly indifferent to our cherished traditional conceits. Something inhuman…”
Exactly which cherished traditional conceits are you talking about, and how would scientifically understanding humans better reveal something inhuman?
Basically intentionality and normativity as a whole. No aboutness, purposiveness, right or wrong. These, and the vast conceptual web they anchor, have provided the foundation of our traditional understanding of the human (as they once formed the foundation of our understanding of the ‘natural’ more generally). Take these away, and you are no longer discussing the human in traditionally recognizable terms.
Based on the current secular, scientific worldview, I think most prominent scientists (eg Weinberg’s meaningless universe), and ‘scientific’ philosophers (eg Dennett) would already accept, without needing to understand more about the brain, that intentionality and normativity are useful fictions. Useful in that they still assume we can achieve the Enlightenment ideals of social progress built on top of reason and empiricism even in a pointless/meaningless universe, which critics like Nietzsche and, more recently, people like Alasdair MacIntyre and Thomas Nagel, have argued against. What your project does is add insult to injury, showing that not only is the universe pointless/meaningless, but our concept of the human is fatally incorrect, replacing folk psychological ideas with inhuman mechanisms formed merely through efficacious heuristics and nothing more. With that final nail in the coffin, you go further than those scientists/philosophers by abandoning any hope for a coherent path to achieve the promises of the Enlightenment and show how heuristics reign supreme from here on out and into the posthuman era where efficacious heuristics could not give a damn about us humans (or what we think of as us humans).
As you’ve pointed out before, our pursuits are on opposite ends of the spectrum but arise out of the same predicament. You are trying to push Enlightenment’s reliance on reason and science to its breaking point thereby excising the ‘useful fictions’ we’ve held on to and reveal them as just fictions. What I’m trying to do is push reason and science to that same breaking point, but end up rediscovering those ‘useful fictions’ as essential facts. Your project leaves the scientific view relatively unaltered, but radically destroys the philosophical views modernity has relied on. My project radically changes the modern scientific view, and in so doing, recapitulates (or as Kauffman poetically states, ‘re-enchants’) our philosophical outlook.
Wonderful summary, haig. All I would add is that my project isn’t wholly destructive: aside from providing a powerful (if dehumanizing) way to look at intentionality and explain away numerous philosophy of mind riddles, it ‘hopes’ that there is some ‘redemption’ to be had in moving forward. This is actually what forms the basis of my interest in projects like yours. You characterize it in rehabilitative terms, but this need not be the case at all. My money is that you’ve actually leapt into the abyss with me, and are in the process of scrying something new, which may or may not possess resonances with traditional metacognitive conceits.
All you need do is to read Dennett’s “How to Protect Human Dignity from Science” to get a sense of how low the tank is running in pragmatic naturalism!
haig eloquently stated why I “assume that [you are] chronically over-estimating the power of the scientific paradigm.”
I believe we are nearing the end of the Enlightenment era, but throw my lot in with him. John Michael Greer writes about how the “Dark Ages” weren’t all bad nor the end of history, we just don’t know how to talk about them because they weren’t characterized by modern [Greco-Roman] ideals of rationality.
I believe that we’re approaching the point where some form of mysticism will actually be more useful in navigating the world and thus the question is how to transition without losing all semblance of empiricism and liberalism.
@Mikkel
I think I’ve found a kindred spirit! Glad you brought up mysticism, because it does play a part in my views, but in a completely naturalized form so that it becomes amenable to scientific analyses. There’s been some work on neurotheology, but I think it’s been too myopic, focusing on neural correlates of religious experiences, an update on William James’ work, while still missing an important aspect which a complex systems approach would rectify.
Mikkel and Haig, I’d like to understand where mysticism and a re-enchantment of nature fit into the complex systems picture. Is it because, as you say, Mikkel, we’ve got to be strategically ignorant about the total cause of whatever state emerges from the system, so that the system might as well be called magical? Are we saying that chaotic systems are literally miraculous in that their lack of predictability isn’t just an epistemic matter, as Mikkel says somewhere in these comments?
@Benjamin:
I’d love to hear what Mikkel has to say about this also. For my thoughts, I concur with Mikkel that emergent complexity is not just epistemic, it is an ontological fact of how the universe behaves, but I would not call that miraculous, no more than I would call any other aspect of nature miraculous, unless you’re using poetic license. So I think it’s wrong to say that our inability to completely predict or control complex system behavior means we are merely ignorant of how to do it; that would imply the epistemic limitation mentioned above. No, the limitations are baked into the universe, so to speak: it isn’t a question of overcoming them through science creating better models or gathering more data, it is the way the universe works, full stop. We should acknowledge these limitations, and we do need to incorporate them into our strategic thinking by organizing our behaviors around them (the ancients would refer to this as living according to the Logos or the Tao).
Re-enchantment is not simply a pragmatic choice we should make through modern myth making or placing romanticized narratives on top of a scientifically meaningless universe in order to preserve our humanity. I literally mean that mysticism is a brute fact of nature, a natural phenomenon like any other, both an organizing force inherent in the cosmos as well as a class of subjective mental experiences which serve an adaptive purpose. For brevity I won’t get into the entire spiel here, I’ll just simplify things by saying that the evolution of the universe is constrained by the limitations which complex systems science is discovering, which means the universe ‘unfolds’ in a purposeful direction (though for what ultimate purpose remains to be seen). Mystical experiences, as independently encountered throughout the perennial traditions, and which can be characterized as feelings of oneness with everything, compassion for all sentience, and the loss of the individual ego, are subjective conscious states that are the leading edge of our emerging human evolution which serve an adaptive purpose pointing the way forward towards the future evolution of life and consciousness as a more cohesive and collective species, not quite eusocial, but certainly some variation of a superorganism.
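As a toy illustration of the kind of baked-in limitation at issue (a sketch for illustration only, not a model of anything in particular): the logistic map is a fully deterministic system with no noise anywhere, yet the smallest imaginable difference in starting point makes its long-run behavior unpredictable.

```python
# The logistic map x -> r*x*(1-x) at r = 4: deterministic dynamics,
# yet two trajectories that start a hair apart agree at first and then
# diverge completely. The unpredictability is built into the dynamics,
# not into the quality of our measurements.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturbed by one part in ten billion

early = abs(a[5] - b[5])                                # still tiny
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # no longer small
print(early, late)
```

Whether this exhausts what “ontological” unpredictability means is of course exactly what’s under dispute here, but the exponential error growth itself is uncontroversial.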
Well I fully agree with haig, mostly.
Systems principles are ontological facts as strong as any physical law we’ve observed and — if you want to get real out there — when combined with information theory they potentially supersede physical laws. It is possible that they are truly the Theory of Everything that physics has been searching for, although ironically that muddies everything instead of clarifying.
Now I want to emphasize ontological here, because I don’t feel any claim can be made about absolutism and a question that has long intrigued me is whether DST is true only in our social context and if not there, our physical context. It is possible that a new logic and mathematics can be developed that overturn DST principles, and indeed this is a speculation by some mathematicians in the field. Of course, we’re talking about reworking the ideas of logic that have existed across cultures for thousands of years, so optimistically they don’t see resolution for hundreds of years of intensive work.
Or perhaps it is ontological on a physical level, and humans are fundamentally incapable of peeking behind the veil.
Or perhaps it is ontological universally and no entity can do so. I dunno.
What I do know is that they are demonstrably true and a superior explanation for the vast majority of phenomena, and have long felt that the 21st century will be as radically transformed by DST as the 20th was by quantum physics, or the 18th & 19th were by Newton. A Turing computer the size of the universe can’t overturn any DST, so it’s not just a matter of plugging our minds into a computer and suddenly becoming a posthuman — although maybe it would interact with neural processes in an unpredictable way and prove me wrong!
Given the hand we’ve been dealt, naturalized mysticism seems to be the only practical way to go.
Metaphysically I might part ways with haig. As I just mentioned, I view DST ontologically, and so cannot proclaim it unveils any “purpose.” I absolutely believe in the reality of the perennial philosophy and transcendental experiences as representations of the human experience. I don’t think it’s knowable whether they are material reality, but I strongly believe that humans can tap into the Oneness and Flow of the Universe, whatever you want to call that; and by doing so, we uncover deeply spiritual and pragmatic truths that — when combined with rational inquiry — create a life full of understanding.
But on this issue, I agree with many Buddhists (including the Dalai Lama) and Taoists who are agnostic and nihilistic. I strongly object to the Myth of Progress and do not believe humanity is a priori part of any unfolding of the universe on an evolutionary level. To me, these principles are strongly against DST itself, but it puts me on the outs with many scientific mystics. [Although it is possible that I’m just scared of the implications otherwise.]
Beyond metaphysics though, DST is the modern revealing of truths that are greater than any field or culture, and have opened my eyes to the levels of acceptance and quantitative understanding that are difficult to convey.
For me, moving beyond the analytical level and actually living the Tao (and especially using the skills for something useful, instead of “making money” off the stock market) absolutely requires a mystical appreciation and conception of sacredness. Not doing so leads to a constant battle with the ego and destroys the ability to listen to the system.
>”I strongly object to the Myth of Progress and do not believe humanity is a priori part of any unfolding of the universe on an evolutionary level. To me, these principles are strongly against DST itself, but it puts me on the outs with many scientific mystics. [Although it is possible that I’m just scared of the implications otherwise.]”
Can you elaborate on this? Why do you have such a strong objection to progress? Why would it be antithetical to DST? How does your position differ from those you call scientific mystics? And which implications would you be scared of and why?
In my defense, I acknowledge that direction/purpose is a tough pill to swallow without reformulating fundamental aspects of our modern conceptions of evolutionary theory and cosmology. For what it’s worth, I think I’ve made some interesting discoveries along those lines which have made me bite the bullet and accept such scientifically heretical notions. Time will tell whether I’m vindicated or have been spinning my wheels.
Easy haig, the Myth of Progress assumes linear progression of time and DST worldviews only make sense with a cyclical progression of time. There is no such thing as a line attractor…
The way that I know some people get around this is to insist that the system as a whole evolves, so what looks like cyclical is actually evolutionary; giving both circular return and linearized progression. I presume you subscribe to this view.
I have two problems with this as most people state it. First, I believe it violates the Maximum Power Principle, because efficient systems are hierarchical and end up over-optimizing, leading to their own demise. However, that strategy will always outcompete an egalitarian strategy assuming that the carrying capacity hasn’t been reached — are you aware of systems anthropologists noting the tendency for societies to switch from hierarchical to egalitarian and back as a function of environmental stressors?
My second problem is that increased complexity requires increased energy requirements, and those are not sustainable over the long run; look at the dinosaurs.
Therefore, unlike the scientific mystics I’ve met who believe that humanity is on the cusp of a systems guided evolution to a new phase, I believe we’re much more likely to go extinct — at some point.
It’s a cynical interpretation, but speciation occurs most rapidly under enormous environmental pressures and rarely does the same alpha class of animal emerge from the chaos intact. Thus, I do not deny the possibility of that type of mysticism, but feel it is anthropocentric for no justifiable reason.
On a grand metaphorical level — such as that the Universe is Brahma’s dream and that when he falls asleep he is scattered and then over time his dream parts coalesce into universal cognition and he awakes — sure, why not? That is a plausible metaphor for self organization, but it is on a totally inhuman understanding.
The reason I’m “scared” of that possibility is due to my ego. If there truly is a divine guiding principle, but that principle exists on an unfathomable timescale that not only outlasts the self but (for reasons above) the species, the planet and the solar system, the galaxy, etc etc then it feels like a great loss of wonder.
Your last statement says you will be “vindicated” or not, but that’s not true. Vindication only comes out of recognition, and unless you are immortal you will most definitely not be around to be recognized on the evolutionary level!
Perhaps I will change my mind with enough meditation. I’m at the point where I am fairly egoless but still human, but far from simply a Being.
>”…the system as a whole evolves, so what looks like cyclical is actually evolutionary; giving both circular return and linearized progression. I presume you subscribe to this view.”
Yes, more or less, it would be akin to a spiral of linearly progressing cycles.
“First, I believe it violates the Maximum Power Principle, because efficient systems are hierarchical and end up over-optimizing, leading to their own demise. However, that strategy will always outcompete an egalitarian strategy assuming that the carrying capacity hasn’t been reached — are you aware of systems anthropologists noting the tendency for societies to switch from hierarchical to egalitarian and back as a function of environmental stressors?”
Yes, when energy sources or resources are scarce a hierarchical structure of system control is optimal, but in the spiral model I advocate, new organizational structures with better efficiencies emerge at criticality points that preserve some of the old hierarchy within a new egalitarian regime. The system can fall back into previous cycles if faced with major catastrophe, but the system will eventually ratchet up. The Maximum Power Principle fails to acknowledge efficiency gains that come with parallel/distributed organizations that are more adaptable and resilient as opposed to just maximizing optimal energy flow. This is aligned with the meta-systems view as expounded by Valentin Turchin.
“My second problem is that increased complexity requires increased energy requirements, and those are not sustainable over the long run; look at the dinosaurs.”
My view is somewhat idiosyncratic in that I don’t assume progress to mean more complexity, at least not directly; what is being progressed is general adaptability, which means more general-purpose control systems (control of both internal and external environment), which requires complex structures. It’s a subtle distinction, but an important one. I need to think of a better way to explain it. A recent paper by Adami et al. evolved intelligent agents, and the most successful generations as selected by their fitness function (navigating a maze) were actually more efficient in their use of internal resources (they discarded half their virtual brains). The dinosaurs faced an extinction-level event precipitated by a hugely abnormal environmental stressor, an asteroid impact; it is unfair to say they died out because of complexity demanding unsustainable energy requirements.
“It’s a cynical interpretation, but speciation occurs most rapidly under enormous environmental pressures and rarely does the same alpha class of animal emerge from the chaos intact. Thus, I do not deny the possibility of that type of mysticism, but feel it is anthropocentric for no justifiable reason.”
The alpha species may not survive, but if the niches they inhabited remain, new species will converge into the previous species’ roles. It is anthropocentric not in the sense that humans qua humans will survive, but in the sense that some species which can fill that niche will, and because of computational/complexity constraints, the phenotypes occupying those roles will converge onto a narrow probability space and the essence of humanity will inevitably be reached. So not human per se, but probably humanoid in my opinion (still debatable), and definitely intelligent, tool-making, conscious animals somewhat isomorphic to humans.
“If there truly is a divine guiding principle, but that principle exists on an unfathomable timescale that not only outlasts the self but (for reasons above) the species, the planet and the solar system, the galaxy, etc etc then it feels like a great loss of wonder.”
I don’t understand this. That divine guiding principle is just the universe in development, it is an organicist view of the universe as Whitehead called it, as opposed to the mechanistic view. I guess you want a completely open universe where anything goes (as long as it abides by physical law) and feel any direction would close off some of the mystery? Interesting, because that is exactly what frightens me, a Lovecraftian cosmic horror in the making.
I keep trying to figure out how to tie this back into BBT since we’ve hijacked the thread. *Shrug*
Anyway, no issue with anything you just said [except the dinosaurs, imo the asteroid caused a massive decrease in biomass and thus available energy, so the small/rudimentary took over. OTOH you can argue that the rudimentary mammals were an example of a more complex “control system” that then rapidly scaled when energy became available]. I am even writing a fantasy novel that is literally about your spiral model, except of course it just appears like a fantasy novel while being completely realist in a post catastrophe world.
There’s not even disagreement on the last part because I fully recognize that peace and self actualization comes through accepting your role in the meta narrative. I think about it in terms of the human narrative, but if that is just a sub narrative of a larger one, then so be it.
I will say that it seems like we both agree that Scott’s BBT is accurate as a descriptive model, it’s just that we think there is a base heuristic that colors all the other ones and we are picking a dynamic systems mystic one vs. a rationalist mechanistic one.
I’ve been thinking about it recently and have decided that whether people realize it or not, this will become a (if not the) next major ideological divide.
Mikkel and Haig,
I’m glad you both saw my little post there in this labyrinth of comments. This question of the religious implications of DST is relevant to the overall question of the clash between science and the manifest image. But to make it even more relevant to this thread, I’d like to connect two of Mikkel’s statements about underlying heuristics:
“I will say that it seems like we both agree that Scott’s BBT is accurate as a descriptive model, it’s just that we think there is a base heuristic that colors all the other ones and we are picking a dynamic systems mystic one vs. a rationalist mechanistic one.”
“I think the downstream effects of the existential heuristic on the cognitive ones is interesting: i.e. that metacognition itself is affected by an explicit choice in believing one existential heuristic over another.”
Now this interests me greatly, because it looks like Mikkel is saying there’s a not-so rational leap of faith involved in the choice of an existential direction, whereas Haig is saying that when you stare long enough at those geometric representations of vast systemic evolutions, you begin to divine progress and other kinds of purpose as natural facts.
But I think Scott would say that none of this mysticism counts as knowledge because we can’t trust how things seem to us when we merely fall back on the biases that are baked into our brain. We project purpose onto nature, because we’re programmed to be social and to interpret each other’s mental states in intentional terms, and we overreach with that skill. As BBT says, we become over-reliant on our ability, because we mistake the blindness for a skill.
So I’d like to know how you think this religious interpretation of nature is consistent with BBT. This is sort of my project too! Maybe this comment section isn’t the best place for this discussion, but anyway I think there’s still some conflict here.
Well I’m too much of a pragmatic skeptic to have a literal belief like haig. I disagree with his claim that his metaphysics are grounded in empiric inquiry, although they are rational. That means that even though I don’t see any way he can justify them arising from objective observation, the metaphysics is consistent with his experiential worldview.
We’ve been talking about the inherent unpredictability of complex systems, but have left out an important caveat. If the system is constrained to have certain feedback parameters then it does become rather predictable. Moreover, even in a complex system, if we zoom in far enough around certain types of equilibrium points, then the function appears linear and can be quantified within boundary conditions.
Both of these (particularly the latter) are key to practical engineering. This is why functional requirements are so important in design — with the proper outlook a system can be made to behave under a limited regime.
Yet if the system ever goes past the boundary conditions, all bets are off.
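A minimal sketch of that point about equilibrium points and boundary conditions (using a damped pendulum purely as an illustrative example, not anyone’s actual system): near equilibrium the linearized model tracks the true nonlinear system closely, which is what makes practical engineering possible; start the system far from equilibrium and the linear approximation falls apart.

```python
import math

# Nonlinear system: a damped pendulum, theta'' = -sin(theta) - damping*theta'.
# Near the equilibrium theta = 0, sin(theta) ~ theta, so the linearized model
# is an excellent stand-in -- but only within that neighborhood.

def step(theta, omega, dt=0.001, damping=0.5):
    """One Euler step of the full nonlinear pendulum."""
    return theta + omega * dt, omega + (-math.sin(theta) - damping * omega) * dt

def step_linear(theta, omega, dt=0.001, damping=0.5):
    """Same step with sin(theta) replaced by theta: the linearization."""
    return theta + omega * dt, omega + (-theta - damping * omega) * dt

def simulate(stepper, theta0, steps=2000):
    theta, omega = theta0, 0.0
    for _ in range(steps):
        theta, omega = stepper(theta, omega)
    return theta

# Small swing: linear model agrees. Large swing: it doesn't.
small = abs(simulate(step, 0.05) - simulate(step_linear, 0.05))
large = abs(simulate(step, 2.5) - simulate(step_linear, 2.5))
print(small, large)
```

The “boundary conditions” caveat is visible directly: the same linear model that is nearly exact for a 0.05-radian swing is badly wrong for a 2.5-radian one.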
This is my way of saying that I do not believe metaphysics are knowledge and why they can never be empiric. Although I cannot poke a hole directly in what haig has said, when you are talking about the infinite universe then it’s impossible to know whether your view is clouded by boundary conditions; particularly since his idea lasts longer than the lifespan of species or planets.
The only evidence I would accept would be if cosmic energy beings literally visited earth and shared this knowledge.
But while metaphysics are not knowledge, they are wisdom. And wisdom itself is logical and consistent. To be honest, I’m torn between wondering whether haig understands something I don’t, or whether his beliefs are an artifact of upholding knowledge as the absolute good.
Remember in my comments on your blog, I stated that knowledge is not divine and my take on what the Tao says about it.
So where BBT comes in is that I believe that knowledge itself is dependent on Kantian ontology and on wisdom; particularly the type of wisdom that is chosen by the individual to live through.
I agree that we project purpose and live through bias, but instead of saying them in negative terms or that we could do better, I would say that they ARE human nature. Humans, above all else, are narrative creatures to our core and this is what makes us who we are on an intrinsic and biological level.
On this, I agree with traditional religions: humans are fundamentally religious and that is our super heuristic. To try to tear that down is not only “mean” but it destroys fundamental tools for uncovering Truth. I used to follow the humanism path of religion, a la Sagan, but later realized that it was untenable in the face of reality and overly optimistic. MLK talks about humanism in his autobiography and says that while he was attracted to liberalism intellectually and optimistically, his rational inquiry led him to believe that humanity is dominated by sin and lack of grace, rather than the quest for rationality and knowledge. He stated this is what fundamentally allowed him to create the civil rights movement and succeed where the liberals failed. And I believe that too much liberalism and not enough grace is what is dooming us at present, leading to misguided optimism as seen in statements like haig’s below about global challenges.
A humanist could say that’s because of biases and things like BBT and cognitive neuroscience will change that. Perhaps, but it only takes a second to convey Love to those that accept it and a lifetime to convey Knowledge.
Ultimately it’s not about being right or wrong, but about understanding how the different religions fundamentally alter how people act, feel and perceive the world, from the largest to most personal events. It is what leads to the creation of the cognitive heuristics.
Cain, even if this doesn’t count as relevant per se, I’m still reading it all. Let discussion flourish where it may ;).
Also, as mediator, you’ve got fair and concise interjections.
Sorry Scott for going off on such a tangent, I’ll try to wrap things up on my end by tying things back to the BBT.
“I will say that it seems like we both agree that Scott’s BBT is accurate as a descriptive model, it’s just that we think there is a base heuristic that colors all the other ones and we are picking a dynamic systems mystic one vs. a rationalist mechanistic one.”
The BBT is accurate in the same sense that Newtonian mechanics is accurate: it explains things correctly up to a certain point, but it’s not complete. For me, my problem with BBT is not just a distinction between DS and mechanism; what I’m objecting to is the premise that it’s only ‘heuristics all the way down’. Instead I’m arguing that fundamental features of the brain and cognition are convergent and inevitable as a result of the organizing force of nature, and, furthermore, that consciousness is more fundamental than metacognition/self-awareness, making intentionality and normativity ontologically real.
“…whereas Haig is saying that when you stare long enough at those geometric representations of vast systemic evolutions, you begin to divine progress and other kinds of purpose as natural facts…But I think Scott would say that none of this mysticism counts as knowledge because we can’t trust how things seem to us when we merely fall back on the biases that are baked into our brain”
I do not base my ideas on mere intuition or pattern recognition. My mystical perspective is based on phenomena of nature that are amenable to the rigors and objectivity of science and reason like any other; the same standard of empirical and theoretical work that Scott relies on for his BBT is what I use for my theories.
Tangents are rarely frowned upon here, haig. I find all this very fascinating…
@Mikkel
> “Anyway, no issue with anything you just said [except the dinosaurs, imo the asteroid caused a massive decrease in biomass and thus available energy, so the small/rudimentary took over. OTOH you can argue that the rudimentary mammals were an example of a more complex “control system” that then rapidly scaled when energy became available]”
I haven’t come to a conclusion yet whether or not dinosaurs would have gone extinct even without the ELE because of something like the maximum power principle, paving the way for mammals, or if dinosaurs could have evolved into an intelligent species instead of mammals->primates->hominids. Dinosaurs may very well have been trapped into their evolutionary trajectory, unable to evolve into more adaptable and intelligent species, whereas mammals happened to start off initially adaptable enough (warm blooded, dexterous morphology, etc.) to have a more open evolutionary trajectory towards intelligence. Answering that question would shed light on the previous question of how anthropocentric my framework is, whether mammalian phenotype is necessary, or if intelligent species could be descended from a wider range of species. Astrobiologists and people over at SETI discuss these things regularly.
@Scott
> “aside from providing a powerful (if dehumanizing) way to look intentionality and explain away numerous philosophy of mind riddles, it ‘hopes’ that there is some ‘redemption’ to be had in moving forward.”
This is what Eliezer Yudkowsky and friends are doing over at the Singularity Institute, now rebranded as the ‘Machine Intelligence Research Institute (MIRI)’ (intelligence.org). I don’t know if you’re familiar with them; you can catch up on their ideas by reading the wiki and posts at Lesswrong.com. Suffice it to say, I think you’ll find them to be exactly aligned with your views, but they have expanded far past them, working hard on the ‘redemption to be had in moving forward’ part you are hoping for. Their approach rigorously blends cognitive science with analytical philosophy; their main focus is on advancing meta-ethics through decision theory in order to create a goal architecture for an artificial general intelligence that will usher in the singularity. “Saving the world” is a phrase you’ll hear bandied about their writings frequently. They’ve identified all your concerns regarding the lack of normativity/intentionality; in fact they claim it is one of the, if not the, most dangerous existential threats facing humanity, and their solution is to build a recursively self-improving intelligent Bayesian optimization algorithm that evaluates over models of the brain very similar to the BBT in order to create utility functions for its goal system that are ‘human friendly’, hence they label it friendly AI.
> “You characterize it in rehabilitative terms, but this need not be the case at all. My money is that you’ve actually leapt into the abyss with me, and are in the process of scrying something new, which may or may not possess resonances with traditional metacognitive conceits.”
I considered myself a part of the aforementioned Singularity Institute group’s agenda five years ago, and in that sense, I did leap into the abyss with you. Their thoughts and work are extremely appealing. Highly intelligent, mathematically rigorous, aware of all cognitive biases and thought traps, up to date on the relevant scientific theories and philosophical problems, confident in their abilities yet humble enough to change their minds and update their priors when new evidence presents itself, self aware to a fault, and dedicated to the cause like nobody’s business. I came to the conclusion that, though they had opened my eyes to the problems like no other writing has, their proposed solutions were bound to fail because their foundational assumptions were wrong. This was only after I’d formulated my own thoughts which contradicted said assumptions. So if you want to see where the BBT leads you into the posthuman era, they’d be the people you want to associate with. For me, I stared into the abyss, and what I realized was that I was already in the abyss staring back out of it.
Thanks, Mike H. I take debate moderating seriously, which is why the American presidential debates make my blood boil. And please call me Ben.
Haig, I see your point, but if you’re engaging in what I think used to be called natural theology (the kind that comes in for the heaviest criticism in Hume’s dialogue on religion), I wonder how you get around the naturalistic fallacy. Science can tell us the facts, including sublime cosmic ones, but can science show that any part of nature is good? Isn’t mysticism an inherently normative proposition? I suspect science could, in theory, give us good reasons to accept certain mystical assumptions, such as the oneness of nature. But the mystical feelings and prescriptions (altruism, asceticism, etc) will require a leap of faith at some point.
By the way, I’ve seen Yudkowsky in several debates on the internet and on TV (Bloggingheads and TVO) and I’ve been quite impressed with him. I think I heard he has a genius level IQ.
@Mikkel
> “Well I’m too much of a pragmatic skeptic to have a literal belief like haig. I disagree with his claim that his metaphysics are grounded in empiric inquiry, although they are rational. That means that even though I don’t see any way he can justify them arising from objective observation, the metaphysics is consistent with his experiential worldview.”
1.) The teleonomic organizing principle I mention is formalized using computational complexity theory and variations of complexity theories like logical depth and Kolmogorov complexity. Objective verification would be to have my theories evaluated for internal consistency (the math is sound), verified through repeatable simulations, and applied to experimental observations in various fields. I admit those things have not yet been done, but in principle, if they were to be confirmed, then the theory would be an empirical fact. Whether you want to call it metaphysics or natural computation or whatever is just semantics.
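To give a concrete sense of how such complexity measures get operationalized in practice (an illustration of the general technique, not my actual formalization): Kolmogorov complexity itself is uncomputable, so simulations typically substitute compressed length, which gives a valid upper bound on it. A minimal Python sketch:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """K(x), the true Kolmogorov complexity, is uncomputable; the length
    of a compressed encoding is a standard practical upper bound on it."""
    return len(zlib.compress(data, 9))

# A highly regular string compresses to almost nothing, so it scores as
# simple; random (incompressible) data scores as complex.
regular = b"ab" * 500
random_data = os.urandom(1000)
assert complexity_upper_bound(regular) < complexity_upper_bound(random_data)
```

The choice of compressor only changes the tightness of the bound, not the ordering of clearly regular versus clearly random inputs, which is why this proxy is considered serviceable for empirical work.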
2.) Mystical conscious states are relatively straightforward to study using the techniques of cognitive neuroscience (subfield is neurotheology), and will get more precise with optogenetics. My incorporating those neuroscientific findings into an evolutionary psychology explanatory framework is par for the course.
> “…To be honest, I’m torn between wondering whether haig understands something I don’t, or whether his beliefs are an artifact of upholding knowledge as the absolute good.”
That’s the trouble, I’m basing my ideas on yet to be confirmed theories of mine, and I’ve said many times, it is hard to believe until they are peer reviewed and understood to be a better model of explanation. I’m giving my opinions of what I think will prove to be the correct view, and defending those opinions helps me poke holes in them, but I don’t expect people to come around to my view until the necessary science is done.
I think you’re still talking about a level much lower than I am. Even if the math works out and even if observations support the theory, then at best it is simply a paradigm for how humans interpret reality at the present time; it still says nothing about the general universe over all time. If DST highlights anything, it’s that we can construct explanatory models that fit data and are logically consistent, but that the system could still change because we are missing information or that if we make different assumptions we can construct an alternative system.
If you say there is a tendency for it to do X, then that’s empirical, but saying that there is a purpose is metaphysical. I’m rejecting the concept of universal objective fact and stating that everything (including our interpretation of physical laws) is ontological: potentially there are beings that live across dimensions, or within black holes, or are as large as galaxies or as small as quarks, or live as long as stars or as briefly as atomic decay. They would surely have a vastly different take on complexity.
I also ran into an observation that I think is apt: “What’s interesting about Curtis’s comment is that humanity has demonstrated a strange sort of “reverse-anthropomorphism” in which we view ourselves and our cultures in terms of the machines we create. In the 19th Century, we viewed ourselves in terms of mechanical devices; in the 20th Century, as systems and computing machines; in the 21st, as networks. A key point of the Enlightenment was to throw off a view of humans that was driven by Biblical stories and accept mankind as part of nature. But within 50 years, we simply moved our vision to our own creations, turning our machines and technologies into gods ever since.”
Read more at http://www.nakedcapitalism.com/2013/08/bill-mckibben-movements-without-leaders.html
@Benjamin
>”Science can tell us the facts, including sublime cosmic ones, but can science show that any part of nature is good? Isn’t mysticism an inherently normative proposition? I suspect science could, in theory, give us good reasons to accept certain mystical assumptions, such as the oneness of nature. But the mystical feelings and prescriptions (altruism, asceticism, etc) will require a leap of faith at some point.”
This is an important point, and I don’t have space or time to do it justice here, but I will say that the is/ought dichotomy has to be dissolved for my project to fulfill its ultimate philosophical purpose. No leap of faith will be required, in short, normative moral judgements will be shown to be subjunctive conditionals based on objective intersubjective affective experiences, and mystical experiences are a class of these affects serving an adaptive purpose pointing the way forward towards large-scale social cohesion and cooperation. This is a crucial part of my naturalistic panentheism.
@Mikkel
> “Even if the math works out and even if observations support the theory, then at best it is simply a paradigm for how humans interpret reality at the present time; it still says nothing about the general universe over all time. If DST highlights anything, it’s that we can construct explanatory models that fit data and are logically consistent, but that the system could still change because we are missing information or that if we make different assumptions we can construct an alternative system.”
On how our scientific models change, I’ll refer to an articulate essay by none other than Asimov: http://chem.tufts.edu/answersinscience/relativityofwrong.htm
Right, but that’s refinement within biological ontology, not across! His essay is why I’m an existential nihilist, not an epistemological one!
@Mikkel
But you previously said you consider everything ontology, including epistemology, which I agree with. Epistemology is an outgrowth of ontology, and what’s good for the goose is good for the gander: the epistemological refining of theories that Asimov described is just a continuation of the way ontology functions, and that process is an aspect of the universal organizational force I’ve been arguing for. Biological ontology is not an isolated thing separate from the rest of the universe.
@Mikkel
>”Right, but that’s refinement within biological ontology, not across! His essay is why I’m an existential nihilist, not an epistemological one!”
Can you explain this further? Are you saying you believe we can build ever better explanatory models of the universe, but what phenomena those models are describing are somehow capricious or arbitrary?
@haig
“Are you saying you believe we can build ever better explanatory models of the universe, but what phenomena those models are describing are somehow capricious or arbitrary?”
Well this is a two part answer. The models are certainly not capricious or arbitrary, because they have to be consistent. I believe that empiricism and logic can lead to increasingly consistent systems and in that sense Asimov is right.
But consistency does not necessarily imply universality no matter how much it would seem to. For instance, there is the argument that perhaps the Universe is “designed” because if any of the cosmological constants were different in any way then the Universe would not have formed.
This is an invalid outlook, however, because if the constants were different then we would not have Our Universe, which is the source of the observations from which we derived our laws leading to the idea of the constants in the first place. Thus, all it demonstrates is that the Universal system has very few degrees of freedom in which it can maintain consistency.
If the underlying reality that we abstract through the cosmological constants were different then there is no way to know whether the system would “organize” differently, leading to radically different laws.
This is why I’m a fundamentalist agnostic.
I would take it one step further: we cannot know whether the Universe has the same laws at all parts of it, and we cannot know whether those laws change over time. In fact, I would go so far as to say that if your metaphysics is correct *then the fundamental laws of nature would change over time and space*, because the laws would merely be representative of self organization. A principle of self organization is that it changes the internal dynamics and/or feedback constants, leading to self-consistent but variable laws as exposed through empiricism.
The fact that we had to include dark energy and dark matter — massive fudge factors — to make “universal laws” work suggests to me that it’s quite likely that physical laws are variable across spacetime. This is what I alluded to somewhere in this thread where I said that a complexity-entropy approach to the universe could actually supersede our understanding of physical laws.
The second part, and what you were asking me to clarify, is that the above is still dependent on our context. As Humans, with the ontology we bring with that, we may be able to discover Universal Laws that are applicable to our reality, but a different ontology could easily lead to a different set of self consistent and empirically derived laws even given the same reality!
So in my book, even if we discover fundamental reality for our shared ontological context, that still does not prove any metaphysical statement because that exists across contexts. I just ran into the quote by James Gleick: “Not only do living things lessen the disorder in their environments; they are in themselves, their skeletons and their flesh, vesicles and membranes, shells and carapaces, leaves and blossoms, circulatory systems and metabolic pathways – miracles of pattern and structure. It sometimes seems as if curbing entropy is our quixotic purpose in the universe.”
I agree with this and it is basically what you are saying, but he narrowed it to living (Earth) things. If you said that your metaphysics were Our overall guiding principle [where Our = anything with our ontology, which could be everything for all I know] then I have no issue; but I cannot claim it is the Universe’s.
Haig (and others, but especially haig), have you seen this collection of Transcendental accounts written by scientists?
@Mikkel
> “But consistency does not necessarily imply universality no matter how much it would seem to.”
But consistent evidence for universality does imply universality until new evidence proves otherwise. Newton’s models described the motions of terrestrial objects as well as the celestial bodies in the heavens with one universal law, and since then all our theories and observations have remained faithful to this universality. Hume was right when he said causality is not justified a priori through reason alone but is based on experience, and that induction over experiential data can fail to predict the future no matter how consistent the past data was; but what you do with this insight matters very much in how you form your metaphysical beliefs. You can believe that we remain forever in ignorance of the ‘thing in itself’, that theories remain underdetermined indefinitely and we just use them pragmatically, or you can form probabilistic beliefs based on observation and believe we really are converging on the real. I doubt you would be skeptical of the sun rising tomorrow based on past data, yet you doubt universality based on the same type of inductive inference. I can formalize this epistemological process through Bayesian probability and mathematically show how your doubts about universality are inconsistent.
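To make the sunrise case concrete, the textbook formalization is Laplace’s rule of succession, which gives the posterior probability that the next trial succeeds after an unbroken run of successes under a uniform prior. A minimal sketch of the Bayesian machinery I mean (an illustration, not a full treatment):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior probability that the next trial succeeds, given
    `successes` out of `trials` so far, under a uniform Beta(1, 1)
    prior on the unknown success rate: (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# An unbroken run of 10,000 observed sunrises:
p = rule_of_succession(10_000, 10_000)
# Belief converges toward 1 without ever reaching it: induction yields
# ever-stronger, never-certain confidence in the regularity.
assert Fraction(99, 100) < p < 1
```

The same update applies to any regularity, which is the point: doubting universality while trusting tomorrow’s sunrise means assigning inconsistent probabilities to two instances of the same inference.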
> “If the underlying reality that we abstract through the cosmological constants were different then there is no way to know whether the system would “organize” differently, leading to radically different laws.”
Again, this pushes past the ‘naturalistic metaphysics’ I think we can rationally talk about and enters into speculative metaphysics of things like modal realism where we have no way of coming to effective conclusions. You can always posit some ‘other’ underlying ontology and avoid coming to any conclusions altogether. This is not being agnostic, it is borderline magical thinking.
> “…we cannot know whether the Universe has the same laws at all parts of it and we cannot know whether those laws change over time. In fact, I would go so far as to say that if your metaphysics is correct *then the fundamental laws of nature would change over time and space* because the laws would merely be representative of self organization.”
Getting back to my points above about induction, we can and do know whether the Universe has the same laws both throughout space and throughout time since the big bang; all our models rely on this assumption. Technically, depending on how you define space and the status of eternal inflation, laws do vary through De Sitter space, but within bubble universes far removed from our own Hubble sphere. For what it’s worth, based on my metaphysics, the fundamental laws do change over time, but not through space, meaning I hold to a variation of Smolin’s cosmological natural selection in which new baby universes, born from black holes, inherit the cosmological constants with variation and thus change; but those universes are causally separated from our own, and so our universe retains a fixed and unvarying set of universal fundamental laws.
> “As Humans, with the ontology we bring with that, we may be able to discover Universal Laws that are applicable to our reality, but a different ontology could easily lead to a different set of self consistent and empirically derived laws even given the same reality!”
This is where I’m most confused. Ontology is not contextual; humans don’t bring their own unique ontology to the table, they derive ontology from experience. This is what drove the logical positivists to abandon all metaphysics as rubbish. In my framework, ontology restricts epistemology, and humans use that epistemology to discover that original ontology. Aliens or AIs starting from their unique positions without reference to human culture may develop different languages and formalizations, but those will be isomorphic to ours, and their epistemology will converge on our own and then converge on the same ontology we’re discovering. You cannot derive a self-consistent set of universal laws that are empirically derived yet different from the ones we’re discovering. You *can* build a new self-consistent set of laws and simulate a universe obeying those laws; Alan Kay said as much: “In normal science you’re given a world and your job is to find out the rules. In computer science, you give the computer the rules and it creates the world.” *But*, the computer you’re using still operates on the laws of the universe it inhabits.
> “If you said that your metaphysics were Our overall guiding principle [where Our = anything with our ontology, which could be everything for all I know] then I have no issue; but I cannot claim it is the Universes.”
We are an extension of the universe, our ontology is the universe’s ontology, at least if by ontology you mean one derived empirically and rationally and not something we just make up without justification.
Ontology is a word that somehow is both arcane and also means drastically different things in different contexts. You are using the word in the context of structural knowledge, which I agree with what you’re saying. I’m using it in the context of structural being on a metaphysical level.
You say we derive ontology, which is true, but I’m saying we are ontological in the Kantian sense. Our perception of spacetime is ontological and fundamental to our experience. What if instead of observing three dimensions and time, we observed 11 dimensions, or spacetime as posited by the Theory of Relativity? What if your being could exist across Universes, and thus the causality you state is independent is not?
It’s easy to dismiss this as magical speculation, yet gravity is hypothesized to be a force that acts in this very manner and particles under quantum mechanics potentially cycle through different dimensions/universes as well.
What happens when a particle collides with its anti-particle? What happens when energy is turned back into its particle and anti-particle pair?
Is it really impossible to believe that there could be consciousness that exists on these levels? Would they derive the same laws or see complexity the way we do?
“Again, this pushes past the ‘naturalistic metaphysics’ I think we can rationally talk about and enters into speculative metaphysics of things like modal realism where we have no way of coming to effective conclusions. You can always posit some ‘other’ underlying ontology and avoid coming to any conclusions altogether. This is not being agnostic, it is borderline magical thinking.”
I never claimed we can rationally talk about these speculations: in fact I’ve stated the exact opposite! And it is the epitome of agnosticism, which after all does not mean “unsure or skeptic” but means “without knowledge.”
As wikipedia says about strong agnosticism: “The view that the question of the existence or nonexistence of a deity or deities, and the nature of ultimate reality is unknowable by reason of our natural inability to verify any experience with anything but another subjective [I’d say ontological] experience.”
@Mikkel
> “Haig (and others but especially haig) have you seen this collection of Transcendental accounts written by scientists?”
Interesting, thanks for the link. I’ll spend some time reading through the submissions. It is interesting to note how personal psychological disposition to these experiences affects a scientist’s credulity on the matter. I’m not sure if William James personally had such experiences, but he was certainly open to them, and they colored his views on religion and metaphysics, famously documented in his Varieties of Religious Experience. In contrast, Freud wrote many times that he had absolutely no understanding or experience of such things, which led him to be completely unsympathetic in the matter, as we can see in his book against religion in all forms and in his pessimistic outlook on civilization (and its discontents).
@Mikkel
> “You say we derive ontology, which is true, but I’m saying we are ontological in the Kantian sense. Our perception of spacetime is ontological and fundamental to our experience.”
I’m not versed in modern interpretations of Kant, but I’ll say that on my analysis, what he describes as synthetic a priori judgements are really just the cognitive structure of our brains experiencing the world without a tabula rasa, meaning we come pre-loaded with a particular way of parsing experiences of the world. If this is what you meant by the contextual ontology that humans bring, then okay, but I’m saying that those default cognitive structures are themselves structured by the universe’s organizational dynamics such that they always converge on the same base ontology.
> “What if instead of observing three dimensions and time, we observed 11 dimensions or spacetime as posited by Theory of Relativity? What if your being could exist across Universes and thus the causality you state is independent is not? It’s easy to dismiss this as magical speculation, yet gravity is hypothesized to be a force that acts in this very manner and particles under quantum mechanics potentially cycle through different dimensions/universes as well.”
Spacetime is 4-dimensional according to relativity; I think you meant the 11 dimensions of M-theory. And in M-theory, gravity diffuses through inter-dimensional branes, with our universe lying on one brane slice, hence gravity’s weakness relative to the other 3 fundamental forces of the standard model. Quantum mechanics, under certain interpretations like Everett’s many-worlds interpretation, posits that at each collapse of the wavefunction, as described by the Schrödinger equation, the universe splits into causally isolated branches, one for every possibility in the probability distribution, and which branch we end up in is indeterministic but follows the Born rule. Now, all of these fantastical-sounding ideas come directly out of empirical observations and are, in principle, falsifiable. Scientists arrive at these ideas through the process of building the best explanatory models which are empirically consistent with our observations and mathematically consistent with all the other theories they are built around. They are not just throwing ideas at the wall and seeing what sticks; there is a method to their (what laymen might see as) madness. [FWIW, I’m skeptical of the more speculative ideas in string theory like branes and 11 dimensions; I lean more towards a middle ground between LQG and ST. Also, I don’t subscribe to the many-worlds interpretation; the most appealing to me are zero worlds/QIT and Penrose’s objective reduction.]
> “Is it really impossible to believe that there could be consciousness that exists on these levels? Would they derive the same laws or see complexity the way we do?”
Yes, it is impossible. For clarity, I’ll unpack the ambiguous word ‘consciousness’ and define it with 3 different possible meanings: 1.) subjective experience, 2.) self-awareness, 3.) metacognition. Starting backwards, #3 is the use of symbolic reasoning to think about thinking, or thoughts referencing thoughts; #2 is the ability for the brain/mind to form a representation of its ‘self’ and other ‘selves’ in relation to it (Hegel was surprisingly insightful in pretty much coming up with this same definition); and #1 is the most problematic, ill-defined, and controversial, it is the phenomenon of interiority, the raw experience of qualia, the hard problem.
Now, when you say could there be consciousness that exists on these levels, I’ll cautiously say, based on my framework, that definition #1 arises out of yet-undefined quantum processes, so a rudimentary, fleeting form of proto-consciousness permeates the universe (panpsychism/pan-experientialism), but it is not developed enough to be anything except blips of fleeting and ephemeral states of interiority. Definitions #2 and #3 can only come into being and be experienced by complex structures that either evolved within a long developmental process where social interaction is crucial, or were designed/constructed purposefully by a sentience that went through the developmental process previously stated. These things cannot happen ‘in these levels’ which you have imaginatively proposed.
You might find Ladyman and Ross’s book about naturalized metaphysics called “Everything Must Go” informative: http://www.amazon.com/Every-Thing-Must-Metaphysics-Naturalized/dp/0199573093
Yes, I was referring to the 4 dimensions of Relativity and the 11 dimensions of M-theory back to back. What I really meant was that in Relativity time is not a dimension with independent flow but is based on reference frames across a spacetime, which we can obviously calculate accurately but which I’m skeptical we can ever cognitively grasp. I’ve never heard an intuitive explanation, and often the response becomes, “well, if you understand the math then it makes sense.”
As for M-theory and branes, my point stands that it could lead to a many worlds interpretation, in which gravity (and potentially matter) can interact across worlds even if they appear to be causally separate.
But my general point is that these ideas stand outside our “contextual ontology” and give rise to potentially unknowable states of reality. You just assert that “default cognitive structures are themselves structured by the universe’s organizational dynamics such that they always converge on the same base ontology” and “I don’t ascribe to many worlds interpretation” and “I’m skeptical of the more speculative ideas in string theory like branes and 11 dimensions…” and then from there state that consciousness has to have certain properties.
All this while admitting “definition #1 arises out of yet undefined quantum processes, so a rudimentary, fleeting form of proto-consciousness permeates the universe.”
Yet those “quantum processes” could be cognitive beings operating across branes, using gravitational interaction in a way where they move across universes. Serious physicists (a la Hawking) have said we can’t seriously discount that our whole Universe is just one atom in a larger Universe, or that every atom in ours isn’t a Universe unto itself.
Again, I’m not saying what you are talking about is wrong or not useful (it could very well be the best metaphysics yet invented — would need to read the book you linked to figure it out), just that it relies on choosing a set of assumptions and holding them as Truth.
By contrast, Eastern nihilism pervading Buddhism and Taoism doesn’t even bother with a response or at least a purposefully nonsensical one.
I think your ideas sound awesome and are supported by modern interpretations of science and physics on a level that very few appreciate. But spiritually, it does nothing for me (probably because spiritually, not knowing is the most powerful feeling) and interestingly, many people that have experienced pure transcendental experiences remain fundamentally nihilistic anyway. So, I move in paths incorporating new knowledge but seek to live wisely outside of any context.
@Mikkel
> “But my general point that these ideas stand outside our “contextual ontology” and give rise to potentially unknowable states of reality.”
These ideas do not stand outside of our ‘contextual ontology’; how would we even think them in the first place if they did? I think I’m starting to grasp your problem: if I replace ‘contextual ontology’ with ‘intuitive/gestalt conceptualization’ then you have a point. We have an evolved repertoire of sense-making cognitive apparatuses adapted for the plains of the savannah, ill equipped to understand quantum entanglement in the same way we understand an apple falling from a tree. That is why we use mathematical abstractions to reason about n-dimensional topologies or relativistic spacetime or whatever; they are basically analogies derived from our evolved intuitions that push our default sense-making equipment to the limit. We know 1+1=2 because our evolved numeracy can intuitively grasp one apple, plus another apple, makes two apples. But to understand eigenvectors or differential geometry we need to rely on towers of abstractions built on top of that evolved rudimentary intuition, interfacing between what we intuitively grasp and what mathematically describes empirical or rational phenomena. These aren’t unknowable states of reality; they are knowable through abstractions, and maybe when we can enhance our brains we can experience conscious states that bring these abstractions into the same type of gestalt sensations we currently experience within our mesoscopic environment.
> “Yet those “quantum processes” could be cognitive beings operating across branes using gravitational interaction in a way where they move across universes. Serious physicists (a la Hawking) have said we can’t seriously discount that our whole Universe is just one atom in a larger Universe, or that every atom in ours isn’t a Universe onto itself.”
I don’t know the context or accuracy of what Hawking actually said, but it probably was just playful rhetoric; the word atom has a precise meaning, and there’s no way I believe he seriously entertained the idea that our universe is literally an atom or that each atom is a universe. Poetically, sure, each cell is a planet of microorganisms, each atom is a universe of subatomic particles, etc. And quantum processes cannot be cognitive beings; that doesn’t make sense, and this type of thinking only makes sense when you don’t know what consciousness or cognition is. I remember Rupert Sheldrake (the infamous morphogenetic fields guy who thinks dogs are psychic) once said he thought the sun could possibly be conscious. His justifications for proposing such ideas were the same ones you employ. Don’t tell me ‘well, it could be possible, we don’t know’; give me a *reason* to believe it first, otherwise it’s just fantasy. All my speculations and theories are grounded in reasons all the way down to current empirical facts. If metaphysics is to be taken seriously at all it cannot be anything goes; it has to be grounded, which is what that book I linked to tries to accomplish.
> “But spiritually, it does nothing for me (probably because spiritually, not knowing is the most powerful feeling) and interestingly, many people that have experienced pure transcendental experiences remain fundamentally nihilistic anyway.”
I’ve experienced a mystical/spiritual/transcendental experience twice in my life, and the only reason I don’t dismiss them as fluke aberrations of a kludgy brain or exaptations of a random evolutionary process is precisely because they now fit into my developed non-nihilistic scientific worldview. They are experiences that helped me start to be sympathetic towards the perennial traditions (at least without the dogma) and led me to incorporate them into my scientific worldview. If I were nihilistic then they would just be mysterious, beautiful experiences and nothing more, but now they have meaning and purpose; they play a role that fits within the entire cosmos, which has given them so much more gravitas.
The first time I had such an experience, I was in the midst of a prolonged existential angst and suffering from a personal tragedy. I remember feeling hopelessness and dread and tried to overcome the negativity by trying to feel compassionate towards everyone who ever experienced any amount of pain and suffering, to take all that negativity in and turn it into what buddhists call metta, or loving-kindness. I was overcome with a sense of oneness with the universe, a bliss and serenity that all was as it should be, and both a disassociative state of the loss of my individual ego combined with a blended transcendental ego shared by all. Sounds funky, but that’s the best I can describe it.
The second time I experienced the same mystical experience I was out under the stars in contemplation. I was frustrated with my work and wanted to stop thinking about ‘everything’ and quiet my mind, to stop trying to analyze the entire universe by reading and writing about so many different fields and topics at the same time and just concentrate on one thing for once. I remember picking up a leaf and thinking if I could just concentrate on this one leaf and forget about everything else I would feel better. I started focusing on just that leaf and in my imagination allowed myself to envision the cells, then the molecules, then the atoms, and so on, continued to focus in the moment on this one thing, continued to dive deeper and deeper into the nucleus of the atoms, the quarks of the particles, the frothy foam of the quantum vacuum until I came out the other end and from the vacuum I was imagining the big bang, the forming of galaxies, interstellar nebulae, the formation of our solar system, the evolution of life on earth, civilization, all the way through history back to myself sitting on the grass under the stars looking at the leaf. I tried as hard as I could to hold all this in my mind at the same time, the smallest and the largest possible features of space and time all at once and that’s when it hit me, like the first time, really an indescribable sensation.
Anyway, I think this thread has run its course. Tangents are one thing, but I think we might be pushing it. 🙂
I’ll make one last comment and then not respond: I too have had similar experiences both in a Loving kindness meditation and looking at leaf (are we just arguing with other projections of ourselves?)
And in both, after feeling the entirety of all space, time and consciousness, I found nothing. No grandness or purpose, just nothing except the present and billions of stories upon stories all folded within each other…present overlapping in all configurations until intent fell away and transcended beyond into complete unknowing and complete being.
The problem with arguing with a nihilist is that we don’t take things seriously. I take material reality seriously and sapient experience seriously, but not metaphysics. I laugh at extinction and mourn for suffering, then work on material matters to alleviate it, because. Because? that is enough.
@Mikkel
I guess I’m also going to have to make one more comment 🙂
> “And in both, after feeling the entirety of all space, time and consciousness, I found nothing. No grandness or purpose…”
I know; purpose is not revealed through the experience itself. Only upon further reasoned analysis, placing these types of experiences into an explanatory framework like mine, do they start to make sense. Before, I just thought that these experiences were purely aesthetic, more like music or art appreciation. But now, in the context of my framework, they’re more like the feeling of ‘hunger’ or ‘sexual attraction’: subjective experiences with their own unique dimension of qualia, serving a crucial behavioral and social purpose with direct causal implications for our survival, and not just some mysterious curiosity in an unknowable universe.
> “I take material reality seriously and sapient experience seriously, but not metaphysics. I laugh at extinction and mourn for suffering, then work on material matters to alleviate it, because. Because? that is enough.”
And I’m saying that is *not* enough. What I’m arguing for is not speculative metaphysics but a naturalistic metaphysics grounded in empiricism and reason, one which describes reality better than your nihilistic materialism. Beyond that, attempts to work on material matters alone, without a transcendent and directional worldview to guide us (one which incorporates spiritual/mystical experiences), will lead to a dead end. Your nihilism has stopped further inquiry into discovering the truth. Laughing at extinction and mourning suffering is a coping mechanism that will only alleviate your own mental suffering; it won’t help to actually move society forward. Treating mysticism/spirituality as a palliative instead of a cure is just as bad as denying it altogether.
But this cuts both ways, haig. You too have closed off alternative approaches, explanations, etc., and just as you can rhetorically characterize the nihilistic path in a dismal light, your path can be rhetorically characterized in a pollyanna one. Ultimately it comes down to having one’s chips at different ends of the table. The rest is just PR. I genuinely hope you are right, but historically science has simply been too unkind to such hopes for me to muster any faith.
@Scott
> “You too have closed off alternative approaches, explanations, etc., and just as you can rhetorically characterize the nihilistic path in a dismal light, your path can be rhetorically characterized in a pollyanna one. Ultimately it comes down to having one’s chips at different ends of the table. The rest is just PR.”
My thesis is not rhetorical, it is empirical, the problem is that without understanding the details, rhetoric is what I’m left with to defend myself for the moment. What seems like closing off alternative approaches is just my insistence on the factual nature of my framework. If a scientist thinks she has the right model that describes an aspect of nature, she won’t entertain others until her own is falsified. If no scientist ever took any risk on their own ideas science would come to a halt.
Like I say, chips on the table. The important thing is that they are genuinely interesting chips – at least for me. I’ve never encountered anyone so skeptical of prevailing pro-intentional approaches arguing conclusions such as yours. It’s a strange mix, and as Eric Schwitzgebel is fond of saying, the only thing certain about consciousness is that something crazy is going on!
@Scott
Can you elaborate on where you think my thoughts diverge from those of conventional pro-intentionality proponents? I agree mine are different, but just to clarify I’d like more detail. So many of my conclusions are necessarily arrived at via my heterodox understanding of evolution and cosmology that it shouldn’t be a surprise that they are unique.
The fact that you buy into so much of Dennett! You had me going in circles on CE trying to figure out your theoretical sympathies.
@Scott
My opinion of Dennett is mixed. His description of evolution is spot on, but he does not go far enough with the implications of what he calls design space. His theories of consciousness, on the other hand, are a mixed bag. I think his failure to extend the design space concept leads to his over-reliance on the intentional stance to explain (away) things that are still open questions. His computationalism, though necessary, overreaches, and he focuses too much on algorithms/heuristics without understanding the biophysics. He (and most cognitive scientists/philosophers of mind) remains within Marr’s first two levels of analysis, the computational and the algorithmic, without delving into the physical implementation. Dennett and Hofstadter were instrumental in teasing out amazing ideas that I think are mostly correct, but they don’t explain the whole picture, no matter how much they say they do, and the gaps in their models are the center stage for what we’ve been arguing over.
Hey Scott, I’m a computer scientist that has done a lot of work with neuroscientists and systems engineering in general.
On Ben’s blog I have left comments describing why I think BBT seems correct but not wise from a systems perspective, and also why your assumptions about the feasibility of science truly unravelling metacognition are improbable.
They expound on noir-realism’s comment about fuzzy logic above, but then incorporate haig’s comment about predictability in complex systems. Something that fascinates me is the idea that the act of discovery about a system alters interaction with it, inevitably leading to indeterminate consequences and destroying the utility of our knowledge.
Based on my privileged position, I truly believe that we are on the cusp of a seismic shift in expectation of what science can provide, and post-humanism will be dealt a fatal blow on a normative (if not practical) level.
My quantum mechanics professor led off the course by stating quantum physics reestablished the concept of free will; systems theory naturally leads to existential metaphysics, particularly eastern philosophies.
Anyway Ben suggested I notify you about my comments, they’re on both the exchange you had with him and the Kant/Mechanists post.
In regard to unraveling complex systems, we all mock the accuracy of weather reports, but aren’t they getting more accurate these days? And that’s a complex system: planet-wide, with a trillion butterfly effects in it.
The accuracy of predicting future states of complex systems can improve with more data and/or more processing power, but there are fundamental limits to how far into the future you can predict. Irretrievable initial conditions located in the past influence chaotic dynamics such that your precision will never be what you want it to be, also, open systems subject to black swan events do not allow for quantifiable probability distributions of all scenarios.
Weather prediction will continue to get more accurate for a while, but we’re closer to the end of improvement than the beginning: the amount of information needed for each additional percentage point of accuracy grows exponentially.
As haig said, at some point you hit a fundamental limit where even having 100% information about every atom in the planet is not enough.
I’m not sure what the practical near-future limit of weather prediction is [let’s say with 1 billion times more data/computational power than presently used] for, say, a city-wide scale. My guess is we would get 95% accuracy for tomorrow and 30% accuracy for a week in the future.
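The flavour of that fundamental limit can be sketched with a toy chaotic system. This is not a weather model: the logistic map, the parameter r = 4, and the 1e-10 “measurement error” are all invented for illustration. The point is only that a tiny initial error grows roughly exponentially, so each extra step of usable forecast demands exponentially more initial precision:

```python
# Toy sketch: exponential error growth in a chaotic system (logistic map).
# A "perfect" model started with a tiny measurement error still loses all
# predictive skill after a few dozen steps.

def logistic(x, r=4.0):
    # One step of the logistic map, fully chaotic at r = 4.
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

truth = trajectory(0.4, 60)
model = trajectory(0.4 + 1e-10, 60)  # same dynamics, tiny initial error

errors = [abs(t - m) for t, m in zip(truth, model)]
print(f"max error, steps 0-4:   {max(errors[:5]):.1e}")   # still tiny
print(f"max error, steps 40-60: {max(errors[40:]):.1e}")  # order one
```

Since the error roughly doubles per step, halving it at step zero buys only about one extra step of forecast, which is one way to picture why the data requirements blow up.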
That said, it’s been shown that weather forecasts are really good about knowing what they don’t know; i.e. if you look beyond the icon and into the actual percentage chance of rain, then it is roughly accurate. I believe that’s on a daily basis though, not a sequential basis. By that I mean: I am looking at the forecast for the next week and it shows a 30% chance of rain for 4 days in a row. However, the whole front could miss, in which case none of that rain will happen, so you’d need to use conditional probability to figure out the chance of it raining on any particular day(s), and those distributions would be very hard to calculate.
Thus, even though the rain % forecasts are right over a long enough period, looking at each day individually, they are functionally worse than the numbers would make it seem.
Standard disclaimer whenever this arises: this is NOT supporting the meme that therefore we can say nothing about climate change — that’s a different topic mathematically speaking.
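The conditional-probability point about sequential forecasts can be sketched numerically. The numbers below are pure invention: a single front with a 40% chance of arriving and a 75% chance of rain on any given day if it does, chosen so that each day’s marginal chance comes out at the familiar 30% icon:

```python
# Toy sketch: four days each showing "30% chance of rain". If all four
# depend on one weather front, the days are correlated, and reading them
# as independent badly overstates the chance of seeing any rain at all.

p_front = 0.4               # assumed chance the front arrives at all
p_rain_given_front = 0.75   # assumed chance of rain per day, given the front
days = 4

# Marginal per-day probability matches the icon:
p_day = p_front * p_rain_given_front  # = 0.30

# Naive reading: treat the four 30% days as independent.
p_any_naive = 1 - (1 - p_day) ** days

# Conditional reading: the whole front can miss.
p_none_given_front = (1 - p_rain_given_front) ** days
p_any_cond = p_front * (1 - p_none_given_front)

print(f"per-day chance:                     {p_day:.2f}")
print(f"P(rain at least once), independent: {p_any_naive:.3f}")
print(f"P(rain at least once), one front:   {p_any_cond:.3f}")
```

Under these made-up numbers the independent reading gives roughly a 76% chance of some rain over the four days, while the one-front reading gives under 40%: the daily percentages can be individually calibrated and still mislead sequentially.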
I think some sort of practical limit should be put on this unpredictability. An old-fashioned slave would no doubt not be utterly controlled by the slaver, but in the end they would labour under horrible conditions, with nowhere to go and a family who would suffer if they went. So there’s unpredictability still there, but really, is it worth arguing for? Forgive me for going to the slave example, but it contrasts neatly a residue of unpredictability with a much larger amount of predictability.
One thing that struck me about Mikkel’s comments on those posts on my blog is his point that the brain is a chaotic system in which case we’re fooling ourselves when we talk about neural mechanisms. The so-called mechanisms would be like the illusion of recognizable shapes in the clouds. It wouldn’t be heuristics so much as illusions all the way down.
There seems to be a conflict between cognitive scientists, who have a computational, mechanistic perspective, and the neuroscientists who think more in terms of dynamic and chaotic systems. I don’t think either perspective supports the manifest image exactly, but Mikkel seems to think the systems one is more friendly to the normative. At least, if the brain is chaotic rather than mechanistic, the normative isn’t as obviously eliminated from reality. I’m still not sure why that would be so, though. What’s interesting about the dynamic systems theory of cognition (van Gelder, Andy Clark, etc) is that it leaves out mental representations, so that would be consistent with BBT. And yet Scott shares the mechanistic perspective.
Christ, there’s been a lot of activity here! I need to check out Mikkel’s comments but prima facie I haven’t the foggiest what he’s talking about if he assumes that I or anyone else talking about neural mechanisms means anything other than stochastic systems. ‘Mechanism’ is not married to GOFAI computationalism in any way I’m familiar with.
Mikkel can certainly speak for himself, but I’ll just summarize my interpretation of his response. (See especially his comments on the article, “Mechanists and Transcendentalists.”) The opposition is between (1) the search for decomposing mechanisms as a reductionist strategy of explanation, and (2) the nonreductive strategy of searching for patterns that emerge from impenetrable chaos. The key point, then, is that the brain, together with the environment, form coupled chaotic or dynamic systems, subject, for example, to the Butterfly Effect. In that case, we shouldn’t look for reductive explanations of their behaviour, including explanations that posit causal relationships that decompose into more fundamental ones (heuristics within heuristics). Instead, we should look at whole systems and predict their states with deductive-nomological explanations, using differential equations and initial conditions in the classic Newtonian way, which gives us geometric representations of how systemic patterns evolve over time.
Now Mikkel didn’t say all of this, but this is me reading between the lines. So no, I don’t think he’s going after the strawman of old-fashioned computationalism. The key questions are whether there are mechanisms in a chaotic system and whether the brain is chaotic.
Actually this strikes me as somewhat similar to what haig is arguing, and he’s certainly caught my attention in our debates on Conscious Entities. My position isn’t reductionist in a metaphysical sense, however–it’s heuristics all the way down, I think–but in a ‘future historical’ sense: mechanistic heuristics are simply far more powerful, and varieties of depersonalization will likely carry the day. As for DS or other mathematical understandings of systems and ‘prediction-only explanations,’ they do seem to offer a powerful alternative to mechanistic explanation, but as Craver would point out, manipulation seems to force the mechanical on us. Prediction doesn’t seem to make for full-blooded explanation. Think of all the equations you could use to describe/predict your car. If it breaks down…
But these alternatives in no way preserve or vindicate anything intentional. Just look at ‘moneyball’: it’s just as dehumanizing as mechanical understanding as far as I’m concerned.
I had to look up Craver, but this point about manipulationism is interesting. Does this mean you favour the mechanistic view of scientific explanation because of this pragmatic factor, because a causal system counts as a mechanism when our knowledge of the system is, in effect, useful, since in that case we know how the effects would differ were we to modify the causes? If this pragmatism comes in with mechanistic explanation, I think the view of knowledge on the table is inclusive enough to give intuitions their own cognitive credit for likewise being useful (especially to the masses, whose illusions may well nevertheless soon become untenable because of science).
“Instead, we should look at whole systems and predict their states with deductive-nomological explanations, using differential equations and initial conditions in the classic Newtonian way, which gives us geometric representations of how systemic patterns evolve over time.”
A differential system is made up of three parts: Variables, parameters, initial conditions.
Variables = “dimensionality” or the number of factors that influence the system. For every variable you have one equation of the form x′ = f(x) to determine that variable’s change.
Parameters = constants that do not have an equation in the system. They determine feedback strength between the variables.
Initial conditions = F(0) or the initial values for every variable
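The three parts can be made concrete with a toy system. The predator-prey equations, the parameter values, and the crude Euler integration step below are all illustrative assumptions, not anything from the discussion:

```python
# Toy two-variable system (predator-prey style), labelled in Mikkel's terms.

def step(x, y, a, b, c, d, dt=0.001):
    # Variables: x (prey) and y (predators) -- one equation per variable.
    dx = a * x - b * x * y   # prey growth minus predation
    dy = c * x * y - d * y   # predator growth minus die-off
    return x + dx * dt, y + dy * dt

# Parameters: constants with no equation of their own; they set the
# feedback strength between the variables.
a, b, c, d = 1.0, 0.5, 0.2, 0.6

# Initial conditions: F(0), the starting value of every variable.
x, y = 2.0, 1.0

# Follow one trajectory for 20 time units with a naive Euler scheme.
for _ in range(20000):
    x, y = step(x, y, a, b, c, d)
print(f"state at t=20: prey={x:.2f}, predators={y:.2f}")
```

Stepping from F(0) like this produces a single trajectory; the attractor, in the terms above, would be the geometry you get by characterizing all such trajectories at once.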
A system can exist without initial conditions, in which case you are concerned with the overall geometric representation called the attractor. A system with initial conditions leads to a stepwise calculation, called a trajectory. In theory the attractor = characterization of all trajectories over t->infinity. “Reality” and “prediction” are normally concerned with linear flow of time and our current state, or the trajectory that we are on.
This is where DST is cool: it states that trajectories are fundamentally unpredictable after some time proportional to the complexity of the attractor. Fundamentally as in, no amount of data can ever make it predictable (although the more data you have the more variables you can model and the more accurate it is the closer you can get to the fundamental limit).
However, the attractor itself can be estimated and *is* predictable, theoretically. Weather = trajectory, climate = attractor. If you have a good idea of the attractor you can make statistically valid inferences about some time in the future based on your current state. The issue is that the attractor is determined largely by the feedback parameters. Changing those is called a bifurcation.
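The trajectory/attractor/bifurcation distinction above can be illustrated with a toy system. The logistic map and all parameter values here are invented for illustration, standing in for a genuinely chaotic system:

```python
# Toy illustration: trajectories are pointwise unpredictable, but the
# attractor's statistics are stable -- until a parameter change
# (a bifurcation) replaces the attractor itself.

def run(r, x0, steps=5000, burn=500):
    x, out = x0, []
    for i in range(steps):
        x = r * x * (1.0 - x)
        if i >= burn:          # discard the transient; keep the attractor
            out.append(x)
    return out

# Same parameter r, different initial conditions: the two trajectories
# diverge pointwise, yet their long-run statistics agree.
a = run(3.9, 0.2)
b = run(3.9, 0.7)
mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
print(f"long-run means at r=3.9: {mean_a:.3f} vs {mean_b:.3f}")

# Changing the parameter is a bifurcation: at r = 3.2 the map settles
# onto a period-2 cycle instead of chaos -- a different attractor.
c = run(3.2, 0.2)
cycle = sorted(set(round(v, 6) for v in c[-100:]))
print(f"attractor after bifurcation (r=3.2): {cycle}")
```

This matches the weather/climate analogy: individual runs (weather) can’t be predicted far out, but statistics of the attractor (climate) can, so long as no bifurcation changes the geometry underneath you.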
Tying into my comment below, a system is stable and stationary if the parameters remain constant, so the statistical explanation (heuristic) assumes that what we really care about is characterizing the variables and their interaction, over different starting conditions. I.e. the attractor
My viewpoint is that in reality the world runs enormously on bifurcations, and knowledge about the attractor creates action — a bifurcation itself. In this case, the geometry changes; and the problem with heuristics is that we assume it doesn’t.
Ultimately it becomes a philosophical question about whether a system has an enormous number of variables that we just need to discover (more data) or whether it should be seen as fewer numbers of variables with constant bifurcations (data doesn’t help). If you have a mechanistic explanation (like synthetic equations) you can actually delineate where bifurcations fundamentally change the system, so you get the best of both worlds. This is why DST prefers mechanism over heuristics. But in the real world, the only way to discover mechanism is to use heuristics to make models and say they are mechanisms. My personal experience makes me doubt this is useful except under certain narrow conditions.
All of this is to say what we *should* do is focus less on prediction and more on setting up and interacting with systems so that we minimize bifurcations and certain types of feedback. Most of what we do (including in medicine) does the opposite. As I stated on your blog, we should nurture instead of control.
Thanks for that very helpful clarification, Mikkel. I was really just working there from Lee Smolin’s summary of what he calls the Newtonian paradigm of explanation, under which dynamical systems theory seems to fall, except that the latter takes chaos into account.
Hi, Mikkel. I’m way behind on the comments here, so I apologize for any oversights in my response. I guess I just don’t understand 1) Why you (seem to) assume that my arguments are deterministic (they are not), or why they need to be; or 2) Why you (again, seem to) assume that I am chronically over-estimating the power of the scientific paradigm.
Probably semantic baggage and assumptions about your perceived usefulness of BBT.
I have no issue with anything you have written about BBT directly, but generally “mechanistic” is a code word for “deterministic.” To be more precise, it is generally used to mean “reductionistic” and from that reductionist study then you get deterministic understanding.
As far as I can tell, the core of BBT can be correct without it being deterministic, but what I don’t fully understand is the *usefulness* of BBT if it’s not deterministic.
There is another type of [statistical] determinism that isn’t mechanistic, and that’s heuristic. By statistical determinism, I mean that although you may not understand all the parts of the system and be able to accurately know what will happen for a specific event, the system exhibits stationarity. Stationarity is a tricky concept because it is defined largely by itself: a process is stationary if the underlying statistics don’t change over time, and you test this by assuming the process is stationary. Since processes don’t have to be from a normal distribution, arguments occur about whether a process is stationary with an extremely complex distribution, or whether it is non-stationary and has simpler distributions stuck together.
Stationarity is largely in the eye of the beholder and fundamentally unprovable one way or the other.
While this seems like a technical or semantic argument, the implications are tremendous. If a process is stationary then by definition it can be characterized and largely predicted by heuristics such as neural networks. You might need enormous amounts of data to get an accurate heuristic, but it is theoretically possible. It’s the way that people can accept the world is stochastic but argue it’s still predictable.
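A minimal sketch of how such a stationarity check works, using synthetic data. The window count, series length, and drift size are arbitrary choices, which is exactly the “eye of the beholder” problem: the verdict depends on thresholds we pick ourselves.

```python
# Toy stationarity check: compare summary statistics across windows.
# A stationary series "passes"; a series with a drifting mean is flagged.

import random
random.seed(42)

def window_means(xs, k=4):
    # Split the series into k windows and return each window's mean.
    n = len(xs) // k
    return [sum(xs[i * n:(i + 1) * n]) / n for i in range(k)]

N = 4000
stationary = [random.gauss(0.0, 1.0) for _ in range(N)]
drifting = [random.gauss(3.0 * i / N, 1.0) for i in range(N)]  # mean drifts 0 -> 3

spreads = {}
for name, xs in [("stationary", stationary), ("drifting", drifting)]:
    ms = window_means(xs)
    spreads[name] = max(ms) - min(ms)
    print(f"{name:10s} spread of window means: {spreads[name]:.2f}")
```

Note the circularity flagged above: deciding that a large spread means “non-stationary” rather than “stationary with a complex distribution” is itself an assumption the test cannot justify.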
Thus, the entirety of the machine learning outlook on life implicitly assumes that core processes are stationary and many ML people feel that they don’t need to understand the sources of data, they just need a lot of it.
By contrast, if a process is non-stationary then it is not describable with first-order heuristics; at best you need heuristics about what heuristics to use, or heuristics about heuristics about heuristics… etc. At this point, understanding of the nature of the system itself is critical and largely intuitive. [Or as Ben says, the heuristics themselves are illusions.]
Moreover, in this case, the act of applying the knowledge obtained by the heuristic can be seen as changing the system itself. Based on your comment above about moneyball, I’m not sure you are getting the nuance of what haig and I are saying about systems theory.
In a stationary based mindset, moneyball (or high frequency trading) models can be generated that characterize some universal reality and then they are applied by people to get good results. In a systems based mindset, the act of applying the models is a feedback in itself, which then changes the system and makes it non-stationary, decreasing the relevancy of the heuristical models.
Both moneyball and high frequency trading models led to rapid adoption and quickly created a net zero sum game. In effect, the success of the strategy rendered it obsolete. At this point HFT would be a net loser if not for stupid social rules that give money just for playing and moneyball is passe: it’s back to fundamentals!
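The feedback loop described here can be put in a toy model. Everything below is invented: the total “edge,” the visibility threshold, and the doubling of adopters are assumptions chosen only to show the shape of the dynamic, not to model real markets or baseball:

```python
# Toy model of model-eroding feedback: a strategy's edge in a roughly
# zero-sum game is shared among its adopters, so visible success attracts
# imitators -- applying the heuristic changes the system it modelled.

TOTAL_EDGE = 0.10   # assumed total exploitable inefficiency
THRESHOLD = 0.005   # below this, the edge is too small to attract imitators

adopters = 1
history = []
for season in range(10):
    edge = TOTAL_EDGE / adopters   # the edge is split among adopters
    history.append(edge)
    if edge > THRESHOLD:
        adopters *= 2              # success is visible; imitators pile in

print([round(e, 4) for e in history])
```

The per-adopter edge decays toward the threshold and then flatlines: the model’s own success is the bifurcation that makes its training data obsolete, which a stationarity assumption cannot see.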
Systems theory is not contrary to mechanistic or heuristic explanations, it attempts to integrate both and prefers mechanistic ones when they can be found. However, it is a rumination on overall predictability and outlook about what is possible under *any* investigative tool.
In the thread with Ben you write, “No, we have BBT now as well, which is to say, a way to finally get behind these metacognitive myopias naturalistically, thus opening the prospect of finally hitching the human to the scientific revolution,” and above you say, “in a ‘future historical sense’: mechanistic heuristics are simply far more powerful, and varieties of depersonalization will likely carry the day.”
This is why I think you’re overestimating the power of science. The leading edge of science is very close to throwing its hands up and declaring these core issues fundamentally unsolvable. Strategic myopia (or what I termed literate ignorance on Ben’s blog) is going to be seen as wise, just as it has been in religious and philosophical traditions.
Society’s assumption that everything can be handled through more data and behavioral feedback is already having enormous detrimental effects and will soon be compounded. In my opinion, it could very well bring about the end of civilization, depending on which normative path of industrialization we collectively choose to explore.
I just don’t know/follow anyone who thinks of neural mechanism in the deterministic sense, and I’ve largely taken much of what you say as a given, but I see now you were simply hedging to make sure. Most of my exposure to DS approaches comes via Van Gelder and the representationalism debate, so if it sounds like I’m talking out of my ass, Mikkel, I likely am. Don’t hesitate to call me on it!
I suppose what I don’t see is how any of the considerations above impact the cognitive status of metacognition. Saying ‘science can only take us so far’ (which I agree with) is a very open-ended claim. For one, it simply does not imply the adequacy/applicability of traditional cognitive modes. For another, the claim, as robust as it seems generally, is pretty much impossible to justify for any given domain. You can never foreclose on the possibility that some new heuristic, like a paradigmatic reconceptualization of a problem ecology, for instance, will prove to be a game changer. ‘Strategic myopia,’ after all, is another way to describe heuristics (which I see as ‘adaptive neglect’)! Science, you could say, is a mountainous, dishevelled heap of strategic myopias, heuristic prosthetics we use to see around the limitations of our biologically fixed cognitive toolbox. As soon as we begin rewiring or redesigning the latter, or as soon as we hit the posthuman and merge with our instrumentation, then all bets are off as to what science can or cannot do in any one domain.
This is why I think science pretty much entails human extinction: even if the technological optimists are right, we’re done for.
This is why I think intentional conceptuality as traditionally metacognized is doomed. Its efficacy has never been clear. And the actual heuristics humans use to understand one another will be modelled mechanistically and mathematically.
All that said, I am interested in what you make of the ontology that falls out of BBT, which does away with the traditional dichotomies of subject and object and strands us with… you guessed it, systems!
Scott,
If you read through the ramblings of myself and haig you’ll see that I’ve explicitly agreed with all your points here.
The part I’m unclear about is what (if anything) you think we should use the insight to achieve externally or internally.
This is a non-trivial question, particularly given how the (current) default cognitive heuristics are characterized. For instance, Nudge Theory is now extremely influential in public policy and commerce. With Big Data and targeted profiles, mass manipulation (“customization”) of results is occurring to the point where our perception of reality is being dictated by what is perceived to be cognitively useful. Google is about to literally change maps based on who is asking it for directions or local features!
It is now becoming a fad to talk about these topics as facts and people are being encouraged to change their interactions with others and perception of what life is to accommodate them. Say this, do that, track everything. Use cognitive science to maximize your income, find the perfect partner and get in shape!
I think what haig and I are getting at is that there is a misguided expectation about what is possible by applying the (seemingly universal) heuristics, particularly because the advice is almost always status quo affirming. What if the problem isn’t in how people think, but how society is constructed?
People under-save for retirement and underperform the stock market due to many cognitive biases, but does that mean that they are wrong, or does it mean that the concepts of retirement and the stock market as constructed are unnatural and non-systemic? [They both only work under certain demographic and resource conditions that we are running headfirst into.]
I think we’re commenting about what heuristic people should use to evaluate the efficacy and wisdom of the other cognitive heuristics themselves. On that level, I think the downstream effects of the existential heuristic on the cognitive ones are interesting: i.e. that metacognition itself is affected by an explicit choice in believing one existential heuristic over another. I have some examples of this, but won’t go into it now.
I haven’t had nearly the time I would like to devote to this thread! You would think finishing a book would free up time, but the opposite is always the case…
I agree with all your concerns, and in particular,
“What if the problem isn’t in how people think, but how society is constructed?”
This question encapsulates the problem with what I call ‘akratic society,’ which I see as the inevitable result of the social dissociation of knowledge and experience (the problem that Neuropath is organized around). Despite the limitations of all these new techniques and technologies, they continue to leverage competitive advantages, and as a result will continue to command tremendous resources. The masses, however, will remain trapped in the labyrinth of experiential affirmation and pseudo-empowerment, colonized in ways they lack the conceptual resources to fathom, let alone believe. Since it’s only ever populations that can be reliably anticipated and ‘nudged,’ the ‘individual’ can always claim to be ‘free’ of these untoward influences, and we should expect it to be impossible to convince the public of the perniciousness of these trends. Techno-scientifically organized systems will continue extracting resources from experientially obsessed masses with impunity. I write fiction that is meant to raise consciousness of these issues, but I despair of anything being effective.
Then there’s the greater problem of using experientially grounded moral intuitions to assess and condemn all this.
“that metacognition itself is affected by an explicit choice in believing one existential heuristic over another. I have some examples of this, but won’t go into it now.”
It’s the examples that I’m most interested in, Mikkel! This is what I’ve been dogging Ben about as well. Ben thinks the tradition and metacognition can be salvaged for pragmatic purposes, whereas I’m convinced the only hope we have is to jump headlong into the abyss, gambling on something new.
@Mikkel:
> “All of this is to say what we *should* do is focus less on prediction and more on setting up and interacting with systems so that we minimize bifurcations and certain types of feedback. Most of what we do (including in medicine) does the opposite. As I stated on your blog, we should nurture instead of control.”
“Society’s assumption that everything can be handled through more data and behavioral feedback is already having enormous detrimental effects and will soon be compounded. In my opinion, it could very well bring about the end of civilization, depending on which normative path of industrialization we collectively choose to explore.”
This. These are exactly my sentiments, and you only get to this point by understanding complex systems. Nassim Taleb has been railing about this for over a decade; his views aren’t just about the financial system (where he was prescient), but about society and nature as a whole. I think more people are coming around to this view, but interestingly enough (and corroborating Nassim’s thesis) the changed views are bubbling up from the bottom: the engineers, tinkerers, and entrepreneurs are leading the way while academics and policy makers follow slowly behind, dragging their feet. Jaron Lanier’s last two books also tackle these problems, with more emphasis on information technology and humanistic technology (i.e. anti-singularity) than Taleb’s books.
The popular zeitgeist of ‘big data’ should eventually give way to ‘smart organization’, meaning that changing the organizational structures of institutions and technologies matters more than adding more data or relying on flawed predictive abilities. Google is seen as the exemplar of big data, but I’d argue they’ve been riding the wave of the initial change in how they organized the web, and any incremental offerings they’ve had since based on big data is marginal (same with Facebook). The problem is that ‘smart organization’ means smaller and more distributed/less centralized organizations which share and cooperate more than they compete, that are allowed to fail without intervention. This is the antithesis of what power hungry big corporations and governments want.
Off the top of my head, I see three possible scenarios for the future framed within this context of organizational change: 1.) networks of ‘smart organizations’ form from the bottom up and eventually displace incumbent institutions, and things (QOL, not necessarily GDP) start to improve at an accelerated pace. 2.) Incumbent institutions resist change and there will be, as Tyler Cowen’s book title has it, a great stagnation, where we, like the proverbial frog, slowly boil as the temperature rises (literally and figuratively) and things get incrementally worse. 3.) Incumbent institutions resist change, but instead of a slow stagnation there will be a series of increasingly larger shocks to the system, resulting in eventual collapse. There’s still hope that scenario #1 would come about after a long period of #2 or after #3 takes its course, but I’d rather avoid those if possible.
Haig,
Needless to say, I agree. I feel there are many people who do as well, but they don't know how to reconcile it with their daily lives and professions.
I am focusing on trying to develop techno-social business models to address the core needs of food, water, shelter, and basic necessities (medical too, but that's hard because it's so centralized and elitist) based on the principles you stated. It's slow going, but with every success we can pull in a few more people, and they start to live systemically, making us all stronger as a whole.
If you are interested I could elaborate more some place.
As for your last sentence, I am beyond hope. Resources are depleting too quickly, the climate is changing too much, and demographics are terrible. The real key for me was to act out of intent instead of outcome. Once I cast away any desire for the end state, I found I could live in the present and immediately reap the rewards of a healthier (still transitioning) lifestyle. This obvious happiness and lack of fear then inspire others far more than any treatise ever could.
One day the world will wake up from its slumber and choose whether to follow fear or joy, nothing more. Then time will pull the covers back over our eyes and do the rest.
@Mikkel
I'm not *that* pessimistic; I'm more cautiously optimistic, with a cultivated Stoic apatheia for worst-case scenarios. Demographically speaking, population growth is hitting negative rates, or soon will, in developed nations, such that total global population will probably reach some equilibrium at ~10 billion. The efficiency and economics of next-gen photovoltaics and batteries, along with safer and cheaper nuclear power (thorium, etc.), will displace coal and oil. Urban farming, GM crops, and synthetic meat will help the food supply, while energy-efficient desalination plants will help with the water supply. Charter cities (Romer) and similar urban developments, if done sustainably, can house more than the expected population in high quality-of-life areas, allowing for distribution of urban density. 3D printing and recycling will create a sustainable closed loop of materials and manufacturing.
Uncertainty remains in several areas: acidification of oceans and overfishing, extreme climate change (Freeman Dyson isn't too worried, unless we hit some major runaway event like releasing huge pockets of hydrogen sulfide under melted ice caps), political/economic crises causing deep depression and/or large-scale/nuclear war, and lastly a pandemic from a high-R₀ airborne pathogen like the 1918 flu.
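[An aside not in the original comment: 'R₀' (r-nought) is the basic reproduction number, the average number of secondary infections per case in a fully susceptible population. A minimal sketch of why high-R₀ pathogens are the worrying case, using the standard SIR-model herd-immunity relation 1 − 1/R₀; the example R₀ values are rough published estimates, cited only for illustration:]

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread,
    from the standard SIR-model relation 1 - 1/R0."""
    if r0 <= 1.0:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1.0 - 1.0 / r0

# Rough illustrative estimates: 1918 flu at R0 ~ 2, measles at R0 ~ 15.
print(f"R0=2:  {herd_immunity_threshold(2.0):.0%} must be immune")
print(f"R0=15: {herd_immunity_threshold(15.0):.0%} must be immune")
```

The higher R₀ climbs, the closer the required immune fraction gets to the whole population, which is why a highly transmissible airborne pathogen is singled out above.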
Bakker, we are all abuzz… is TUC finished??? (not counting rewrites, line-by-line edits, etc.)
I’m just wrapping up the first draft is all. Maybe seven or eight days to go. It’s been a long bloody haul!
Wow!
Hey Bakker, publish BBT already. This guy’s ripping you off, down to the whole “a theory doesn’t have to be satisfying to be correct” and “Subjective experience, in the theory, is something like a myth that the brain tells itself”
http://www.aeonmagazine.com/being-human/how-consciousness-works/
http://en.wikipedia.org/wiki/Michael_Graziano
I was getting ready to post the same article, but with so many comments, decided to do a “find” first. Glad I checked!
So, one commenter, EP, on the Aeon site claims, “Any actually interesting things from Bakker is just directly stolen from Metzinger.”
Sounds like fighting words to me.
Well, either EP hasn't read Metzinger or he hasn't read me. I just dropped a quick note to let him know Thomas disagrees. The Graziano piece is interesting, and strikes me as headed in the right direction, but insofar as it transposes the problem from one of explaining what we experience to one of explaining why we think we experience what we experience, without providing any clear theoretical model for how the latter might work, it just strikes me as Dennettian. I'm sure the book, which I have on order, will give us the details. Sans details, however, this approach is actually quite old, at least in philosophy-of-mind circles. What distinguishes BBT is that it takes the next step and explains (quite convincingly, I think) why it has to be the latter, 'explaining away' approach.
[…] Based on the current secular, scientific worldview, I think most prominent scientists (e.g. Weinberg's meaningless universe) and 'scientific' philosophers (e.g. Dennett) would already accept, without needing to understand more about the brain, that intentionality and normativity are useful fictions. Useful in that they still assume we can achieve the Enlightenment ideals of social progress built on top of reason and empiricism even in a pointless/meaningless universe, which critics like Nietzsche and, more recently, people like Alasdair MacIntyre and Thomas Nagel, have argued against. What your project does is add insult to injury, showing that not only is the universe pointless/meaningless, but our concept of the human is fatally incorrect, replacing folk-psychological ideas with inhuman mechanisms formed merely through efficacious heuristics and nothing more. With that final nail in the coffin, you go further than those scientists/philosophers by abandoning any hope for a coherent path to achieve the promises of the Enlightenment, and show how heuristics reign supreme from here on out and into the posthuman era, where efficacious heuristics could not give a damn about us humans (or what we think of as us humans). (see The Decline and Fall of the Noosphere) […]