Inchoroi Love Song
by rsbakker
Aphorism of the Day: compliments of Thomas Metzinger…
There’s this pernicious myth out there, one that bears too many similarities to the kinds of bootstrapping myths you find in popular culture more generally. The claim is that individuals and/or communities are makers of meaning.
Just think of all the narratives you've encountered where the hero has to own up to some difficult 'choice,' an easy, cowardly one that will lead to a dissolute, meaningless existence, and a difficult, courageous one that will lead to status, love, and the restoration of some traditional order. We are weaned on versions of what might be called the 'Human Potential Narrative,' stories that teach us to strive, strive, strive (which is to say, work-work-work—largely for the benefit of others) to become 'more than we are.' This is, without any real doubt, the dominant ideology of the liberal democratic West. This is largely why we tend to buy into our system as enthusiastically as we do, and this is largely why so many of us think we only have ourselves to blame when we almost inevitably fail to achieve our 'dreams.' The more our meritocracies seem to drip away, the more our aggrandizing myths seem to seize our imagination.
So perhaps it should come as no surprise that the bootstrapping impulse seems to so intimately inform so much transhumanist and posthumanist thought.
The value of science lies in the way it renders the natural world compliant to human desire. Science, whatever it is, means power over the natural. Since extracting what we need from our natural environments is what we are all about, biologically speaking, science has proven to be an almost miraculous boon. The twentieth century, however, provided us with the first real indications that our power over nature could possess catastrophic consequences, intended or otherwise. Nuclear Armageddon. Biological Apocalypse. Environmental Ragnarok. Pick your poison.
And the pharmakon is growing. Now we are entering an era which will see human nature become thoroughly compliant to human desire, and so dwell in the shadow of yet another catastrophic consequence: the Semantic Apocalypse.
The potential problem with rendering human nature compliant to human desire is quite obvious: given that human desire is rooted in human nature, the power to transform human nature according to human desire becomes the power to transform human desire according to human desire. This is a cornerstone of what troubles so-called 'bioconservatives' like Francis Fukuyama, for instance: the possibility of 'desire run amok'—or put differently, the breakdown of the consensual values required for liberal democratic society.
For the first time in human history, in other words, the biological basis of human desire will be put into play. Given that this is historically unprecedented, and given the degree to which social cohesion depends upon overlapping networks of consistent—or at the very least, compatible—desires, the threat seems quite clear. 'Designer desires' should have the same sinister ring as 'neurocosmetic surgery.' Imagine waking up and deciding what to wear as well as what to feel for the day.
Now it should be noted that pretty much everyone in the field understands the social necessity of regulating desire (value). What distinguishes bioconservatives like Fukuyama is the desire to prevent the problem of designer desires from even arising, to regulate, in effect, the technologies of human nature. Call this the Easy Answer. Even though it would likely be impossible in practice to regulate these technologies (because the market, not to mention the strategic, advantages would be too decisive), it certainly is easy to suggest in theory. Pass a law, perish the thought.
Fukuyama's myriad critics, on the other hand, have a harder row to hoe. What they need to provide is some kind of theoretical assurance that things won't go awry in the manner that Fukuyama fears. The strategy, at least from what little I've read, seems to be twofold: to argue, first, that human desire as it stands is biologically, historically, or conceptually insufficient and so only stands to gain from technological augmentation and the resulting cultural transformations, and second, that desire is self-regulating in some way.
So with regard to the first strategy, you find Nick Bostrom, for instance, continually characterizing human desire as it exists as a kind of biological cage: if only we could set aside our fears, we could let desire fly into the vast possibility space of transhuman potential. Or Donna Haraway, continually characterizing human desire as it exists as a kind of socio-biological cage: the fear should be embraced as belonging to the liberating potential of transcending the oppressive conceptual and political orthodoxy of our existing values.
With reference to the second strategy, you find, to put it crudely, the wanker's predictable and perhaps obligatory faith in wank. For transhumanists like Bostrom, this faith seems to be grounded in the Enlightenment link between autonomy and reason. As Kenan Malik writes in his review of Fukuyama's Our Posthuman Future, humans "possess the capacity to rise above their natural inclinations and, through the use of reason, to shape their values. But if this is so, then no amount of biotechnological intervention will transform our fundamental values." For other posthumanists, particularly those with poststructuralist commitments, you generally find varying degrees of residual commitment to these selfsame values, only refracted through the funhouse lens of some specific diagnostic cultural critique. So for posthumanists like Cary Wolfe, for instance, who place animal suffering on a par with human suffering, the present situation is simply so horrific that any exit has to be a good exit. Anything that forces society to abandon the conceptual cage of the 'human' and the horrifying crimes that it licenses is a good thing.
Needless to say, we tend to be pretty cynical about the 'power of reason' to 'bootstrap human desire' here at Three Pound Brain. As Hume guessed, and as cognitive psychology is discovering, reason seems primarily invested in rationalizing desire. To use Haidt's metaphor, these guys are putting the elephant on the back of the rider.
But I literally think that all of this, from Fukuyama to Bostrom to Haraway and Wolfe, is beside the point. Why? Because no one—including them—knows what the fuck they are talking about.
Strong words, I know, but I mean them quite literally.
Should we count contemporary philosophical theories of the human as knowledge? Of course not. But the sad fact is that this is all we've got. Opinions abound, the way they always abound, and the wild diversity of claims is enough to beggar belief. Until recently, almost all theoretical claims regarding the human were prescientific in a very profound sense. All things being equal, the overarching reason why we can't definitively decide between varying philosophical conceptions of the 'human' is the same reason no other prescientific speculation could arbitrate between its competing claims: no one knew what they were talking about. Of course, people in the grip of this or that interpretation are prone to forget as much, to treat abject guesswork as knowledge, but this is just what we do: buy our own bullshit.
The human, as yet, eludes anything resembling thorough scientific understanding. The speculative discourses devoted to it, such as philosophy, literature, and so on, contradict one another in innumerable ways. Perhaps no concept is so wildly overdetermined. When we talk about the 'posthuman,' there's a very real sense in which we are talking about the 'post-whatchamacallit.' As yet, we really have no idea just what it is that science is set to transform. Aside from low-resolution facts, all we really know about the 'human' as we intuit it is that we cannot trust our intuition. As Eric Schwitzgebel puts it, "There are major lacunae in our self-knowledge that are not easily filled in, and we make gross, enduring mistakes about even the most basic features of our currently ongoing conscious experience, even in favorable circumstances of careful reflection, with distressing regularity."
It really is the case that science might have more humbling, epochal revelations to make, perhaps the most dreadful of all revelations, a final 'wound' (to use Freud's famous image) which kills far more than our narcissism. Think about it. Creeping medicalization. Corporations retooling themselves in ways that manage you as a mechanism. The factory farm is becoming the assembly plant as we speak.
Should we worry that this is the very trend we might expect given the truth of nihilism (the trend given narrative bones in Neuropath)? Should we write it off as mere coincidence? Or should we prepare? This very experience you are having now really could be a kind of informatic dream, systematically connected to actual, effective processes of the brain, but hopelessly distorted—and certainly not an 'agent' in any obvious intentional sense. And the more we learn, the more plausible this seems to become.
When it comes to this debate as opposed to the posthuman, I find myself stranded, quite against my wishes, on the side that thinks science will show how the 'human' as we intuit it is largely hallucinatory, an artifact of any number of neuromechanical kluges. I could be wrong. Christ, I pray that I'm wrong. But no amount of neoenlightenment or poststructuralist speculation can decide the issue one way or another. The fact is, for better or worse, the question of human meaning has now become an empirical one. The question of the posthuman is largely a question of the consequences of neuromechanical intervention, of how we will change ourselves once we know ourselves. And this means the question of meaning is prior to the question of the posthuman, both practically and theoretically. To talk about transhuman or posthuman 'value' is to assume there will be such a thing. If meaning and value are parochial to the way humans are, then being posthuman could be tantamount to being post-value as well.
In this respect, with the glaring exception of David Roden, almost everything I've encountered in the posthuman literature so far, even the stuff that wears its radicality on its theoretical sleeve, suffers from what might be called the 'Star Trek fallacy.' They all assume that intentionality will survive the break with evolved biology, that the future will be familiar enough for the intentional kernel of our dramas to live on. But the discontinuities awaiting us are existential in every sense, including the conceptual. Why should science serve up anything other than a knowledge utterly indifferent to our hopes and desires? Isn't that what we pay it for?
Experience and knowledge stand at a crossroads. This is the explosive time, the bewilderment that comes before the reckoning. We cannot assume that meaning transcends biological humanity as it stands, or that the hopes invested in some set of contemporary scruples can be pinned on a future indifferent to all scruple. We cannot presume that ‘right desire,’ let alone reason, is sure to survive what comes.
The future of value must be decided before it can be divined.
This is based on some pretty superficial readings. I invite anyone steeped in the literature to flag any mischaracterizations.
As a sidenote, I’ve decided to polish up the first chapter of TUC for the SA forum. I’ll keep you updated!
Who is the “prepare” link paper by? Pete Wolfendale?
It got me thinking: you know how BBT predicts that consciousness is structured in such a way as to always seem complete? The essay mentions dreams ("offline dreams"). They are a perfect example! Even when the picture presented doesn't make one whit of sense, even when everything is 'blurry' in that dream-like way… we do not question its completeness and realness*.
This may all be wank, but it’s good shit.
*Lucid dreams aside.
Peter has a genuine gift for making wank – even of the Continental variety – crystalline. Check out his Deleuze paper when you get a chance. It’s the reason why I invited him to critique me and Roger as a guest blogger in the near future. He’s another Brassier in the making.
Great point about anosognosia in dreams. Metzinger actually discusses this in reference to ‘autoepistemic closure,’ if I remember correctly. One of the things I like about Peter’s paper is the way it lays all the intentional commitments on the line – everything (at least by BBT lights) is so obviously ‘inside out.’ The very fact that functionalism assumes identity across substrates or implementations shouts the absence of distinction – which is to say, informatic privation. I’m starting to think that anything that plucks identities (functional or personal or what have you) out of the informatic morass of natural environments bears the Mark of Cain… BBT sees tokens all the way down.
I once had a dream where I crashed a car. I got out, sensing everyone around was going to be really upset, so I searched desperately for a solution and… to calm them I said, 'It's all right, it's just a dream!'
Why did I need to calm them if I knew they were dream fragments? Well, if they had gotten upset at that point, it would really have upset me, that’s for sure. Even if not an actual answer.
Seconds afterward I realised what I said and then the dream broke and I woke up.
I've mentioned to my GF that my dreams have a recurring motif of mechanical failure: cars fail to brake, airplanes inevitably drop, guns don't fire (when I'm doing the shooting anyhow). I wonder how common this is? Or perhaps there is some bias at work… we only remember the dreams where shit hits the fan?
This is pretty common if I remember correctly: up there with punches that do no damage. I’m trying to recall where I read this though – I bump into so much garbage.
Do you know of anyone researching the difficulty/inability to remember sources, Jorge?
No, I only personally know one person working on a PhD in brain and cognitive sciences, and she’s focused on addiction. A very brief search on PubMed for ‘misattribution’ yielded all sorts of shit, but nothing on that particular sin of memory.
Since I’m plagued by annoying lucid dreams I can assure you that you don’t get that awareness because the dream makes less sense than usual.
It's this experience that also made me notice that once you are aware you're in a dream you suddenly have a desire, even a feeling of suffocation, to get out ASAP. That becomes the first priority because you suddenly feel vulnerable and detached from the real world.
Which means that if you could perceive reality as a similar faked space you’d probably get a similar urge to get out. Which is essentially the other side of sufficiency and that confirms it.
For example: in lucid dreams a test that always works for me is to look at an analog clock. If you're dreaming, every time you look at the clock you see the hour changing, or even the numbers repeated or in the wrong order.
In one of these dreams I was in complete awareness, in my own room. I perceived everything. And then when I woke up I suddenly realized that the room was completely different from my own. Different furniture, different arrangement. Yet this was one of the most "lucid" dreams I had, and I was still largely deluded even while awake in the dream. I was absolutely certain it was a dream, and yet I couldn't tell you what was wrong in it. It was my room in the dream and I had no way to deny it.
You WILL believe whatever your brain wants, there’s no escape.
About mechanical things not working in dreams: that always happens to me. Including doors that do not close (for example, the door being smaller than the frame). But I always interpreted this as the projection of a fear: you want the gun to fire, but it doesn't.
Ah, forgot to say another thing I noticed:
there seems to be a hierarchy of dreams. In the sense that sometimes it happens that you're having a lucid dream and desperately trying to wake up. You do, but you don't realize that you are just having a new dream.
What I noticed is that this is always one-directional. When you "wake up" you're always certain you were having a dream. No doubt. The same obviously happens when you wake up "for real". You are suddenly certain you were dreaming.
Basically you can always and only be certain about a state you just abandoned, but not about the one you’re currently in.
P.S.
Sorry for the split comments, but I always remember something else. I say it's one-directional because when you wake up from one of these states you carry the "memory" of the previous one. Memory is instead erased if you move in the other direction.
For example, in my lucid dreams I could be totally aware, but if I try to think about what I was doing before I fell asleep, I can't. And I feel like I'm slamming my face into some kind of psychic barrier.
(though I guess this breaks a bit the illusion of sufficiency)
Hm… a few random thoughts I’d like to share:
1)
It can be argued that any system capable of possessing something resembling a value system either already has, or will in short order develop, a "value" that ensures a degree of preservation (or at least formal backward compatibility) of its value system – simply because, lacking such a "value-fixing value", the system will have its value system changing constantly up until the moment it implements one.
Humans most definitely possess numerous value preservation mechanisms.
It thus stands to reason that as long as the "more efficient" posthuman is derived from the human, it will inherit a degree of values from its predecessor material, and will seek to protect those from erosion at least as long as that does not come at an unacceptable cost (if it does, then such values are boned. Fucked. Existentially doomed. Creatures with "less expensive" values will either eat the "conservative"'s breakfast or the "conservative" him/her/itself).
Thus….
as long as the values we so cherish are not too expensive in terms of the resources needed to maintain them, as compared to other values capable of giving rise to entities of comparable power over "reality/nature", we can reasonably assume that they will persevere, though perhaps in a fairly strongly altered form and/or as beloved heirlooms and "pets" of the posthuman entities derived from the people bearing such values.
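For what it's worth, the "value-fixing value" dynamic in 1) is simple enough to simulate. A toy sketch, entirely my own and with invented value names: values drift under random revision until a value that protects the value system happens to grow strong enough to lock everything in.

```python
import random

# Toy model of the "value-fixing value" argument above. All names are
# invented for illustration; this is not a claim about real value systems.
values = {"curiosity": 0.5, "comfort": 0.5, "fixation": 0.0}

for step in range(10000):
    if values["fixation"] > 0.9:
        # A strong value-fixing value has arisen: drift stops here.
        print(f"locked in at step {step}: {values}")
        break
    # Lacking a strong value-fixing value, every value - including
    # "fixation" itself - remains open to constant random revision.
    key = random.choice(list(values))
    values[key] = min(1.0, max(0.0, values[key] + random.uniform(-0.1, 0.1)))
else:
    print(f"still drifting after 10000 steps: {values}")
```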
2)
However, I do think that the belief that humans indeed share a lot of “hardwired” values is a nice comfortable pipe dream, not unlike a kind benevolent god, friendly space aliens, or dating Anie Cruz 😉
You don't need to move all too far from this very blog to find evidence for the existence of people who, relative to you and me, live in what I call Lovecraft County, a perceived reality radically different from the one we are witnessing. Such people, facing "posthuman opportunity", will seek to preserve their Lovecraft County values, perceptions and beliefs, and it's not entirely obvious that their Lovecraft County values are significantly more "expensive" than ours, especially if you allow some optimization wiggle room for both "value sets" (and, of course, there's the issue that from "their side of the fence", it's us who are the Lovecraft County residents while they are the ones inhabiting the Real Reality 😉 )
Oh, and Scott, please please please please make a timed-edit function – or at least a preview function 🙂
I don’t even know what these mean/are!
Computer spells you can cast on your blog to improve the rather unkind comment system 😉
Aye, I get that much. But where is one to find such dulcet arcana?
More seriously, wordstress has "plugins" which implement new functionality – functionality such as the ability to preview comments before posting, to edit them for a limited time after posting, a user-friendly quoting mechanism and such, thus reducing the amount of clusterfuckiness in comment-based discussions.
Check them out http://wordpress.org/extend/plugins/
Gracias… When I get some time!
But you're making the very same error of making your present experience the frame for any possible experience or, for that matter, for any interpretation of other intelligent systems. If the 'experience of value' is the result of a human incapacity, and the whole drive to become transposthuman is to increase human capacity, then it's not a move to 'new and improved values' but a move beyond value altogether. I know it's hard to imagine, but that's precisely my point: we are literally talking about the unimaginable.
I love ‘Lovecraft County’!
I guess I just define “value” differently.
A system with "absolutely no values" will not "prefer" any future state over another (even if technically capable of predicting the future very effectively) and will not form complex plans for coherent actions. Such a system will have the behavioral complexity of a bacterium or even a virus – not that such an existence is impossible (we know for a fact that it is possible, viruses exist after all), but it is a fairly limited form of existence, with very limited capacity to affect other systems / reality.
As soon as a system develops both the capacity to predict possible futures and a mechanism for preferring some "possible futures" over other "possible futures", it can be said to have some sort of "values".
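That definition is concrete enough to sketch in code. A minimal toy of my own (not a standard model): the system "has values" exactly insofar as it predicts possible futures and prefers some over others.

```python
# Toy rendering of the definition above: predict possible futures, prefer
# some over others. Names and numbers are invented for illustration.

def predict_futures(energy):
    """Enumerate the futures reachable by each available action."""
    return {"wait": energy, "eat": energy + 10, "wander": energy - 1}

def preference(future_energy):
    """The 'values': a ranking over predicted futures (higher is better)."""
    return future_energy

def choose(energy):
    futures = predict_futures(energy)
    # Preferring one predicted future over another is, on this definition,
    # all it takes to ascribe some sort of "values" to the system.
    return max(futures, key=lambda action: preference(futures[action]))

print(choose(energy=5))  # -> "eat"
```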
Take Lovecraft's Cthulhu. Cthulhu has the (informed) property of being "cognitively closed" to human understanding, but from the text we can infer some of Cthulhu's values – for instance, that Cthulhu values waking up over continuing the peculiar undead slumber state 🙂
Nope. You really are letting the bottleneck of experience get the better of your imagination. Otherwise, you would be saying the question of nihilism is the question of whether humans are viruses.
Think about the way the suitability of terms like 'prefer' seems to increase in proportion to the complexity of the systems you raise as examples. The more the bottom-up causal complexity of systems exceeds our ability to readily comprehend, the more we turn to intentional notions – 'skyhooks' as Dennett calls them. 'Preference' is simply a compensatory heuristic (according to BBT), a way to sum a certain kind of cognitive incapacity. Once we're decked out with exaflops of surplus processing power, then we will be able to see we were talking about viruses and bacteria all along – which is to say, mechanisms. Nothing is 'preferred' and nothing is 'chosen,' we just happen the way we happen, and arrive at the 'possible' future we arrive at.
By my lights, you’re simply anthropomorphizing Cthulhu rather than taking Lovecraft at his word!
Hm, my point about viruses was intended to convey a slightly different notion.
Not that humans aren’t mechanisms (they are mechanisms, and viruses are mechanisms), but that there is a different type of interaction with the world at play here, one which viruses are fundamentally incapable of. Viruses do not remember, and do not plan.
I like roombas because in such discussions the simple vacuuming bot can replace humans without losing much, if anything.
Both roomba and viruses are clearly “mechanisms” – no argument here.
Roomba, unlike viruses, has memory and a certain limited capacity for planning its routes after it has finished "learning" its surroundings.
Thus, roomba can be said to possess a certain set of capacities viruses simply do not have, which includes memory, modelling and, thus, planning (there’s a vac-bot with far more sophisticated planning ability on the market already BTW, forgot the name).
It does not mean that roomba is "transcendent" (though I do like the mental image); it simply means that roomba has an ability to memorize its surroundings, thus gaining an "internal model", and to act in accordance with it to attain a certain state, something viruses have no means of doing.
"'Preference' is simply a compensatory heuristic (according to BBT), a way to sum a certain kind of cognitive incapacity. Once we're decked out with exaflops of surplus processing power, then we will be able to see we were talking about viruses and bacteria all along – which is to say, mechanisms."
Excuse me, but I think there is a terminological / semantic disconnect here.
1) Do we (by "we" I mean mankind, not "you and me") have a full account of the way the Google search engine works, the capacity to enumerate and understand all its components and internal states?
2) Is the Google engine a mechanism of sorts?
3) Can we say the Google engine has a "preference" towards certain types of data structures, as indicated by putting them "first" in its output?
If any of those is a “NO”, then this is really getting interesting 😀
If all of those are "yes", then mechanisms with completely enumerated and understood composition and functioning can be said to have "preferences" (I suspect that I am using "preference" the way programmers see preferences, not the way philosophers see preferences 😉 )
One could argue that viral populations have preferences, even if individual virions do not.
” By my lights, you’re simply anthropomorphizing Cthulhu rather than taking Lovecraft at his word! ”
Well, for starters, Cthulhu is among the most anthropomorphic of HPL’s creations, second only to Nyarly.
Cthulhu is said to have a rather humanoid body plan, and in terms of mindset is at least capable of formulating a set of suggestions and planting them into the minds of susceptible humans (most notably cultists) when most active. That suggests that Cthulhu can interact with humans in meaningful ways (though the characters don’t seem to enjoy it all that much)
At most, if the Old Castro account is supposed to be taken as more or less reliable, there's quite a lot we can say about Cthulhu's attitude and even "values" (for instance, that Cthulhu is a radical information openness proponent, willing to give everyone access to maddening power and insight whether they like it or not. Kinda like an Open Source enthusiast on steroids. Which means that Cthulhu was the good guy/girl all along 😉 )
I'm thinking what's being referred to here is looking at things with a purely mechanical understanding. Imagine reality as being like a computer screen, with a neat grid. Everything is just a pixel – there is no roomba, there is no virus, there is no spoon. There are pixels. More pixels. Then more pixels. The way we shortcut and call a cluster that apparently is a roomba is a reflection of the information bottleneck (I think?). Maybe think of a 2X2 grid of pixels (only colours are black or white). You can grasp all the combinations easily enough. And yet see nothing in it, it's just combinations. Once you expand the processing – well, it gets bigger than 2X2 – but seeing nothing in it continues. A bit Doctor Manhattan'ish.
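For concreteness, the 2X2 case really is tiny. A throwaway sketch of my own: sixteen combinations in all, and nothing in the enumeration itself picks out a 'roomba' or a 'spoon'.

```python
from itertools import product

# The 2x2 black/white grid: 2**4 = 16 combinations, few enough to grasp
# at a glance. The list contains only combinations - nothing in it says
# which pattern "is" anything.
grids = list(product("BW", repeat=4))
print(len(grids))  # 16
for top_left, top_right, bottom_left, bottom_right in grids[:4]:
    print(top_left + top_right)
    print(bottom_left + bottom_right)
    print()
```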
Though I wonder how far you can take that – if you 'die'/your processing breaks, well, then you are dead. Maybe you have no concern for that as this wonder processor, but I'm guessing those who have a concern for not dying are more likely to not end up dead. Darwinism's bottleneck grip seems to continue (an enforced information bottleneck, of being forced into the frame of thinking how to survive). Though if you don't care and you take out all other species (ie, all humans are dead and you don't even let some other animal species have the chance to evolve sentience), then subjectively Darwinism isn't some sort of barrier that remains for those who care, either (yeah, Darwinism is still there/a barrier, but if all who care are dead, it doesn't do them any good as a barrier, so I'll count it as not being a barrier).
I'd say that "don't care" and "take out all other species" aren't really compatible.
For some reason I, the hypothetical value-less processor, decided to spend some time and energy on coming up with a methodology for reliably extinguishing life, then putting it into practice. Seems like a bit too much involvement for someone who doesn't give a single hoot either way.
Now, if they got taken out by something else (something that didn't bother with me), I, the value-less processor, will stick around until some space rock falls and "kills" me (I saw it coming and could have avoided it, but… I lack values, so why bother?)
Also, I'm a bit iffy as to the whole "You can grasp all the combinations easily enough. And yet see nothing in it, it's just combinations. Once you expand the processing – well, it gets bigger than 2X2 – but seeing nothing in it continues". That seems like… passive reflection of sorts. Kinda like a system that records everything, but doesn't, well, do anything other than obtaining very high-fidelity records.
If it doesn’t detect and classify any patterns, then it’s merely a fancy recorder.
If it does assign classifications (doesn’t matter what those are), it can be said to have some kind of intentionality, and some kind of capacity for aboutness (it does sort patterns it grasps “in the pixels” into different stacks, after all, which is pretty much what aboutness-as-used-in-information-retrieval is all about. Sorry, can’t resist :)).
Yes, that kinda implies that “aboutness” and “intentionality” are more like groups of concepts than specific concepts, and “human” intentionality is merely one of the members of such a group.
I'd say that "don't care" and "take out all other species" aren't really compatible.
I guess I should have more explicitly described the 'if' in 'if you take out…'. I'm talking by chance. I mean, a machine gun takes a lot of focus to build. But once it's built, it's built – if an epileptic is spasm-gripping onto the grip and trigger and spasmodically waving it around as it fires, are you going to describe that as 'intentful' when it kills a bunch of people? (I'm using an epileptic as a possible equivalent to a recently brain-edited person). Granted, without the weapons that science affords us, yeah, how likely is it to see human-wide extinction from spasming? Let alone all life extinguished? Without the weapons of science, not likely, I'd agree. Anyway, I was describing a limit, one based on darwinism. But even that limit could evaporate. Also it's a sucky limit to base things on – even one life at risk is bad, let alone one life actually lost. But I'm being confusing, because I raised the darwinism bottleneck, then argued with it. But it was a fun argument while it lasted! 😉
That seems like… passive reflection of sorts. Kinda like a system that records everything, but doesn't, well, do anything other than obtaining very high-fidelity records.
And…apart from darwinism, so what?
I mean, the baseline we're drawing here isn't that if certain people modify certain of their emotions, that's a bad thing. Because we'd judge whether they are bad from our own emotions – and what are we going to do then? Our emotions are the good ones, so hey, let's modify them? Then fly through space in a golden-boned space whale (that falls to earth… omg, just realised a potential Hitchhiker's Guide to the Galaxy reference in the PON series!!??)
The point (or maybe just my point) is to instead illustrate 'the cold', where you are not going to have some sort of values to guide you. There's just nothing there – only what you've got right now is what you've got. Edit that and… your only guide will be the cold. Right now, our 'lords' are a bunch of crazy emotions that developed over millennia. Edit and your lord is … nihilism. Ice.
To put it in dramatic terms, anyway.
” I mean the baseline were drawing here isn’t that if certain people modify certain of their emotions, that’s a bad thing. Because we’d judge if they are bad from our own emotions – and what are we going to do then? Our emotions are the good ones, so hey, lets modify them? Then fly through space in a golden boned space whale ”
Maybe. Who knows ? I never said that by recursively tweaking human values we won’t eventually arrive at something that is, by baseline assessment, really fucking weird. But that weird thing, assuming it is at all capable of activity, would still have values, in the most generic sense.
"The point (or maybe just my point) is to instead illustrate 'the cold', where you are not going to have some sort of values to guide you. There's just nothing there – only what you've got right now is what you've got. Edit that and… your only guide will be the cold. Right now, our 'lords' are a bunch of crazy emotions that developed over millennia. Edit and your lord is … nihilism. Ice."
Wouldn't the new modified states be the guide (assuming you didn't mean "purge" when you said "edit" 😉 )?
And then, after the next tweak (assuming the new "values" still lead one to seek more tweaking), again new ones, and then new ones, etc, indefinitely or until a catastrophic failure happens / a strong "value preservation value/incentive" is arrived at?
Just as Karl Popper complained of promissory materialism, I think you are guilty of promissory nihilism. You point out that some writers assume that intentionality will survive the terrible discoveries of neuroscience. Aren’t you just assuming the opposite?
I think that there is also a solid chance that people have divergent definitions of “intentionality”.
Let’s run a thought experiment.
Let's say we have just bought a roomba. Roomba clearly has private internal programmatic states (that are concealed from you) which we can very well consider a type of mental state. In recent models, some of those states arise when sensors detect "dirt", and thus programmatic states become directed at this "dirt" and specifically at making the roomba drive through places where there is apparent "dirt" more often.
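A toy rendering of that setup, entirely my own invention (not actual iRobot code): private internal states that become directed at the "dirt" the sensors report, and a route plan that favors apparently dirty places.

```python
# Hypothetical roomba for the thought experiment: the internal "dirt map"
# is private programmatic state, directed at external "dirt" via sensors.

class ToyRoomba:
    def __init__(self):
        self._dirt_map = {}  # private internal state, concealed from the user

    def sense(self, cell, dirt_level):
        # An internal state arises that is directed at this "dirt".
        self._dirt_map[cell] = dirt_level

    def plan_route(self):
        # Drive through apparently dirty places more often.
        return sorted(self._dirt_map, key=self._dirt_map.get, reverse=True)

bot = ToyRoomba()
bot.sense((0, 0), dirt_level=3)
bot.sense((1, 2), dirt_level=7)
print(bot.plan_route())  # [(1, 2), (0, 0)] - dirtiest cell first
```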
Given that roomba has private internal states and can direct them at specific artifacts in the external world (which it finds via sensors), can we say that roomba has "intentionality"?
If no, why?
Most researchers, I think, would want to say yes. But the example actually begs the question of whether ANYTHING exhibits intentionality, which is to say, whether WE actually have ‘internal representational states’ at all, or whether it just seems that way because of how little information makes it to consciousness. ‘Aboutness,’ on this account, is a kind of ‘compression heuristic,’ something the conscious brain conjures to compensate for its inability to track the causal histories that are actually responsible. A kind of inverted version of ‘rules.’
Well, roomba is either non-conscious or has a very limited consciousness (depending on how you define consciousness, I for one would argue that many individual computer components already have a very limited consciousness 😉 ), and yet, it can be claimed to have a certain intentional stance on things (specifically, on “dirt” as detected by its sensors).
And we know for a fact roomba has private internal states – human-designed ones 🙂
Do humans "really" have "internal states"? The very fact that we form an internal model of "reality" and make predictions based on it clearly indicates that we have, at the very least, as much capacity for internal states as a roomba.
It would be very problematic to go about predicting the future and forming complex behavior with no "internal model" of reality – you would basically be reduced to the memoryless, predictionless behavior of an average singular microbe (not to be confused with a population of microbes, which isn't exactly memoryless if a certain friendly sorta-ex microbiologist is to be trusted).
Not at all. In the post I'm saying this is a very real issue that has yet to be decided. Personally, I fear this will be the 'final wound' that science delivers to the traditional image, but I remain stubbornly committed to moral realism.
I think moral realism requires something more than "intentionality" and "consciousness".
Intentionality may very well be a "real" thing, and it could very likely be inherent to all cybernetic systems with complex proactive behaviors (I have yet to find an example of such a system that would clearly lack some kind of intentionality, even if a meek one), but that would not make statements about the "morality" of certain outcomes any less fragile and arbitrary.
In the basic sense of ‘right and wrong’? I’m not so sure. Part of the reason I spend so much time working over semantics in “The Last Magic Show” is that the RULES of mathematics and logic seem to be so unassailable. I try to show that this apparent unassailability is the very thing BBT predicts (under the aegis of ‘sufficiency’ like everything else).
I’d say math and logic are so unassailable because they are so profoundly all-encompassing. They are, however, quite devoid of anything remotely “morally normative” (much like physics, which is also quite unassailable).
Aboutness may or may not be fragile (I guess it’s more about definitions and semantic trickery than anything else), but morality certainly is fragile. You don’t need no advanced neuroscience to undermine any moral system – that’s why I enjoy misotheism so fucking much 🙂
Mh, this is quite interesting.
For the first time in human history, in other words, the biological basis of human desire will be put into play.
First, it reminds me of a manga by Tetsuya Egawa titled Last Man, which went on as a perverted/sexual escalation when "desire" was made absolute (but Egawa is actually great).
Then it makes me think this is happening now. Think, for example, of the LGBT community and the desire to have kids, adopted or otherwise. I see this as the essence of human desire transcending biological limits. Or even the will to affirm personal desire over biological facts.
But I’m actually FOR this stuff, because I believe human values can be more “noble” than natural ones (nature’s a bitch).
And then I can’t avoid seeing this from the Kabbalistic perspective, since the Kabbalah is centered on the mechanics of desire.
In their model, the rules that regulate the mechanics of desire indeed self-regulate. And I’m actually quite convinced that their description corresponds to the truth.
What they say is: if you "trap" the desire in yourself, as just a form of personal fulfillment, then you simply get nothing out of it, if not an increased sense of "lack". You only get a continuous rise of that desire toward something else, never feeling quite content and satisfied. Basically, by playing it you only augment your dissatisfaction and need for more, more, more. Desire is made to only grow without being fulfilled, being proportional to suffering (for not reaching what you want).
The only way (they say) to get true satisfaction through desire is solely through altruistic forms. That way you don't trap and choke the desire; it flows through you and "enlightens" you. So the greatest fulfillment isn't in trapping more and more of that endless struggle, but in being a vehicle for it, distributing it around and feeling part of something greater and shared.
And in spite of the metaphysics I do believe that it works like that because it does feel that way.
So I’d say that at least at this level of perception, we’re built so that we’re driven toward communities. And that’s essentially the “story” that most religions tell us: that we are actually competently programmed, and built so things go a certain way (so no worries and have faith).
Actually the Kabbalists are all for these scientific processes, because they believe that nature naturally drives things to work a certain way. And unbounded desire will lead more and more to a need for spirituality (since it builds a desire that can't be quenched through other means).
But I guess your post-semantic world is way beyond this, because it isn't simply about unbounded desire, but about mastering its own mechanics. And so not mastering God's order, but actually manipulating those very rules…
Fascinating stuff. I agree that the Kabbalistic account seems to capture the phenomenology of desire well enough.
But yes, the problem I’m describing does step outside of this circle. It’s literally saying that the ‘darkness that comes before’ is constitutive of our understanding of intentional phenomena like desires, rules, representations, and so on, and that the posthuman, by shining light on this darkness, will in essence wipe OUR experiential palette clean, and create something OTHER, nonintentional, and quite terrifying to us.
The overarching point is simply that consciousness is far from what we intuitively think it is, and that neuroscience may very well relativize it the way astrophysics has relativized the world – which is to say, show it to be far smaller and more insignificant than we thought.
To schematize more.
Certain religions warn against and forbid meddling with certain things considered taboo. Kabbalah instead says that you're free to do whatever, since it all contributes to the same end.
But what if both are ways to "rationalize desire", one way or another, and science allows us to go so deep as to rewrite the rules?
I mean, religion can even give you a good description of the way you work and to what end. We could say "it works" and that it's good wisdom. But that status could be surpassed if you get to manipulate the actual rules.
Take Kabbalah as a “meaning-full” “science” of the human being. What happens when the human being is no more?
But I don't agree with you that we don't know the "human". Probably just a semantic difference, but by "human" I mean EXACTLY what we perceive right now. It's a definition that exists on this side of consciousness and describes only consciousness.
The problem I see is that once you transcend the human as we understand it, you don't simply reach out to the truth about yourself (or discover what's really hidden from consciousness); you are entirely out of the picture of reality, and your power equals *total* omnipotence. The body doesn't become more prominent, it ceases to exist.
Post-semantic doesn’t mean subject to a different set of rules. It means without rules entirely. Out of the frame of the picture.
The human is hallucinatory, but why believe that the hallucinatory isn't "good" or preferable?
Metaphysically speaking: what if the singularity isn’t our destination, but our starting point. And what if you deliberately accepted (and then forgot) to impose limits on yourself and live the hallucinatory experience?
Scott, you’d be a form of contrarian human being that opposes his own choices out of stubbornness. You sent yourself on a holiday trip and now you just don’t want to enjoy it 😉
If you remember the old debates, I was actually argued out of my trenchant anti-singularity position. What we are is what we are. I leave that for science to decide. If it is as depressing as I think it will be, then the posthuman and the singularity may be our only hope to become TRUE ‘meaning makers,’ if such concepts make any sense at that time.
A surprisingly optimistic perspective from you 🙂
– If “what we are is what we are” then we could have faith that we’re “competently programmed”, and so have faith in this greater program, wherever it leads, however Godlike.
– If we instead are doomed and it is just horrifying realization after horrifying realization, then we can always hope the singularity goes so deep as to liberate us ("any exit has to be a good exit").
Both rather optimistic overall.
What I suspect is largely at play here is concern over the sustainability of "intentionality" – whether "intentionality" is even a "thing".
I am not particularly concerned about that, since in my humble opinion "intentionality" is a property of any cybernetic system that:
1) has internal states that are related to, and to an extent model, the so-called "external reality";
2) uses those models in order to predict outcomes of interactions in the so-called "external reality";
3) is organized in a manner that causes it to select certain outcomes above others and execute actions that, based on the internal model, would lead to such outcomes – that is, to "focus" its internal states on achieving some of those outcomes, through affecting the external reality or otherwise.
Thus, planes on autopilot, robot vacuum cleaners, search engines, “big data” predictive systems, bees, spiders, bats and humans are all “intentional” systems, though the specific ways of implementing this capacity of course vary profoundly.
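The three conditions are mechanical enough to write down. A minimal sketch of my own, with invented names, that satisfies all three:

```python
# (1) internal states modelling "external reality", (2) prediction of
# interaction outcomes, (3) selection and execution of actions expected
# to bring about preferred outcomes. Purely illustrative toy.

class IntentionalSystem:
    def __init__(self, target):
        self.model = {"position": 0}  # (1) internal model of external reality
        self.target = target          # the preferred outcome

    def predict(self, action):
        # (2) use the model to predict the outcome of an interaction
        return self.model["position"] + action

    def act(self):
        # (3) select the action whose predicted outcome best matches the
        # preferred one, then execute it (here: update the model)
        best = min((-1, 0, 1), key=lambda a: abs(self.predict(a) - self.target))
        self.model["position"] = self.predict(best)
        return best

system = IntentionalSystem(target=3)
for _ in range(4):
    print(system.act(), system.model)  # steps the model toward the target
```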
In fact, I would endeavor to speculate that there is actually a lot of variation in the implementation of this wonderful "intentionality" among humans, and we just don't notice it because there are a) linguistic limits to expressing our internal states (I for one never understood how someone could seriously believe in "free will", but apparently some people somehow do believe in it 🙂 ) and b) social limits on behaviorally expressing possible differences (there are only so many ways to behave without causing cops to come after you 😉 )
"I for one never understood how someone could seriously believe in 'free will', but apparently some people somehow do believe in it 🙂"
I seriously don’t even know what “free will” really means. No idea. Could someone who believes in it explain it to me?
‘Intentionality’ is as overdetermined as any other concept in philosophy to be sure, but no version I’ve ever heard of dispenses with ‘aboutness’ or normativity, the very things at issue here. So if you continue to use the term without regard to these concepts (let alone the whole fam-damily of intentional concepts) all you’re doing is misleading people, which raises the question of why use the term at all, doesn’t it?
Hm… the way I interpret "intentionality" is "the property of mental events/phenomena of being directed upon a real or imaginary object".
Sorry, Scott, that’s what it says in my decayed crash course notes.
Am I abusing/misusing the term?
Aboutness. That’s what I’m saying might not exist, outside the hallucinatory circuit of human experience, that is.
# 01 checks with wiki to make sure http://en.wikipedia.org/wiki/Aboutness
Uh… now I am confuse 🙂
How exactly does the above definition of intentionality, and the example of roomba-as-intentional-entity, dispense with "aboutness" as such?
I would argue that the intentional (as per the definition above) roomba and its preoccupation with using its internal model to effectively find and collect external objects known as "dust" is quite compatible with at least some definitions of "aboutness"…
But that's just it: you're not dispensing with the traditional notion at all. You were trying to give a functional definition, an 'intentionality is as intentionality does' account. In order for this to be relevant to my argument you have to be using some understanding of intentionality that does not reference 'directedness' or 'aboutness' or 'logical relatedness' and so on. Otherwise, you're simply begging the question. I'm saying, "What you see is not what you get," and you're replying, "But this is what I see!"
Since I already know that, I extended you the benefit of the doubt and assumed you were taking the ‘redefinitional approach’ like certain deflationary meaning apologists like Dennett are fond of doing.
Hm.
…
Okay, so… the problem is that "aboutness" might not actually exist at all in complex proactive decision makers (such as humans and moderately sophisticated modern robots), thus rendering accounts of intentionality that circumstantially involve it "broken", amrite?
Upon some thought, aboutness seems, to a degree, to be traceable in everything that needs to sort complex surroundings according to relevance to a particular goal or other template.
If you have a need to meticulously sort edible from inedible, you would necessarily have something about your organization that would invoke and implement “aboutness” in some shape or form.
Obviously, that suggests that there are many possible ways to go about implementing your aboutness (pun partially intended) and the way it is wired in humans is not necessarily the best way to go about it (shit, this term lends itself so well to bad puns)
I think the problem is that aboutness (as I understand it) is contextual, thus rendering any conclusions (including conclusions about what values you should re-engineer and in what manner, if you get the opportunity) context-dependent.
I saw a donkey this morning – well, two. And now, thinking about the things posted here, I was reminded of this one scenario from philosophy:
There is this donkey in a universe where nothing exists except the donkey and two haystacks. The universe is perfectly symmetric, as is the donkey. One haystack is to the right of the donkey and the other is to the left, both at the same distance from the donkey. The donkey is hungry and wants to eat from a haystack, but because it cannot decide which direction to take, and there is no force in this universe which would pull it in one direction or the other, it dies. …poor donkey…
I don't know if this is familiar to all of you, but I always thought that such a "symmetric universe" isn't really realistic, and that in reality there would always be one particle more on one side of the universe and the donkey would go that way.
But now subtract desire and value from the donkey's brain. Even in the asymmetric universe, and even if there were just one haystack: why go to the haystack and eat? Why, if there is no value to it? Maybe there have to be desires/values for people not to become suicidal-zombie-posthumans. I think as long as the brain creates the illusion that is called consciousness, it will always create values and desires at the same time.
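The donkey's predicament is easy to mechanize. A toy of my own: make the values symmetric, or strip them out entirely, and nothing ever gets chosen.

```python
# Buridan's-ass toy: a chooser that needs its value function to break ties.

def choose(options, value):
    scores = {o: value(o) for o in options}
    best = max(scores.values())
    winners = [o for o, s in scores.items() if s == best]
    # A perfectly symmetric tie, or a value function that is flat
    # everywhere, leaves the donkey with no reason to move at all.
    return winners[0] if len(winners) == 1 else None

haystacks = ["left haystack", "right haystack"]
print(choose(haystacks, value=lambda h: 10))      # None: symmetric universe
print(choose(haystacks, value=lambda h: 0))       # None: no value to eating
print(choose(haystacks, value=lambda h: len(h)))  # "right haystack": asymmetry decides
```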
I hope this is all not totally beside the point.
If the universe contains nothing but donkey and hay, donkey suffocates 😀
Touché! Donkey, Haystacks AND AIR… or the donkey doesn't need to breathe… some kind of super-donkey.
Well, that’s actually a pretty trivial loop the donkey seems to get caught in (I don’t know if donkeys actually do get caught in such loops. Ants certainly do). It could probably be resolved in many ways, and, in any case, I think that conditions of perfectly symmetrical haystacks are too rare to pose any real threat to donkeys 😉
But I think we're kinda making the same point in regards to value. If there are no values implemented in the donkey's decision making, then it probably won't eat irrespective of haystack symmetry (of course, viruses and other microbes do just fine with no "values", just reactions to the immediate environment, but such a state of affairs makes any behavior more complex than "go in the general direction in which the concentration of N increases" somewhat problematic).
I think there's just a bit of confusion going on between values as in "liberty/prosperity/pursuit of happiness/safety/sanity/consent/etc." 😉 and values as in "if the navigation subsystem detects deviation of the course from the one optimal for reaching the designated destination, it will pass the parameters required for course correction". The latter case clearly involves some kind of values (the thing is clearly going somewhere, knows exactly where it needs to go and has a course laid out), but they aren't human values.
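The navigation sense of "value" can be made concrete. A toy control loop of my own, with invented numbers: the "value" is nothing more than a rule that emits corrections whenever the course deviates from the designated one.

```python
# Toy course-correction "value": detect deviation from the optimal course,
# pass back the parameter required to correct it. Numbers are invented.

def course_correction(heading, optimal, gain=0.5):
    deviation = optimal - heading
    if abs(deviation) < 0.01:
        return 0.0           # on course: nothing to correct
    return gain * deviation  # the correction parameter

heading = 30.0
for _ in range(6):
    heading += course_correction(heading, optimal=90.0)
    print(round(heading, 2))  # 60.0, 75.0, 82.5, ... converging on 90.0
```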
A hypothetical “strong” artificial intelligence hellbent on computing pi to some absurd precision and willing to sacrifice everyone just so it can improve its computational ability clearly has “values”, just not ones we can relate to.
A hypothetical “strong” artificial intelligence hellbent on computing pi to some absurd precision and willing to sacrifice everyone just so it can improve its computational ability clearly has “values”, just not ones we can relate to.
So if the sun goes nova and consumes all the planets around it, does it have 'values'?
Or when it's an 'artificial intelligence', particularly if it's 'strong', is it made of something other than what a sun consuming all is made of? Something otherworldly? Something spiritual?
To quote a witful friend with whom I used to have AI discussions a lot, “The difference between an intelligent system made of a given type of substrate and a non-intelligent system made of same substrate is somewhat akin to the difference between an obsidian dildo and an obsidian knife. The difference is most certainly trivial to an impartial observer, but I think that an actual participant who fails to make this trivial distinction is in for a nontrivial amount of pain”
There’s nothing spiritual about having ability to remember, to analyze, to plan and to carry out proactive behavior based on those plans. Robots we built from the ground up (and thus known to have exactly zero zombie ghosts inside) already can do that (to a limited degree, but hey, small steps).
But the ability to model the reality around you, to refine those models and plan, is a fairly big thing practically, even if from some purely theoretical POV it's basically kinda like "same obsidian, arranged oh-so-slightly differently".
Sun can very well kill you, and so can a river, or a large rock falling from above. But none of those things can adapt on the fly in a specific manner that facilitates killing you, and none of those things will try to proactively interfere with your attempts at defense (rock won’t adopt a new method of hitting you after you start wearing a helmet).
There’s nothing spiritual about having ability to remember, to analyze, to plan and to carry out proactive behavior based on those plans.
The idea is that actually those ‘remember’, ‘analyze’, ‘plan’, ‘proactive’, ‘behaviour’ – are entirely spiritual attributes.
Where in physics are any of these?
Sun can very well kill you, and so can a river, or a large rock falling from above. But none of those things can adapt on the fly in a specific manner
‘Adaption’ is spirituality, within this idea. Does not the avalanche wait for the sound and disturbance of a traveler, for it to pounce?
It’s not very flattering and turns things inside out, but within this idea all you have are avalanches, teetering.
The avalanches that could not set off others that would, with a push from the sun's energies (directly or otherwise), reset the original avalanche or even duplicate it – those avalanches aren't around today. Others repeat. Snow falls and collects. Hearts beat.
'Adaption' seems like something 'other' because of the information bottleneck. Deep Blue – the computer that beat Kasparov. Did it adapt? Or was it a series of operations? A series of avalanches?
I refer to this as an idea, to impart a demarcation, to make it ‘over there’. But at the same time, to also make it one way of potentially thinking about things. I think I would probably refer to it as ‘the cold’. But sometimes you need to put things on ice…herp derp… 😉
“The idea is that actually those ‘remember’, ‘analyze’, ‘plan’, ‘proactive’, ‘behaviour’ – are entirely spiritual attributes.”
Um… then robots capable of mapping their surroundings and planning their activities in advance by leveraging the models they have constructed are imbued with "spiritual attributes", ensouled even?
Because we have theory and practical implementations of every single one of those attributes in robotic systems (those constructs are fairly limited, for now, but they satisfy formal definitions of every one of the "allegedly spiritual" attributes in question).
And robots retain and analyze data, create proactive plans, and act upon those plans.
Does that make them spiritual despite the fact that we have built them from the ground up and know them to be ghost-free?
“‘Adaption’ is spirituality, within this idea. Does not the avalanche wait for the sound and disturbance of a traveler, for it to pounce?”
Well, it certainly isn’t specific to travelers – and they are definitely not “getting better at it” with time. If they were – I’d be worried about them having some kind of intelligence, even if one not much smarter than our current robots.
"'Adaption' seems like something 'other' because of the information bottleneck. Deep Blue – the computer that beat Kasparov. Did it adapt? Or was it a series of operations? A series of avalanches?"
Deep Blue's algorithms can be described as adaptive, yes. The whole point was the machine's ability to pre-calculate human moves proactively, and leverage human mistakes.
That makes for boring, but very efficient, chess.
Now, if avalanches started to somehow reorganize themselves in a manner that maximizes “killiness”, they would be good candidates for “adaptive” entities.
But they don’t, and thus are not.
” I refer to this as an idea, to impart a demarcation, to make it ‘over there’. But at the same time, to also make it one way of potentially thinking about things. I think I would probably refer to it as ‘the cold’. But sometimes you need to put things on ice… ”
Well, it's an interesting perspective, yes, but it seems to me that it does not quite succeed in erasing the "line" between "intelligent" and "non-intelligent".
Deep Blue, despite being human-made from the ground up, is a kind of (limited) intelligence, extremely potent at what it was designed to excel at (chess). Despite being quite demonstrably ghost-free, it had an internal model of how "chess" is played and a capacity to plan proactively.
Avalanches, or rivers, or bushfires, or rocks, exhibit no such traits (fortunately – an avalanche waiting for the maximum number of people to gather before setting itself off would royally suck).
Um… then robots capable of mapping their surroundings and planning their activities in advance by leveraging the models they have constructed are imbued with “spiritual attributes”, ensouled even ?
By your own measure, logically yes. I’m saying you are claiming this, that (within the idea I describe) your notions of ‘plan’/’remember’ etc. are religious attributes. You’re still painting these robots/these things/these clusters of pixels with religious spirituality. It just seems that ‘plan’ and ‘remember’ are so banal and day-to-day to your perception that they simply could not be some sort of religious attribute (like divinity or salvation, etc.). Within the idea I describe, no, they are religious. As religious as divinity and salvation.
As I said, where in physics are any of these notions? Yet you do not answer and instead read ME as saying I am the one attributing spirituality to these robots? Me? I’m not the one describing things outside of physics, am I?
Unless you don’t think anything is spiritual / don’t describe anything as such – in which case I don’t know why you are saying “There’s nothing spiritual about…” etc., etc.?
You want to say there is nothing spiritual about planning/remembering/etc. In my lil’ fantasy idea, actually, they are spiritual. Where the word ‘spiritual’ is shorthand for ‘lie’.
Anyway, you’ve read me wrong – don’t say ‘there’s nothing spiritual about…’ and then, when I respond, read it as if I’m the one painting things with the spirit brush. I’m saying you are treating them as spiritual – it just seems so normal to you that it could not be so. But (when thought about within the idea I’m painting – I think I’ve called it ‘the cold’ before, because it probably needs a name) you are.
C’mon, indulge me my fiction and immerse in it awhile.
Now, if avalanches started to somehow reorganize themselves in a manner that maximizes “killiness”, they would be good candidates for “adaptive” entities.
But they don’t, and thus are not.
So what’s your creation myth then, as to how life first came about? Chemical chains clashing/avalanching together over and over until… one particular crash had physical properties that would duplicate that first crash.
Or, if not that, what’s your creation myth?
It’s just that, to me, you’re saying Deep Blue isn’t just an avalanche. And yet if you had enough dominoes, you could make a computer (seriously). That’s how computers work, really. A sequence of dominoes falling. An avalanche.
So can you point at which domino is ‘planning’, please? Which one in particular?
“Oh, it’s not just one, it’s a kind of…”
Kind of a kludgey group of them, somehow? A vague, hovering splodge that defies exact empirical designation?
“I’m saying you are claiming this, that (within the idea I describe) your notions of ‘plan’/’remember’ etc. are religious attributes. You’re still painting these robots/these things/these clusters of pixels with religious spirituality. It just seems that ‘plan’ and ‘remember’ are so banal and day-to-day to your perception that they simply could not be some sort of religious attribute (like divinity or salvation, etc.). Within the idea I describe, no, they are religious. As religious as divinity and salvation.”
Um…
I guess I’m having a communication disconnect here.
Is the proposal as to the “religious nature” of planning supposed to be interpreted as a “fictional” hypothesis, or as a serious philosophical assertion?
Because I do have enough suspension of disbelief squirreled away to handle it like I would handle exotic proposals in fiction, but the proposal seems a mite “too much” to treat as an actual empirical possibility.
Said “cold” proposal seems to assert that emergent properties in complex physical systems (such as memory, or the capacity for carrying out complex, essentially computational tasks such as “planning”) are largely a perception trick and a subject of religious/spiritual belief.
Would that also mean that “porosity” (as a complex emergent property arising from the presence of structural voids, usually gas-filled, in a solid, giving rise to more features than the mere combination of the properties of the gas and solid involved) is largely an invalid (“spiritual”) concept?
“As I said, where in physics are any of these notions?”
Why, physics does not seem to have trouble dealing with emergent properties, be it the “porosity” of solids or “memory” (numerous physical systems have the ability to store and recall prior states, ranging from those fucking shape-memory alloys to our dearest computer “memory”).
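(A toy model of that “store and recall prior states” point, if it helps – a behavioral sketch of the set-reset latch that underlies hardware memory cells; my own illustration, not any particular chip:)

```python
# Behavioral model of an SR (set-reset) latch, the feedback primitive behind
# hardware memory cells: "memory" as nothing more than state that persists
# until explicitly changed. A sketch for illustration only.

class SRLatch:
    def __init__(self):
        self.q = False     # the stored bit

    def set(self):
        self.q = True      # latch the bit high

    def reset(self):
        self.q = False     # latch the bit low

    def recall(self):
        return self.q      # the prior state persists until set/reset

latch = SRLatch()
latch.set()
print(latch.recall())  # True - the latch "remembers" being set
```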
The ability of some types of organized matter (such as our dear electronics, but also DNA computers and whatnot) to support computation does not seem to be “outside physics”.
As various robots clearly illustrate, all those “magical” wonderthingies like “planning” and whatnot are computational tasks, and physics has no problem with “computation-capable” physical matter.
So yes, things like “planning” or “recording and analyzing inputs”, by virtue of being computational, are clearly “in” physics.
Or does something have to be a “fundamental” aspect of physics the way “interactive forces” are to be a “non-spiritual” concept? 😀
“So what’s your creation myth then, as to how life first came about? ”
I am an IT Security person, not a philosopher, so I am not particularly concerned about how world/life came to be.
But yes, I find the hypothesis you specified convincing enough for my tastes 😉
“It’s just that, to me, you’re saying Deep Blue isn’t just an avalanche. And yet if you had enough dominoes, you could make a computer (seriously). That’s how computers work, really. A sequence of dominoes falling. An avalanche.”
A sequence of dominoes falling in a certain way that supports computation.
Well, again, just like the case with “intentional storms” (which we discussed somewhere below), I am not completely closed to the possibility of something like an “intentional avalanche”, though the properties of vanilla, IRL mountain avalanches do not seem very conducive to building convincing computation systems, let alone ones capable of sustaining some kind of “intentional” internal state.
So yeah, Deep Blue isn’t “just” an avalanche in the same sense your desktop’s CPU isn’t “just” a random bunch of transistors strewn together.
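(And to make the domino point concrete – a sketch with the physics abstracted to booleans, where True means that run of dominoes has fallen; real domino AND gates need timing tricks, but the composition is the whole story:)

```python
# Substrate-independence of computation, domino edition. The gate bodies
# abstract away the physical tricks (merging runs, timed blocking chains);
# what matters is that a few basic gates compose into arithmetic.

def domino_or(a, b):    # two runs merging into one: falls if either falls
    return a or b

def domino_and(a, b):   # realizable with a timed blocking run in real dominoes
    return a and b

def domino_not(a):      # a run that knocks the gate domino out of line
    return not a

def half_adder(a, b):
    """Add two one-bit numbers using only the 'domino' gates above."""
    total = domino_and(domino_or(a, b), domino_not(domino_and(a, b)))  # XOR
    carry = domino_and(a, b)
    return total, carry

print(half_adder(True, True))  # (False, True): 1 + 1 = 10 in binary
```

No single domino in the adder “is” the addition – which is also the honest answer to “which domino is planning”.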
“So can you point at which domino is ‘planning’, please? Which one in particular?”
Which one of the hydrogen atoms is the “cause” of fusion inside the sun, and for that matter, which one of the transistors in a server’s CPU “is” this blog 😉 (or maybe it’s the HDD magnetic domains where we should look for the singular physical item responsible ;)…)
“Oh, it’s not just one, it’s a kind of…”
“Kind of a kludgey group of them, somehow? A vague, hovering splodge that defies exact empirical designation?”
Why yes, it’s a bit kludgey – all complex systems and their classifications are.
But not defying empirical explanation – no more than “porosity” or “stellation” defy empirical explanation.
Empiricism does not require all groups and concepts to be completely nice and tidy.
“Theoretic elegance” is not some kind of natural law. It’s at most a heuristic, and a damn weird one at that.
‘substitude’ should be ‘subtract’ or just ‘remove’…hm…being able to edit posts would be cool.
Looking on the bright side: if, for example, we will all one day be able to look like the-celebrity-flavor-of-the-fifteen-minutes, then we kind of take looks off the table for breeding purposes with designer genes.
I’m far more concerned with the desires of a few being forced on the many than with the many being given the power to figure out what their desires are as individuals. At least with the latter, we are liable to find a lot more variance than might be obvious viewing these issues from a sociological perspective (which is all about generating generalities from specifics).
Would we choose to be sociopaths if we had the option? I’m not so sure. Some would, but as a race? Maybe what’s HUMAN is the pursuit of joy. It’s not as if stoicism established itself like Christianity, Islam, or even vegetarianism.
There is a distinct possibility that HUMAN is a category with way more variation than we expect. I mean, our “happy tree friends” Vox and ACM are most likely human, and look at how divergent they are!
As to stoicism – it established itself quite well by infiltrating all major traditions one way or another. It’s kinda like those “silent viruses” that have become part of the human genome (a friend once told me that a massive part of what we consider human genes are virus genes that stuck around in some poor sod’s gametes after an infection eons ago. I even had him link the source, so it must be written down somewhere…)
Speaking of sociopaths: For me, THE great scandal of moral philosophy is its inability to decisively knock down Ted Bundy’s rationale for his acts.
You mean his “pron made me do it” shtick ?
There’s no particular need to knock it down. He was just toying with people’s minds – high-intelligence sociopaths are often reported to be fond of mind games for the mind game’s sake, and the crazy Dobson dude seemed like a fine target for a last manipulative gambit (and I might say he played his last gambit quite well, since that crowd has a confirmation bias the size of Manhattan).
Bundy just liked to kill pretty girls. That’s all there is to it.
You know, the same way some people like to collect dried moths or postage stamps or play golf. Or have people scream and cry and plead during sex (which, unlike murdering people, can be done in a legally acceptable and mutually agreeable manner ;)).
There is no deeper meaning to it, Bundy just ended up with an interest in – a fascination towards – murder, rape and necrophilia. Well, that’s one bunch of shitty hobbies, there.
And we like to kill people who do such things. That’s all there is to it.
Are you starting to see?
Why isn’t it as simple as “do unto others”? I know, there are exceptions (I always think of the extras in *Natural Born Killers* with the signs, “Kill Me Next!” and think, “Yeah, I’ve met those people”), but as long as we still have some sympathy (even if it’s for the skin of me-me-me), why doesn’t that work? It seems to be the basis for civilization to begin with: separating what’s mine from yours in a way that doesn’t require 24/7 vigilance, escalatingly complex alarm systems, and being armed and trained to the max, to the point that we are not capable of doing anything BUT protecting ours, and society crumbles from a complete lack of productivity.
Though I don’t know what specifically Bundy cited as motivation (see 01’s reference to porn). It’s brain-wiring, we agree. Is that what you mean, that he couldn’t stop himself? Yeah, that’s a bitch roll of the dice, ain’t it? Argument for predestination it would seem, for the theologically minded. Though my response to that is, then why bother? It’s all fixed so I’m free of responsibility.
And you have at last dragged me kicking and screaming into thinking about my very limited philosophy classes from some time last century. 🙂
“And we like to kill people who do such things. That’s all there is to it.
Are you starting to see?”
Why yes, that’s all there is to it 🙂 (yes, we’re wired for vengeance to a ridiculous degree. Ted could have done some good for society if we had made him work for the entirety of his life at some dangerous and unlikable but nonetheless necessary line of employment, but we just happen to be so vengeful that we preferred to kill him instead. Because fuck such dudes. Rawr 🙂 )
Though I wager that even if we didn’t like killing people like Ted, and weren’t naturally vengeful beyond any rhyme and reason (which we, as a society, are), we would still have to do something about Ted and people who share his hobbies, because we don’t particularly enjoy getting killed (or having people we enjoy having around killed), which kind of ensures that we and “such people” just don’t have any common ground for negotiation.
Ted lived in a very special Lovecraft County, one which made him irreconcilably and nonnegotiably dangerous so some manner of neutralization was a necessity. Since we’re vengeful as hell, “manner of neutralization” involved a whole fuckton of electricity (which is rather expensive).
mccoyote, don’t feel shy about your phil.
My primary exposure to philosophy is a crash course some genius tacked on as a mandatory part of ER training I undertook some time ago.
@01:
“Ted could have done some good for society if we had made him work for the entirety of his life at some dangerous and unlikable but nonetheless necessary line of employment, but we just happen to be so vengeful that we preferred to kill him instead.”
And now you’re knock-knock-knocking on one of my pet peeve issues. The US has done a lot of that actually (PAPERCLIP after WWII, Jim Jones, Manson maybe, etc.) and I can’t say I see a single good fucking thing that’s come of it. We stayed ahead of everyone else where cults, propaganda, and behavioral modification go, but back to Scott’s point above and my concern about abuse by a powerful few: it means that he-who-calls-the-shots gets to decide the course of the human race. And if biology has any counter-argument to offer, it’s that diversity is important to survival. If a non-sociopath can make and stand by the hard choices (if/when there is no other alternative – and I don’t really believe in those situations), then mass-conversion to sociopathy strikes me as… well, just wanting to watch the world burn.
Well, my implication was that Ted could have dug some quite wonderful trenches, given that he had two perfectly operational hands and an equal number of operational legs (and Yog-Sothoth knows, there’s never too many trenches), not that he is something to be mass-converted into (given his primary distinguishing quality was being really good at murdering women, I too would question the sanity of such a proposition).
The Aphorism of the Day was awesome! I especially liked – “Your personal brand preferences may be altered to align with those of our sponsors.” and the $18k/month licensing fee required to keep intact your memories of copyrighted works.
That bloody thing has had me laughing for days. I also thought the comment string was funny! Some people are so literal…
On the actual Inchie Bros:
So yesterday, out of curiosity, I was flipping through stories billed as “erotica for women”, and I noticed how short these were. Though supposedly billed for variety, most seemed as to-the-point as porn, without any real foreplay. You could tell the authors were getting hot and bothered as they wrote, jumping to the “good stuff” as fast as possible.
Made me think of the Inchies, and how they seem to have no sense of aesthetics. They want to jump past any sense of intimacy and get straight to the fucking, with no ability to delay gratification. As such, they seem to resemble porn addicts, the kind of people who’d think cuisine can be made “better” by piling on bacon (mind you, not a completely inaccurate theory), salt, or cheese.
So it’s hard to imagine the Inchies having a love song. The idea of Aurang being a lover is Shaeönanra’s human perspective; I suspect Aurang thinks fuck buddy is a better term.
(As an aside I like the idea that the Tekne – divorced from the reality-language tangle of sorcery – can save *Voices*. Chorae in the Carapace makes me wonder…)
Porn addiction is more than a great analogy; it’s one of the things I have in mind. It all comes down to values. “Foreplay,” which is positive for most people, is about as valuable as “Story” is for porn buffs. Now transplant this dichotomy into any alien species that has biologically re-engineered itself to ‘explore’ carnality. For them, ‘Love’ is the perfect word to capture their values. They would argue that what Men call ‘Love’ is nothing more than a flattering sham.
I’d like to point out that there are people who like some story to their porn – I would wager that some porn flicks out there have at least as much “story” (and definitely more “character” “development”) as “Oryx and Crake”, which I was sadly exposed to as a result of a certain discussion on Watts’s Newscrawl (and I deeply regret the time thus lost. I wish I could sue Atwood for misleading advertising or somesuch 😉 )
It’s interesting to think that the Inchies, for all their… Epicureanism?… feel this need to describe their pleasure-seeking as “Love” and themselves as “Lovers”. To raise the banality of their porn-star lives to something transcendental.
I suppose it’s how Kellhus manages some manipulation, or at least a reading, of Aurang while Esme is possessed. (Still a contender for best lines in the series: “But then how does any desire belong to me?”)
I doubt Inchies see love as “transcendental”. They have optimized away that claim.
Dammit, scratch Epicureanism. Franco Ferrucci’s Autobiography of God led me astray.
But why refer to it as “Love” at all, then? The Inchies care little for Gods, for anything sacred that we can see, yet they seem attached to the moniker “We are a race of lovers”, as opposed to “we are a race of fucking machines”.
My point is that the Inchies need to be heroes in their own story, rationalizing any and all contradictory evidence or moralities. They are angered by what they see as the *injustice* of their damnation, rather than being scientists simply trying to avoid a certain post-life outcome.
Oh, I do recognize some porn has stories, but I’m thinking of how extended sexual tension is cut off, to an almost inhuman/illogical degree. Like, you go in to buy a mattress, and hey, next thing you know…
The other thing I wanted to mention was how these stories seemed little different from any other sex scenes in SFF or elsewhere. These stories by women seemed pretty close to whatever sex scenes I’d seen written by males, barring the really weird shit that happens in *River of Gods* (backflip sex between academics? ha!).
Admittedly a small sample size, but I’d been expecting some greater deviation beyond search&replace for the variance in genitals.
“Lovers” is shorter than “fucking machines” 🙂
Also, there might have been some humor/sarcasm to it at some point (though they don’t seem very witty at the moment the narrative takes place… maybe they optimized that away too).
As to pron…
Extended sexual tension is often cut off because, well, in most cases the consumer has some sexual… tension of his/her own, and doesn’t particularly need an artistic depiction of that other fellow’s tension buildup to properly augment the experience. You know, same reason war movies rarely show you the wonderful intricacies of military logistics and all those hours upon hours of driving through the desert.
Of course, if you go “fridge logic” on porn, it’s comical as hell (and if you go Fridge Logic on some “story-heavy” porn you can get SAN damage 🙂 )
“Admittedly a small sample size, but I’d been expecting some greater deviation beyond search&replace for the variance in genitals.”
Or maybe women are less different from men in terms of porn preference than “folkpsy” wants you to believe 😉
” (and if you go Fridge Logic on some “story-heavy” porn you can get SAN damage 🙂 ) ”
*cough*
Power Word “John Fitzgerald’s Undercover”
😉
# 01 twitches, then runs
🙂
Actually, it’s mostly a rather mild-mannered flick with a remarkably coherent narrative, but if you think about it too much (or, well, have someone like 03 watch it with you, kindly doing the thinking and cheerfully announcing “insights”…) it will suddenly become unexpectedly creepy.
Oh, and Third, why do you think Inchies call themselves “lovers”?
Well, I like creeping people out, and that movie is such a nice tool 🙂
And, concerning the Inchoroi’s reasons for using the word “love”, I think that, well, when you rape someone and tell them “I will fuck you forever / till death parts us / come up with your own taunt”, it has less of a sting than “I will love you forever / till death parts us”; thus, the word “love” gives you more… opportunities.
In other words, calling themselves “lovers” might be a kind of taunt to creatures with a more…touchy-feely concept of love.
Oh my ^^
Scott, you know… if you ever feel the need to make rape scenes more disturbing and yet more convincing, consult Third.
She has unparalleled talent for creepy-yet-plausible rape fantasies
I think an Inchie love song is rather like the songs Pratchett’s Dwarves sing: ‘Gold, gold, gold, gold…’, but, you know…
Reading that Schwitzgebel chapter now. I came across this:
“Let’s try an experiment. You’re the subject. Reflect on, introspect, your own ongoing emotional experience right now. Do you even have any?”
I laughed out loud, and my colleagues were all like WTF?
My systems are operating at 97% efficiency. Self-diagnostic routines report no anomalies and no signs of structural damage. 😛
He’s got a wicked sense of humour – and loves fantasy RPGs as well.
I’ve not much to offer in the way of this discussion, and I feel that every time I read the blog, this post and its comments, I’m distracted by new entangled threads of thought.
In my mind, there are a few separate issues here:
Firstly, Semantica. Pump that shit out.
Secondly, what I think is the primary issue when considering the Semantic Apocalypse in this context is recognition. I was writing a short story for Kalbear a long time ago – which he never ended up seeing – where I had a Detective detecting Tweakers by their lack of the defining cognitive habits of unaltered humanity.
I think that this, more than anything, defines the Semantic Apocalypse’s singularity: the moment we can no longer depend on even the consciously unavailable information about our environment – specifically other humans – which shapes our conscious experience. On that note, I’m surprised I never recommended *See What I’m Saying* by Lawrence Rosenblum, Bakker – a book which goes a long way towards highlighting the real distance between our conscious experience now and the experience we’d have at the absolute recursive limitations of the BBT.
I mean, sure, I see someone on the street, I have no idea what’s going on in their mind. Yet there is the possibility of recognition, of understanding through communication.
I think neuroscience is going to begin destroying that very unique human commons even as it describes what it once was.
Which raises the last issue I see here.
For our current sociocultural circumstances, “Think about it. Creeping medicalization. Corporations retooling themselves in ways to manage you as a mechanism. The factory farm is becoming the assembly plant as we speak.”
Coffee and sugar are condoned substances within an industrialized civilization, for their effects on productivity. They just lack the finesse of the techniques humanity is now cultivating to target those same ends – ends that should be questioned simply because the human is attempting to decide the post-human, and these decisions are going to be made by institutions like corporations, governments, and universities whose, in some cases, ancient memes, or founding principles, are even now directing research and deciding the “values,” if there are any we can even share, of the “post-human.”
Thoughts anyone?
Many. I’m checking out that Rosenblum piece, like fur sure.
I love the idea of the ‘neurocommons.’ It strikes me as a great metaphor through which to work out this problematic. It’s clear that you see how this kicks the door open on everything.
The context of manufactured desire is one that was hovering at the edges of my thought while I was writing this. My feeling is that the further we creep toward this dystopia, the more ‘managerial institutions’ will promulgate ‘value disinformation.’ Like a magician, they’ll want to keep the populace as misdirected as possible to better push its invisible buttons. This is the ‘Disneyworld thesis’ I offer in NP, at least. But I can see other possibilities…
The political ramifications of all this, and the ways BBT can be spun into a full-blooded social theory, have been bubbling on the backburner for some time now. I hope to dismay and bum out my buds with it tonight, as a matter of fact.
Never invite me to a pawty…
Only one thought:
COFFEE IS SOOOOOOOOOO GOOOOOD!
(This thought sponsored by Folgers™: the best part of waking up is Folger’s in your thalamocortical loop. Wait. That doesn’t rhyme.)
“I mean, sure, I see someone on the street, I have no idea what’s going on in their mind. Yet there is the possibility of recognition, of understanding through communication.”
And here, ladies and gentlemen, is a common human illusion of believing they do indeed have a lot in common with a random other “human”.
You see someone on the street. He has wiring not unlike that of Bundy (naturally so), and what then? You don’t have the benefit of understanding – you will never understand each other. If you’re lucky, you’re just a boring bit of scenery to him. If not, you’re fresh meat. You can communicate with him all right – but what possible understanding could you achieve?
Or maybe it’s someone like Vox, living in his very own private reality which is besieged by demons (and not some fancy-shmancy metaphor demons, the real shit – supernatural evil and all that jazz). Unless you also have a worldview that includes invisible horned douchebags, what possible understanding could communication bring?
I really want us to hear more about these demons. Cacodemons? Spider demons? Hell Knights?
Mind you, I always loved the Hexen/Heretic shooters more.
Vox is kinda shy about the bestiary of his Lovecraft County.
It took quite a long talk and a kind informed bystander to even expose the fact that he’s living in what I call a “cool Lovecraft County” (“cool Lovecraft County” is when one’s world model includes cool stuff like demons, Cthulhu, aliens, etc. “Boring Lovecraft County” is just “evil degenerate mongrel scum spoiling everything”, with no DJ/MC Yog-Sothoth to spice up the party with some kinky sphere&tentacle action).
“And here, ladies and gentlemen, is a common human illusion of believing they do indeed have a lot in common with a random other ‘human’.”
That quote was meant to provide a basis to contrast the extreme differences in what will actually constitute the neurocommons once nootropics and neurocosmetic surgery flood the market in their different forms.
Right now, excepting neuroanomalous dysfunction and injury, as it were, you and I, 01, at least have the chance of understanding each other through communication; whereas past recognition, post some sort of conscious commons, we will be divided as never before.
I’d always invite you to a party, Bakker.
I would suggest that Big Pharma has a fantastic distribution model in place already. It’ll just be this subtle shift in the propaganda of symptoms, from bodily experiences to mental ones.
I look forward to your commentary on the memetic BB. I think you’ve already offered research hypotheses for generations to come – at the rate we currently forward knowledge’s progress, anyways – but keep them coming.
And here, ladies and gentlemen, is a common human illusion of believing they do indeed have a lot in common with a random other “human”.
Is that statistically so? I’ve heard the statistic that 2% of the population are sociopaths, but still, 98% hardly makes this an illusion, does it? I’ve heard stories of people riding public transport, grocery bags in each hand, boa constrictor across the shoulders. But still – shopping for food. I gets it. And boa constrictor girl probably doesn’t get THAC0… wait, neither do I…
Saajan: Yea! Hexen and Heretic! Remember the necromancer’s gloves from Heretic? Particularly the powered-up version! Friggin’ even better than a chainsaw! Wicked!!!
Callan, I’ve included other kinds of weirdos for a reason.
Sociopaths, especially “non-negotiable” ones like TB, are a small subset of “unexpectedly different humans”, but there are other kinds of weird cognitive arrangements that are both more puzzling and less apparent.
There’s a solid possibility many “cognitive subtypes” are completely “undiagnosed” now because they don’t lead to a sufficiently notable degree of social deviation (and mind you, any remotely intelligent agent will adapt to minimize such impairment, meaning that people with radically different drives and “values” will behave kinda similarly in the office, and will probably even learn to answer psych eval tests in a least-problematic manner if they are smart enough).
For instance, there are people who don’t have the “internal monologue / stream of consciousness” thing (one which Fodor seems to love so much), and yet there doesn’t even seem to be a nice sciency term for that.
For all practical intents and purposes, Vox is legally and medically sound. Yet he believes that supernatural invisible doucheroos are giving kids cancer, for some incomprehensible invisible-douchebag reason.
Some people claim there is “free will”. I don’t even… but they are obviously numerous.
And that’s just cases when difference was apparent enough to come up during discussion. There’s quite likely a whole forest of “neuro-uncommons” that are completely obscured by social conditioning, interaction limits and limits of language.
And yes, if people get the “find out your Real Priors / True Values…then rewire them as you see fit” tech, all that stuff will boil to the surface and what seems now as a more-or-less cognitively unified “mankind” will come apart at the seams like a badly stitched Romero zombie, tearing itself into thousands if not millions of independent and cognitively incompatible groups.
Then again, it’s not like we’re really cognitively compatible with Voxkind right now 😉
I suppose when I read
Yet there is the possibility of recognition, of understanding through communication.
I read it as ‘some’ amount of understanding. Some sort of overlap, where each party attempts to search for a (roughly) equivalent idea of what the other is caring about (actually this reminds me of tabletop roleplay, where each person likely has quite a different imagined space in their head that doesn’t 100% match up with everyone else’s idea of the imagined game world. Yet you can work with that)
I’d say this requires that they already overlap in terms of wanting to figure out some kind of equivalent in each other. With VD, that seems to be absent (who knows, maybe with his friends he does. Trying to read him charitably (but doubting it)).
And that’s just cases when difference was apparent enough to come up during discussion. There’s quite likely a whole forest of “neuro-uncommons” that are completely obscured by social conditioning, interaction limits and limits of language.
And yes, if people get the “find out your Real Priors / True Values…then rewire them as you see fit” tech, all that stuff will boil to the surface and what seems now as a more-or-less cognitively unified “mankind” will come apart at the seams like a badly stitched Romero zombie, tearing itself into thousands if not millions of independent and cognitively incompatible groups.
Seems plausible. You’re talking of ‘dialing up’ those real priors more than a rewire/change, right/semi-right? And a subsequent “How can they not understand me? This is the REAL me! My inner HUMANITY bared to all!”
“I suppose when I read
Yet there is the possibility of recognition, of understanding through communication.
I read it as ‘some’ amount of understanding. Some sort of overlap, where each party attempts to search for a (roughly) equivalent idea of what the other is caring about (actually this reminds me of tabletop roleplay, where each person likely has quite a different imagined space in their head that doesn’t 100% match up with everyone else’s idea of the imagined game world. Yet you can work with that)”
Well, *some* degree of mutual understanding is possible with distinctly inhuman agents, like say, wolves, and human “mental exotics” like Vox (We have painstakingly established that Vox’s model of reality includes exotic paranormal entities and a constant low-intensity conflict with said entities, and I am reasonably sure that Vox understands that I find such a world model, as well as agents who sincerely subscribe to it, highly comical.)
It is quite problematic (though perhaps not impossible) to imagine a kind of “mind” with which absolutely no understanding can be found. The neurocommons argument seems more about whether we can make some strong human-specific assumptions upon encountering a biological human.
My position is that while there probably are some assumptions to be made based on the species of the agent you encounter, their extent and number are greatly exaggerated by neurocommons proponents.
“Seems plausible. You’re talking of ‘dialing up’ those real priors more than a rewire/change, right/semi-right? And a subsequent “How can they not understand me? This is the REAL me! My inner HUMANITY bared to all!””
Well, semi-right.
There’s a distinct possibility that some of those values and priors are hierarchically incoherent. I doubt natural selection or social systems are that strong a filter for that kind of incoherence (natural selection is pretty much about being able to leave enough offspring before you push up the daisies; society is pretty much about getting enough resources to achieve whatever “goals” you happen to think you have ;), and not getting into severe trouble with whatever formal or informal “law” enforcement systems are in place. Those seem to be fairly coarse sieves, so to speak).
Or maybe it’s someone like Vox, living in his very own private reality which is besieged by demons (and not some fancy-shmancy metaphor demons, the real shit – supernatural evil and all that jazz). Unless you also have a worldview that includes invisible horned douchebags, what possible understanding could communication bring?
The same understanding possible between an individual who is aware of the existence and purpose of x-rays and one who is not. Or, to take a more extreme example, between blind and sighted individuals. Communication might be difficult, though not impossible, concerning certain matters, but that leaves the vast realm of human reason, emotion, and behavior still on the table. I have no problem understanding either your attitude or your belief system; you don’t actually have any problem understanding me, your problem is accepting the possibility of my belief system.
Which is fortunate for you. Once you find yourself in the presence of sufficiently naked evil, you will likely find yourself more open to the possibility.
I always loved the Hexen/Heretic shooters more.
Power of Seven almost got to do the music for those games, but Raven gave us CyClones instead. The closest I’ve come to feeling crazy was playing through Heretic in 18 hours straight, no breaks, in order to write the cover review for Computer Gaming World. I was strafing and seeing those monsters in my sleep for the next three nights.
Vox is kinda shy about the bestiary of his Lovecraft County.
More ignorant. I view it as asking a plankter to describe the whale family. I don’t know what they look like, I only know there are some big ass things swimming past in the dark.
I’d say this requires that they already overlap in terms of wanting to figure out some kind of equivalent in each other. With VD, that seems to be absent (who knows, maybe with his friends he does. Trying to read him charitably (but doubting it)).
Why do several of you appear to imagine that it is at all difficult to understand the perspective of the science-trusting rational materialist? I have no problem understanding you; even Scott’s lament concerning irreligious moral philosophy’s “inability to decisively knock down Ted Bundy’s rationale for his acts” is not only understandable, but downright predictable, concerning both the inability and his frustration with it. The consistent error that I’ve seen from the irreligious crowd is their insistence that “magical thinking” somehow precludes “scientific thinking” or “logical thinking” when it quite obviously does not. It’s an outdated concept, since any educated individual who engages in “magical thinking” has been steeped in precisely the same rational materialism as those who hold solely to science combined with personal experience as the sole arbiters of reality.
Hence my lack of interest in trying to understand the common perspective here. I am already intimately familiar with it, as it is a portion of my own perspective. I even think that Scott is asking some of the right questions when he focuses on society and technology rather than bumbling about with his erroneous and ignorant theories about the beliefs of others. He merely has not yet begun to draw the consequent conclusions.
The latest developments notwithstanding, I don’t believe that science can negate free will or the human soul. In part because my magical predictive model informed me a long time ago that it would try. It should be obvious to anyone who is paying attention to science that the great risk to the human race doesn’t stem from certainty, but from scientific curiosity. If we magical thinkers are incorrect and the Eschaton never arrives, then we can all be confident that the last words of the human race will be “oops!”
Then again, it’s not like we’re really cognitively compatible with Voxkind right now
Why not? Surely your imaginations are not so limited as to make it impossible for you to postulate how your thinking would be modified by personal experience of some aspect of the religious supernatural! Whereas you see Vox-kind as crazy, Vox-kind merely sees you as something akin to colorblind.
And being mildly colorblind, I can understand that complete sense of incredulity and assumption of insanity when someone is pointing at something totally imperceptible saying “look, it’s right there in front of you!”
“The same understanding possible between an individual who is aware of the existence and purpose of x-rays and one who is not. Or, to take a more extreme example, between blind and sighted individuals. Communication might be difficult, though not impossible, concerning certain matters, but that leaves the vast realm of human reason, emotion, and behavior still on the table. I have no problem understanding either your attitude or your belief system; you don’t actually have any problem understanding me, your problem is accepting the possibility of my belief system.”
Actually, I do have a problem understanding you, since your peculiar belief goes well beyond anyone’s ability to demonstrate/prove.
A sighted person could contrive numerous means to demonstrate the existence of light-based detection systems to the blind (much like sighted humans have managed to build systems for detecting neutrinos, a task for which the human sensory system is radically unfit).
Yes, we do have a “degree” of understanding – you “understand” that I happen to have a grievously inaccurate model of “reality” that is characterized by an absence of “demons”. I happen to “understand” that you happen to have a grievously inaccurate model of “reality” that is characterized by a presence of “demons”.
Unless I invent a way to somehow “disprove” unfalsifiable entities ;), or you invent a demon detector I can replicate and use to go find some horned invisible doucheroos, there is no way we could advance understanding beyond this boundary.
I strongly doubt that you would bother to demonstrate a protocol that would reliably permit me to detect demons, though of course I am quite eager to listen if you do.
” More ignorant. I view it as asking a plankter to describe the whale family. I don’t know what they look like, I only know there are some big ass things swimming past in the dark. “
A plankter (assuming a kind of plankter capable of “reasoning”, “memory” and “communication” 😉 ) would actually know quite a few things about whales. It might not get the taxonomy right, or might grievously disagree with humans on some whale-related issues, but “smart plankton” would definitely figure out fairly reliable whale-detection protocols and a fairly coherent (not necessarily completely accurate) description of “whales” in general.
Thus it stands to reason that you, as a helluva smart plankter that knows that some “demons/whales” are out there, have some demon-detection tricks.
“Why not? Surely your imaginations are not so limited as to make it impossible for you to postulate how your thinking would be modified by personal experience of some aspect of the religious supernatural! Whereas you see Vox-kind as crazy, Vox-kind merely sees you as something akin to colorblind.”
I can totally imagine living in your Lovecraft County – after all, I called it “Cool Lovecraft County”.
Now, I doubt you can actually “argue me into your Lovecraft County” (unless there’s a demon detector in your pocket, or something) and thus there is a fundamental limit to how well I can understand your position, let alone predict your further activities.
Imagination can only go so far in modeling the behavior of someone who faces a radically divergent “reality”.
I am pretty sure both you and I would have a lot of trouble really understanding someone who sincerely believes that the Republican Party is actually led by disguised space aliens hellbent on conquest, while the Democrats are time-travelling cyborgs from a dystopian future 😉
VD,
I have no problem understanding you,
No, you don’t understand. This is your issue in a nutshell: You don’t ask me if you are understanding me right – you tell me you understand me.
If you were at some level expecting some little agreement from me that I think you understand, you do not get agreement. You are going to have to ask if you understand me right (and on what), if you want a chance at that.
OR if you don’t care about getting agreement from the other person that you understand them (at all), then whatever, you’ve got a mental condition. Why, I’m just following your principle in saying it. I’m just doing unto you as you do unto others, i.e., telling you my understanding of you, rather than asking if it’s the case.
No, you don’t understand. This is your issue in a nutshell: You don’t ask me if you are understanding me right – you tell me you understand me.
That’s not an issue. I either understand you correctly or I don’t. Those are the only two options and one of them is true regardless of your opinion on the matter.
If you were at some level expecting some little agreement from me that I think you understand, you do not get agreement. You are going to have to ask if you understand me right (and on what), if you want a chance at that. OR if you don’t care about getting agreement from the other person that you understand them (at all), then whatever, you’ve got a mental condition.
That’s both stupid and illogical. I could not care less about getting agreement from you or anyone else here concerning my understanding of them. It’s a binary situation with or without such agreement. I may be wrong about my assumed understanding – although I again wonder what is supposed to be difficult for anyone to understand about rational materialism flavored with the usual science fetish – but my lack of interest in seeking confirmation concerning that understanding is only indicative of a “mental condition” in the broadest sense, that of possession of a functioning mind.
Why, I’m just following your principle in saying it. I’m just doing unto you as you do unto others, i.e., telling you my understanding of you, rather than asking if it’s the case.
And I’m not complaining. Do as you like. I’m not the one who has been postulating the insanity of others on the basis of my inability to understand them.
Actually, I do have a problem understanding you, since your peculiar belief goes well beyond anyone’s ability to demonstrate/prove.
Why? We all harbor peculiar beliefs that go well beyond anyone’s ability to demonstrate or prove. Perhaps you believe your dead grandmother loved you. Perhaps I believe my brother is the nicest person in the world. Perhaps we both believe in human equality. None of these things can be demonstrated or proved any more than the existence of demons and none of them need inhibit understanding.
You might point to a letter that your grandmother wrote. I claim that it’s a forgery. I might point to the behavior of the dead Miami face-eating cannibal. You claim “cocaine psychosis”. Repeat as needed.
In any event, your conclusion simply doesn’t follow from the premises.
Well, I find it kind of remarkable that when you proceed to illustrate possible exchange between two agents disagreeing in regards to allegations of a poorly documented deceased person, you kind of make my point for me.
There is a distinct “understanding horizon” at work here, running along a number of allegations regarding the deceased relative, and claims related to those.
Same goes for allegations regarding “human equality” (whatever the fuck that is…)
Consider the case of nice fellow who thinks that both US parties are run by “Secret Inhumans”, specifically conquest-crazy space aliens for Republicans and creepy cyborgs from the future for Democrats.
We can establish *some* degree of understanding (at least, we can find out the hypothetical person’s weird beliefs and establish an understanding in regards to the fact that we disagree with him and he disagrees with us), but there’s only so far we could go. When imagining ourselves in his shoes we will only muster a distorted projection reflecting neither his actual state nor our own (kind of like imagining yourself as participating in a battle and actually participating in a very real fucking battle are two different things), and the same would be true for him (assuming he ever bothers to try imagining what our worldview feels like).
Same of course goes for unverifiable and unfalsifiable assertions regarding dead relatives.
Human equality… well, for starters it would be nice to define it in a way that does not summon Captain Obvious 😉 and then see if anything approaching a framework for pragmatically assessing various such “claims” can be found. I find it entirely plausible that there is as little chance of understanding between you and the hypothetical “equality fellow” in regards to this vague “equality” thingamajig as between you and me in regards to the existence of supernatural intelligent forces scheming to affect the world in some manner.
Me: No, you don’t understand. This is your issue in a nutshell: You don’t ask me if you are understanding me right – you tell me you understand me.
VD: That’s not an issue.
Priceless. Again, you tell me.
I either understand you correctly or I don’t. Those are the only two options and one of them is true regardless of your opinion on the matter.
And regardless of your opinion on the matter, you don’t understand this. Regardless of your opinion, I understand you all too well.
Unless you ask.
Let’s try a dungeon scenario – if you can’t describe my position on something / state your understanding accurately (as in, it is the same thing I would say), then the room you are trapped in floods and you (as in, your player character) drown. You may even ask me about my position before stating it.
Do you not bother asking, because you just understand without asking? When asking is for free and by doing so you can be certain of survival?
I could not care less about getting agreement from you or anyone else here concerning my understanding of them.
Then your compulsion to tell them you understand them is both stupid and illogical. There is no point telling someone you understand them if you are not looking for some sort of acknowledgement of that statement. It’s the same as going up to inanimate objects and saying in a husky voice ‘I understand you’. Your behaviour is both stupid and illogical.
Unless you ask.
And I’m not complaining. Do as you like. I’m not the one who has been postulating the insanity of others on the basis of my inability to understand them.
Wah wah wah. You are complaining here, I understand you all too well and I can tell you exactly what you are doing, because of that. “I’m not the one…” – whine whine whine, you’ve got to add this little bit, because you do have this little need to complain about it. I know this. I understand you. My theory of mind eclipses you.
Unless you ask.
Go on. Continue to bleat posts here stating how much you get everything, all the while subtextually begging for acknowledgement of your knowing, whilst lying to yourself about seeking that. Yeah, sure, you’re just here – for no apparent reason, of course – keep telling yourself that, because I know you will. I know you, understand you, like the lyrics of an 80s song.
Unless you ask. Then the lyrics change and I would not know.
In general, one doesn’t necessarily understand any given view. But naturalism happens to be easy for a Christian to understand – the majority of things that happen during the day are coincidences anyway, from the Christian’s perspective. Merely extend that to the remaining part and voilà! Naturalism.
It is sort of like the One Less God argument, the failure of which is only because it is called an argument. If it were called an illustration, it would be valid.
Do you not bother asking, because you just understand without asking? When asking is for free and by doing so you can be certain of survival?
No, not when I’m aware I already know. Would you ask a second time, after asking a first time? A third time, after asking a second time? There is no need to inquire about information already possessed.
Then your compulsion to tell them you understand them is both stupid and illogical. There is no point telling someone you understand them if you are not looking for some sort of acknowledgement of that statement.
You don’t even seem to have understood the context of the discussion that 01 and I have been having. I have no compulsion and I am not looking for any acknowledgement of my statement. I merely rebutted 01’s point that understanding is impossible between certain parties. My understanding of the rational materialist perspective is only one of many possible examples; his subsequently claimed grasp of Cool Lovecraft County works just as well to illustrate my point. Note that I have not acknowledged his claim of understanding, nor was he seeking any such acknowledgement either. You’re simply flailing about on an irrelevant tangent.
Wah wah wah. You are complaining here, I understand you all too well and I can tell you exactly what you are doing, because of that. “I’m not the one…” – whine whine whine, you’ve got to add this little bit, because you do have this little need to complain about it.
You obviously don’t understand me in the slightest if you think I’m complaining.
Go on. Continue to bleat posts here stating how much you get everything, all the while subtextually begging for acknowledgement of your knowing, whilst lying to yourself about seeking that. Yeah, sure, you’re just here – for no apparent reason, of course – keep telling yourself that, because I know you will. I know you, understand you, like the lyrics of an 80s song.
Who said anything about being here for no apparent reason? Who said anything about getting “everything”? Is rational materialism with a dash of science fetish everything? Is there anyone with an IQ over +1 SD who doesn’t get rational materialism or science? I always show up at least once when I’m challenged by someone new, and TPB is no exception. 01 and Delavagus at least raised some interesting points. You, on the other hand, not so much. But you can certainly claim to understand what you like. The truth of these matters is readily observable.
I never claimed a “grasp” (as in, a sufficiently complete understanding of Cool Lovecraft County), merely that some information on it is available to understanding, but that there are apparent limits.
Imagining “what would I have done if I encountered Cthulhu/demons/space aliens/Inchoroi” and the actual experience of even merely living in the “same world” with actual Cthulhu/demons/space aliens/Inchoroi, let alone meeting those things in person, are radically different, and the latter two are completely unavailable to me.
Despite being able to “imagine” myself an inhabitant of a demon-infested world (as a brave explorer of “ze demonic”, founder of ghostbusters and whatnot 😉 ), I cannot even begin to properly comprehend what actually being an inhabitant of a “demonically infested reality” is like.
Your Lovecraft County is to me an entertaining hypothetical, an urban fantasy trope, a game. But it is reality to you. Try as I might, I can’t completely grasp what it is like to live in a White Wolf RPG story. There’s only so much disbelief I can willingly suspend.
And thus, I can’t completely grasp/understand your motives and conclusions – there will always be a fundamental “empirical incompatibility” between us, stemming from an irreconcilable disagreement over a rather fundamental empirical aspect of the universe.
I reckon that people will just rationalise away the Semantic Apocalypse, like we always have. But if the Semantic Apocalypse does become a reality, does that mean we’re completely fucked? Then again, haven’t we always been fucked? God dammit, I need another drink.
I mentioned Unwritten in the last post; this quote (from #28) seems apt:
“You’ve got to touch something. Some kind of — tap root. You’re aiming to tell a story that people don’t have to *consciously* buy into…because they feel like they’re already a part of it.”
“Radio. And newspapers. Movies. Paperbacks…Whatever you call them, Miri. They’re the future, is what I’m saying.”
“Actually…the future is the audience who reads and watches those things, isn’t it? A million people, all dreaming the same dreams. Dreams that will still be there when they wake up. That’s what I want to do, I think. Reach into people’s minds, and paint dreams there.”
Okay, I am a rather simpleminded person and can barely comprehend this stuff, especially that “aboutness” thing, to be honest (isn’t the issue of what a particular statement or object is “about”, like, a matter of perspective and shi^Hpostmodernism like that? 😀 derp 😀 ), but I gotta ask…
If this “aboutness” thing is of no functional relevance or doesn’t exist, then wouldn’t we just… go on trucking after we find that little bit of cognitive trivia out, no? Am I being extra-stupid or what?
I seem to think that you actually have an important point, but I’d need to think on that.
I think the idea is raised in relation to intentionality, which is raised in regard to the idea that you can have your brain modified and yet some special intentionality/aboutness will still be intact and present to guide you towards what is still within a capslock HUMAN sphere of… well, humanity. I.e., to present the idea that hey, actually no, something won’t just necessarily exist to guide all that.
Well, my counterargument is that “some” intentionality and “some kind of” normativity (some sorta-kinda values ) are necessary for any complex directed activity, and that a “posthuman” with no intentionality and absolutely no values will just sit there doing nothing, until he/she/it “dies” due to lack of basic maintenance or, assuming there is expensive hardware involved, people like my dear Third tearing it apart to retrieve said expensive hardware for resale or trophies (SHINY EXPENSIVE THINGY!!1ELEVENTY!)
As to “aboutness”… well, I primarily think of aboutness in terms of inforetrieval stuff, and find it unlikely that you can classify large amounts of input data into pragmatically meaningful categories (which is a useful thing to do) without having some kind of “aboutness” framework. I know of no such classification approach.
Of course, post-modification “aboutness”, “intentionality” and “values” might end up bearing only the most remote resemblance to the ones that can be considered HUMAN (compare a bird and a Boeing. Fundamentally, they utilize the same aerodynamic principles to fly, but their actual characteristics are profoundly different).
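(For concreteness, a minimal sketch of what I mean by an “aboutness framework” in inforetrieval terms – the categories and keyword lists are invented for the example; the point is just that without some prior feature-to-topic structure, no classification happens at all:)

```python
# Toy "aboutness" classifier: input only gets filed under a category because
# a prior framework links features (here, words) to topics. The categories
# and keywords below are invented purely for illustration.
from collections import Counter

CATEGORIES = {
    "chess":      {"pawn", "knight", "opening", "checkmate", "kasparov"},
    "avalanches": {"snow", "slope", "slide", "mountain", "powder"},
}

def about(text):
    """Guess what `text` is 'about' by keyword overlap with each category."""
    words = Counter(text.lower().split())
    scores = {cat: sum(words[w] for w in keys)
              for cat, keys in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None  # no framework, no aboutness

print(about("Kasparov played a sharp opening before the checkmate"))  # chess
```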
Well, my counterargument is that “some” intentionality and “some kind of” normativity (some sorta-kinda values ) are necessary for any complex directed activity
Well, I asked upthread how far you can go just seeing everything as pixels, without some sort of Darwinian bottleneck – i.e., only those who don’t die will continue on. This means seeing more than pixels. But apart from that you’ve got a sort of circular logic there, that intent is needed for intentfully directed activity. The storms on Saturn are pretty complex, after all.
and that a “posthuman” with no intentionality and absolutely no values will just sit there doing nothing, until he/she/it “dies” due to lack of basic maintenance
Ha, I’ve said practically word for word the same thing about the Dunyain! Not all of the legion can have been yoked, or they’d crumple to the ground and lie there till they die of dehydration (btw, have you seen the Radiohead music video where there’s a guy on the ground and he just won’t move? Then at the end he whispers something to the crowd around him. And in the next cutaway, they are all on the ground…)
That said, we’re talking about ‘values’ as being made up, like unicorns. The idea is to think about a zone where values simply don’t exist. So, as much as you could say we’re valueless now, we’re at least running on something that has had billions of years of field testing. A posthuman would have a mind running on… nihilism. Synapses skittering across endless black.
I guess to argue against that, I wonder if logic exists, even at a roughly parallel-to-physics level. If logic exists, and values are expressions of logic (even if badly programmed, often blue-screening/rationalising logic programs), then values exist just as much (or at least the core programming language values are written in exists, plus contact with the Darwinian bottleneck). So one of my concerns is whether logic actually exists (i.e., exists as a principle, as much as a fulcrum and lever exist as a principle).
” Well, I suggested upthread: how far can you go just seeing everything as pixels, without some sort of Darwinian bottleneck – i.e., only those who don’t die will continue on? That means seeing more than pixels. But apart from that, you’ve got a sort of circular logic there: that intent is needed for intentfully directed activity. The storms on Saturn are pretty complex, after all. “
Well, it’s dangerously close to circular, but I’m using “values” in the most generic sense possible (not “moral values” or even “grammar rules”), a sense in which a bee can be claimed to have “values”.
As to storms on Saturn, well… the idea of an intelligent/intentional storm does seem far-fetched (try as I might, I don’t see how a storm would implement an internal state of the kind that is directed at objects)… but perhaps we’re just being chauvinists. The only reason we’re seriously discussing “intentional” machines is that you can buy one for less than a kilobuck… Perhaps there can be an “intentional” storm and we just haven’t encountered one / “pissed it off” enough for it to demonstrate unusual properties ;), eh? (Anyway, I love the mental image irrespective of plausibility. An “intentional”, self-sustaining super-hurricane would totally kick ass 😀 )
“(btw, have you seen the Radiohead music video where there’s a guy on the ground and he just won’t move? Then at the end he whispers something to the crowd around him. And in the next cutaway, they are all on the ground…)”
Nope. Seems cool, will find.
“That said, we’re talking about ‘values’ as being made up, like unicorns. The idea is to think about a zone where values simply don’t exist. So, as much as you could say we’re valueless now, we’re at least running on something that has had billions of years of field testing. A posthuman would have a mind running on… nihilism. Synapses skittering across endless black.”
Well… that throws us back to the Darwinian bottleneck. Such a posthuman will have all the “smarts” and “behaviors” of a giant slab of cultured neurons sitting in a black box, randomly firing at one another…
Kind of like a device with absolutely no software and no firmware.
There’s a word for that state.
The word is “bricked” 😀
” I guess to argue against that, I wonder if logic exists, even at a roughly parallel-to-physics level. If logic exists, and values are expressions of logic (even if badly programmed, often blue-screening/rationalising logic programs), then values exist just as much (or at least the core programming language values are written in exists, plus contact with the Darwinian bottleneck). So one of my concerns is whether logic actually exists (i.e., exists as a principle, as much as a fulcrum and lever exist as a principle). “
Well, that’s a mind-bender, he he.
I’d say that logic at the very least exists in the same sense numbers exist. Not as a thing, but in-between them 😉
Perhaps “valueless” doesn’t describe it terribly well. But what happens when values rest upon each other (like a house of cards) and you pull one out? NOW what happens when what is left of the house of cards determines which value gets pulled out next?
What happens when the cards are pulled out or stuck in, over and over?
Do values exist when today you are helping the homeless, and literally tomorrow (after another mental edit, prompted by today’s values) you are filling your freezer with them? Does that seem like a place where values exist? Or would you describe it as valueless? (I know it’s an extreme example – though actually, by pure happenstance as to how the house of cards falls and what further edits this prompts, it could potentially not seem extreme at all.)
Maybe if you argue you can only perform one edit every fifty years or so, then values hold for some amount of time. Though that’s probably still tons faster than we’ve historically dealt with change at all, let alone one change per fifty years per millions of people, with all those changes not being the same.
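Purely for illustration, the house-of-cards worry fits in a toy simulation (the value names, weights and edit rule are all made up): since the current value set chooses the next edit, two runs that differ only by accident drift apart.

```python
import random

# Iterated value editing: the remaining "cards" (current weights)
# pick which value gets edited next, so histories compound chance.
def run(seed, steps=10):
    rng = random.Random(seed)
    values = {"altruism": 1.0, "curiosity": 1.0, "thrift": 1.0}
    edits = []
    for _ in range(steps):
        # the edit target is drawn in proportion to the *current* weights
        target = rng.choices(list(values), weights=list(values.values()))[0]
        values[target] = max(0.1, values[target] + rng.uniform(-1.0, 1.0))
        edits.append(target)
    return edits

print(run(seed=1))  # same rules, different accidents...
print(run(seed=2))  # ...entirely different edit histories
```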
” Perhaps “valueless” doesn’t describe it terribly well. But what happens when values rest upon each other (like a house of cards) and you pull one out? NOW what happens when what is left of the house of cards determines which value gets pulled out next?
What happens when the cards are pulled out or stuck in, over and over?
Do values exist when today you are helping the homeless, and literally tomorrow (after another mental edit, prompted by today’s values) you are filling your freezer with them? “
I’d say that those weird… um… posthuman creatures would still have, at every given moment, values; they just happen to mutate remarkably fast.
Thus, values would still exist.
Also, if “value editing” indeed turns out to be highly unpredictable in two “cycles” or less (kind of like a collapsing card house is unpredictable in one cycle 😉 ), then it is quite likely to become a relatively unpopular thing, since few people will appreciate the very real possibility of suddenly becoming my willing sex slaves entirely due to the probabilistic nature of “mental editing” 😉
Which is a kind of weak but likely effective value-preservation “value” (in the most general sense) that will remain in effect until technology improves to the point of allowing at least one well-modeled, predictable edit, possibly more.
” I’d say that those weird… um… posthuman creatures would still have, at every given moment, values; they just happen to mutate remarkably fast.
Thus, values would still exist. “
To me, that’s like saying that if you put a microphone near the speaker it’s plugged into, the screeching that results is ‘a song’ and ‘song still exists’. As I said before, I think you could easily have the equivalent of an epileptic seizure, but with actions far more sophisticated than spasms. The hands would grip and operate (the way one touch-types without really thinking) – but that doesn’t mean values are behind it (unless you want to count the hand and its associated automatic brain functions as a separate ‘robot’; then maybe you could say the hand has values, I guess. But this becomes a disincorporated person). It’s entirely possible the person becomes free of the Darwinian bottleneck – a neuronaut, à la Neuropath. That is post-value, even if their limbs operate complicated machinery (again, maybe I’d say the hands/limbs and their automatic actions, if isolated as separate robots, still have values. But again, that seems a disincorporated individual).
” Also, if “value editing” indeed turns out to be highly unpredictable in two “cycles” or less (kind of like a collapsing card house is unpredictable in one cycle 😉 ), then it is quite likely to become a relatively unpopular thing, since few people will appreciate the very real possibility of suddenly becoming my willing sex slaves entirely due to the probabilistic nature of “mental editing” 😉 “
You mean like casinos aren’t really popular, since everyone knows the house always wins?
” one well-modeled, predictable edit “
Hehehe. ‘Well’. Until one gets out of the habit of attributing judgements of value to value editing, it’s always gonna be a screaming feedback loop. Karaoke in hell*.
* Yeah, I always have to end with some dramatic flair. Scott’s a bad influence.
” To me, that’s like saying that if you put a microphone near the speaker it’s plugged into, the screeching that results is ‘a song’ and ‘song still exists’. “
Well, more like “sound” still exists. And it is factually accurate, if somewhat uncannily hollow.
An interesting question to ponder in this analogy is whether each successive value in the hypothetical “value boil” will be logically relatable to the prior one, or whether the system will at some point become unbound from prior value systems, determined by something else instead (the way microphone feedback sound is determined by everything but the parameters of the sound that started the feedback loop, IIRC).
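The feedback half of the analogy can be made precise with a two-line toy model (gain and clipping values are arbitrary): whatever tiny sound seeds the loop, the steady state is fixed by the loop itself, not by the seed.

```python
# Mic-speaker loop as an iterated map: x[n+1] = clip(gain * x[n]).
def feedback(seed, gain=1.5, steps=40):
    x = seed
    for _ in range(steps):
        x = max(-1.0, min(1.0, gain * x))  # amplifier with hard clipping
    return x

print(feedback(0.001))  # -> 1.0
print(feedback(0.7))    # -> 1.0: same endpoint, utterly different seed
```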
” As I said before, I think you could easily have the equivalent of an epileptic seizure, but with actions far more sophisticated than spasms. The hands would grip and operate (the way one touch-types without really thinking) – but that doesn’t mean values are behind it (unless you want to count the hand and its associated automatic brain functions as a separate ‘robot’; then maybe you could say the hand has values, I guess. But this becomes a disincorporated person). “
This depends heavily upon the technological details involved, but I am not sure I get you at the “disincorporated person” bit. First, if “auto” functions are in motor control, they are hardly “disincorporated” in any meaningful way, and I am not convinced that these systems will necessarily have “personhood” (whatever that is), despite probably being intentional in the same sense ATMs (see below) and Roombas are intentional.
” You mean like casinos aren’t really popular, since everyone knows the house always wins? “
Casinos aren’t that popular, and, besides, they merely trade a good show and a small chance of victory for a manageable monetary fee. And casinos don’t have a “get rewired into 01’s sex slave” probability 😉
“Hehehe. ‘Well’. Until one gets out of the habit of attributing judgements of value to value editing, it’s always gonna be a screaming feedback loop. Karaoke in hell*.
* Yeah, I always have to end with some dramatic flair. Scott’s a bad influence.”
If value editing is a service, it by necessity involves value judgements as to the goals and utility of the service and its operational parameters.
” Well, more like “sound” still exists. And it is factually accurate, if somewhat uncannily hollow. “
As in, it’s factually accurate that sound will still exist?
Well, the universe will continue to exist, I’ll totally agree with that.
” or whether the system will at some point become unbound from prior value systems “
I’d say on the first edit. I mean, you are editing the value system – there will be no prior value system to exist afterward. If you wipe your hard drive, there is no connection between the new stuff you put on it and the old (ugh – if you want to try to fit ‘unformat’ into the (not so very much an) analogy, okay, pitch it to me). That is, if the values are like a house of cards, interdependent on each other. If there are some values that pretty much stay the same even if other values are subtracted AND those particular values are not edited, then there could be a prior value system.
” First, if “auto” functions are in motor control, they are hardly “disincorporated” in any meaningful way, and I am not convinced that these systems will necessarily have “personhood” (whatever that is), despite probably being intentional in the same sense ATMs (see below) and Roombas are intentional. “
I’m not sure what you are saying. I wasn’t trying to say the individual systems would have a ‘personhood’ (more the disincorporated lack of it). I was referring more to your Roomba example (are you being paid by the makers of Roomba, btw?? 😉 Or can you just not get enough of Roombas?)
” If value editing is a service, it by necessity involves value judgements as to the goals and utility of the service and its operational parameters. “
Shift the emphasis to my bold and YES, that’s the problem, you’re right!
It’s like having a ruler to determine if another ruler is long enough – then you trim down that first ruler. Except that’s actually the one you use to see if, after the trim, your ruler is long enough. Uncannily, yes it is! That’s how blind we are – there is nothing that will see a discrepancy. How do you evaluate your new values with your new values? Download your current self onto a computer, get edited, then have it evaluate your edited self?
Okay, how about this:
Hypothesis (inspired by Third’s post above):
The likelihood of some obnoxious, elusive mental concept being “completely hallucinatory” (that is, in the same way demons and ghosts are) is inversely proportional to the strength of the functional account one can construct for it, and quite likely also to the rate at which phenomena that fit the same definition (even if barely so) tend to arise in fully artificial, “completely understood” systems.
Intentionality and at least some kind of normativity seem to be present in all systems capable of planning and decision-making, irrespective of substrate (though such normativity and intentionality aren’t quite the same as those humans usually ascribe to themselves, which is an interesting can of worms). However, normativity remains completely arbitrary in its specifics, thus making for rather fragile “moral/normative” arguments (WHY does a Roomba “hunt” “dust”? Well, because we made it that way… WHY do you “like” to eat caviar? Well, because an almost totally random combination of genetic and environmental factors has made me that way…)
Human self-aware subjective experience (as opposed to the weird and somewhat scholastic subjective self-awareness of a self-monitoring and self-reporting HDD, for instance 🙂 ) and “common experience” seem to be less future-proof than “very general” concepts like “intentionality”.
It is not obvious what the functional account for “subjective human experience” really is (derealization doesn’t completely disable you, and is sometimes reported as a pleasant experience), and “neurocommons”, while a sound idea in the most general functional-account sense (they do seem like something that has a de facto function, and an important one at that), appear to have trouble accounting for the weird-yet-socially-integrated variants of “heterodox” human cognition.
Replace “derealization” with “depersonalization”, though my Cliffs Notes indicate that derealization does not in itself completely disable the individual either, and is also sometimes sought out by certain “chemical adventurers” as pleasurable…
I haven’t been following this conversation, so I’m not sure about the context in which you brought up derealization and depersonalization, but I wanted to mention that I did some research on both — especially the latter — for my fantasy novel. Earlier drafts of the opening chapters had much more of the ‘depersonalized’ viewpoint, but it’s still there.
I like being able to point out to people that some of the weirdest shit in my fantasy novel is actually based on real life! Basically, advanced sorcery in my world leads to a state of both derealization and depersonalization. Of course, they’re dressed up in mystical and metaphysical clothing, but strip all that away and the core of it is simply an actual mode of experience.
It’s some spooky ass shit!
delavagus, sounds like a pretty cool fantasy setting. Published already? 😀
When you play chess with a smart person or system you do not know, you assume she, he or it will play good moves, and therefore you try to play something that will stand against anything they might throw at you.
So your model of the other person is “smart and goal directed”.
If you hit someone below the knee with a small mallet, you predict that his leg will jerk.
So your model of the other person is mechanistic.
So we have two ways of modelling systems and they do not exclude each other. (But in most instances, one is more practical!)
One way of defining values (01 agrees as far as I understand) is to say that they are the goals of goal directed systems.
One way of defining rationality is to say that it is the ability to achieve goals.
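A toy formalisation of those two definitions (my own sketch, nothing more – the thermostat, its goal and the step rule are all invented): a “value” is just a goal predicate, and “rationality” is measured by whether the system actually reaches its goals.

```python
# 'Values' as the goals of a goal-directed system; 'rationality'
# as the ability to achieve them.
def rational(step, state, goal, max_steps=100):
    for _ in range(max_steps):
        if goal(state):
            return True              # goal reached
        state = step(state)
    return False

goal = lambda temp: temp == 20                       # the system's sole 'value'
step = lambda temp: temp + (1 if temp < 20 else -1)  # always move toward 20

print(rational(step, 12, goal))  # -> True: goal-directed modelling fits
```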
So my prediction is that (probably super-rational) posthumans will “have” values in a more clear-cut way than we do, because modelling them as goal-directed will make more sense, since they will not suffer from akrasia, intense stupidity and the like. (If I become posthuman, procrastination is certainly the first thing I’ll get rid of. ;))
In other words, it will be easier to model posthumans as having values than to model them mechanistically. Maybe the posthumans will model us humans mechanistically, though.
In what ways do your definitions differ from mine?
“In what ways do your definitions differ from mine?”
Well, I view intelligence as a fundamentally “mechanistic” property that can arise if matter is arranged in a sufficiently complex and computation-friendly way.
I don’t see a particular reason to establish a clear and strict boundary between the “mechanistic” model and the “intelligent” model, given that we have the apparent ability to construct purely mechanistic systems that exhibit all the traits required to be considered “intelligent”.
Posthumans will probably model us as “intelligent”, but only in the same sense in which I consider various robots and self-monitoring/reporting hardware “intelligent”.
The subtle terminological change that I propose is that we use “modelling as intelligent” (~goal-directed) and “modelling as mechanistic” as predicates of our ways of thinking about a system and not as properties of the system itself.
This terminology might enable us to think more clearly about those concepts.
For example, we see that there is no contradiction in considering a system both mechanistically and as having goals. And we avoid the task of defining goal-directedness intrinsically in such a way that toasters partake of it.
Now, intentionality is the property of being about something, of representing something. So goal-directedness is not comparable to intentionality. I’d call something intentional if it represents something or refers to something.
There certainly is a normativity associated with languages, in the sense that a concept, for example, should be used in some cases and not others. But it is a different kind of normativity than moral normativity. The prohibition of lies is a relationship between the two, but a tenuous one.
The values that a person holds dear are expressions of his or her goal-directedness. Morality is then a kind of social aggregate of those individual values.
I don’t even understand what consciousness ought to be, but we do not seem to need it to understand the rest…
Again, the question isn’t whether taking some kind of intentional stance is something we think/feel that we do, but whether it’s something we ACTUALLY do. We think we ‘will’ things all the time, but do we? The more science learns, the less this seems to be the case. The question is whether this kind of ‘deception’ obtains for all intentional phenomena. Should we say, ‘the question isn’t whether ‘willing’ is a property of the system itself, but simply a predicate of our way of thinking about certain systems’?
This merely changes the question from ‘What is going on?’ to ‘What do we think is going on?’ Surely it’s the former that science is interested in, isn’t it?
‘This merely changes the question from ‘What is going on?’ to ‘What do we think is going on?’ Surely it’s the former that science is interested in, isn’t it?’
I think that our common sense is mistaking ways of thinking about something for properties of the thing, so that the question “What is going on?” loses its meaning.
There is no fact of the matter answering the question “Is ‘willing’ a property of a system?”. To attribute will to a person “works” in practice. To analyse away the “Will” into biochemical mechanisms “works” too. Sometimes one is more useful, sometimes the other. When we say that a being has a “Will”, we project a property of our model onto the thing.
When I model a system as being goal-directed, I do not do it because I think that the system is goal-directed. I do it because I think that the goal-directed agent is a good model for my system.
In the past, before AI or maybe computers, things were simple: humans were “intentional” and not mechanistically analysable; the rest wasn’t “intentional” but was mechanistically analysable. So we didn’t need to make the distinction between the kinds of systems and the ways of thinking about them. Now the frontiers are blurring, and we need to refine our definitions.
So ‘facts of the matter’ aren’t what science is interested in?
Of course, I agree that science is usually interested in “facts of the matter”. But the question “Do we want to use one model or another to understand a system?” is not a scientific one. (Although science can help tell us which approach will be better for specific aims.)
Since there are systems (a sophisticated toaster) that can be looked at successfully both as a system with a will (“It stopped because it detected that the bread was starting to burn.”) and mechanistically (“It stopped because its photoreceptor stopped sending some electrical impulse…”), the question “Does a system have a will?” does not really have meaning.
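Here is a toy rendering of that toaster (the class, thresholds and wording are mine, purely illustrative): one object, and both descriptions apply without contradiction.

```python
class Toaster:
    def __init__(self):
        self.photo_v = 5.0              # photoreceptor output voltage

    def sense(self, brownness):
        self.photo_v = 5.0 - brownness  # darker toast -> lower voltage

    def heater_on(self):
        return self.photo_v > 2.0       # hardwired cutoff threshold

t = Toaster()
t.sense(brownness=3.5)

# Mechanistic stance: voltages and thresholds.
print("voltage:", t.photo_v, "-> heater on:", t.heater_on())

# Intentional stance: the very same fact, re-described.
print("it 'noticed' the toast was done:", not t.heater_on())
```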
But this is precisely my point. The theoretical move that you’re making, the one which relativizes the ‘value’ of different ‘models’ against pragmatic interests is philosophical. The question here isn’t ‘What will philosophy tell us?’ but ‘What will science tell us?’ If it were the first question, then your pragmatic approach is merely the beginning of the debate, not the end of one: it only seems that way because of your exclusive epistemic commitment to it. For a skeptical naturalist like myself, you’re waxing metaphysical through and through.
For the longest time looking at the cosmos as a system with the earth at the centre suited us just fine: in fact, it served the interests of some powerful institutions – as well as human conceit more generally. Science told us the fact of the matter, and our interests were forced to adapt. I’m not sure how this situation is different, aside from the fact that so bloody many ‘interests’ (which we will learn the fact of the matter of as well) are bound up in the ‘intentional stance.’
” Again, the question isn’t whether taking some kind of intentional stance is something we think/feel that we do, but whether it’s something we ACTUALLY do. We think we ‘will’ things all the time, but do we? The more science learns, the less this seems to be the case. “
Doesn’t this echo, to some extent, your “how do we know we know” mantra? How do we know we will? Now, perhaps you’re thinking of some different experiments than those of which I’ve read, but science doesn’t appear to have contradicted the concept of free will so much as the concept of conscious free will. Consider the experiment that showed there were brain patterns indicating motion before the test subject decided to move his finger: does this actually indicate a lack of free will, or rather that the free-will mechanism resides somewhere other than the conscious mind?
If it cannot be successfully demonstrated that we are automatons responding mechanically and predictably to various inputs, then I suggest what science is actually learning here is that free will is seated in something other than the mind, perhaps even, as crazy as it may sound, in what has been historically termed “the soul”.
After all, it is no problem for magical thinkers to accept that there is an additional component to the human beyond body and mind.
@Vox
“If it cannot be successfully demonstrated that we are automatons responding mechanically and predictably to various inputs, then I suggest what science is actually learning here is that free will is seated in something other than the mind”
A physical (= “non-supernatural”) system might well be unpredictable, for example, if it is chaotic or if it involves strong effects from the apparently truly stochastic phenomena of quantum mechanics.
I personally reject free-will, not because of any positive thesis I hold about the mind, but because I think it is an incoherent or at least unclear concept.
My rejection of free-will does not entail rejecting the concept of Will, which is not always well defined but does seem rather useful and free of problems in practice.
What cognitive task do you attribute to the soul? Do you agree that those cognitive faculties must not be altered by brain lesions or drugs?
@R. S. Bakker
I certainly plead guilty to arguing at a purely philosophical level. I also agree with you that science has much to tell us about the mind. However, you ask science to decide a question which isn’t scientific. There is nothing wrong with that; science can sometimes solve philosophical problems. But I think that in this case, science cannot decide between the models because they are predictively incomparable. Neuroscience is a detailed model which is not limited in its application but which isn’t usable in practice. The agent-with-a-will model is extremely practical, for example, in economics, even if it sometimes gives slightly incorrect predictions. As a rule of thumb, the more an agent is smart, rational and complex, the more useful the second model becomes.
You asked me to give the difference from the case of geocentrism. The difference is that there are experiments that can distinguish between the geocentric model and the heliocentric one (Foucault’s pendulum). I can’t imagine any experiment that would convince us that we are not partly rational beings – only that we are less rational than we thought before. We all agree that we are frequently rational.
Dennett calls the intentional stance a level of abstraction, it is in this sense that I was speaking of a model. (I hope this didn’t cause too much confusion.) Models working at different levels of abstraction are not in direct competition. (At least not as much…)
Maybe we will agree more easily by contrasting our positions on the intentional stance with the question of free will. I think that there is considerable societal incentive to preserve the concept of free will (Our justice systems are often based on it. Many think that it is essential to the possibility of morality. I don’t.) while it is an incoherent concept. I do think that if we become more rational and if we understand more of the working of the mind, the idea of free-will will slowly disappear. In the meantime, the intentional stance will be as useful as ever. (By the way, I don’t think that values will disappear either, but I won’t repeat the position of 01, with which I agree heartily.)
But you’re taking a lot on faith, you do see that? Personally, not only can I clearly imagine experimental findings that would unravel what we call ‘rationality,’ I fear this is precisely what cognitive neuroscience will eventually do. If it turns out to be the case that our concept RATIONALITY is the product of some kind of ‘awareness bottleneck’ then it will clearly be the case that what we thought was rationality is, like ‘free will,’ actually something quite different. There’s a good chance that intentional phenomena will rise and fall together, that the peculiarities they share that render them so resistant to naturalization are actually symptoms of the same structural inadequacies that afflict consciousness as a whole. So long as the neurophysiology of consciousness remains unchanged, we likely will be forced to take the ‘intentional stance’ to ‘understand’ certain phenomena (realizing how parochial our concept of understanding has hitherto been), but like the ‘design stance’ in evolution, we do so with the understanding that it is inadequate in a number of respects – that it produces many deceptive inferential implications. Once we begin redesigning our neurophysiology, however, all bets are off. Thus the argument of the post.
You have to at least admit the possibility that you are running afoul what Dennett calls the Philosopher’s Syndrome: mistaking the failure of imagination for necessity. In the meantime, to give you a more visceral sense of the magnitude of the problem we face, try answering the following question (from “The Last Magic Show”).
Just-So Cognition Query: Given that conscious cognition, as a matter of empirical fact, only accesses a small fraction of the greater brain’s cognitive activity, how do you know it provides the information needed to make accurate second order claims about human cognition?
If you can’t answer this question, then you’re simply guessing that ‘rationality’ and ‘value’ and so on are what you think they are, making a bet that cognitive neuroscience, despite undermining so many other apparent verities of conscious awareness, will somehow pull through when it comes to these particular intentional phenomena.
The pessimistic induction is on my side, here, I’m afraid. Intuition, or what’s left of it, is on yours.
@ tickli
” The subtle terminological change that I propose is that we use “modelling as intelligent” (~goal-directed) and “modelling as mechanistic” as predicates of our ways of thinking about a system and not as properties of the system itself.
This terminology might enable us to think more clearly about those concepts.
For example, we see that there is no contradiction in considering a system both mechanistically and as having goals. And we avoid the task of defining goal-directedness intrinsically in such a way that toasters partake of it.”
While this is an interesting theoretical proposal and a worthy mental experiment, I can’t help feeling that this sort of reductionism is weirdly eliminative.
I don’t see how the fact that the processes involved in so-called intelligence are “completely mechanistic” makes intelligence a less valid concept, or necessitates such a, hm… eliminative juxtaposition between “mechanistic” and “intelligent”.
It might be an abstraction-level issue, but it appears to me that any model that does not explicitly account for new properties arising from alterations in the arrangement of (purely “mechanistic”) matter is somewhat problematic.
“Intentionality”/“goal-directedness” is a bit like “sharpness” or “porosity” in that regard – a feature that arises from (un-spiritual and ghost-free) matter being arranged in a certain manner, and which, while enabling new interactions, is also quite vanilla.
Also, I have no problem with “toasters” partaking in goal-directed behavior. A sufficiently advanced toaster is entirely conceivable and can be built with current tech.
I find it revealing and curious that completely “ghost-free” automatic systems can already be demonstrated to have all the properties of an “intentional” system (defined as “a system that has internal states and can direct them at real or imaginary objects”) and even some kind of “aboutness” framework (they can classify different types of input into pragmatically relevant categories).
Thus, I see absolutely no problem with a definition of “intentionality” permissive enough to “let the bots into the intentional club”.
In fact, I prefer those very definitions since they render intentionality decidedly un-magical and mundane without completely eliminating it.
Consider this:
The only part where the Oxford dictionary definition (“intentionality: the quality of mental states which consists in their being directed towards some object or state of affairs”) seems to invoke anything remarkable is when it invokes pesky “mental states”. Indeed, we could probably dedicate 100+ comments to discussing various positions on WTF exactly “mental” states are…
However, if we replace “mental state” with “internal state”, then everything works out fine:
mental states clearly fit the definition of an “internal state” (“mental states” can rather noncontroversially be considered a subset of “internal states”, IIRC), but an “internal state” is clearly not specific to humans (robots, computers, toasters – all quite provably have internal states, human-designed ones).
Anything that can have an internal state (like, say, a robot with sensors, motors, and memory that stores a map of an area, plus various software to adequately use and update it) and can direct it at some object (use its software to confirm a connection between map and observed location, then use the internal-statey map to navigate the actual surroundings) can be considered intentional.
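A minimal sketch of that robot (the two-room map and all names are invented): an internal state – the stored map – gets directed at an object, the robot’s actual location, and then drives behavior.

```python
# Internal state (a stored map) directed at an object (a location).
WORLD = {"dock": {"east": "hall"}, "hall": {"west": "dock"}}

class Robot:
    def __init__(self):
        self.map = dict(WORLD)      # internal state: a model of the area
        self.believed_pos = "dock"  # internal state directed at a place

    def sense(self, observed_pos):
        # confirm the connection between the map and the observed location
        if observed_pos in self.map:
            self.believed_pos = observed_pos

    def navigate(self, target):
        # use the internal-statey map to act on the actual surroundings
        for direction, room in self.map[self.believed_pos].items():
            if room == target:
                return direction
        return None

r = Robot()
r.sense("hall")
print(r.navigate("dock"))  # -> 'west': map-state aimed at a real place
```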
What exactly is the problem with that? That humans might eventually be proven to have exactly the same degree of “intentionality”/“aboutness”/“goal-directedness” as Roombas?
That would be remarkably… unremarkable. Paint me completely un-terrified by such a potential revelation 🙂
“Just-So Cognition Query: Given that conscious cognition, as a matter of empirical fact, only accesses a small fraction of the greater brain’s cognitive activity, how do you know it provides the information needed to make accurate second order claims about human cognition?”
Well, I don’t know – that’s why I prefer to replace the intentional human (exhaustive knowledge of whose brain operation is not yet available to empirical inquiry) with examples of fully artificial, human-designed systems (thus, exhaustive knowledge of their operation is decidedly available) that nonetheless satisfy the formal criteria of “having some sort of internal state” and “directing said internal state at objects” (and thus are “intentional” too).
Of course, internal states of robots != internal states of humans (aka mental states), but that only suggests that humans might have a somewhat different kink to their “intentionality” operation.
So VD is using the ‘Wait, there’s still some black box over there, in the unconscious! It’s all unknown in that box – the soul’s in that box!’ move? I mean, even if a scientific apparatus horrifically controlled someone’s mind – i.e., an operator at a keyboard types a sentence, and the person says it, and it totally feels to them as if they wanted to – bang, the next ‘black box’ will be ‘oh, see, that machine simply cut off the soul’s connection to the body/brain!’. Never mind how the drawing of this ‘soul in the unconscious’ conclusion will no doubt end up as yet more compelling fodder toward worship of the ‘elephant’ and dismissal of the rider. Or then again, that’s been the situation for quite some time.
@ 01
Apparently, the point at which we disagree is that you want properties like intentionality or rationality and I want to eliminate them. Certainly, I am not an eliminativist in terms of discourses, since I hold the intentional stance to be useful and not eliminable. (I would argue even more strongly for the usefulness of a “rational stance”.)
I am a reductionist in so far as I think that, in principle, predictions obtained using the goal-directed model can also be obtained using a mechanistic model. (It might however be impossible in practice.)
I don’t see taking intentionality as a property as impossible. I simply consider that if we do, we will arrive at a definition which is a bit unnatural. For example, I could say that a system is intentional if taking the intentional stance works well when thinking about this system. (This works even better with rationality.) This is a kind of trick to convert a stance into a property.
Let me try to argue against taking intentionality as a property.
1) Properties should either apply or not, “objectively”. I think that in many cases, using the intentional stance or not is a matter of practicality. If we know how a system works internally, we prefer using a mechanistic model; otherwise, we will use the intentional stance or the rational stance.
2) The property of rationality (Okay, rationality is not intentionality, but allow me to momentarily ignore the difference…) is really “wild” in the sense that its manifestations depend totally on the goals of the rational agent. (Maybe taking rationality as a relational property (between the agent and its goals) is a nice middle ground?)
3) It is often useful to take the intentional stance toward systems that do not understand anything about representations. For example, an ATM “tells me” how much I have in my bank account. Its intentionality is derivative of mine or of the intentionality of its programmers. So you are in a difficult position if you need to answer the question “Is the ATM intentional?”. The question “Do I take an intentional stance toward the ATM?” gets a clear yes.
@ R. S. Bakker
“There’s a good chance that intentional phenomena will rise and fall together, that the peculiarities they share that render them so resistant to naturalization …”
I do not think that intentionality is resistant to naturalization. I think it will stand strong after having been completely explained in reductionist terms. My position on this point seem to be the same as 01’s.
“The pessimistic induction is on my side, here, I’m afraid. Intuition, or what’s left of it, is on yours.”
You see a bunch of ideas that we hold about ourselves, like rationality, intentionality, consciousness, free will, moral values or the power of introspection. Some are in difficult positions and have been shown to be at least partially wrong. You induce that the other ideas will fall for the same reasons. 01 and I contend that we can give definitions of, and support for, some of those ideas that do not depend on introspection or tradition. For example, we see rationality and moral values as manifestations of goal-directedness, a concept that should be quite unproblematic, since we already apply it to machines and can explain it in a clear and formalisable way. 01 astutely compared intentionality with information retrieval; if we can explain it in those kinds of terms, shouldn’t it be safe from the advances of neuroscience? Even posthumans will need to retrieve information.
“… how do you know it provides the information needed to make accurate second order claims about human cognition?”
I don’t have a problem with the assertions that we actually follow other values than the ones we think we follow or that we are less rational than we think.
I think second order claims based on introspection are extremely doubtful. However, there are other ways of supporting second order claims about human cognition, for example psychological experiments, reasoning by analogy with computers, indirect arguments, arguments based on evolution and so on.
Again, we seem to be talking at cross-purposes. I just don’t see how your argument amounts to much more than foot-stomping: “I see it, so there!” I’ve never questioned whether you see it or not, just whether what you think you see is real, or a kind of distortion of what your brain is actually doing. Again, I don’t see how anything you or 01 have adduced is relevant to the claim that profound, systematic distortion of the kind I’m suggesting is not possible. (Note the modesty of my claim: all I need is the possibility.)
So, just to be clear, you do see the possibility of what I’m saying? If not, then what warrant/evidence do you have in support of this?
Note that I haven’t once used the term ‘introspection.’ In part, because I no longer have a clear sense of what it means, but also because I don’t need to. I’m just asking what information is available to conscious cognition.
Otherwise, what psychological experiments? What arguments from computer analogy? What evopsych abductions?
Note that my argument isn’t that the brain doesn’t cognize – it most certainly does – only that there’s a good chance that it doesn’t cognize the way we generally think we do. So successful instances of cognition provide no proof one way or another (just as throwing your arm up and exclaiming ‘I just chose to do that!’ is no argument for the reality of free will). What you need, it seems to me, is something like the Penrose/Lucas argument, only without all the holes. In the absence of such arguments or evidence, then we really have no choice but to wait for what cognitive neuroscience has to tell us. Personally, I think the Just-So Cognition Query is devastating…
So, given the split between our ‘feeling of willing’ and what we’ve learned scientifically, the possibility that such a split exists for all intentional phenomena is a very real one. So when it comes to the prediction of behaviour, it is possible that what our brain does and our ‘feeling of purposiveness’ are quite different. Our brain regularly successfully predicts behaviour – there’s no doubt about this. The question is whether what we call the ‘attribution of purpose’ is instrumental to this, or whether it is, like willing, simply an artifact of the limits of what information is consciously available to us. Since our sense of purposiveness is systematically related to the greater processes of the brain, it will seem (like the will) to be thoroughly effective. Since information regarding the activity of the greater brain is in no way consciously available to us, not even as ‘lacking,’ our sense of purposiveness will seem (once again, like the will) to be sufficient – to be everything we need.
Unless I’m missing something here, there’s no reason to presume that the fate of the ‘will’ isn’t the possible fate of all intentional phenomena. The question is open, pending a more mature neuroscience.
@R. S. Bakker
I also think that we are talking at cross-purposes. Let me try to explain how I understand your position and compare it with mine, so you can correct me and we can understand what difference in assumptions we are making.
“Again, I don’t see how anything you or 01 have adduced is relevant to the claim that profound, systematic distortion of the kind I’m suggesting is not possible. ”
You suggest that we perceive, for example, intentional phenomena, and that they are an illusion. The reason for which they are an illusion is that there might be, in some sense, nothing behind the perceptions that justifies them, to which they refer.
This idea – that an appearance is true when it refers to something – is how I understand your “What you need, it seems to me, is something like the Penrose/Lucas argument…”
Is that a fair summary of the “metaphysical” background of your argument? (I carefully put “metaphysical” in scare quotes, because I do not mean that you need metaphysical assumptions, just that you work in a specific frame.)
Here is how I approach the situation, and it is very different. I am a pragmatist, and I do not think the reality/appearance dichotomy applies in the cases of present interest. (The same goes for the accompanying notion of illusion, which is appearance without reality behind it.) (Of course, I accept the truth/falsehood distinction.) For me, intentionality is not a phenomenon. I think that there is an intentional stance, but it does not need a correlative “real intentionality” to be valid. The intentional stance can fail, for example, when I try to interpret as discourse the gibberish of “http://www.elsewhere.org/pomo/”. On the other hand, for the intentional stance to succeed, it is enough that it “meshes” harmoniously with the rest of my practice. This implies, in particular, some predictive success. (To be accurate, I probably want the success to be necessary in some sense… but this is nitpicking.)
For me, there is nothing more to intentionality than the success of the intentional stance. So in some sense, it is a very “exterior” view of intentionality, where intentionality does not depend on the inner workings of the system exhibiting it.
This view succeeds in becoming scientific if it can be formalised as behaviours, as predictive models. For me, classical (micro)economic theories are often based on the rational stance. Their successes are demonstrations of the usefulness of the stance. Our capacity to use language for communication, and the fact that we can communicate orders to machines using programming languages, are for me a demonstration that there are not too many “holes” in intentionality. I think we can understand the intentional stance scientifically by studying language in a pragmatic way, pushing further what Brandom did in “Making It Explicit”. Formalisability is an insurance against the possibility that part of our thinking remains unconscious.
“Our brain regularly successfully predicts behaviour – there’s no doubt about this. The question is whether what we call the ‘attribution of purpose’ is instrumental to this, or whether it is, like willing, simply an artifact of the limits of what information is consciously available to us.”
Do you mean that the brain of Bob predicts the behaviour of Bob, or the behaviour of someone else? In the first case, isn’t your criticism a mere criticism of introspection? (With which I agreed already.) In the second case, do you deny that hypotheses about the purpose of actions are useful for predicting Bob’s future actions? You say that we might be using more efficient unconscious mechanisms to predict his actions, and I agree that it might often be the case, but this does not contradict the usefulness of the attribution of purpose. An ATM might deduce from Bob hitting it with a car that he intends to steal money from it, and therefore activate an alarm in the police station. Couldn’t we say that a sophisticated ATM attributed a goal to Bob and used a simple example of the rational stance? Doesn’t that show that attributing goals is useful in practice? (It shouldn’t be hard to come by a more complex and convincing example.)
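For what it’s worth, that hypothetical fits in a few lines of toy code (events, goals and responses are all invented): the machine attributes a purpose to Bob on crude evidence and acts on the attribution.

```python
# A vending-machine-grade 'rational stance': infer a goal, act on it.
def infer_goal(event):
    return "steal_money" if event == "rammed_by_car" else "withdraw_money"

def respond(event):
    return "alert police" if infer_goal(event) == "steal_money" else "serve customer"

print(respond("rammed_by_car"))   # -> 'alert police'
print(respond("card_inserted"))   # -> 'serve customer'
```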
But you do think our ‘sense of willing’ qualifies as an ‘illusion’? Or how about the colour and detail we ‘perceive’ in the margins of visual attention – does this qualify as ‘illusory’?
As a lapsed inferentialist I could go on and on (and on) about the problems I have with pragmatism. (What are the criteria for ‘meshes harmoniously with my practice’? Who gets to decide ‘mesh,’ ‘harmonious,’ let alone what counts as a ‘practice’? Community? Which community? Scorekeepers ‘here and now’? How about past scorekeepers? Future scorekeepers? Cherrypicking scorekeepers? Cognitively challenged scorekeepers?) But the question is really one of whether you think that intentional phenomena as we understand them are legitimate objects of scientific study or something supernatural. If science reveals that intentionality is the product of extreme informatic parochialism, then I don’t see how any mere philosophy can argue otherwise.
Like I said, the ‘apparent effectiveness’ of different ‘stances’ is neither here nor there given the case I’m making. Otherwise, since reference is another ‘arch-intentional’ phenomenon, it’s up to science to determine whether it exists. No?
I don’t blame you if you find my approach odd. I’m a deontic scorekeeping community of one!
“But you do think our ‘sense of willing’ qualifies as an ‘illusion’? Or how about the colour and detail we ‘perceive’ in the margins of visual attention – does this qualify as ‘illusory’?”
I think that they partly qualify as illusory. Free will is an idea about our sense of “Willing” which is clearly illusory, but I wouldn’t say that the sense of “Willing” is always or completely illusory, because I think that we often do things because we want to. (Neuroscience shows that the feeling of “Willing” comes after the action, but the fact that it is correlated with the action is enough for me to say that it is not wrong. I would guess that people claiming that “the Will” is an illusion are usually confusing “Willing” and the feeling that we “Will”.)
“But the question is really one of whether you think that intentional phenomena as we understand them are legitimate objects of scientific study or something supernatural. If science reveals that intentionality is the product of extreme informatic parochialism, then I don’t see how any mere philosophy can argue otherwise.”
I think that intentionality is a legitimate object of study by science and philosophy. I do not think that intentional phenomena are in any way supernatural, or even “essentially quantum mechanical”. (Both ideas stem, for me, from the same impulse to reduce one mystery to another. The second reduction is still a bit easier on my ears; I don’t believe in the supernatural.)
You speak of “intentional phenomena as we understand them”, and I guess that is the main difference between us: we understand them differently. As I understand them, they are commonplace grammatical-pragmatic (pragmatic in the linguistic sense) phenomena, and there is nothing fundamentally mysterious about them. Maybe you could give me a very specific case of an intentional phenomenon that you suspect is an illusion. This might tighten the discussion.
In a sense, I claim that you are right to think that your conception of intentionality is illusory, since I think that it is an error to see a stance of the subject as a property of the object.
“Otherwise, since reference is another ‘arch-intentional’ phenomena, it’s up to science to determine whether it exists. No?”
For me, the question “Does reference exist?” is absurd, because the concept of existence is not equipped to deal with this kind of metaphysical case.
Maybe I should say that even though I believe in the persistence of intentions, rationality and some kind of values (the last in a modified form), you did convince me of the danger and plausibility of your Semantic Apocalypse. (The transformation of values leaves room enough for terror and nightmares.)
I’ll leave the debate about pragmatism for some other time… (Like after a certain book I am impatiently waiting for appeared… :))
So ‘referring’ isn’t ‘natural’ but it isn’t ‘supernatural’ either? At what point does this kind of semantic wiggling become special pleading? What could be more natural than ‘referring’?
One of the things that makes consciousness research so interesting is the way it reveals the transcendental commitments of apparently deflationary positions like pragmatism. You are literally using a priori philosophical commitments to argue against a certain kind of scientific finding. One of the reasons I’m disinclined to put any faith in this (transcendental philosophical) approach is TI (theoretical incompetence): the fact that we’re geniuses at cooking up these kinds of rationales. When you say intentional phenomena “are commonplace grammatical-pragmatic (pragmatic in the linguistic sense) phenomena and there is nothing fundamentally mysterious about them,” you are almost certainly wrong: they’ve been dividing and bedevilling philosophers for a long, long time – which is the very thing you might expect, IF they were not quite ‘real.’ You are taking a controversial position, as well as policing the most cherished distinction in philosophy: the one that conveniently renders its discursive domain autonomous from science. To me, this is not only very traditional, but very inflationary as well.
I’m not interested in ‘intertheoretic reduction,’ just in what tends to happen whenever science sinks its technical claws and procedural teeth into some cherished human phenomenon (my pessimistic induction). You don’t need to argue the logic of the resulting substitution to recognize it as a substitution. My fear is that science will show us that referring is not what we thought it was, just as willing isn’t what we thought it was. I don’t need formal metaphysical commitments to ‘correspondence’ or the like to make this claim. Indeed, given that intentionality infects so much second-order philosophical explanation, my inclination is to remain agnostic on all these issues. So I’m saying something like:
Science (whatever the fuck it is) could show that intentionality (whatever the fuck it is) is illusory (whatever the fuck that means).
My question to you would be: Why all these commitments? Another way to put this: What were the chances of any pre-Enlightenment 1.0 philosophies surviving the Enlightenment? Given that Enlightenment 2.0 is almost certainly far, far more radical, what are the chances of any pre-Enlightenment 2.0 philosophies surviving Enlightenment 2.0?
Like I keep saying, these are exciting, wide-open times, intellectually speaking, however terrifying the consequences might be.
Re: tickli, intentionality and models
You know, this whole “intentional models are at least as good as ‘ideal’ mechanistic ones” thing reminds me of a discussion a friend (the brainshrinker I’ve mentioned before) and I once had.
Basically, he argued that software is a superfluous concept because a “purely physical” model of a computer, one that does not involve the software abstraction, will be as good as the model we typically use (since “software” is essentially “a specific combination of certain highly mutable hardware states”)
Is your argument basically the same, but for “intentionality” instead of “software”?
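A toy rendering of the brainshrinker’s point (the snippet and byte dump are mine, for illustration only): the very same artifact can be described as “software” or as “a specific combination of mutable hardware states”; the abstraction level is simply what we work with.

```python
# One artifact, two levels of description.
program = "x = 2 + 3"            # the 'software' description
states = list(program.encode())  # the 'mutable states' description

print(states[:5])                # [120, 32, 61, 32, 50] -- just cell values
ns = {}
exec(program, ns)                # yet the software level is what we reason at
print(ns["x"])                   # -> 5
```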
” A physical (= “non-supernatural”) system might well be unpredictable, for example, if it is chaotic or if it involves strong effects from the apparently truly stochastic phenomena of quantum mechanics. “
Or it could even simply look unpredictable, given our technological limitations on measuring the system. I don’t disagree, but regardless, it would certainly increase the apparent probability that some other mechanism is responsible. The point is that the current science calls into question the link between the conscious mind and action, as well as the assumption that there is a link between the conscious mind and free will, but says nothing about the link between free will and action or the actual existence of free will.
” What cognitive task do you attribute to the soul? Do you agree that those cognitive faculties must not be altered by brain lesions or drugs? “
I don’t attribute anything to it at this point. I’m merely noting the fascinating way that science currently appears to be pointing the way to either a) Man as a purely mechanistic being or b) a trialistic being of Body, Mind, and Soul as conceived by the ancients. And at present, I think it is as much of a mistake to assume (a) as to assume (b).
” So VD is using the ‘Wait, there’s still some black box over there, in the unconscious! It’s all unknown in that box – the soul’s in that box!’ move? “
No, not at all. Despite your perfect understanding of me, you do seem to have some trouble articulating yourself. Call the black box whatever you like, the point is that a) the finger moved, and, b) the conscious mind doesn’t appear to have done it.
It appears that whatever makes fingers move and so forth is in the black box rather than in the conscious mind. Whether it is something of cosmic and eternal import, such as the soul, or simply a complicated set of if/then rules, is as yet unknown.
Of course, neuroscience is finally catching up to sport here, as every athlete knows that if you have to consciously think about it, you’re going to be too slow.
@03
I think it is the same idea: intentionality is a kind of abstraction layer, or a kind of protocol. It is even more essential than software, because we don’t understand the hardware, and in some sense we use intentionality to understand our own relationship to the world. (Even in programming we have the idea of reflexivity!)
I really like your metaphor. The philosophical problem of multiple realisation has something to do with cross-platform compatibility.
My pet theory being that there is another mind that underlies the conscious mind – call it the subconscious mind, the soul, or whatever. It is the author of the thoughts that “occur” to us, largely through a deterministic relational mechanism; however, in itself it bears very little resemblance to “us,” although it is far more powerful than our conscious minds. The one-to-three-second gap is the time it takes a purely relational decision suggested by the subconscious to register in the conscious mind. The illusions of free will, self, etc. are maintained by the unilateral information flow between the subconscious and our relatively superficial conscious minds. Through practices like meditation or lucid dreaming, we can temporarily suspend, or “put to sleep”, many of the influences of the conscious mind, but the conscious mind usually remains operational, or we wouldn’t remember those experiences.
@ tickli, and in part Scott
Okay, this one’s gonna be short, because I need to think through the “RJ’s software argument” Third mentioned, since software/hardware stuff is kinda up my alley. I don’t wanna botch this one 🙂
But I’d like to step through the ATM’s intentionality, step by step.
First, let’s start with a definition of intentionality. I’ll use the one I have used throughout the discussion, which is basically the dictionary definition with “internal states” instead of “mental states” (this nicely sidesteps the issue of what counts as a mental state, and whether there is such a thing; mental states, if “real”, are a subset of internal states, so it extends the original definition a little bit).
So, “intentionality is the quality of internal states which consists in their being directed towards some object or state of affairs”
Second, let’s evaluate the ATM:
* Does ATM have internal states? YES.
** How do we know that? From designing it from the ground up if we’re human, or from reverse-engineering its hardware and software if we’re some hypothetical impartial aliens.
* Can ATM’s internal states be directed “towards some object or state of affairs” such as, mayhaps, another internal state? YES.
ATM’s internal state can be directed at a wide variety of abstract objects known as “accounts” depending on the inputs provided, as well as at a number of programmatic interfaces that allow it to use its hardware to dispense cash (thus, via a fairly long sequence of circumstances, its purely programmatic internal state can be directed at physical objects known as “cash”, resulting in said cash being provided to whomever managed to give ATM a certain sequence of inputs).
So, irrespective of what our “feeling of intentionality” actually is at a neurological level, of what worth the intentional stance is, or whether the “thinker” going through the steps above is even human, the dictionary definition of “intentionality” is easily satisfied by ATM (which ain’t got no neurons and is quite simple in terms of behavior).
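To make the step-through concrete, here’s a minimal toy sketch (entirely hypothetical – the class and names are my own simplifications, not any real ATM’s design): inputs direct its internal state at an abstract “account” object, and via that state at physical cash.

```typescript
// Hypothetical toy ATM, sketched only to illustrate the argument above.
interface Account {
  id: string;
  balance: number;
}

class ToyATM {
  // Internal state: which account, if any, the machine is directed at.
  private currentAccount: Account | null = null;

  constructor(private accounts: Map<string, Account>) {}

  // An input (a stand-in for card + PIN) directs the internal state
  // at an abstract object known as an "account".
  insertCard(accountId: string): boolean {
    this.currentAccount = this.accounts.get(accountId) ?? null;
    return this.currentAccount !== null;
  }

  // Via the directed state, a purely internal decision terminates in a
  // physical effect: the returned number stands in for dispensed cash.
  withdraw(amount: number): number {
    if (this.currentAccount && this.currentAccount.balance >= amount) {
      this.currentAccount.balance -= amount;
      return amount;
    }
    return 0;
  }
}
```

Whoever manages to give it the right sequence of inputs redirects its internal state – which is all the dictionary definition asks for.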
Now, I gotta go. Will ponder on RJ’s argument 03 has kindly retold a mite later…
Reply to R. S. Scott May 30, 2012 1:29 pm (part 1)
I find it noteworthy how each of us interprets what the other is doing as attachment to metaphysical commitments. Let’s see the good side of that: we are both against metaphysical commitments!
“So ‘referring’ isn’t ‘natural’ but it isn’t ‘supernatural’ either? At what point does this kind of semantic wiggling become special pleading? What could be more natural than ‘referring’?”
As a pragmatist, I think that words do not obviously have a meaning when placed in unusual contexts. For example, I think that “existence” is not a category that can be applied indiscriminately: it works well for material or mathematical objects (but I would argue it does not work in the same way in those two contexts…), but I prefer not to apply it to abstract entities without being extremely careful.
“You are literally using a priori philosophical commitments to argue against a certain kind of scientific finding.”
I am merely using philosophical arguments (of course, some of my commitments show through…) to argue against the probability or the possibility of science making some specific kinds of findings in the future. We share the same confidence in science and the same distrust (your TI) of metaphysics and the power of philosophical arguments. We all have arguments about what science will find: yours is based on pessimistic induction; 01’s and mine draw on our understandings of the nature of intentionality and rationality.
Both kinds of arguments are weak by science’s standards.
“you are almost certainly wrong: they’ve been dividing and bedevilling philosophers for a long, long time – which is the very thing you might expect, IF they were not quite ‘real.’”
My vision of intentionality is very deflationary (It is inspired mostly by “Making it explicit”, but accompanied with a rejection of metaphysic of the kind Putnam proposed.) so it does not contradict your intuition that as usually considered it is not quite real. When you say something like: “There might not be intentionality, just stuff happening with information”, and I say: “Intentionality can be defined as a kind of protocol or abstraction layer.”, I have the feeling that our ideas of intentionality are very compatible, with me being a bit more specific. So the difference between us is not our idea of intentionality, but that I consider it commonplace and you consider it revolutionary.
Reply to R. S. Scott May 30, 2012 1:29 pm (part 2)
“You are taking a controversial position, as well as policing the most cherished distinction in philosophy: the one that conveniently renders its discursive domain autonomous from science. To me, this is not only very traditional, but very inflationary as well.”
Since I consider that intentionality is a pragmatic phenomenon, my position renders my ideas about it harder to reach for arguments from neuroscience, but linguistics, logic, mathematics, computer science, sociology, cognitive science and other sciences continue to apply. In what sense do you use the word inflationary here?
“My question to you would be, Why all these commitments?”
I assume you mean my adherence to a kind of pragmatism. The short answer is that I do not see my pragmatism as a heavy commitment but as a method to avoid metaphysical commitments.
“I don’t need formal metaphysical commitments to ‘correspondence’ or the like to make this claim. …”
“Science (whatever the fuck it is) could show that intentionality (whatever the fuck it is) is illusory (whatever the fuck that means).”
I think your second sentence demonstrates nicely what happens when we try to avoid commitments too much: we cannot say anything anymore.
“Another way to put this: What were the chances of any pre-Enlightenment 1.0 philosophies surviving the Enlightenment? Given that Enlightenment 2.0 is almost certainly far, far more radical, what are the chances of any pre-Enlightenment 2.0 philosophies surviving Enlightenment 2.0?”
I agree that future philosophies will be very different from our present philosophies. My guess is that in particular they will include a kind of pragmatism :).
This is kind of off-topic, but pre-Enlightenment philosophies weathered the passing of time very nicely. (I much prefer Leibniz, Descartes or Pascal to the obscurity of the post-Enlightenment philosophies of Kant or Hegel, but that might just be my lack of sophistication.)
Derrideans say the same. I remember asking David Wood, my old PhD supervisor, “I notice that you use the same method to tackle the bulk of issues you encounter: Could you tell me, without begging the question, what makes this the best method?” At which point he began deconstructing the concept ‘best.’ It struck me then how all the Derrideans and Wittgensteinians I knew could be so dogmatic while remaining utterly convinced they were resisting dogmatism at every turn: they had traded outright theoretical commitments for performative ones. This has the effect of concealing their substantive commitments. You ask them, ‘What is use?’ and they reply (as I once did), “Let’s see how it is used.”
I came to realize that if it quacks like first philosophy, then it likely is first philosophy. This is why I generally see pragmatism as one among several ‘performative first philosophies.’ But even if you think this critique is too strong, the fact is, you are committed to a method that can only be theoretically justified. So the question simply becomes: Given Theoretical Incompetence, why should anyone choose exclusive philosophical commitments over scientific ones? Certainly history has been exceedingly unkind to the former.
Either way, tickli, I fully realize the improbability of arguing you out of your commitments here. The fact that you recognize the possibility of what I’m warning against means that you see the possible limits of pragmatism, which makes you one of the most open-minded pragmatists I’ve ever had the pleasure of meeting!
Not true at all. I’m simply offering a different way of looking at the ‘problem of theory.’ My position allows me to ‘go pragmatic,’ ‘go deconstructive,’ ‘go positivistic’ or what have you, without worrying about the way exclusive commitments to any of those positions hem thinkers in. Once you understand that it’s all heuristic bullshit, you can start taking them seriously as heuristics. No one knows what the fuck they’re talking about, especially in philosophy.
Consider the above statement: It totally allows me the luxury of speculating on What Science Is – what it refuses is the luxury of making exclusive commitments to that speculation, and so thinking I have a cosmic rule for sorting between claims that science can and cannot make. And in this respect, it’s consistent with the empirical fact of human cognitive frailty (not to mention the history of science).
How is pragmatism consistent with the findings of cognitive psychology?
“they had traded outright theoretical commitments for performative ones. This has the effect of concealing their substantive commitments.”
Well, as pragmatists they ought to at least admit (as I do) that theoretical commitments are not very different from performative ones.
Besides, I mostly try to avoid metaphysical commitments. For thinking seriously, I don’t think that not having a method is especially good, and so I certainly accept some of the theoretical commitments that justify my methods.
“But even if you think this critique is too strong, the fact is, you are committed to a method that can only be theoretically justified.”
Everybody has a method, and any justification is theoretical. The way you argue by pessimistic induction and avoid exclusive commitment by keeping your terms fuzzy (your “whatever the fuck that is”) is a method, and you justify it by appealing to TI, a (slightly paradoxical) theoretical justification.
“Once you understand that it’s all heuristic bullshit, you can start taking them seriously as heuristics.”
For me this summarises the heart of my pragmatism! Parts of language become language sub-games, and we should use them if they work, reject them if they don’t. Pragmatism too is just one game among others. I use it because I think it works well. Pragmatism (at least mine) has a kind of built-in scepticism. (We can never do better than frail induction… for justifying anything.)
“thinking I have a cosmic rule for sorting between claims that science can and cannot make.”
I starkly disagree with you on this one: I think that there is nothing wrong with trying to sort between what science will or will not claim. Of course we cannot say for sure, but some assertions of that kind are extremely convincing. For example:
1) Science itself often claims that something or other is outside of its possibilities:
1a) Gödel’s theorem and other undecidability results.
1b) Singularities in physics.
1c) Chaos and quantum mechanics (the impossibility of prediction).
2) Common-sense arguments say, for example:
2a) That physics cannot teach us history directly, because it occupies itself with universal laws and not with particular events.
2b) That in general different domains of science speak about different kinds of things, and not about anything they want.
3) At the beginning of the 20th century, science mostly stopped investigating paranormal claims after inducing pessimistically that there isn’t anything interesting in that direction.
4) A bit more daringly, I would assert that science cannot make non-trivial normative ethical claims by itself (without being combined with other normative claims).
“How is pragmatism consistent with the findings of cognitive psychology?”
“Consistent” is a bit vague: do you use it as in mathematics, to mean non-contradictory, or in the vaguer usual sense?
First, the domains of pragmatism and cognitive psychology only partly overlap, so the possibilities of synergies or contradictions are both limited. (If you are thinking of one contradiction in particular, bring it on! :))
For me the appeal of pragmatism is that it tries to understand philosophy by using concepts we understand better. For example, I think that the point of replacing “meaning” with “use” is not that “meaning” is equal to “use”, but that we understand the word “use” better than we understand the word “meaning”.
This method agrees well with the findings that we are better at thinking about concrete stuff than about abstract ideas. (Those findings have been the subject of some cognitive-psychology experiments.)
The position that I developed about an intentional stance is inspired by results in developmental psychology (saying that children sometimes adopt a kind of intentional stance toward inanimate objects). Sadly, it does not provide much support (or many counter-arguments) for my more specific ideas on that subject (as far as I know).
We might also find support in psychology for pragmatist ideas about the “morphology” of concepts and the fact that we learn meaning through use. But my understanding of this interesting direction is zero.
To conclude, I think that the best scientific support for pragmatism probably comes from computer science, or from the possibility of using formal approaches. This might well go in the same direction as what you call philosophy of information.
@01
(Sorry for the delay, I have to ration my time online…)
I have nothing to add to your analysis of the ATM; it is very convincing. But you sweep under the carpet the problem of defining intentionality, while I think that it is the crucial problem!
If people do not agree with you assigning intentionality to the ATM, it will be because they have another definition. (Or they will just have criteria that a definition should satisfy that are incompatible with your ideas.)
Well, yes, of course, people with different definitions of intentionality will disagree – that’s why I am explicit about the one I am using. However, my definition is basically “dictionary definition minus implicit anthropocentrism”. And I think my argument sort of works for the straight dictionary definition, as long as we agree that something that can 1) remember stuff and 2) analyze the stuff it remembers to build some kind of model can be considered a “mind” 🙂
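A toy sketch of that “remember stuff, build a model” criterion (the class is hypothetical, just to fix ideas): it remembers past inputs and uses their frequencies to anticipate the next one.

```typescript
// Toy sketch: 1) remembers inputs, 2) analyzes what it remembers to
// build a model (bare frequency counts) that anticipates future inputs.
class ToyMind {
  private memory: string[] = [];

  observe(input: string): void {
    this.memory.push(input); // 1) remember stuff
  }

  // 2) analyze the memory: predict the most frequent past input.
  predictNext(): string | null {
    const counts = new Map<string, number>();
    for (const m of this.memory) {
      counts.set(m, (counts.get(m) ?? 0) + 1);
    }
    let best: string | null = null;
    let bestCount = 0;
    for (const [input, count] of counts) {
      if (count > bestCount) {
        best = input;
        bestCount = count;
      }
    }
    return best;
  }
}
```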
Now, as to the “raw physics” model of reality and whether it explains away / optimizes away things like “software” or “intentionality”… well, I don’t think it does, but it does make them more… problematic and contrived. The “raw physics” model still has to reflect that a given system has this strange ability to remember past inputs and change its behaviors in a manner that anticipates future inputs (otherwise, “raw physics” will become inaccurate 🙂 ). However, what the “raw physics” model does indeed seem to dispense with is the concept of “internal state”, since no state is really “internal” (or “mental”) from such a point of view.
That presents a fairly interesting semantic challenge of reformulating concepts such as “software” (which indeed boils down to having a particular combination of very flexible hardware states) and “intentionality” (which seems to boil down to using flexible hardware states of yours to implement some kind of model and have this model focus, in a functional manner, either on some part of itself, another model, or something in the world at large) in a manner consistent with the “raw physics” view, but it seems to be quite tenable.
Take a physical system: if you want to consider it as containing information, you need to select which features you consider as coding for information. In a hard disk, the magnetisation; in a book, where the ink is; in a computer RAM, where the charge is; in the human brain, maybe where the connections are and which neurons are firing.
So intentionality, if it is predicated of something regardless of how the information treated by the thing is encoded, would have to have a definition that quantifies over all possible ways of encoding. I think that this is very unnatural or even impossible (more on that below), and so defining intentionality as (the correlate of) a stance is more elegant. The coding is then “hidden” in the way we “apply” the stance.
I certainly think that your position is tenable, I just think that we can have something simpler in the end by using a less intuitive definition at the beginning.
The reason I think that quantifying over all possible ways of coding information in a physical substrate to get a definition of intentionality is problematic is that we might have a situation where, in some sense, every message is encoded in every substrate with a sufficiently complicated encoding. (Yeah, it’s not a watertight argument; it might be avoided if we find a way of separating admissible encodings from others or something, but it shows that we meet difficulties in that direction.)
Do you think that I am correct and that your position leads to the problem of quantifying over a subset of encodings? Do you have an idea of how to select the subset? (I will try to think about it too.)
Like I say, I have no problem with the ‘utility’ of ‘stances’ as you describe them. All I’m arguing is that the facts of the matter as revealed by a more mature neuroscience have a good chance of revealing that all the intentional concepts assumed by your approach are parochial artifacts of the limits of second-order cognition – in other words, that you never actually ‘take a stance,’ at all, ever – and no human has. Defining ‘stanceness’ as a correlate of stances doesn’t make much sense, I’m sure you’ll agree (BBT actually explains why our attempts to cognize intentionality are so often delivered to these peculiar impasses). But this is precisely what you are suggesting (if I’m reading you right).
There’s no ‘you,’ no ‘utility,’ no ‘taking,’ and so on and so on – at least not in any sense compatible with the ‘manifest image’ of Sellarsian fame. In a strange sense you’re actually begging the question: you assume an agent requiring some cognitive means (which intentionality provides), when these are just some of the things that could be called into profound question.
The question of what the brain is actually doing (encoding), and how the peephole of attentional awareness can be plugged into it is something that can only be empirically answered. As it stands, it just strands us at the edge of absurdity, and the acknowledgement that we really have no fucking idea what’s going on – especially when it comes to intentionality.
Like I say at the end of “Last Magic Show,” there’s a damn good chance that neuroscience will reveal that ‘consciousness’ and all the intentional phenomena that comprise it are as small and insignificant relative to the neural cosmos as the earth is relative to the cosmos proper. Strangely enough, the perspectival/informatic constraints that made the latter so difficult for our ancestors to conceive are likely operative in the former as well. You have to crawl into the gut of what this means – What would be the case if consciousness (as we experience it) turns out to be as peripheral vis a vis the brain as the earth vis a vis the universe? And the best way to do this, I think, is to look at what’s happening to the will (check out my Bestiary of Consciousnesses for an example) at science’s hands, and realize that the same fate could await everything intentional.
At present the brain is every bit as opaque to us as the universe was to the Alexandrian Greeks. The more we come to understand, the more it seems our ‘native understanding’ is the product of anosognosia.
@ tickli
“The reason I think that quantifying over all possible ways of coding information in a physical substrate to get a definition of intentionality is problematic is that we might have a situation where in some sense every message is encoded in every substrate with a sufficiently complicated encoding.”
Hmmmmm… If I recall my infotheory days correctly, things along the lines of “every message is encoded in every substrate” only start happening when the message satisfies the demands of Shannon’s “perfect secrecy” property – that is, when the encoded / encrypted message does not betray any information about the pre-encoding state (except maximum length) or the encoding algorithm.
Most encoding schemes (well, most of those that do not pursue secrecy from third parties as part of their design) do not have that property, and betray quite a lot both about the message and the codec (thus allowing even “naive” third parties to build decoders of varying fidelity). Remarkably, your consideration would partially apply to perfectly random arrangements in any substrate, since any and every sample of “really” random data could be an OTP ciphertext, and any OTP ciphertext can be decoded to any conceivable plaintext of the same length or less, depending on the key used. (Partially, because there are also practical constraints inherent to any substrate that limit its channel capacity, and thus its information-theoretical ability to store and transmit information, but such a criticism of your proposal seems almost lowbrow 🙂 ) It would not, as far as I can tell, apply to non-random arrangements in any substrate, which would betray some info about both what is encoded and how it is encoded.
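To make the OTP point concrete, a minimal sketch (assuming byte-wise XOR as the pad, which is the standard construction): for any “random” data and any desired plaintext of the same length, there is a key that decodes the one into the other – which is exactly why maximally random data constrains its “message” not at all.

```typescript
// One-time pad over bytes: c = p XOR k, hence k = p XOR c.
// So ANY random-looking bytes decode to ANY same-length plaintext,
// provided we are free to pick the key after the fact.

const xorBytes = (a: Uint8Array, b: Uint8Array): Uint8Array =>
  a.map((byte, i) => byte ^ b[i]);

// Pretend these bytes were found "at random" in some substrate.
const ciphertext = new Uint8Array([0x5a, 0x13, 0xc7, 0x88, 0x2e]);

// Pick any plaintext we like, and derive the key that "reveals" it.
const wanted = new TextEncoder().encode("HELLO");
const key = xorBytes(wanted, ciphertext);

// Decoding the "random" data with that key yields our chosen message.
console.log(new TextDecoder().decode(xorBytes(ciphertext, key))); // HELLO
```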
I find the whole affair to be curious and perhaps unnatural (then again, the “natural/unnatural” distinction is an ill-defined heuristic at best, and an entirely misguided novelty-aversion response at worst), but definitely not problematic enough to challenge the notion of “intentionality” as a property of some (but perhaps not all) information-theoretic systems (intentionality as a kind of “software subroutine”).
P.S.:
… – .- – .. … – .. -.-. .- .-.. .–. .-. — .–. . .-. – .. . … — ..-. – …. .. … .–. .- – – . .-. -. -… . – .-. .- -.– .- .-.. — – .- -… — ..- – – …. . — .-. .. –. .. -. .- .-.. — . … … .- –. . –..– . …- . -. .. ..-. -.– — ..- …. .- …- . -. — .. -.. . .- .– …. .- – . -. –. .-.. .. … …. .. … .- -. -.. …. — .– .. – .– — .-. -.- … .-.-.- — ..-. -.-. — ..- .-. … . –..– .- -.-. — — .–. .-.. . – . .-.. -.– . -. –. .-.. .. … …. -….- -. .- .. …- . … ..- -… .— . -.-. – .– — ..- .-.. -.. -. — – -… . .- -… .-.. . – — .-..-. .-.. . .- .-. -. .-..-. . -. –. .-.. .. … …. ..-. .-. — — … – .- – .. … – .. -.-. .- .-.. .-.. -.– .- -. .- .-.. -.– –.. .. -. –. … ..- -.-. …. — — .-. … . -….- . -. -.-. — -.. . -.. . -. –. .-.. .. … …. – . -..- – … .- … .-.. — -. –. .- … – …. . .-. . .- .-. . -. — .- -.. -.. .. – .. — -. .- .-.. -.-. — -. – . -..- – ..- .- .-.. -.-. .-.. ..- . … -.–.- … ..- -.-. …. .- … . -. – .. – .. . … ..-. .-.. ..- . -. – .. -. -… — – …. . -. –. .-.. .. … …. .- -. -.. — — .-. … . -.-. — -.. . .–. …. -.– … .. -.-. .- .-.. .-.. -.– .- -.-. – .. -. –. ..- .–. — -. — . … … .- –. . … .-.. .. -.- . – …. .. … — -. . -.–.- –..– -… ..- – … ..- -.-. …. .- -. .- .. …- . .- –. . -. – -.-. — ..- .-.. -.. -.. . ..-. .. -. .. – . .-.. -.– .-.. . .- .-. -. .- .-.. — – .-.-.-
😉
@ Scott
Well, I have no beef with the claim that our notions of “intentionality” or “self” will be challenged and transformed via empirical neuroscience.
I just find it implausible that this knowledge will “break” us in some Lovecraftian way. (If my “self” is just “parasite code”, so what? I am nothing but a rather foolhardily engineered flesh machine anyway, so what if my software is also sub-par?)
BTW, what is your preferred definition of intentionality (should’ve asked that earlier, yes 🙂 )?
@01
“things along the lines of “every message is encoded in every substrate” only start happening when the message satisfies the demands of Shannon’s “perfect secrecy” property – that is, when the encoded / encrypted message does not betray any information about the pre-encoding state (except maximum length) or the encoding algorithm.”
This is correct if you have strong statistical assumptions on both the content and the kind of encoding. If the encoding is really arbitrary – say, among recursive automorphisms of the set {0,1}* of binary messages (the automorphisms need to be recursive to avoid infinite-length OTPs, which are rather nasty) – you can send any finite ordered set of messages to any other with the same number of elements. And hence you need to quantify over a smaller subset than that.
“(intentionality as a kind of “software subroutine”)”
Could I suggest that intentionality might be more an interface (in the OO sense) than a subroutine? The metaphor then extends nicely to the idea that it can be implemented in different ways. Now, an interface is something like a stance: it’s more about what the rest of the program can do with it than about what it is.
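In code, the distinction I have in mind might look like this (a hypothetical sketch, with names of my own invention): the interface says only what the rest of the program can do with a thing, and each implementation realises it differently.

```typescript
// An interface specifies what the rest of the program can do with a
// thing; it says nothing about how the thing is realised.
interface Intentional {
  directAt(target: string): void;
  currentTarget(): string | null;
}

// One realisation: a single mutable slot.
class SlotMind implements Intentional {
  private target: string | null = null;
  directAt(t: string): void { this.target = t; }
  currentTarget(): string | null { return this.target; }
}

// A very different realisation: a full history of redirections.
class HistoryMind implements Intentional {
  private history: string[] = [];
  directAt(t: string): void { this.history.push(t); }
  currentTarget(): string | null {
    return this.history.length > 0
      ? this.history[this.history.length - 1]
      : null;
  }
}

// Client code "takes the stance": it treats both realisations alike.
function report(m: Intentional): void {
  m.directAt("an account");
  console.log(m.currentTarget());
}
```

Multiple realisation is then just the fact that both classes satisfy the same interface.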
“A COMPLETELY ENGLISH-NAIVE SUBJECT WOULD NOT BE ABLE TO “LEARN” ENGLISH FROM STATISTICALLY ANALYZING SUCH MORSE-ENCODED ENGLISH TEXTS AS LONG AS THERE ARE NO ADDITIONAL CONTEXTUAL CLUE”
From this, I extract a slightly different reason to think that intentionality is better described as a stance: it is not intrinsic enough to be a property, because a naive observer would not be able to extract the references of the internal states from examining the “thinking” system. It depends on how the system is “embedded” in the world.
@R.S. Bakker
“in other words, that you never actually ‘take a stance,’ at all, ever – and no human has.”
We use words with a certain meaning. With new knowledge, the definitions of our words and what we say with them evolve. Does that mean that what we were referring to before the new knowledge arrived didn’t exist? You seem to want to answer a clear no. I think that it is more natural to try to ensure a maximum of continuity, and so to assume that we must search for definitions of our terms that ensure that the set of assertions we consider true does not change with every little advance in knowledge. (Of course, we have other criteria, like simplicity, to choose our definitions.)
“Defining ‘stanceness’ as a correlate of stances doesn’t make much sense, I’m sure you’ll agree”
I am not quite sure what you mean. I’ll try to answer anyway. Definitions do not work ex nihilo: you can define the word “language”, for example, and it is not useless, even if you need to have some kind of understanding of what language is in order to understand the definition. Is defining “stanceness” in terms of stances any different? By the way, I want to define intentionality in terms of stances; defining stances is not obligatory. We cannot define everything (at least not without circularities), but we can regulate the ways we use the words, their relationships.
“What would be the case if consciousness (as we experience it) turns out to be as peripheral vis a vis the brain as the earth vis a vis the universe? And the best way to do this, I think, is to look at what’s happening to the will (check out my Bestiary of Consciousnesses for an example) at science’s hands, and realize that the same fate could await everything intentional. ”
I tend to agree with you that it might well happen to consciousness. But I think that intentionality can be defined in a way which is both compatible (of course not identical) with its “manifest image” and couched in pragmatic terms (in the linguistic sense, i.e. as a protocol or interface) that would make it sufficiently independent of our present self-image to resist the collapse.
When you say that intentionality might be proven to be an illusion, what do you mean by intentionality? That is the crux of the question. (Of course, your definition does not need to refer clearly to something, since you claim that it might be an illusion; I just want a definition that controls the use of the kind of intentionality you are talking about.)
P.S.
L.T.&G. rocks!
Dennett would say that what we were referring to all along whenever we use the word ‘will’ is some notion of ‘versatility.’ Metzinger would say that what we were referring to all along when we use the word ‘self’ is some kind of representational simulation – his ‘phenomenal self model.’ What I’m saying is far more radical: that science may conclude that all of what we presently call ‘intentional phenomena’ are a collection of kluges, heuristics, and outright illusions. I actually think we should expect them to be such. I agree that science MAY be able to distill consciousness out from intentionality, but that we will likely not be happy with either. BBT, for instance, reinterprets ‘aboutness’ as a kind of heuristic forced upon consciousness by informatic blindness to the actual causal histories connecting percepts/words to things. An illusion, really, but rendered compelling for lack of any access to information otherwise, and apparently ‘efficacious’ for its systematic dependency upon the evolutionarily tuned efficacy of the greater brain. (This is where the Just-So Cognition Query becomes so very caustic: once you realize that the apparent efficacy of intentional concepts has no bearing on what science will find regarding them, then all bets are off, and transcendental philosophy, as a cognitive enterprise, is entirely up in the air.)
Stipulating definitions is well and fine in the absence of alternatives. But ideally we want our definitions to reflect some kind of fact of the matter: thus Dennett’s redefinition of ‘will’ and Metzinger’s redefinition of ‘self.’
I’m pretty much fine with any definition of intentionality that references its connection to the ‘mental,’ the peculiarity of propositional attitudes, and the conceptual incompatibility with causal/natural categorizations.
Thanks for the upvote on LTG! – I’m waiting until the draft is entirely up before tainting the well with any comments or queries, though…
“… BBT, for instance, reinterprets ‘aboutness’ as a kind of heuristic forced upon consciousness by informatic blindness to the actual causal histories connecting percepts/words to things. An illusion, really, but rendered compelling for lack of any access to information otherwise, and apparently ‘efficacious’ for its systematic dependency upon the evolutionarily tuned efficacy of the greater brain.”
My only disagreement with this passage is calling “aboutness” an illusion. For me, an illusion means something inducing us in error in a systematic way. (Not that it always induces us to commit errors, of course.) I don’t draw a distinction between truth and a reliable, specific kind of efficacy. This is one of my pragmatic commitments. So what science shows us about intentionality would need to imply a prediction of systematic failure of the conclusions drawn from using an idealised intentionality, for us to consider intentionality an illusion. (I speak about idealised intentionality because our intentionality is able to correct its own errors in some cases, and so we must look at the limit of this process of auto-correction – probably with some bound on how much intentionality can correct itself without becoming something else…) Now, we have an (optimistic) induction argument saying that since intentionality has worked until now, it will continue to.
I agree with you that science might show that intentionality has a fundamental flaw. And that in some sense it is in the air. But intentionality is a tool that we have been using for a long time with a lot of success, and I don’t think it is more in danger of containing a fundamental flaw than, for example, our understanding of causality. (Causality is one of my pet peeves; I think it is more unclear than people usually think, and often advantageously replaced by regularity, especially for science. Am I the last positivist? ;))
“I’m pretty much fine with any definition of intentionality that references its connection to the ‘mental,’ the peculiarity of propositional attitudes, and the conceptual incompatibility with causal/natural categorizations.”
In my view, a system is intentional if it conforms “well enough” to a specific set of rules relating its behaviour, its propositional attitudes and the world. The reason that I like to see intentionality as a correlate of intentional stances is that we are the ones ascribing propositional attitudes. (We don’t want to deprive of intentionality someone who claims nothing.)
So my proto-definition is all about propositional attitudes, but does not reference either the mental directly or your “causal/natural categorizations”.
Do you think that the way my definition does not refer to the mental is problematic? (My hope is that my definition is simply a bit more general in its applicability.) I am not sure I understand what you mean by “causal/natural categorizations”; do you have an example of a definition of intentionality you agree with?
P.S. Sorry for the delay in my answer; I am a bit too busy to sustain our previous rhythm, but I am not losing interest.
By the criterion of “something inducing us in error in a systematic way,” you could call the whole of transcendental philosophy an illusion.
Here’s a crude analogy:
Say that the whole brain has a protocol like the following:
In any environment, Q, a + b + c + n = X
where X equals some kind of behaviour. Now we know for a fact that consciousness, whatever it is, only accesses a small fraction of what the whole brain does. So let’s say something like Tononi’s Information Integration Theory of Consciousness turns out to be true, and that out of the above ‘whole brain protocol’ only,
n = X
finds itself integrated, and thus available to consciousness. We have a fuzzy sense and we witness the behavioural outcome. The preconscious parts of the protocol will not seem to exist, even as something missing, absent the integration of any information to this effect – something which evolution, caloric miser that it is, will almost certainly begrudge us. Now if n is ‘intentionality,’ X is ‘prediction of behaviour,’ and Q is the ‘behavioural environment,’ then so long as we remain in Q, n will seem to be all we need. The instant we leave Q, however, we should expect the fractionality of n to bring us up short, as indeed seems to be the case whenever we begin theorizing intentionality. Round and round we go, gaming ambiguities, mincing and begging.
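Here’s a throwaway numerical version of the analogy (the numbers are arbitrary stand-ins for mechanisms, nothing more): a model built only on n predicts X perfectly inside Q, and fails the moment the environment changes.

```typescript
// Toy "whole brain protocol": X = a + b + c + n.
type Env = { a: number; b: number; c: number };

const wholeBrain = (env: Env, n: number): number =>
  env.a + env.b + env.c + n;

// Inside Q the preconscious contributions are fixed...
const Q: Env = { a: 1, b: 2, c: 3 };

// ...so a "conscious" model that only ever sees n looks complete,
// silently absorbing Q's contribution as a constant it cannot notice.
const consciousModel = (n: number): number => n + 6;

console.log(wholeBrain(Q, 5), consciousModel(5));    // 11 11: n seems all we need
// Leave Q, and the fractionality of n shows up:
const notQ: Env = { a: 0, b: 0, c: 10 };
console.log(wholeBrain(notQ, 5), consciousModel(5)); // 15 11: the model breaks
```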
Let’s take a concrete example. We are playing poker. On one side, you try to guess what my hand is and how I think about the game given this hand. But you also have unconscious clues, maybe from facial expressions, that give you a vague feeling of unease when I have a good hand and make you a better player.
So Q is a friendly game of poker, X is the way you play, a+b+c are the unconscious clues, and n is reasoning consciously about the game and the players’ propositional attitudes. I would say that n gave rise to game theory when we began “theorizing intentionality” and thus developed into something scientific, while the clues a+b+c lose their use once the environment changes. (Maybe because you are playing poker on the web.)
Of course, we could also find examples where things go badly from theorising intentionality. I think that there is no reason it should go badly in a systematic way, and so maybe it should not be qualified as an illusion.
I’m not sure how this obviates (as opposed to illustrates) my point. In game theory, as in logic or mathematics, there are ‘rules’ that can be made explicit, typically expressed in some kind of formal notation. As you would expect given the systematic relationship of the informatic sliver we experience to the actual processes of the gut brain. We can formalize this systematicity – the problem arises when we try to explain it.
Given impoverished informatic access you should expect that we will be stuck with shadows on the cave wall, able to cognize (something of) how they work without any clue as to what they are. So for instance, BBT suggests that there is no such thing as an ‘a priori’ – that this is a central philosophical illusion. A posthuman, for example, could very well have conscious access to all the information missing from formal cognition, such that she would be amused by the quaint notions of ‘axioms’ and ‘rules,’ and try to convince us that these, like our sense of a motionless earth, are merely artifacts of our parochial perspective. For her, everything would be in the implementation, and what we call formal or analytic or a priori would be but dim shadows of the natural laws of information processing, laws that we can only discover performatively by running through possibilities with our own brains (which is why they seem to ‘come before,’ to be distinct from our a posteriori knowledge).
But once again, this is an empirical question. You can no longer be a pragmatist or an inferentialist or a contextualist (or so on) without making some kind of empirical stand, without stomping your foot and saying, ‘The brain must be X.’ So my question is, Why bother making exclusive commitments to any of them?
“able to cognize (something of) how they work without any clue as to what they are”
For me “how they work” is all there is to “what they are”, when we speak of intentionality.
“So for instance, BBT suggests that there is no such thing as an ‘a priori’”
I certainly agree that there is no a priori in the Kant-Husserl meaning. At least because we have no magical way of accessing a priori knowledge as they postulate.
“A posthuman, for example, could very well have conscious access to all the information missing from formal cognition, such that she would be amused of the quaint notions of ‘axioms’ and ‘rules,’ and try to convince us that these, like our sense of a motionless earth, are merely artifacts of our parochial perspective.”
I think that touches the heart of the difference between our positions. For you, the meaning of our notions, like “axioms” and “rules”, derives from the brain having those notions. So naturally, “your” hypothetical posthuman sees the notions marred by the same weaknesses that affect our brain. This is a thesis about semantics with which I disagree. I see the meaning of our notions as coming from norms, which might be only imperfectly implemented by our brain. And so “my” posthuman will, at least while communicating with us, use the same notions that we do, while maybe avoiding a few that she will judge too unclear (say, the notion of free will).
“what we call formal or analytic or a priori would be but dim shadows of the natural laws of information processing, laws that we can only discover performatively by running through possibilities with our own brains”
Our understandings of the “a priori” seem identical. Going even a bit further, the capacities used by the brain for “running through possibilities” are actually acquired, and so the “a priori” is not at all prior in its acquisition.
“You can no longer be a pragmatist or an inferentialist or a contextualist (or so on) without making some kind of empirical stand, without stomping your foot and saying, ‘The brain must be X.'”
My guess is that you are using your semantic thesis again. What one is saying is, for you, determined by what is saying it; therefore, of course, it depends on the nature of the brain. Let me propose a classical-sounding counter-argument:
People with different brains can talk about the same things; they can use the same concepts. Therefore, the structure of the brain is irrelevant to semantics.
Do you agree that our difference of opinion can be crystallised as a problem of semantics? Do you agree with my characterisation of the semantic thesis implicit in your position? Do you defend this semantic thesis?
So they are what? Supernatural? I’m simply guessing that they are natural the way anything else is natural. Even if you reject the tag ‘supernatural’ because of its pejorative connotations, you do see the dilemma you’re in, don’t you? The fact is, you ARE making an empirical stand: ‘No matter what cognitive neuroscience discovers, it will leave certain boilerplate intentional concepts intact.’ Now that is a strong claim, requiring strong justifications, is it not? My position strikes me as far more modest, despite the radicality of its potential consequences. I’m saying you could be right, but we have no way of knowing this short of a mature neuroscience (which has proven quite destructive so far). You’re saying we can – which means the argumentative burden is actually yours. Given that your claims are not empirical, they must be transcendental in some (I’m guessing) deflationary sense. But the problem you face is that the Just-So-Cognition Query can be raised against you at every turn.
You’re making a bet on transcendental philosophy, while I’m making a bet on cognitive neuroscience. This is where the power of the pessimistic induction shows its hand: transcendental philosophy has no real track record of reliability to speak of, which is why I think that your odds are long.
Precisely. This is exactly the way this debate always seems to break down. What kind of functionalist are you?
Once again, I’m arguing that the question of whether the structure of the brain is relevant to the semantic is one that only a mature neuroscience can answer. I’m arguing that it is entirely possible that what we call the ‘semantic’ is thoroughly blinkered, through and through. Possibility is all I need.
Put differently, What commitment would you have me relinquish?
“So they are what? Supernatural? I’m simply guessing that they are natural the way anything else is natural.”
I think they are non-natural in the same sense that a mathematical object like 7 is not a part of nature.
“Possibility is all I need.”
That depends on what we argue about. I do not argue against the possibility (“Possibility” has so many meanings…) of even the strongest version of the semantic apocalypse, I actually give it a (small) non-zero probability. I argue against it being likely.
You (justly) attribute to me the idea that “No matter what cognitive neuroscience discovers, it will leave certain intentional concepts intact.” (I removed the word “boilerplate”, since I think that those concepts about intentionality are not yet even satisfactorily defined.)
However, you first claim that this idea is an “empirical stand” and later you say that my claims are not empirical. First, since you say that it is an empirical stand, you acknowledge that it is not a priori impossible to defend it using arguments that are empirically based. So I am aiming to make transcendental arguments (“transcendental philosophy” is too vague for my taste) to support the smallness of the probability I attribute to the semantic apocalypse. Those transcendental arguments are not based on anything a priori, but ultimately on observations about how language works. I would place them at the same level as your “pessimistic induction”, but going in the opposite direction.
My first argument (against one aspect of the semantic apocalypse) is the following.
1) The fact-value distinction can be observed easily in discourse: it is notoriously hard to obtain non-trivial conclusions about values starting from statements that are purely about facts. This distinction is also used and formalised in AI in the separation of goals and data (see the sketch after this list). It is thus very convincing that we cannot get conclusions about values from facts.
2) The idea that values may disappear after advances in neuroscience would imply a breach of the fact-value distinction.
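To illustrate the separation in 1): a minimal planner sketch (hypothetical names, my own construction) where the world model (facts) and the utility (values) enter as independent parameters, so that no amount of world-model data by itself fixes the utility.

```typescript
// Goals and data kept separate, as in classical AI planning:
// facts (the world model) and values (the utility) are independent inputs.
type State = string;
type WorldModel = (s: State, action: string) => State; // facts
type Utility = (s: State) => number;                   // values

function bestAction(
  s: State,
  actions: string[],
  model: WorldModel,
  value: Utility,
): string {
  // Rank actions by the value of the state the facts predict.
  return actions.reduce((best, a) =>
    value(model(s, a)) > value(model(s, best)) ? a : best);
}

// Same facts, different values, different behaviour:
const model: WorldModel = (s, a) => `${s}->${a}`;
const likesLeft: Utility = (s) => (s.endsWith("left") ? 1 : 0);
const likesRight: Utility = (s) => (s.endsWith("right") ? 1 : 0);

console.log(bestAction("start", ["left", "right"], model, likesLeft));  // left
console.log(bestAction("start", ["left", "right"], model, likesRight)); // right
```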
The other argument is based on the same scheme, so we may start with this one. The other is a bit less clear and longer, but is more directly concerned with intentionality. (And I simply need time to find the best way to formulate it ;))
“Put differently, What commitment would you have me relinquish?”
This reminds me of Dylan’s idea of creating a philosophical viewpoint that can survive against all criticisms. You have a position that is very minimal in terms of commitments, and this is certainly useful in a debate, but I do not think that rejecting commitments by default is actually a better policy than accepting them by default, though it certainly is easier to defend. You are throwing away a lot of information! Even unreliable sources are better than nothing (though yes, systematic biases make things really hard).
By the way, the notion of commitment is very rough (too bi-valued), and at this level of debate I think it ought to be replaced by a more refined discourse about our propositional attitudes, maybe introducing some Bayesian probabilities. Otherwise, I cannot even express my position, which accepts pragmatist commitments as very likely but does not hold them as certain, since they rest on observations about language and everyday life, and thus on induction and its weaknesses, and on non-formalised discourse and its weaknesses. I also think that they are much less error-prone than the rest of philosophy by being more concrete and (for the moment in small parts) formalisable.
Introducing a finer grain of levels of commitments also forces you to take a position relative to the commitments that you can simply avoid in the bi-valued simplification.
Low probability because… ?
I think we have lots of surprising things to learn about the way we experience mathematical cognition. I actually think we have lots of surprising things to learn about everything that passes through the minuscule bottleneck (comparatively speaking) of attentional awareness. Everyone agrees that consciousness does something; the real question is whether it is discharging its functions in the ways we assume. It could easily be the case that our reasoning about what it is we do when we “observe language work” is an artifact of the limitations on ‘observation’: the way we are stuck at what we see as the syntactic and semantic levels, with no other information above and beyond. Theoretical observations about language at best provide us with a consolidated understanding of what little we got – which is why it’s all so controversial. Do we ‘observe’ the functioning of meanings or do we impute it?
My pessimistic induction is quite strong: Why should we expect science to be any less caustic to our nonnatural ‘observations of language’ than it was to our nonnatural understanding of the world? As much as I dread it, I think it will do what it has done: surprise us. You think it won’t… I still don’t know what your argument is.
I take it as given that commitments are a matter of degree. Also, you do realize appeals to the fact-value distinction simply beg the question. In a sense, it’s arguing, “There is value, you deny value, therefore you’re wrong!” It could be the case that ‘value’ will be understood as a kind of low resolution disposition – as the way a certain kind of regularity looks from a certain informatically parochial perspective. Who knows.
You could say science is a kind of social mechanism for information substitution. I’m not throwing anything away. All I’m pointing out is that much of the information that we presently think sacrosanct or immune to this substitution process will likely suffer the same fate as the information our recent ancestors thought immune.
So, returning to my original question: you guess that semantics (pragmatically and formally construed) is likely immune, while my guess is that they likely are not. The reason you hold this position is…
“I think we have lots of surprising things to learn about the way we experience mathematical cognition.”
I agree, but large parts of mathematics have been completely formalised without nasty surprises. It is robust. At least in this domain we have found a way to largely protect ourselves against the incoherence of our thinking. Would you go so far as to say that advances in knowledge about the brain could teach us something mathematical? Or could reveal an error in mathematics?
“Everyone agrees that consciousness does something”
I think that the concept of consciousness is not sufficiently well defined to be able to even agree that it does something. But this is presently not our main concern.
“Do we ‘observe’ the functioning of meanings or do we impute it?”
We usually do both. But we can also observe how we impute meaning. And while observing, we can avoid imputing it – just look at the sentences in a neutral way, as syntactical objects. It is something that we can learn to do, and that linguists, philosophers and writers do to a degree, maybe as surgeons learn to view bodies as complex matter, without engaging their desire or disgust.
By doing that, we can study pragmatics in a detached way, and it might become as scientific as history or palaeontology.
“Why should we expect science to be any less caustic to our nonnatural ‘observations of language’ than it was to our nonnatural understanding of the world?”
I think that our observation of language is partly natural, and that pragmatics (in the linguistic-logic-AI sense) is becoming scientific. There is non-natural observation of language by some philosophers, and I think that your criticism applies to them.
“In a sense, it’s arguing, “There is value, you deny value, therefore you’re wrong!””
I certainly do not say that values exist. I say that value-statements are distinct from fact-statements. (Calling them statements about values is an abuse of language with respect to this stage of the argument.) Even if you think values do not exist, you must accept that some statements are seen as value-statements.
Once you observe the fact-value distinction for statements (a distinction that speakers can reliably make, in the same sense that they can reliably recognise statements about the present or the future), you can observe that arguments tend not to be able to conclude value-statements from fact-statements as premises.
By induction, you put in doubt that arguments from the structure of the brain will be any different.
Pragmatically, there is nothing more to the existence of values than the existence of value-statements and behaviours appropriate or not to the context of those statements.
I do not consider my argument extremely strong, since it is based on induction, but I don’t think it has a logical fault.
“It could be the case that ‘value’ will be understood as a kind of low resolution disposition”
I think that “respecting a value” is a disposition.
“certain informatically parochial perspective.”
Maybe we meet another crucial difference between our opinions here. I consider that any cognitive system will suffer from a finite intake of information and a finite computational power. Therefore, it will need heuristics to deal with its informatic parochiality almost as much as we do. And for this reason, it will use the concept of value, because it is a heuristic which is in many situations the best possible – in particular in situations described by game theory.
“All I’m pointing out is that much of the information that we presently think sacrosanct or immune to this substitution process will likely suffer the same fate as the information our recent ancestors thought immune. ”
I completely agree with that, but we can’t do anything except make the best use of what we have.
“So, returning to my original question: you guess that semantics (pragmatically and formally construed) is likely immune, while my guess is that they likely are not.”
Let’s say that I guess that some small, very specific parts of semantics pragmatically construed are likely immune. I consider them immune because they are formalisable in ways that might enable us to show that they will be useful to any cognitive system, even one more advanced than us. I think that with some work I can make something convincing along this line for the fact-value distinction. I am not yet very sure how to proceed for other parts of intentionality, like reference (or rather some of its aspects), which I suspect should survive too.
This is a view I would have been sympathetic to had I not started asking where we find ourselves in the gut brain’s digestive tract. The more you ponder this – with volition crumbling the way it is under scientific scrutiny (that intuition could have it so dramatically wrong), with what you find in anosognosia (that one can be completely blind yet utterly convinced one can see) – the more you realize that we could be wrong about everything. If mathematics is simply the ‘natural law’ of information, what we experience of our brain’s computation could prove to be as low on the brain’s digestive totem pole as volition. Who knows: it could be all implementation all the way down, with consciousness little more than an informatic mirage, crucial, but in ways that are entirely incompatible with our present assumptions.
We are consistently discovering instrumentalities that cut deeply against the grain of intuitive assumption. The question is quite stark. How many things do we have to be wrong about before we acknowledge the very real possibility that we could be wrong about everything? You can make a strong empirical case that we have no reason to be confident in any intentional understanding of phenomena and good reason to fear that we’re likely systematically deceived.
Fictions. I’m fine with that. Really, our only point of difference is our estimation of the threat. Since conscious experience accesses no information about its neurofunctional role, it always seems the only game in town. Our experience of something as robust as logical reasoning could find itself anywhere in the neural digestive tract, and it would still feel like the mouth, like it comes first. This could be our version of the Ptolemaic perspectival trap. Consciousness has so little access to the ways it’s conditioned, it has to seem like the centre of a universe. The False Unconditioned.
By Jove, I think I just stumbled on my next aphorism…
Part 1:
“and you realize that we could be wrong about everything.”
Attributing a non-zero probability to being wrong about almost anything is reasonable. But I am not sure that speaking about the possibility that we are wrong about everything makes sense. If we are wrong about everything, the notion of possibility itself might be incoherent. I would like to say that for the right notion of possibility, that we are wrong about everything is NOT possible. The opposite position is paradoxical I think.
In other words, I would throw out an idea that I find very interesting: it might not be possible to be at the same time rational and as open to possibilities, or as sceptical, as you want. Delavagus spoke about scepticism as a consequence of rationality. Does rationality impose limits on our scepticism? Is there a trade-off somewhere?
“If mathematics is simply the ‘natural law’ of information, what we experience of our brain’s computation could prove to be as low on the brain’s digestive totem pole as volition.”
Yes, I agree that consciousness is just the tip of the iceberg and all that. But it does not mean that consciousness is wrong, simply opaque to itself. That wouldn’t make the results of mathematics wrong; only our intuition of what it is for us to understand a mathematical statement would be wrong. Wittgenstein already criticised what we imagine understanding to be in a radical way, and really I get the feeling that neuroscience hasn’t added much that is new to that, just confirmed it.
Performative contradiction arguments don’t have much bite, either, I’m afraid. Saying ‘possibility’ isn’t what we think it is (like Deleuze or Bergson) doesn’t land you in the lap of nonsense, unless you have some commitment to bivalence lurking around somewhere. There’s no reason why I have to beg your definition of ‘possible’ to argue the possibility that possibility isn’t what we think it is. That said, thinking modal concepts in these terms makes things very strange and very interesting.
This is the argument I get the most often. The thing to remember is that ‘rationality’ is itself in play, so invoking a certain understanding/definition of it amounts to begging the question once again.
Neuroscience has radically changed the stakes – where would the Great Austrian be without normativity? I’m really not all that hung up on terms like ‘wrong’ or ‘illusion’ – opacity will do, so long as we realize how easy it is to see it in a manner warranting the former terms. Craver, for instance, identifies three levels of organization in all mechanisms: the constitutive, the isolated, and the contextual. If you see consciousness as a kind of submechanism, the opacity you refer to is the result of the way it can access so little information regarding its contextual functions – isolation becomes all creation (the Ptolemaic mind). It’s simply the degree to which its actual contextual functions contradict its manifest functions – a matter for neuroscience to determine – that warrants the use of ‘wrong’-type language.
Like I say, the evidence for the pessimistic induction is slowly stacking up, not vice versa. One positive of this is the way it wipes the blackboard clean. In the course of threatening ‘everything,’ it provides us with a new way of looking at everything.
Part 2 of the previous answer:
“Consciousness has so little access to the ways it’s conditioned, it has to seem like the centre of a universe. The False Unconditioned.”
On the one hand, I agree completely and really like your “False Unconditioned”. And I also think that this constitutes something we can call an illusion as much as a simple blindness. We have the illusion of free will, and the illusion that we perceive objects as “they are”, as in scholasticism, where objects were described as sending a little bit of their essence to us.
I’d like to note that both illusions were “discovered” before neuroscience.
A third illusion is the sense that what we call thinking is done consciously, when so much of it is not.
All three illusions are basically of the same type: ignoring the existence of mechanisms. This didn’t change the fact that we still speak of will, of perception, and of what we are conscious of. Do you think there should be changes? The metaphysical ideas are fragile, but the everyday instrumental use is robust, because it is efficient. Thermodynamics showed that cold is just the absence of warmth. Should we say that cold does not exist? We will learn how the brain deals with intentionality; why should our use of the word be transformed, or disappear?
Answer to your newest comment.
“Saying ‘possibility’ isn’t what we think it is (like Deleuze or Bergson) doesn’t land you in the lap of nonsense, unless you have some commitment to bivalence lurking around somewhere.”
As if being in the company of Deleuze and Bergson wasn’t bad enough ;). I rather lack openness to a certain brand of continental philosophy…
More seriously, the performative contradiction is not only a problem of rationality but also of meaning. The meaning of your words or sentences is dependent on your commitments, and if you entertain the possibility that referring itself is “false”, for example, can you still refer to the situation you describe?
“That said, thinking modal concepts in these terms makes things very strange and very interesting.”
Could you expand on this other way of thinking about modalities? I don’t know what you are referring to, and you have piqued my curiosity.
Let me come at the view that your position on being wrong about everything is contradictory from another angle, and formulate a somewhat different argument:
If your pessimistic induction applies to everything, and not just to some small, specific part of theorising about the world, then it applies to itself as much as to what you put in question. It mostly refutes itself. For your argument against attributing values to cognitive systems to work, for example, it must be specific enough to say that 1) intentionality has a good chance of being wrong, but that neither 2) science, nor 3) your interpretation of it, nor 4) your pessimistic induction itself is any more likely to contain a mistake. As you said yourself, “we could be wrong about everything” is a consequence of your pessimistic induction. Now, there are strong grounds to doubt that a hypothesis which undermines the credibility of the only argument for it is likely to be true.
“The thing to remember is that ‘rationality’ is itself in play, so invoking a certain understanding/definition of it amounts to begging the question once again.”
I don’t think we have a choice. We cannot avoid starting with some notions like rationality or correctness. Your accusation of begging the question only holds if I put specific stuff in my notion of rationality that you do not agree with; the accusation cannot stand “in general”. For example, you base your pessimistic induction on the possibility of separating the scientific from the non-scientific, on observing the efficiency of science, and so on. This is a lot more than using a vague idea of what rationality is.
“Neuroscience has radically changed the stakes – where would the Great Austrian be without normativity?”
I don’t see the contradiction. Wittgenstein showed that rule-following is a phenomenon that has a fundamentally implicit part, and thus prepared the place where neuroscience has its role.
“Like I say, the evidence for the pessimistic induction is slowly stacking up, not vice versa.”
Theories are replaced by more accurate theories, but this is hardly an argument for the pessimistic induction, since the more accurate theories confirm in large part the usefulness of the old ones, at least as approximations.
So what is this evidence? Could you cite some genuinely new evidence coming from neuroscience? The brain is a mechanism for dealing mostly with the external world, it is not fundamentally reliable, and most of what it does is unconscious. What else? In “The Last Magic Show”, you start from that. Do you really need more for any of your conclusions?
“As a result we need to suspend our exclusive commitments to use as intentionally understood (USE-I) and begin thinking use for what it actually is, a genuine unexplained explainer (USE-X).”
If I speak of use in the inferentialist sense, what is intentional about it? More precisely, I take “use” as the norm or regularity ruling where the concept can occur. The pragmatist view “respects” the fact that the application of norms is best attacked by something other than the analysis of language, for example neuroscience. I don’t understand the USE-I / USE-X distinction.
“Again, I think the burden plainly lies with you – primarily because your position is the dogmatic one.”
I do not think that the more dogmatic position has the burden of proof if the dogma is currently accepted. For me, evidence is necessary for the most “surprising” claims.
“I recommend Kahneman’s new book to complete strangers.”
I see Kahneman’s book as showing that the most conscious and intentional of reasoning can help find the biases in the non-completely explicit thinking we usually do. The PR department of the corporation is actually helping reveal all the dirty little secrets… Is science a way of tricking the PR department to make it work for what it says it is working for?
“… what Sellars calls the Manifest Image could in fact be a collection of PR devices”
Let’s speak of the “game” of asking and giving reasons, of ascribing intentions and so on. I claim that
1) It constitutes language
2) Lying cannot be more than parasitic on telling the truth. Language must be able to represent to be able to misrepresent. So at least originally, there must have been more than PR.
So in some sense, fundamentally, intentionality is more than just PR. Of course, I don’t exclude that there is plenty of PR in it. Just that the PR can only appear afterward, be derivative.
Norm OR regularity? There’s a big difference, in that the former can be right or wrong. Inferentialism is normative through and through, which is to say, intentional. And this is the nub of the distinction I’m making: the difference between intentional and nonintentional uses of ‘use.’
Ad populum. You’re mistaking persuasion for rational warrant. I’m at the bottom of the authority gradient belief-wise, there’s no doubt there! But not for long, IF the sciences of the soul continue their caustic course…
This is definitely one way of looking at it. But as Kahneman mentions, and other researchers have found, it is tremendously difficult to translate second-order knowledge of these biases into cognitive immunity to them. More and more, it seems that the enlightenment ideal of using representation to overcome repetition (to use Freud’s characterization) is not nearly as effective as ‘Skinner boxes,’ treating people like learning machines.
I think (1) is a very narrow, hyper-cognitivist understanding of language, one that is very controversial on grounds entirely independent from what I’m arguing here. I do think that the ‘reason game’ does have epistemic functions, but that these are obviously secondary to signalling functions (why else all the dismal research findings?), and that science is only now mapping the conditions favourable to either. I have all kinds of guesses as to what this map will look like, which also aren’t germane.
Relevant to our discussion, I think that the gulf between the epistemic and signalling functions of the reason-game will be shown to be far less abyssal than it appears: linguistic modes were selected for against the common backdrop of reproductive success, something requiring both political and empirical capacities for a species such as ourselves. In both cases, you have brains manipulating other brains vis-à-vis their natural and social environments. Signalling is not ‘lying’ – not by a long shot, given that everything is generally ‘believed.’ But it’s not ‘truth-telling,’ either, and so for those wielding inferentialist hammers, it proves to be a bent nail indeed. This I think is just one more reason to abandon commitments to inferentialism. It could be that language evolved basic epistemic functions first, and that the social signalling functions are derived from it, but I’m skeptical. My hunch is that the ‘epistemic bias’ among philosophers is partly a psycho-cultural artifact – ‘My mode is the fundamental mode!’ – and partly a function of the drastic informatic limitations faced by reflection. The thing to always keep in mind is how preposterously little we experience of our language use, which hauls us from the neural dark, shunts us on its (apparently) semantic skin, then dumps us back into the dark.
What we are talking about are different kinds of neural information linking up to their environments in various ways. What I’m advocating is a wholesale bracketing of our intentional understanding as a potential blind alley, and a systematic interrogation of all these phenomena in a post-normative mode. Either way, a whole new conceptual vocabulary needs to be developed to explain everything we’re beginning to discover, and I think it’s eminently safe to assume that the old prescientific systems and vocabularies will suffer the fate they have in the past. I’m not telling you to hop on my train, tickli, I’m telling you to hop off your old one and begin building something new!
“Norm OR regularity? There’s a big difference, in that the former can be right or wrong. Inferentialism is normative through and through, which is to say, intentional.”
When we describe what a person ought to do, we describe a regularity, and we add “follow this regularity”. For example, both a norm and a regularity can be described by a rule or by a sequence of illustrative examples. When I say that “use” is a norm or a regularity, I mean that it is also described by rules or illustrative examples (Or both? Or even something else, maybe “pedagogic neurosurgery”? :)). So “use”, for me, is neither exactly norm nor regularity, but something common to both.
I think my position on this point is not the usual pragmatist inferentialist position. What I really liked in Brandom’s description of language is the description, not this kind of metaphysical categorisation (use as a norm) that I also consider problematic, mostly because the concept of norm is relatively unclear.
Of course in some circumstances “use” is used as a norm, but this is irrelevant, I think. (Of course, this is probably the argument Brandom would deploy against me…)
“Ad populum. You’re mistaking the difference between rational warrant and persuasion.”
I was mostly thinking of inertia as a rational warrant. Of course, in neither case is it a strong one, but it might shift the burden of proof.
“More and more, it seems that the enlightenment ideal of using representation to overcome repetition (to use Freud’s characterization) is not nearly as effective as ‘Skinner boxes,’ treating people like learning machines.”
I completely agree, and actually you can find in Wittgenstein the idea that learning is first and foremost training, as in “Pavlovian training”, and not the acquiring of representations. (He was not speaking about correcting biases in particular.)
“The thing to always keep in mind is how preposterously little we experience of our language use, which hauls us from the neural dark, shunts us on its (apparently) semantic skin, then dumps us back into the dark.”
This is one of the last points where we still clearly differ. You seem to think that understanding the neural implementation is crucial for understanding language. I think that it is almost useless. That might be a consequence of my training in math: the description, for me, is the focus; the origin of the phenomenon is less important. My arguments are, for example, the possibility of multiple implementations of the same cognitive function, and the simple fact that our categorisations depend on their efficiency and on the external world, not just on our ability to conceptualise.
“What I’m advocating is a wholesale bracketing of our intentional understanding as a potential blind alley, and a systematic interrogation of all these phenomena in a post-normative mode.”
I certainly like this program, but I think that, despite some metaphysical overreaching excusable in philosophers, this is a good part of what Brandom and other pragmatists and empiricists were trying to do, for example in “Making It Explicit”. The book is mostly descriptive. And this kind of description is a good start for studying intentionality “from the outside”, or indirectly.
I’m not sure I understand what you mean regarding regularities, then – especially from a Brandomian perspective! Much of the beginning of MIE is devoted to theorizing an account of normativity apart from ‘regulism’ (which sets off a Wittgensteinian regress) and ‘regularism’ (which cannot pick out competent performances from common ones). This is what motivates his account of deontic scorekeeping, isn’t it? Correct and incorrect performances are those that we take to be correct or incorrect in given contexts. This allows him to break the back of all the problems pertaining to ‘beliefs’ by breaking them in half, turning them into commitments answerable to the game of giving and asking for reasons. Something social, as opposed to something in our heads.
Brandom is also, if I remember correctly (I was a rabid fan for a while in the ’90s), quite concerned with the second-order consistency of his account: his ‘descriptive account of language’ is itself a normative commitment contingent upon the Sellarsian game. Not only is normativity an unexplained explainer for him, it is explicitly so. Taken as a fiction through and through, this is well and fine, but the more one argues the autonomy and the ineliminability of normativity, the more the metaphysical commitments pile up, do they not?
“I’m not sure I understand what you mean regarding regularities, then – especially from a Brandomian perspective!”
My perspective is not completely Brandomian: I accept his arguments against regularism and do not consider that meanings are regularities. But I do not think that this forces us to go as far as to say that meaning is a norm. I also accept his arguments showing cases where meaning is normative, but I do not think that it is always or essentially normative in the full sense.
Mostly, I think that the word “norm” is a bit unclear, and it has all kinds of connotations that we must drop when we use it to speak of use. The main problem is the unclear modal force of the word “norm”. However, we can’t much fault Brandom for adopting the word, since we don’t know a much better one. Descriptive norms? Differential dispositions?
“This is what motivates his account of deontic scorekeeping, isn’t it?”
Sure, and I think that the motivation stands as long as we sometimes use meaning normatively. But I think that it is actually essential only to explain the genesis of use, and not constitutive of use.
“Correct and incorrect performances are those that we take to be correct or incorrect in given contexts.”
I think that this is a bit of a dangerous way of speaking. For me, while analysing language pragmatically, we should only speak of “taking to be correct” and not of what “correctness” itself is. Otherwise we get into problems from adopting two viewpoints at the same time.
Take the naive pragmatist saying “Truth is what people think is true”. He is right when he means the uninteresting: ““truth” is what people call what they think is true”. And since, pragmatically, you try to answer how the word “truth” is used, his unhappy utterance is explainable as a bad shortcut.
“Not only is normativity an unexplained explainer for him, it is explicitly so. […] but the more one argues the autonomy and the ineliminability of normativity, the more the metaphysical commitments pile up, do they not?”
Why is normativity unexplained by Brandom or Wittgenstein? Because it is not possible to make it explicit. For me, this is the great strength of pragmatism: by leaving alone that about which they couldn’t say anything meaningful, pragmatists avoid pointless speculation. This is why, for me, pragmatism complements neuroscience admirably: it leaves room for something else!
Let us take the example of the norms expressed in the mysterious “reliable dispositions to respond differentially”: for me, this is simply the pragmatist saying “I don’t know how it works, so I just call it a disposition”. And this is exactly where neuroscience and physics will have a role to play, explaining the long and hidden chains of causality between perception and utterance, for example.
Wittgenstein’s rule-regress argument says nothing less! Hidden behind the explicit application of a rule lies much that must be implicit. In other words, the conscious is a thin veneer on the preconscious (in this case, where inferences and the application of rules are concerned).
So I see the use of an unexplained explainer as modesty! For once, the armchair philosopher is not trying to talk over the scientist, who can do more than just study speech.
Finally, I think that normativity (in Brandom’s sense) is ineliminable in the following sense: you cannot translate it into naturalistic terms. But it is not a “hole” in our scientific description of the world, for the same reason that questions or orders are useless in our scientific description of the world. You can eliminate it in the sense that you don’t need it in the scientific description of the world.
I don’t see many metaphysical commitments there, just that descriptive norms exist in the pragmatic sense, which means that we can talk about descriptive norms without too many problems, if we are careful. I think we can, and so I accept the commitment. I also think that we can make sense of speaking of norms in the full meaning of the word, but that would require more explanation.
Eventually it could even be possible to prove or refute scientifically that norms exist in the pragmatic sense. But not by examining the brain – by analysing language and reality and their correlations. By looking at the brain, you may find what I think, but not whether what I say is correct or wrong. That depends only on what is outside and on what has been said.
The identical pattern again. The fact is, tickli, we have a good old-fashioned philosophical game of out-fundamentalization going on here, and what I keep trying to show you is how this game is the very thing you should expect if all the things I worry about turn out to be scientific fact. You take norms as the ‘ineliminable frame’ based on a certain interpretation of What Science Is (or more accurately, Can-Or-Cannot-Do). You eschew outright transcendental claims, appeals to aprioricity and autonomy and the like, and elect instead to ‘speak pragmatically,’ talk of what is useful to us. This is the sense in which “descriptive norms exist” – pragmatically. This is your deflationary route out of the naturalistic bind I keep trying to foist on you.
I’m disputing the possibility that there’s any such thing as the ‘pragmatic existence’ of pragmatism – I think there’s a good chance it’ll turn out to be a name that we give to something we don’t understand. Given the counter-intuitive claims coming out of cognitive neuroscience and elsewhere, I think we have no choice but to take this possibility very seriously, and that pragmatists such as yourself are well-advised to condition their commitments if not abandon them altogether. I have to admit that back when you adopted the deflationary extreme of your position, stating that norms were simply ‘useful fictions,’ I had an inkling that you would balk – I’ve been down this road with pragmatists before!
And so, you come back with a series of substantive commitments:
To say “descriptive norms exist in the pragmatic sense” is a metaphysical claim – this is the very claim I’m disputing. The mere fact that I’m disputing it throws the prospect of talking about “descriptive norms without too many problems” into doubt, but I take it as a platitude that few domains of human discourse are as controversial as ‘norm talk’ – it is philosophy you’re talking about, after all!
The notion that cognitive neuroscience will have no role in resolving the question of whether norms exist I find very curious, especially since most of the furor surrounding cognitive neuroscience has to do with the ways its findings jar with our intuitive/traditional normative and intentional understanding. You think this has to be the case because the normative question of correctness is not something that lies in the brain, but rather in the normative contexts that brains find themselves caught up in. But you do see how thoroughly this begs the question? You’re saying that cognitive neuroscience cannot settle the question of whether norms are simply a parochial artifact of human experience because norms are SOCIAL, which is to say, NOT a parochial artifact of human experience.
It was precisely pinches like these that popped my pragmatic bubble (and got me thinking in terms of informatic as opposed to normative externalism). Unless I’ve made some gross misinterpretation, I’m not sure the position you’ve sketched is all that defensible – and so much the worse for humanity, I say.
“stating that norms were simply ‘useful fictions,’ I had an inkling that you would balk – I’ve been down this road with pragmatists before!”
I don’t think I balked, it’s just that for me existence in the pragmatic sense is being useful as a “fiction”. (Of course, I wouldn’t use the word fiction myself, since for me there is no “thing” which isn’t a fiction.)
“I think there’s a good chance it’ll turn out to be a name that we give to something we don’t understand. Given the counter-intuitive claims coming out of cognitive neuroscience and elsewhere, I think we have no choice but to take this possibility very seriously, and that pragmatists such as yourself are well-advised to condition their commitments if not abandon them altogether.”
Given that I have already agreed that I have to condition on a kind of general failure of our understanding of everything, if you want me to be more specific about what might fail, you need to show that science is more corrosive to my pragmatism than to your pessimistic inductions and your other arguments. In other words, you need to show, for example, that my pragmatism is clearly unscientific, in the same way that old-style metaphysics is unscientific. I don’t think you have done that, by far.
“To say ‘descriptive norms exist in the pragmatic sense’ is a metaphysical claim”
I don’t think so. That some kind of discourse is useful is not a metaphysical claim. Saying that talk using descriptive norms (not norms in general) is useful is not very different from saying that any kind of second-order talk about language is useful. And of this I think we have many examples, as when learning second languages, or when explaining the meaning of terms or their use.
“The notion that cognitive neuroscience will have no role in resolving the question of whether norms exist I find very curious, especially since most of the furor surrounding cognitive neuroscience has to do with the ways its findings jar with our intuitive/traditional normative and intentional understanding.”
The discoveries of cognitive neuroscience will tell us what is in our brain. “Is what is coded in our brain useful or not?” is another question. For example, when receptors for warmth and cold were discovered in the skin, we learned nothing about what heat was. (Okay, we knew it before, but still…) Kahneman was able to show that we are irrational when evaluating probabilities because he could compare our evaluations with the actual probabilities. Intuitively, to show that intentionality is wrong, you need to compare it with something right, and this something will not come from psychology or neuroscience.
“You think this has to be the case because the normative question of correctness is not something that lies in the brain, but rather on the normative contexts that brains find themselves caught up in. ”
In one sense, it does not make sense to say that a norm is somewhere. But certainly, ultimately, the brain judges – we judge whether an instance falls under a rule or not. So the norm is implemented (maybe imperfectly) in the brain. So examining the brain may tell us whether the norm we implement is the norm we thought we implemented, or another one, for example. It cannot by itself tell us whether the norm we implement is correct or not for some exterior correctness criterion like usefulness or coherence (except for coincidental exceptions).
“You’re saying that cognitive neuroscience cannot settle the question of whether norms are simply a parochial artifact of human experience because norms are SOCIAL, which is to say, NOT a parochial artifact of human experience.”
I define descriptive norms in a pragmatic way, and then my definition has consequences. Do you then disagree with my definition? (My definition: ~“A way of classifying instances into two categories.”)
I don’t want to fight over what is “a parochial artifact of human experience”, since some very useful discourses and correct ideas may be “parochial artifacts of human experience”!
Again the pattern. Do you not see the way you keep begging the question? At every turn you appeal to some intentional category, or some rationale regarding the immunity of intentional categories – you appeal to the very thing in question. Theoretically, the structure is really no different than what you find with Derrideans: rather than making substantive claims regarding the reality of intentionality (which would be a traditional metaphysical commitment), you simply follow the same procedure, which is to raise the question of usefulness. In a sense, you’re trapped by the totalizing applicability of your procedure: you can always ask about the correctness or usefulness of any given understanding or concept, even USEFULNESS and CORRECTNESS. This is the big reason I abandoned Wittgenstein, for instance: I realized it led to the same kind of performative dogmatism that drove me out of deconstruction, a way of thinking that licenses the eschewal of substantive theses (and thus metaphysics as traditionally understood), but that nonetheless boxes the thinker within a narrow family of arguments from which they need never escape. Looking at pragmatism in meme terms, I would say that it evolved precisely because of the way it short-circuits the game of giving and asking for reasons. It’s a regimented way of thinking that allows you to avoid explicitly committing to substantive theoretical claims (where the game might not go your way). Your interlocutors try to pin these on you, but you simply step back and say, “My commitment runs only so far as my claim’s utility.” It becomes a kind of crypto-metaphysics, which is arguably worse than metaphysics.
But I know that you’ll likely find this second-order diagnostic approach unconvincing. So let me put it in the form of a question…
So for instance: Cognitive neuroscience has a good chance of showing that my fears are entirely misplaced. What would show you the error of your ways? What could convincingly demonstrate that pragmatism is wrong?
Lack of utility? The absurdity of this response, I would argue, simply shows the degree to which pragmatism (as a principled philosophical position) seals thinkers in – renders them dogmatic.
Or put a different way: You have a very specific, regimented way of approaching and thinking through philosophical problems: could you tell me, without begging the question, what makes this way best?
“At every turn you appeal to some intentional category, or some rationale regarding the immunity of intentional categories – you appeal to the very thing in question.”
Could you be more specific? In a general sense, any discourse, yours as well as mine, appeals to intention, since language is intentional.
“Theoretically, the structure is really no different than what you find with Derrideans: rather than making substantive claims regarding the reality of intentionality (which would be a traditional metaphysical commitment), you simply follow the same procedure, which is to raise the question of usefulness.”
I consider most usual metaphysical claims meaningless, because they are based on a misunderstanding of the use of the word “exist”, so of course I don’t make usual metaphysical claims. I don’t know enough about Derrida’s ideas to say if there is some structural similarity, but there clearly is a huge difference, in that pragmatists try to be as concrete as possible and are a lot more rigorous (at least compared to Derrida himself).
“you can always ask about the correctness or usefulness of any given understanding or concept, even USEFULNESS and CORRECTNESS.”
Yes, but people are actually interested in those two concepts from the start, before introducing pragmatism. It’s just that sometimes they lose sight of their goal along the way and do old-style metaphysics.
“In a sense, you’re trapped by the totalizing applicability of your procedure”
That a procedure is widely applicable is an advantage.
“boxes the thinker within a narrow family of arguments from which they need never escape. ”
As long as it does not PREVENT the thinker from using other arguments, it is not a disadvantage at all. I would argue that pragmatism is especially flexible in allowing one, for example, as Quine did, to argue in and use the language of Platonism. (This flexibility is among other things a consequence of a better understanding of the word “exist”.)
“Your interlocutors try to pin these on you, but you simply step back and say, “My commitment runs only so far as my claim’s utility.””
It displaces the question, certainly, but the question of utility is more concrete and more decidable than the previous one, at least in most philosophical discussions. So the unusualness of the displacement is irritating, but if it works and helps solve philosophical problems, we should adopt it.
Also, you hint at the idea that a pragmatist considers the utility of a claim only in contexts that suit her. This is naturally not the way anybody recommends assessing a claim’s utility.
“Cognitive neuroscience has a good chance of showing that my fears are entirely misplaced.”
I seriously doubt that. What result in neuroscience would show that your fears are misplaced? Can you give me an example?
“What could convincingly demonstrate that pragmatism is wrong?”
A large part of pragmatism could be made obsolete, for example, if we discovered an artificial language which is more efficient in some way than natural languages, or a generally better way of thinking.
Some core ideas of pragmatism, like the idea that what counts is usefulness, are certainly irrefutable. But on the one hand, that’s an argument for pragmatism, not against it. And on the other hand, it might be unavoidable that if you define “correctness” or “truth”, your definition will not be a refutable theory in the old-style metaphysical sense. But it will be refutable for pragmatists: by showing that another definition of “truth” than “useful in a specific xxx way” is more fruitful! This is an example of how pragmatism makes everything more graspable.
“… seals thinkers in – renders them dogmatic.”
Advances in knowledge sometimes make us state our new knowledge in a dogmatic way; that is not necessarily a bad thing. It depends on what we become more dogmatic about…
“Or put a different way: You have a very specific, regimented way of approaching and thinking through philosophical problems: could you tell me, without begging the question, what makes this way best?”
Approaching problems in this regimented way enables pragmatists to be as flexible as possible in their thinking. A pragmatist can emulate other kinds of philosophical thinking. By being strict in their method, they become uncluttered by all kinds of implicit metaphysical commitments. (Of course, this is only an argument for the core idea of pragmatism, not for all that is usually called pragmatism.)
What makes you certain What Language Is, let alone that it’s intentional? Seems to me this is an empirical question.
I appreciate that your commitments strike you as enabling, rather than constraining, simply because, once again, I shared those commitments. No longer. For me now, ‘pragmatic’ simply refers to doing the best I can in conditions of abject ignorance, minus any totalizing commitments to the role of truth/meaning and language and contexts and normativity and so on. Why? Because I have no clear idea what the fuck any of these things are, and I think it’s very clear (thanks to cognitive psychology) that theorizing them outside the sciences has very little to recommend it beyond generating new possibilities to consider. Given that these possibilities seem to have been mapped ad nauseam, and that almost all the pragmatic (contextualist) claims I encounter are simply versions of what I have already encountered, it just seems… uninteresting, even retrograde. I think we’ve already learned enough about the brain to warrant wholesale conceptual experimentation.
Irrefutability is never a good sign, theoretically speaking. Given that, all things being equal, you are wrong rather than right, irrefutability should count as a warning that you’ve found your way into the very kind of system I was describing, which is to say, one designed to game concepts for institutional advantage. Occultism made cunning.
The problem with totalizing claims is the way they render everything accountable to the same yardstick, and so beg that yardstick even in the course of responding to critiques of that yardstick. The problem, in other words, is the way they become irrefutable. The problem with irrefutable claims is that, ceteris paribus, you are likely wrong, but will never be able to recognize as much, since you have structured your half of any debate you enter into one that inevitably begs your yardstick.
Naturalists take scientific experimentation as their yardstick. The virtue of this approach, aside from providing real speculative versatility, is that the yardstick is itself something that can be investigated and revised, perhaps radically. There’s nothing sacrosanct about scientific practices, nothing immune to radical revision, nothing that says the science of a thousand years from now will be recognizable as science now. But what you can say is that it will be effective.
“What makes you certain What Language Is, let alone that it’s intentional? Seems to me this is an empirical question. ”
Once you have good definitions of “language” and “intentional”, the question might be empirical. The question “What is language?” seems to call either for a metaphysical answer or for a definition. In neither case will science be directly involved. If we take a standard definition of “intentionality”, like the one in the Stanford Encyclopedia, it is by definition applied to sentences.
Science cannot answer before a question has been posed. So you need to define your terms first. When you ask “What is intentionality?”, you actually already have an idea of what intentionality should “look like”, and you search for a neurological correlate. Your idea of what intentionality should look like cannot be justified by science (though it may be refuted by science).
“I think it’s very clear (thanks to cognitive psychology) that theorizing them outside the sciences has very little to recommend it beyond generating new possibilities to consider.”
I think that pragmatism should be mostly scientific, although you still need a few bridges to translate your philosophical problems into a scientific language. Pragmatism offers such bridges. Most pragmatists are trained philosophers and so they act as such, but this does not mean that pragmatism itself must be metaphysical. When I say meaning is use, I might as well say: “meaning” is metaphysical, so let’s speak about use instead. I am mostly prescribing, not stating a mysterious metaphysical equality.
Pragmatism as Wittgenstein practised it is very scientific: you observe language and how people get into philosophical trouble, maybe you prescribe thinking or speaking differently, and hopefully theorising becomes easier afterward. The kinds of mundane metaphysical assumptions appearing in pragmatism done right are nothing more than those appearing in most sciences and in everyday thinking.
“Irrefutability is never a good sign, theoretically speaking. Given that, all things being equal, you are wrong rather than right, …”
When you declare what the point of what you are doing is, when you state your goal, you always get claims that are in some sense irrefutable. That is natural. Being clear about what we are trying to do is not a bad thing. So when I say that one of the goals of a language is predictive power through description, for example, I am prescribing as much as I am observing. And so what I claim is mostly irrefutable, because it is partly prescriptive.
Another way to say the same thing: when you state that claims are more often wrong than right, I think you are wrong, at least concerning “philosophical” claims. I think most are meaningless by themselves, largely vacuous, or definition-like rather than assertion-like. That is the case here.
I agree with you that many philosophical systems contain a kind of irrefutability trick, but I do not think that it is the real problem with those philosophical systems. I think the problem is that they are hopelessly vague. Non-formalised abstraction breeds vagueness, and it is the plague of most philosophical systems. I see two ways out: pragmatism (trying to be as concrete as possible and experimenting) and extreme formalism with tests. The second solution seems to exploit our cognitive capacities badly, so I think pragmatism, maybe with a little formalism, is the way to go.
“The problem with totalizing claims is the way they render everything accountable to the same yardstick”
This is a priori not a problem, in particular if the yardstick is not contested. In practice, in the long term, scientists are pragmatists. They consider a theory true if it works better than its competition, if it is more useful. Why should we act differently when considering questions of philosophy? What are your yardsticks?
Holonomic Brain Theory by Karl Pribram.
He’s apparently teaching at Georgetown now. Hoya Saxa!
If nothing else, a good kick in the pants to review Fourier transforms.
Fourier transforms…the memories… the memories…
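For anyone else taking that kick in the pants: the transform pair in question is just the standard textbook one (nothing Pribram-specific here, and entirely my own addition):

```latex
\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx ,
\qquad
f(x) = \int_{-\infty}^{\infty} \hat{f}(\xi)\, e^{2\pi i x \xi}\, d\xi .
```

The feature holonomic theories lean on is that the encoding is distributed: every value of the transform depends on all of the original signal, just as every patch of a hologram carries information about the whole image.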
“Consider the experiment that showed there were brain patterns indicating motion before the test subject decided to move his finger: does this actually indicate a lack of free will, or rather that the free will mechanism resides somewhere beside the conscious mind?”
Admittedly, this was my thought on Libet’s readiness potential experiment as well.
I remember Moorcock’s Multiverse had, according to at least one denizen, incredibly limited free will. We could only act freely in certain moments.
Just threw in the Moorcock thing; thought it would strike a tinder in my mind, but not so much.
I’m dubious about that test. To me, it strikes me as a test of random number generation (in this case, 1 or 2 (2 == lift finger)). It’s based on no other input. I’m not making an argument for free will, but I am making an argument that when you reach for a random reaction from yourself, you are probably drawing on a random generator, and indeed the generator might be deep. I mean, what about the recursive question – not the response time to lift the finger, but the access time of requesting a random result. Does the request go through pretty much instantly, and it just takes the random generator time to send the result back (in conscious/human-readable format)? Kind of a bureaucracy – you can hand in the form pretty much instantly, but the processing time sucks?
I’m not sure I understand your objection, Callan. Are you saying the problem is that the subject has to gauge their own awareness of the decision?
IIRC Dennett found the experiment problematic, I think for reasoning similar to yours, but I forget his exact issue with it.
Scott can probably clarify.
I think the conclusion that this is a test of actual decision making is probably false. Instead I suspect it’s a test of accessing a random number generator, which happens to take some time to access. I think it does show that part of your brain can be prompted to do stuff, then be doin’ stuff, and it takes you some time to be aware of that. I don’t think it shows, for example, that your thinking/language is just a post hoc rationalisation, a commentary splattered on top of something else’s actions. It probably does show that when you search for a ‘gut feeling’, you are probably feeling something that started to be generated six seconds before. Though I guess if you always go off gut feeling, perhaps the post hoc rationalisation has more relevance…?
“The pattern predicted a left or right decision with about 60% accuracy and occurred about 10 seconds before the conscious choice”
This is probably a case of the subconscious mind going “I’m having a good feeling about left. Yes, I think it seems pretty random. After all, I remember doing those four things involving right in the last two hours, and those other three things involving left. Getting real good vibes about left here.” And this process can start even 10 seconds earlier.
But then, in the 40% of cases where the prediction fails, the conscious mind can disagree at the last moment and choose the other option. But since it is used to trusting the subconscious, that is the minority case.
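A back-of-the-envelope way to see why 60% is weak foreshadowing rather than full determination: a toy simulation (entirely my own construction, not the actual Soon et al. analysis) where an early unconscious bias is followed only 60% of the time. A decoder that reads the bias perfectly still tops out at exactly 60% accuracy:

```python
import random

def run_trials(n=100_000, follow_bias=0.60):
    correct = 0
    for _ in range(n):
        bias = random.choice(["left", "right"])  # forms ~10 s before the report
        if random.random() < follow_bias:
            choice = bias                        # conscious choice follows the bias
        else:
            choice = "right" if bias == "left" else "left"  # last-moment flip
        correct += (choice == bias)              # decoder only ever sees the bias
    return correct / n

print(f"decoder accuracy: {run_trials():.3f}")   # ~0.600
```

On this toy picture, the brain pattern is a weak early vote, and the 40% of “flips” is exactly where the last-moment disagreement described above would live.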
“What cognitive task do you attribute to the soul? Do you agree that those cognitive faculties must not be altered by brain lesions or drugs?”
Personally I’m not sure ‘soul’ is the right word, at least for the idea that your ego survives your death.
For me, I think Chalmers is on to something when he says the mind is *more*, but I’m also not sure I buy into quantum consciousness, though it is an interesting idea.
I think Pribram is a little more grounded than Hameroff, though admittedly if there’s a guy you want at your party it’s Hameroff. Penrose, not so much. 😉
I hope at least one of these guys is right, but the regularity with which science confounds what we want to be the case makes me less than sanguine. We always end up being less special than we suppose.
I have a friend who wants me to take ayahuasca, and I end up thinking you should be the one to do it Scott. I’d be curious what you experience.
Anytime someone mentions ayahuasca, the first thing to come to mind is the Tori Amos song “Father Lucifer.” From VH1 Storytellers:
“…when my father said to me, ‘Tori Ellen, I can’t believe you wrote this song about me.’ And I said, ‘I write everything about you, what are you surprised about?’ And he said, ‘No, but I’m really hurt about this one.’ And I said, ‘Well which one is it?” And he said, “well, you called me Satan.” And I said, “No! I was taking drugs with a South American shaman and I really did visit the Devil and I had a journey.” And he went, ‘Oh, Praise Jesus!'”
Weirdly, I thought of you today, maybe for the first time ever in a random event.
So driving into Edinburgh they have “average speed” cameras (measuring the speed over a distance via two or more cameras, just in case one didn’t know).
Anyway, it’s set at 40 mph, and you have to stick to it, since it’s measured over the length of the road. So coming into work today I slowed down to 40, then someone passed me (“fool,” I thought), then someone was going quite a bit slower, so I overtook them (“cautious fool in my road” was the thought), and I observed similar reactions or facial expressions as everyone “judged” everyone else’s speed. Then it suddenly dawned on me.
We’re all only going by what it says on our speedos, and how the hell do I know my speedo works, or is precise, or even accurate? My car was eight years old by the time I got my hands on it. So in fact it could be my speedo which is wrong, and therefore either the slower or the faster person could be correct, but I’m judging them by what it says on my clock, even though I have no right to believe it is correct.
Yet I’m quite happy to think everyone else is the fast/slow arsehole rather than me.
Anyway, that made me think of you and your message. Not sure if it is relevant that I do, but from my limited understanding of your work, it seems to be what you’re saying.
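The speedo worry is easy to make concrete. A toy sketch (the calibration numbers are made up, mine): two drivers both holding an indicated 40 mph can still genuinely pass one another, while the average-speed camera, which just divides gantry spacing by elapsed time, needs neither speedo:

```python
def true_speed(indicated_mph, calibration):
    # calibration = indicated reading / true speed; 1.05 reads 5% high
    return indicated_mph / calibration

me = true_speed(40, 1.05)     # my optimistic speedo: ~38.1 mph true
them = true_speed(40, 0.98)   # their pessimistic speedo: ~40.8 mph true
print(f"me: {me:.1f} mph, them: {them:.1f} mph -- they pass me")

# What the camera actually measures, regardless of anyone's instrument:
miles, hours = 2.0, 3.0 / 60.0   # 2 miles between gantries in 3 minutes
print(f"camera verdict: {miles / hours:.1f} mph")
```

Each driver judges the other against an instrument whose error is invisible from inside, which is about as neat an everyday Blind Brain parable as you could ask for.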
It’s nice to be thought of!
For me, ‘Speedo failures’ usually involve my cock falling out!
Sounds like the ‘other assholes’ you need to worry about are the ones bent on ubiquitous automated surveillance.
Cheers for the mental image Scott, although considering the stuff in your books…
I agree. The problem is that the infrastructure of my society requires a lot of people to read a lot of complicated stuff, understand it, then act upon it for any real change to happen, and there are so many more immediate and easier things to think about.
“What you need, it seems to me, is something like the Penrose/Lucas argument, only without all the holes”
My goal is to slowly work up to understanding Penrose’s points, the refutations, and Solomon Feferman’s qualified support of Penrose’s ideas against Strong AI.
I actually have the book by Hutter, Universal AI: Sequential Decisions Based On Algorithmic Probability, given to me by a prof, but it’ll be a while before I can make heads or tails of it.
Oh, Dharma – Saw Tori in concert, she kept starting and stopping a song, then finally said “Oh fuck it” and ripped the cheat sheet for her lyrics from where they’d been sneakily taped to the floor.
Good stuff.
It’s interesting to note that Penrose himself seems unconvinced that anyone save Chalmers even understood his arguments. 🙂
For interested parties, an interview with Penrose on his argument:
http://simplycharly.com/godel/roger_penrose_godel_interview.html
Saajan, I think Penrose’s argument starts spinning to pieces when he asserts that mathematicians are “Gödel immune”.
For one, what would a mathematician encountering a statement that is a case of a “Gödel statement for the human mind” experience?
Would it cause the poor guy to lock into a “loop” trying to prove it, or would it merely be an annoying little crunchy bit that seems to avoid proof despite making a kind of sense?
Because as far as I recall, math is full of little theorems that “evade” proof 🙂
It is entirely plausible that the human mind is “computational”. It has little bearing on the issues of “intentionality” and “aboutness”, though.
Just to be clear, I have no idea whether Penrose is correct or incorrect as I haven’t read Shadows of the Mind.
I’ve just been collecting the info in the hopes of going through it at some point, and figured since Scott mentioned the “Penrose/Lucas argument” people might be interested.
As to quantum consciousness, I find it a possible way to unlock aspects of Earwa and thus TUC but in RL I’d want to see more evidence.
It seems like Kellhus is somehow utilizing a connection between consciousness and light to fire watch without using actual sorcery.
@01
Yeah, Penrose’s attribution of the capacity to guess the missing axioms is what I was referring to (in the next post) by “the hypothesis that humans can compute stuff which isn’t Turing (+randomness) computable.”
I don’t think Feferman gives even qualified support to Penrose’s thesis. My reading of his review would be that it starts kindly (praising the excellent exposition) but becomes “a polite and understated assassination”: he carefully lists all the mistakes Penrose makes, even when they don’t have any influence over the conclusions.
math.stanford.edu/~feferman/papers/penrose.pdf
“Shadows of the Mind” is an easy read (compared to Hutter ;)). So I’d advise you to start there and then read some non-technical refutations. They seem sufficient against Penrose’s Gödelian argument. As far as I remember, one of Penrose’s assumptions is equivalent to the hypothesis that humans can compute stuff which isn’t Turing (+randomness) computable. This is just far too strong. (It’s 95% of what he is trying to conclude.) Feferman doesn’t even emphasise this simple (and for me decisive) argument. Maybe because he partly agrees with the assumption?
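For anyone who wants the skeleton rather than the few hundred pages, here is the Gödelian argument in schematic form (my paraphrase, so don’t hold Penrose to the wording):

```latex
% The Lucas–Penrose argument, schematically (my paraphrase):
% let $F$ be a consistent formal system alleged to capture human
% mathematical reasoning, and $G(F)$ its Goedel sentence.
\begin{enumerate}
  \item If $F$ is consistent, then $G(F)$ is true but unprovable in $F$.
  \item Mathematicians can nonetheless ``see'' that $G(F)$ is true.
  \item Therefore mathematicians prove something no such $F$ can,
        and the mind is not captured by any formal system.
\end{enumerate}
% One standard objection: step 2 presupposes we can certify that $F$
% is consistent, which Goedel's second theorem blocks for any $F$
% rich enough to encode our own reasoning; drop that certainty and
% no hypercomputation follows.
```

Which is roughly why the “humans can compute the non-computable” assumption mentioned above carries nearly all the weight.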
By qualified support I just mean Feferman’s statement:
“…I am personally convinced of the extreme implausibility of a computational model of the mind…”
Oh, here’s Penrose’s general refutation of his original critics; since you’ve read Shadows you might find it of interest, though at minimum it seems you have to read the refutations of Feferman and Chalmers:
http://www.calculemus.org/MathUniversalis/NS/10/01penrose.html
01, am I right in thinking this is your proposition:
Seems like most things in life that we ascribe meaning to – love, happiness – aren’t things we choose but things we query our subconscious about.
“Do I see a future with X?”, “Should I be stricter with my kid?”, “Why am I worrying about this mumbo-jumbo when I should be writing balls-to-walls for glory in the name of the dark gods of dead cities?”, “Could Sci ghost-write TUC or would his vestigial prudishness ruin the novel?”, and so on.
So the mind, whatever it is, seems to query the subconscious to make projections for our utility. (Think of it as the maximization of our long-run expected value.)
However, the *weighting* of these different things (kind but bad sex, exercise vs. reviewing Fourier transforms in the evening) is not up to the conscious mind. So what’s left to do is sort out the activities that maximize the expected value for some amount of time.
Will, whether free or not, is really the sorting mechanism that decides the goals that lead to maximization?
And Scott, your position is that it’s possible that this supposed calculation, which I *think* is Hutter’s “Sequential Decisions Based On Algorithmic Probability”, is just a farce?
That there is something else going on in the mind, orthogonal to these processes? But then why would natural selection lead to conscious beings that think they are engaged in deliberation to make “free won’t” decisions?
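If it helps to see the picture as toy machinery: a sketch of the proposal above (my own construction, nothing to do with Hutter’s actual formalism), in which an unconscious module fixes the weights and the “conscious” contribution is just sorting plus an occasional veto:

```python
import random

def unconscious_weights(goals):
    # Stand-in for whatever valuation the subconscious actually performs.
    return {g: random.gauss(0.0, 1.0) for g in goals}

def conscious_sort(weights, veto_prob=0.1):
    # "Conscious" work here is only ordering, plus a rare last-moment veto.
    ranked = sorted(weights, key=weights.get, reverse=True)
    if len(ranked) > 1 and random.random() < veto_prob:
        ranked[0], ranked[1] = ranked[1], ranked[0]   # "free won't"
    return ranked

goals = ["finish reply", "exercise", "review Fourier transforms"]
print(conscious_sort(unconscious_weights(goals)))
```

On this sketch, the question above becomes: is the sorter doing real work, or is it, too, just reporting an ordering fixed elsewhere?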
Hm, well, might be.
I’m not enough of a neurocog boffin to say whether the “willful conscious effort” is actually a sorting algo establishing current priority for “unconsciously established” goals (finish reply in an internet argument, then have round two of hot weird sex 😉 ), but I recall there being evidence that deliberately refraining from a desirable activity is dependent upon blood glucose levels for success, and is associated with an increase in glucose utilization in the brain.
Which suggests that whatever the hell that willful suppression of “desired” activity is, it is an actual, physical process – and an energy-intensive one.
Also, it suggests that if what you are trying to refrain from is “eatin’ moar sugarz”, you’re kinda fucked.
“unconsciously established” goals
Just to be clear, I mean the positive and negative weights on the goals are unconsciously established. Whether the sorting mechanism is “conscious” depends on the pinning down of that vague word.
Me: So VD is using the “Wait, there’s still some black box over there, in the unconscious! That’s all unknown in that box; the soul’s in that box!”?
VD: No, not at all. Despite your perfect understanding of me, you do seem to have some trouble articulating yourself.
To continue the utter understanding motif; no, not at all. You have trouble recognising the crinkled line of yourself described, for when you see out of it, you just can’t see yourself as seeing from a twisted angle.
Call the black box whatever you like, the point is that a) the finger moved, and, b) the conscious mind doesn’t appear to have done it.
It appears something that makes fingers move and so forth is in the black box rather than the conscious mind. Whether it is something of cosmic and eternal import, such as the soul, or simply a complicated set of if/then rules, is as yet unknown.
Of course, neuroscience is finally catching up to sport here, as every athlete knows that if you have to consciously think about it, you’re going to be too slow.
Soo, soo careful. Careful words. Careful not to perturb what lies underneath. As far as your words can go, without a wrathful lash rising up and smattering any other thought in twain. Never mind an affirmation of the lash at the end. I understand you all too well. Unless you…
If it has a gun to your head, just raise your eyebrows twice! Of course, it’s listening too, so that little signage won’t work out.
‘Or simply a complicated set…’ – seriously, it’s funny! It’s like a knife’s edge between two men – just a switch of emphasis and you’re another. I understand you all too well – I say you’ll just switch the black box to another spot, and you refute it by saying ‘Or it’s just a complicated set…’ ’cause that’s just that…THING. Unrelated to the eternal! And so makes no conclusion about it. But with another emphasis, one the elephant will lash out and splinter with its overmuscled trunk, the trail ends and it’s ‘Simply a complicated set…’
I envision a creature of logic, desperately attempting to not be smashed to pieces in a very organic landscape. A landscape that hears what the creature attempts to become, and punishes. A terrified rider on an elephant that knows how to roll. How do we speak without giving the game away to it? I understand you all too well…unless you ask…