Post-Intentionality Redux
by rsbakker
.
Peter Hankins of Conscious Entities fame has posted his thoughts on my Scientia Salon piece here. As always, I think Conscious Entities is the best site on the web for those seeking clear and impartial op-editorial updates on the world of cognitive science and consciousness research–far more so than Three Pound Brain! Which is okay. Here, the idea is to push a certain boundary, whereas there, the idea is to assess many of the different boundaries being pushed.
.
So when will the Blind Brain Theory t-shirts be available for purchase?
They already are. You. Just. Can’t. See. Them…
this made my dady
*day
…just a freudian slut
…i meant that to be a reply to my reply not to Bakker’s…
Reblogged this on synthetic_zero.
Cool. 😉
http://www.slate.com/articles/health_and_science/science/2014/05/quantum_consciousness_physics_and_neuroscience_do_not_explain_one_another.html
I agree with most of what he says, but (and I realize I probably shouldn’t be saying this) it seemed pretty harsh to me!
Ha! Pot meet kettle!
amen brer callan amen
He kettle. Me pot.
i can see the twitterbait now bakker in blackface…
Me pot.
That’d explain the smoke…
there’s a new book out which treats a model of certain neurons. i can’t afford 200 dollar books, but this book looks interesting in that it argues that a classical neuron can implement the structure of a qubit, thus bequeathing a classical system with certain simulated features of quantum information processing while remaining a bona fide classical system.
Loewenstein’s book on quantum information processing in the brain is out too. i will probably pick that one up. good discussions on time in there.
The BBT t-shirt should be black with a picture of a human brain on it, and if you stare at it long enough you realize that it’s your brain and you’re actually the person wearing the t-shirt, but then you wonder how you can see that it’s your brain since you can’t see inside your own head and realize that in fact you are just watching someone else wearing the BBT t-shirt, but then… and it goes on like that until you realize you’ve got no money left, probably having spent it all on t-shirts, and are being herded into some sort of death camp by post-human cyborg drone creatures.
+1
i’m throwing up some BBTish photos onto my photobucket. i wasn’t really consciously aware of bbt at this point, though much of what brassier said about françois laruelle in his book definitely fed into the aesthetics of what i was going for, and i happen to think that bakker naturalizes the whole structure of dualysation / unilateral duality / identity-without-unity / duality-without-difference anyhow.
http://s31.photobucket.com/user/divisionbyzer0/library/OnesidedCut?sort=3&page=1
Way. Cool.
I’ve read piss all Laruelle, but I remember thinking it would be possible to capture his gestalt in ‘BBT-speak.’ I would love to hear more, DZ!
Click to access chiasma-laruelle-hauck.pdf
So a ‘perspectival klein shirt,’ one that you can only see others wearing if you are that other.
I pity the fool stuck inside my shirt.
BBT T-shirts are all inside out. So you thought you were wearing a T-shirt that whole time, but you weren’t. The world was.
grist for the mill: http://blog.urbanomic.com/cyclon/archives/2014/11/moremind.html
Guys, guys, I still don’t “get” it.
Why would learning about the underlying mechanics of the mind (even if those explicitly deny “intentionality” as a viable entity) alter my experience of the mind ?
I mean, I am already acting in accordance to those processes. Why would learning more about those processes lead to changing my behaviors and experiences?
I assign no motivational value to “beliefs” about me having “intentional states” and “aboutness” and whatnot, so why would learning that those are just stuff and nonsense cause me to somehow undergo a fundamental lifestyle change ?
Well, that’s like saying learning about the Heartbleed bug wouldn’t stop a server working as before. And it wouldn’t. That’s the problem. We’re talking about the underpinnings of your thought being discussed – the poles that hold up the circus-tent canvas that is our thoughts. Whereas before there was no ‘before’ of thoughts – thoughts just were the start of things. Now there’s a series of potentially vulnerable ‘server architecture’ (i.e., the circus tent poles) up for discussion.
I assign no motivational value to “beliefs” about me having “intentional states” and “aboutness” and whatnot, so why would learning that those are just stuff and nonsense cause me to somehow undergo a fundamental lifestyle change ?
Can’t say I really understand this – ‘motivations’ is the measuring tool we use to get some logistical grip on the running of our lives. You don’t think hunger motivates you (at various times), for example?
@Callan
And so what ?
Okay, let’s assume we discover an exploitable part of the mind.
I kind of always thought of humans as particularly contrived and poorly engineered machines, so such discovery wouldn’t even surprise me, let alone “terrify” me or make my worldview shatter or something.
If anything, I will be rather “satisfied” with myself being right about human nature after all.
If the attack surface of some “mindsploit” turns out to be exceptionally large (think something like BLIT or “insanity-inducing texts”) I might become a bit perplexed about how we failed to stumble upon it at random. That is all.
No, I just don’t assign any particular value… errr… significance to the eventual outcome of inquiry into the nature of intentionality or aboutness. If those turn out not to be “real things” after all, and go the way of the luminiferous ether, it won’t be much skin off my nose at all (I’ll lose a $50 bet tho). If they turn out to be things, but in a manner that also ascribes a weird kind of “intentionality” to spiders, weirdo molds and hard drives, I’ll win a $50 bet.
Okay, let’s assume we discover an exploitable part of the mind.
I kind of always thought of humans as particularly contrived and poorly engineered machines, so such discovery wouldn’t even surprise me, let alone “terrify” me or make my worldview shatter or something.
I’m not sure I have the same concerns as Scott or others might have.
But zealots do scare me. Do you think mind tweaking can’t lead to vicious feedback cycles of behaviour extremism (i.e., make a behavioural trigger stronger – it triggers a behaviour that gets more surgery to get more of that behavioural trigger strength enhancement)?
Maybe we don’t share the problem at the existential level Scott and co have, but at the same time I don’t think you’re considering the level that is men with machetes chasing you down.
Further, I’ve got money of another kind invested in this species – and for anyone else who has, it ought to concern them like a lost investment/lost bet would.
I might become a bit perplexed about how we failed to stumble upon it at random.
Well, why didn’t we stumble on AIDS at random at some point in the past? It just wasn’t available to stumble upon during those years.
Besides, that sounds like being a bit perplexed about how hot the climate is getting – well after climate change has occurred. What’s the point of describing being perplexed at that stage? It’s like the stories of people looking at the stump of their hand after the firecracker went off – yes, the stump is perplexing. So?
@ Callan
That is a practical concern. I would even endeavor to say, a tactical one.
It is like being concerned about the possibility of “moderately intelligent” high loiter-time flying assassination drones. It’s not an entirely absurd concern, but it’s hardly some fundamental philosophical concern that deserves being called a “catastrophe” or “apocalypse”.
But images and texts were available for a long time.
It stands to reason that if it’s possible to, say, draw a picture that causes suicidal compulsions, we would have stumbled on this creepypasta-ish discovery throughout literally thousands of years of manipulating visual media.
01, you are exemplifying What You See Is All There Is. For starters, look at Sellars’ argument concerning the myth of the given. There the idea is that sensation itself doesn’t confer knowledge about sensory contents, and that perception is conceptually mediated by the collective discursive historical labor of concept formations and their intrication in practices. Interestingly, this is corroborated by others writing around the same time as Sellars, but from speculative, psychological, and literary considerations. Bruno Snell’s book is probably the most interesting of these investigations, and it’s probably a dollar on Amazon. His discussion, unlike Jaynes’s, focuses solely on the synthetic metacognitive dimension of the mind uncovered by the Greeks, and on how this dimension was literally not present in much of the earlier Greek literature.
Bruno Snell: “The Discovery of the Mind”. Or just read some anthropology! Some Native Americans have like six or seven different “bodies” which transect their personal body and mediate their experience of “mind”, which they find strange to singularly encapsulate with a definite article “the”.
@DivisionbyZer0
Speaking of anthropology, I am in an open relationship with a person who does not have “internal monologue” and doesn’t subvocalize/silently verbalize texts, whom you might know as 03 (though it probably doesn’t count as good anthropological science if you are so close with your subject of inquiry, so feel free to treat this as extra-contaminated data), so the fact that NAs have peculiar models of self that are unlike anything I am familiar with is not particularly surprising (however, very interesting. I will look into it more)
Again, I don’t see how the hypothesis that “sensation itself doesn’t confer knowledge about sensory contents, and that perception is conceptually mediated by the collective discursive historical labor of concept formations and their intrication in practices” should somehow support the hypothesis that a discovery proving the essential futility of intentionality and other adjacent concepts would be catastrophic.
Are you pitching some strong-ish version of Sapir–Whorf hypothesis and somehow springboarding from SWH to the thesis regarding catastrophic consequences of this discovery ?
Or are you merely suggesting that there are / could be “mindsets/ways of thinking” that would be actually harmed by this discovery ?
It doesn’t work on everybody, but have you ever felt the meaning of a past event change when you gained new knowledge about it? You always thought you were a MacEwen from a proud line of MacEwens dating back to Bonnie Prince Charlie and your clan’s heroic stand at Culloden, then you find out you were adopted, and your whole glorious self-concept turns to ashes.
Some people may be immune to the way new knowledge can alter the remembered meanings of past events, but I think most of us are not. If the things you always believed about yourself turn out to be lies and if your sense of your own worth is founded on those beliefs it stands to reason that your sense of your own worth will crumble when you learn that it is founded on lies.
The other reason to be concerned about the effects of new knowledge is that new knowledge creates new power. If biological science gives us the ability to eradicate polio most people will say yes, let’s eradicate polio. If neurological science gives us the ability to eradicate (for instance) homosexuality should we?
The thing that strikes me about being able to turn homosexuality into heterosexuality is that, lol, the door swings both ways then – you can turn heterosexuality into homosexuality! If neurological science gives us the ability to eradicate (for instance) heterosexuality, should we?
It’s an indicator of our particular model of thinking (a particular heuristic, to use the local parlance) that we just think of homosexuality as a thing that can be canceled and it goes to the true default of hetero, rather than hetero just being one way the toggle switch can be toggled – hey, I think that way too, off the bat, not saying I don’t.
@Michael Murden
Okay, I concede that people who believed themselves to be anything other than biological machines created through a pointless semi-random process might experience an existential crisis. They have my sympathies.
Third option:
Create a brain modification that allows sexual shenanigans to be patched in at runtime, create a whole load of downloadable sexuality/orientation/gender/what-have-you modules for that brain mod, and sell them in some kind of app market for brains.
😀
That would be rather groovy.
Callan and 01:
I think designer sexuality is a great idea. We could call the company ‘Walk on the Wild Side.’ Somebody call Lou Reed’s estate.
In other news, David Dunning of the Dunning-Kruger Effect (great band!) is having an ask me anything session on reddit.
http://podcasts.ox.ac.uk/imaging-mechanisms-behavioural-control
http://www.wired.com/2014/11/robot-ghost/
I think this is almost the perfect example of the way the brain’s “heuristics” fail just because of how f#$%in’ crazy things get when systems adapted for something else go slightly awry. What I think is so great about it is how naturally we default to an “agent-based” explanation, even when we should know better (i.e., when we can see the robot).
Truly beautiful. Great example of the power of the heuristic view to make high altitude sense of these things.
By sheer coincidence I finally got around to Frith’s “The Role of Metacognition in Human Social Actions.” Great stuff.
Physicist, Perimeter Institute; Author, Time Reborn
I am puzzled by the arguments put forward by those who say we should worry about a coming AI singularity, because all they seem to offer is a prediction based on Moore’s law. But an exponential increase is not enough to demonstrate that a qualitative change in behavior will take place. Besides which, the zeroth law of economics is that exponential change never goes on forever. What specific capacities do they fear computers may acquire before Moore’s law runs out, and why do they think these could get “out of control”? Is there any concrete evidence for a programmable digital computer evolving the ability of taking initiatives or making choices which are not on a list of options programmed in by a human programmer? Finally, is there any detailed reason to think that a programmable digital computer is a good model for what goes on in the brain?
My present Mac Air is exponentially faster and more capacious than my original Mac SE, but it doesn’t do anything qualitatively different: I launch programs and they run. MS word offers now exponentially more features but it doesn’t come any closer to writing text without me than the primitive text editing program on my Commodore 64.
During the evolution of life on the planet, a vast number of options for a cell to behave have been invented and tested, yet major qualitative transitions in the capacities of cells have been few, often taking billions of years. Maynard Smith and Szathmáry identify only eight in four billion years; two are the invention of eukaryotic cells and the invention of language. But we are talking about another such major transition. If possible at all, why shouldn’t it take as long as the transition from single-cell amoebas to multicellular creatures? Do we really think Google has at its disposal more processing power in a decade than a billion years of planet-wide evolution of prokaryotes?
Why aren’t we more worried by the implications of a major transition in the organization of life which is undoubtedly underway, due to unanticipated consequences of runaway technology? This is the growth of technology to the point where its waste products disrupt the natural feedback mechanisms that control the climate, i.e. climate change. This is the unavoidable first step in a process that must, if we are to survive as an industrial civilization, end in a synthesis of the natural and artificial control systems on the planet. To the extent that the feedback systems that control the carbon cycle on the planet have a rudimentary intelligence, this is where the merging of natural and artificial intelligence could first prove decisive for humanity.
Those who worry that an exponential increase in the capacity of computers could bring about a qualitative transition in their behavior that trumps what took vast numbers of cells four billion years to develop, are making a mistake analogous to cosmologists who posit that our universe is one of a vast number of copies. If we can’t explain why our universe has the laws or initial conditions it does, we can invent a story in which a universe like ours arises randomly in a vast enough collection. Similarly, if we can’t yet understand how natural intelligence is produced by a human brain, take the short cut of imagining that the mechanisms which must somehow be present in neuronal circuitry will arise by chance in a large enough network of computers.
Neuroscience is advancing quickly, so sometime in this century we may understand how the several aspects of human intelligence arise. But why couldn’t such progress require us to come to a detailed understanding of how natural intelligence differs qualitatively from any behavior that a present day computer could exhibit? Why should our early 21st century conception of computation fully encompass natural intelligence, which took communities of cells four billion years to invent?
Thanks,
Lee
http://edge.org/conversation/the-myth-of-ai
Is there any concrete evidence for a programmable digital computer evolving the ability of taking initiatives or making choices which are not on a list of options programmed in by a human programmer?
Master Mold from the X-Men comics proclaiming ‘All humans are mutants!’
It’s simply a matter of taking one approach and applying it to another situation (one of our primary intelligence skills) – an application the programmer just didn’t see coming. And that’s the point – a program to do things without needing a human to oversee them/see it coming.
They are trying to build things that do actions that don’t fit on a list of options. That’s the hard evidence.
What do you think they are trying to make – just a big flowchart following machine?
Computers are better than humans at chess and Jeopardy, using mere storage and processing speed. They are simulating human intelligence. Even if we can’t make machines that think like humans (even given that we have no idea what ‘think like humans’ means) is it possible that machines will get better and better at simulating human intelligence across more and more kinds of the activities we used to think of as uniquely human just using storage and processing speed? Perhaps AI, like God, will come to have efficacy whether it exists or not.