What Makes Any Biomechanism a Nihilistic Biomechanism?
by rsbakker
Peter at Conscious Entities has another fascinating post on the issue of machines and morality, this time in response to a paper by Joel Parthemore and Blay Whitby called “What Makes Any Agent a Moral Agent?” Since BLOG-PHARAU was hungry, I figured I would post a brief reworked version of my take here. I fear it does an end run around their argument, but there’s nothing much to be done when you disagree with an argument’s basic assumptions.
My short answer to the question in their title is simply, ‘Whenever treating them as such reliably produces effective outcomes.’ Why? Because there is no fact of the matter when it comes to moral agency. It is a heuristic how, not an ontological what.
I find it interesting that they begin their abstract thus: “In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences.” Since this is the question of when a system can responsibly be held responsible, we need to pause and ask about that first ‘responsibly.’ When is it morally responsible to hold machines morally responsible? It’s worth noting that we do this very thing, in ways small or large, whenever we curse or punish machinery that fails us. One can assume that this is simply anthropomorphism for the most part, an example of the irresponsible holding of machines responsible. My wife, for instance, thinks I treat anything mechanical I’m attempting to fix abusively. So approached from this angle, Parthemore and Whitby’s argument can be looked at as laying out the conditions of responsible anthropomorphization.
So what are these conditions? A pragmatic naturalist like Dennett would simply answer, ‘Only so far as it serves our interests,’ the point being that there are no fixed necessary conditions demarcating the applicability of moral anthropomorphization. There’s nothing irresponsible about verbally upbraiding your iPhone, so long as it serves some need. Viewed this way, Parthemore and Whitby are clearly chasing something chimerical simply because the answer will always be, ‘Well, it depends…’ The context in which a machine can be responsibly held responsible will simply depend on the suite of pragmatic interests we bring to any given machine at any given time. If holding them responsible works to serve our interests, then it’s a go. If not, then it’s a no-go.
In my own terms, this is simply because our moral intuitions are heuristic kluges geared to the solution of domain-specific problems regardless of the ‘facts on the ground.’ There are no fixed ontic finishing lines that can be laid out beforehand because the question of whether the application of any given moral heuristic works is always empirical. Only trial and error will provide the kinds of metaheuristics we need to govern the application of moral heuristics in a generally effective manner.
Otherwise, I can’t help but see all this machine ethics stuff as a way to shadow-box around the real problem, which is the question of when it is appropriate to treat humans like machines, as opposed to moral agents. More and more the corporate answer seems to be, ‘When it serves our interests…’
Then there’s the further question of whether it is even possible to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic ‘reasons.’
This is my bigger argument, anyway: That many things, such as morality, require the absence of certain kinds of information to function ‘responsibly.’
That seems to come from a pretty bizarre base principle – treating all machines the same way is rather like treating some fire that burns your curtains the same way you treat your cat that claws your curtains. Fire cannot learn! Your car engine (to take a machine you might ‘abuse’) cannot learn! There is no feedback loop. Sure, there’s wanting to cut past the romanticisation of the human mind, but there’s also just plain ignoring empirical difference in doggedly doing so (well, sure, I can’t write a learning AI feedback loop, but again, to try to treat it as being the same as fire or your washing machine is basically just ignorance). Sure, you can call the feedback loop just mechanistic – but you can call DNA ‘just chemicals’ – yet most chemicals do not repeatedly replicate patterns that transmogrify the face of an entire planet! To ignore that seems to be to let the world just pass you by.
So, as said, sure, there’s no AI feedback-loop code to empirically point at as the distinction referred to (making this statement – ergh! – largely theoretical) – but if we’re going to dance around with corporate entities who want to treat us like vending machines, it’s not going to help to screw up empirical differences and call ourselves vending machines. The feedback loop is as much an empirical difference. Yeah, various organisations have ignored that – mass graves attest as much. But why are we jumping on the ignore-the-difference bandwagon with them? Too much de-romanticisation? In thin tendrils, sometimes the empirical is laced through the romantic. Push me and I will use the baby/bathwater cliche – I mean it!
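For what it’s worth, even a toy sketch makes the distinction concrete. The following is purely illustrative – a made-up, bandit-style weight updater, nothing from the paper or the post – but it shows the bare shape of a feedback loop: a system whose dispositions are rewritten by the outcomes of its own behaviour, which is exactly the step fire and car engines lack.

```python
import random

# Arbitrary starting dispositions toward two hypothetical actions.
weights = {"act_a": 0.5, "act_b": 0.5}

def choose_action():
    # Pick an action with probability proportional to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

def feedback(action, reward, rate=0.1):
    # The loop itself: the outcome of acting feeds back into the
    # dispositions that produce the next action. Fire has no such step.
    weights[action] = max(0.01, weights[action] + rate * reward)

for step in range(100):
    act = choose_action()
    # Toy environment: "act_a" happens to pay off, "act_b" doesn't.
    reward = 1.0 if act == "act_a" else -1.0
    feedback(act, reward)

print(weights)  # "act_a" now dominates: behaviour has changed with history
```

Toy as it is, the point survives: the weights at the end differ from the weights at the start because of what the system did in between. There’s no analogous before/after in the fire that burnt your curtains.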
Anyway, Blog-Pharau is pretty funny…
My car knows. Oh yes. That prick has had it in for me since day one.
There was this experiment years ago where they distributed these robot pets in an old folks’ home, thinking the residents would be alienated by the things because of their limited responses. Turns out, a great number of them fell in love. It doesn’t take much to throw the heuristic switch. One way or another, I fear. Christ, look at Syria.
The point is, laying out the conditions of legitimate application of moral heuristics via capacities or structures such as ‘feedback loops’ is just verbiage, advocacy, because nothing is intrinsically ‘moral’ or ‘purposive.’ The question of legalities, however, is far different.
The empirical is self-advocating. Who just ignores empirical facts without slipping into romanticisation/kludgey heuristics in doing so? Without ending up not looking both ways before they cross the street? If you want to argue ‘the game’ has moved past, or will move past, these feedback loops being relevant, okay. Though it begs questions (Joker: Why so certain?). Or argue that what is treated as ‘the game’ will, by various powerful interests, be treated as if it’s moved past the feedback loops. But treating car engines and feedback loops as equivalent? There’s a difference between tearing shit down and just making a false association. The self-advocating empirical might not pin fuckers down like some kind of intrinsic moral or purposive thingie, but it’s still a card in play that won’t vanish (except at the subjective level).
Anyway, now I want to run an experiment to test exactly how much those elderly residents might sacrifice for those loves. I wonder how far it’d go? I’d suspect uncomfortably close to putting them ahead of family (certainly many people put World of Warcraft ahead of social commitments), but I’m guessing only uncomfortably close – but who knows. Might be surprised. Certainly a limited response range is still a larger response range than one gets from family who never visit.
This may be of interest (it may even be the study referenced above):
http://www.npr.org/2013/02/25/172900833/do-we-need-humans?utm_source=NPR&utm_medium=facebook&utm_campaign=20130315
I like the turn-around question of when it makes sense to treat moral agents as machines. That’s thought-provoking.
I have a worry about the Christian connotations of the word “morality,” in particular. Technically, I think morality is a sub-issue of the broader question of where ideals, normativity, and motivations to satisfy strongly-felt desires fit in. “Morality” today means something like slave morality, but without Nietzsche’s repudiation; it’s about equality, fairness, rights, freedom, and so on. In other words, it’s a liberal interpretation of what we ought to be doing.
I say this because if we look at the broader question of pursuing an ideal, the pragmatism you’re presupposing here comes into focus. If we say that personification of machines is limited only by the pragmatic value of doing whatever it takes to effectively satisfy our interests, we’re dealing with instrumental rationality and ducking the question of whether our interests themselves can be evaluated.
Now, you say that there’s no fact of the matter here, that there are no moral properties; instead, there are only heuristics which, I take it, are reducible to natural processes that have evolutionary functions. The evolutionary sorting of vehicles for genes gives us the superficial appearance of normative right and wrong, just as it gives us the illusion of intelligent design.
Well, I think it goes without saying there are moral *facts* if we understand facts as what we discover when we objectify, since objectification requires that we detach from our values and ideals, that we neutralize our humanity, as it were. In fact, that’s an example of when we should think of people as machines, when they’re neck-deep in rationality and they’re following the logic and the evidence wherever it takes them. I know we don’t entirely detach from our emotional and instinctive sides, but the point is that normativity isn’t a matter of cold and calculated facts. Ideals, obligations, and so on are posited to make sense of subjective, not objective reality. One of the subjective patterns in question is the way we use technology to humanize our environment, overwriting facts with symbols of our values. If we look at only the objects involved in that pattern, we miss the symbolism and the other meanings.
And I suspect pragmatism and instrumental rationality aren’t as value-neutral as they appear. I think they might amount to defenses of the social status quo. That’s my problem with centrist liberalism of Obama’s sort. I note also the similarity between instrumental rationality and the Satanist slogan that what we want to do is the whole of our law. The question is where our desires come from. Are some of them socially manufactured, by the mass media and so on?
“I know we don’t entirely detach from our emotional and instinctive sides, but the point is that normativity isn’t a matter of cold and calculated facts. Ideals, obligations, and so on are posited to make sense of subjective, not objective reality.”
I think you’re just about to directly refer to ‘the game’ or ‘a game’, Ben! Not sure if that’s worth anything or terribly on topic, but from my perspective it’s odd just how close you get to the position* without quite reaching it. It’s almost like a membrane and… you’re just on the other side – still ideals & obligations, but not only that: they also make sense of things, and not only that, they make sense of the subjective (which is a subjective nestled inside of a subjective?). Makes me feel a bit shriveled in comparison with your take, anyway – perhaps there’s water on the other side of the membrane…
* that I spy
I appreciate the imagery, Callan, but I’m sure you’re not as shriveled as you suggest. 😉 Don’t you have ideals or strongly-felt ideas about what ought to be done? Don’t some things really, really piss you off? We experience that sort of subjectivity all the time. BBT explains that subjectivity as an illusion, but I don’t think this amounts to explaining it away or to eliminating it. We’re left with our experience of the illusions, just as those being fed misleading neural signals are stuck inside the matrix. We can say that those illusions don’t give us scientific knowledge of the facts, and that’s fine, because those illusions instead give us the basis for ways of life, such as existentially authentic and inauthentic ones. The former requires what might be called wisdom rather than bean-counting knowledge.
When you say that morality and normativity are like moves in a game, this seems to deflate them because we think of games as arbitrary. You can choose to follow the rules of Monopoly or you can just make up the rules as you go along. Religion seems arbitrary in the same way, and anything that arbitrary loses its value. Thus, we think we can dismiss the subjective side of life as something we should just get over. But illusions aren’t necessarily like games in that way. Those who are stuck in the matrix are force-fed those signals, just as our brain forces the manifest image on us. We can choose to understand our experience in reductive terms, but as Scott says we’re still stuck with the first-order perception of ourselves as subjects. This was more or less my transcendental point about illusions: they’re not exactly as arbitrary as games.
Here I go, to dig my own ditch…
“You can choose to follow the rules of Monopoly or you can just make up the rules as you go along.”
This quote has all sorts of complicated assumptions in it – as a long-time roleplay-theory guy, I’m particularly suited to nitpick at it!
I would bluntly refute the above notion, if it weren’t ambiguously worded. I’ll try one reading of it – no, you can’t just choose to ignore the rules – because other people can SEE you’re doing that. They can see it clearly because the rules are so empirical.
The other reading of the sentence is one where ‘You’ is being used as synonymous with the whole group/all players being able to do this – a ‘group will’. However, it seems to slide into a use that I’d call, well, salesmanship, in that one impresses upon an individual that it’s ‘about the group will’ – impressing it so much that the individual forgets they have a vote in that will, and that if they say no, either ignoring the rules is NOT okay, or, if the others ignore the rules anyway, then the individual is being ignored.
Establishing that as the base I’m working from, I find the illusions an F-ton more arbitrary than games! If anything, the things that piss me off are more like a molten forge, from which the rules of a game can be cast into cooled empirical iron. But the molten metal in no way qualifies as some sort of dependably solid structure to rely upon. In no way a metal gear… >:)
And yet the common view of games is that they are arbitrary – enough to make illusions, of all things, seem more solid!
That said, I believe broad demographic parallels in moral behaviour (as well as explicitly advocated behaviour) (incest being bad/don’t do that is an easy example) can be empirically identified. Though even this is threatened by brain-augmentation stuff. But parallels can be found – yet even here you can also find sexist attitudes built in, seemingly. Even such a generalised baseline is pretty screwy!
So even when it comes to ‘illusions’, again it’s ambiguous wording. Whose illusions? Just illusions – sans whether they match up with an overall average of illusions? Somehow just illusions matter – regardless? Whether you’re looking after orphans or committing female ‘circumcision’?
No, it’s pretty cold and dry, here.
Well, we’re talking about perceptual illusions, I think. Specifically, the question is how we see ourselves through introspection and intuition, and BBT says we’re blind to ourselves when we look in those ways. The details of the fictions we tell to fill in that blank may be arbitrary or culturally determined, but the illusion that our inner self is immaterial, for example, seems to follow from our innate inner blindness.
As for games, I agree that the group would have to follow along with the change of rules, so maybe Monopoly wasn’t the best example. But sometimes the instruction book is so long, the players just pare down the rules and play their family version of the game. It’s like following a recipe for some sort of food. You can do so to the letter or you can personalize what you’re making, adding your favourite ingredients. Perceptual illusions have physiological causes, so they’re unavoidable, whereas games are things we play on a conscious level and the rules are often arbitrary social conventions, like driving on one side of the road or the other.
It’s strange how you show control over game rules – yet treat the illusions as dependable?
Try driving on the other side of the road – how arbitrary does the standard convention feel, vs how much do you feel you are actually violating something quite concrete? Okay, now drink a lot of coffee, or stay up late, drink a little, or watch some emotionally compelling programs – how stable are the illusions?
My estimate: the illusions are not that stable, unless rendered into empirical rule form, where you can see if there is a deviation. Contrast that against having a drink or two and saying something to someone you wouldn’t have said if you hadn’t had a drink. How can you see the difference between the two? If you set a rule for what you are to say to someone and then… you go and say it anyway, you can contrast it against the rule. Without the rule – well, does it feel right to say that thing? Does whether it feels right actually shift while drinking? I think a lot of people would say yes, even as they can’t feel it. They contrast it against records/their memory (another kind of rule).
Scott’s semantic apocalypse or not, I wonder how much driving on the wrong side of the road we’ll see? I’m guessing not a lot. It’s pretty damn robust!
I know you’re saying illusions are unavoidable – I’d agree. But what illusions? The greater illusion is that you might not be able to tell one illusion has been replaced with another. Without a metric, how would you know?
Callan,
I’m trying to get at a distinction between games and illusions. The rules of games are arbitrary in that they’re social conventions. Once a convention is selected, like driving on one side of the road rather than the other, it has obvious consequences, but the arbitrariness is in the initial selection of the convention. The goal is just to control traffic, there are numerous ways to do so, and it doesn’t matter which is chosen (left vs right side of the road). It’s the same with games: different board configurations could generate the same amount of fun, as it were, given the game’s storyline or topic, so you can just flip a coin in choosing those parts of the board design.
I’m not saying there’s no skill involved in designing games or that the whole design is arbitrary. Obviously, there are pragmatic constraints on how large the board can be, for example, or on where certain key parts should be located so as not to interfere with each other. My point is just that in so far as games are conventional, they have an arbitrary aspect.
But the perceptual illusions are subject to natural laws whose arbitrariness comes out only in the ultimate quantum mechanical causes, or perhaps in the chaotic flow of the relevant systems, as Mikkel points out. So if normativity is more like an illusion than like a game, I think that means normativity isn’t so arbitrary. It’s forced on us, like the way the AIs force the matrix on the hapless humans in the sci-fi movie.
Ben, as I said, I agree illusions will be forced. And as I asked, how do you know you’ll be working from the same illusion, though? You seem to be drawing a connection between its being forced and its being consistent – using The Matrix, of all things – a movie where agents hijack normal people and the protagonists screw with the consistency of the illusion (to their benefit).
As I understand you, you assert that the illusion being forced will also consistently be the same illusion, and will not, imperceptibly, swap for another illusion at any point?
I swear to you that game rules are an anchor, to keep you at the illusion you set out to adhere to. Yes, I get that the point doesn’t seem compelling if it seems that illusions can never be imperceptibly swapped over.
Callan,
I think the skeptical possibility you’re raising here could apply to any scientific truth. The illusion would be forced by natural causes and as Hume said, we can always wonder whether the world will keep working as we expect it to based on our past experience. But this is only a skeptical possibility; the skeptic has the burden of proof to show the instability is probable.
Inside the matrix, the prisoners would have the same issue, about the rationality of believing that there are evil creatures out there (the AIs) who could miraculously change people’s experience, even though there turn out to be such creatures.
But what’s the upshot of the point you’re making? Do you think it poses some trouble for my view? I take it you’re defending BBT and you’re saying that our perceptual illusions are (or could be?) game-like and therefore they’re negligible. Is that right?
The claim game? It’s not you claiming the existence of something, with the onus on you to prove that? It’s me claiming the lack of something/the non-existence of something, and I have to prove that? No wonder I’ve become an ambush predator of claims…
Who between us is claiming the existence of something vs the lack of existence of something? I’m not claiming the ‘existence’ of a lack of dependability.
The upshot is I think you get very close to empirical rules – but A: you don’t quite get there; B: not quite getting there is pretty much the same as still running off a religion/determining lives by how you or someone feels from moment to moment – it’s like going to AA but not admitting the drinking problem; and C: it’s a lot more fun to stay in the religious zone! I actually predict you’ll eventually have a hardcore swing towards religious values, and this will then all seem a flirtation with something that was full of accountancy (as opposed to matters of the heart) and a dead end. Somewhat as Scott says – ‘If anything, the religions you excoriate seem to have a greater claim to exaptational cognitive efficacy.’ – suddenly they will seem to make so much more sense, at some point. Sense in relation to the heart.
Ultimately I run off the illusion that our illusions are not dependable. Somewhat like your ‘irrational’ matrix suspicion. Do you have one of these illusions? If they are so dependable, why don’t you have one as well? Or do they get handed out like cards dealt from a deck?
I agree that I’m “in the religious zone.” In several writings on my blog, I explain why I think all normal people have religions in the Durkheimian sense. They needn’t be theistic, but most people have strong feelings about something they regard as sacred, and they’re attracted to narratives (fictions, art works, myths) that rationalize those feelings. Some naturalists lack the self-awareness to admit this religious aspect of their worldview or else they won’t admit as much for strategic, political reasons, because they’re supposedly at war with religion (even though they’re really at war with theism). Some naturalistic religions are secular humanism, scientism/pragmatism, and materialistic consumerism.
What I’m after is the best possible naturalistic religion, the one most suitable for existentially authentic individuals. I suspect it will be pantheistic (atheistic), its normative aspect will be aesthetic in character, and it will feature a dark sense of humour and some degree of asceticism or social detachment, by way of addressing the absurdities of our existential predicament. Existential cosmicism lays out some ideas that might be useful to such a religion. By “religion,” here, I’m talking mainly about the philosophy that might come to mean something to naturalistic seekers who find themselves on the margins of society. I have less to say about the lifestyle (rituals, social organization) that might organically develop around this philosophy.
I think my AA analogy was more apt than I realised. If we compare strong feelings about something to a glass or two of beer on the weekend, we start to get a sense of scale (as the empirical pokes its nose in). From here – do the naturalists lack the self-awareness to say they are alcoholics? Or are they just not alcoholics, as they only have a beer or two on the weekend? Is someone’s beer or two on the weekend being used as an excuse for someone else’s binge drinking, merely by equating them as being the same (‘oh, we’re all religious/the same!’)?
Going to an AA meeting and declaring practically everyone is a drinker (and neatly fitting oneself into this legitimised status quo) is still not admitting a problem.
So, the naturalists – do they really drink as much? Is it really a lack of self-awareness on their part? Is it them that have the problem?
I certainly agree that pragmatism in the moral/political sphere often amounts to conservatism. I have problems with your account of ‘objectification’ and ‘detachment,’ though; since I’m working up a brief response to your reading of Brassier, I’ll save the details!
Oh, cool! Maybe we could start up a dialogue on the political implications of the scientific image. In particular, what to make of pragmatism and instrumental rationality? These naturally come up when we’re talking about methodological naturalism and technology. Even more interesting to me is how liberals and conservatives bow down to pragmatism, as though our sense of what’s useful were at the same time a sense of some natural process we’re part of, the road to posthumanity.
Whoops! In my first response above, I said “I think it goes without saying there are moral *facts* if we understand facts as what we discover when we objectify.” I meant to say there are of course NO facts in that sense. I left out the negation.
No moral facts, that is…
If I had a nickel for every time I did that… Peril of the medium!
Yeah, for that reason I like blogger because it lets you delete your comment. Reddit gives you the most freedom, because it lets you edit it after you’ve posted it, with no time limit on how long your comment is modifiable. Of course, that freedom could be abused. WordPress seems the most restrictive in this regard–or is this Disqus you’re using for comments?
*pssthecan’tfigurehowtoaddtheeditwidget>:)…*