What Makes Any Biomechanism a Nihilistic Biomechanism?

by rsbakker

Peter at Conscious Entities has another fascinating post on the issue of machines and morality, this time in response to a paper by Joel Parthemore and Blay Whitby called “What Makes Any Agent a Moral Agent?” Since BLOG-PHARAU was hungry, I figured I would post a brief reworked version of my take here. I fear it does an end run around their argument, but there’s nothing much to be done when you disagree with an argument’s basic assumptions.

My short answer to the question in their title is simply, ‘Whenever treating them as such reliably produces effective outcomes.’ Why? Because there is no fact of the matter when it comes to moral agency. It is a heuristic ‘how,’ not an ontological ‘what.’

I find it interesting that they begin their abstract thus: “In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences.” Since this is the question of when a system can responsibly be held responsible, we need to pause over that first ‘responsibly.’ When is it morally responsible to hold machines morally responsible? It’s worth noting that we do this very thing, in ways small and large, whenever we curse or punish machinery that fails us. One can assume that this is simply anthropomorphism for the most part, an example of the irresponsible holding of machines responsible. My wife, for instance, thinks I treat anything mechanical I’m attempting to fix abusively. Approached from this angle, then, Parthemore and Whitby’s argument can be seen as laying out the conditions of responsible anthropomorphization.

So what are these conditions? A pragmatic naturalist like Dennett would simply answer, ‘Only so far as it serves our interests,’ the point being that there are no fixed necessary conditions demarcating the applicability of moral anthropomorphization. There’s nothing irresponsible about verbally upbraiding your iPhone, so long as it serves some need. Viewed this way, Parthemore and Whitby are clearly chasing something chimerical, because the answer will always be, ‘Well, it depends…’ The context in which a machine can be responsibly held responsible will simply depend on the suite of pragmatic interests we bring to any given machine at any given time. If holding it responsible serves our interests, then it’s a go. If not, then it’s a no-go.

In my own terms, this is simply because our moral intuitions are heuristic kluges geared to the solution of domain-specific problems regardless of the ‘facts on the ground.’ There are no fixed ontic finishing lines that can be laid out beforehand, because the question of whether the application of any given moral heuristic works is always empirical. Only trial and error will provide the kinds of metaheuristics we need to govern the application of moral heuristics in a generally effective manner.

Otherwise, I can’t help but see all this machine ethics stuff as a way to shadow-box around the real problem, which is the question of when it is appropriate to treat humans like machines, as opposed to moral agents. More and more the corporate answer seems to be, ‘When it serves our interests…’

Then there’s the further question of whether it is even possible to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic ‘reasons.’

This is my bigger argument, anyway: that many things, such as morality, require the absence of certain kinds of information to function ‘responsibly.’