the ethics of robot objects

Colin Allen has a piece in the NY Times on robots and morality. I am not familiar with Allen's work, though I now plan to check it out. As such, this is not a direct commentary on his work but some independent thoughts on the problems he addresses there (and apparently in his scholarship). The problem is familiar to us in its sci-fi articulations, from Frankenstein's creature to the Terminator and the Matrix, with HAL and the computer in WarGames along the way. However, this is not just a sci-fi problem. As Allen notes:

In military circles, the phrase “man on the loop” has come to replace “man in the loop,” indicating the diminishing role of human overseers in controlling drones and ground-based robots that operate hundreds or thousands of miles from base. These machines need to adjust to local conditions faster than can be signaled and processed by human tele-operators. And while no one is yet recommending that decisions to use lethal force should be handed over to software, the Department of Defense is sufficiently committed to the use of autonomous systems that it has sponsored engineers and philosophers to outline prospects (.pdf report, 108 pages) for ethical governance of battlefield machines.

And there are plenty of less lethal everyday robots, including, as soon as 2013, robot-steered cars. As Allen continues, "Even modest amounts of engineered autonomy make it necessary to outline some modest goals for the design of artificial moral agents." As this suggests, and as I've discussed here before, the question of morality (though I prefer ethics) is tied to questions of agency and cognition. Can you think? Do you have the capacity to make decisions about actions?

Allen concedes that "Perhaps ethical theory is to moral agents as physics is to outfielders — theoretical knowledge that isn’t necessary to play a good game. Such theoretical knowledge may still be useful after the fact to analyze and adjust future performance." I would judge this an effective rhetorical move in the NY Times, and perhaps also for an audience of engineers or scientists. Maybe it is even important as a philosopher to hold this possibility open. However, I don't think it is an entirely fair analogy, or at least not yet. That is, I don't think we have an ethical theory that operates as physics does, describing the principles that govern ethical relations the way physics describes the principles that govern physical/energetic relations. One might complain that since ethics involves agency it has to include a level of indeterminacy, but so does quantum physics. An ethical theory of this variety would be similar to physics and perhaps of little use in helping "moral agents" make ethical decisions. However, if you were building a robot outfielder, you would certainly need an understanding of physics, an understanding that is already built into engineering practices. Similarly, if you were building a moral agent, then you would require an ethical theory.

In addition, there is another layer of practice that rests atop these general theories. The outfielder is not only an object of physics but also of biology, psychology, and so on. Sports medicine, athletic training, and sport psychology shape the outfielder, as do the statistics of sabermetrics. Of course we are familiar with this traditional relationship between pure and applied sciences. Allen is exploring the possibility of an applied philosophy. In rhetoric, we have long been comfortable with shifting between the pure and the applied. I think of my own work as rethinking rhetoric on that basic, ontological level with an eye toward how it might shift an applied rhetorical practice, so, not surprisingly, I am sympathetic to a project like this.

I also imagine that speculative realism would be a productive approach to this problem. Without knowing Allen's work, I would suspect that a deeply held correlationism could be a significant obstacle (not necessarily held by him, but perhaps by the engineers with whom he works). That is, I think it would be a problem to begin with the premise that humans are the sole moral agents or that we should serve as the model of morality for other objects. I don't think a Kantian categorical imperative would be a useful approach, even though most people conceive of morality in this way (universally black/white). As Allen points out in the article, Asimov's three robot laws are an example of how such an approach goes awry. Of course, if ethics are not universal then they are contingent. They are for something.
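To make that failure mode concrete, here is a toy sketch (my own illustration, not anything from Allen's article or from Asimov) of a strict rule hierarchy in the spirit of the Three Laws, where each proposed action is just a dictionary of hand-labeled flags. As soon as every available action violates the First Law, the hierarchy simply falls silent:

```python
# A toy rule hierarchy in the spirit of Asimov's Three Laws. Each "law" is a
# predicate over a proposed action; any violated law vetoes the action.

def violates_first_law(action):
    """First Law: do not injure a human or, through inaction, allow harm."""
    return action.get("harms_human", False) or action.get("allows_human_harm", False)

def violates_second_law(action):
    """Second Law: obey human orders (subordinate to the First Law)."""
    return action.get("disobeys_order", False)

def violates_third_law(action):
    """Third Law: protect your own existence (subordinate to the first two)."""
    return action.get("endangers_self", False)

LAWS = [violates_first_law, violates_second_law, violates_third_law]

def permissible(action):
    return not any(law(action) for law in LAWS)

# The trouble starts when every option violates the First Law in some way,
# e.g. a rescue scenario where saving one person means failing to save another.
options = [
    {"name": "save_person_A", "allows_human_harm": True},  # person B is harmed
    {"name": "save_person_B", "allows_human_harm": True},  # person A is harmed
]
print([o["name"] for o in options if permissible(o)])  # [] -- the rules offer no guidance
```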

The robot car needs to know that it must choose to strike the dog rather than the child, the tree rather than the dog, the wild animal rather than the domesticated one, and to avoid hitting any creature at all when the risk to the passengers is minimal. For humans these are barely ethical decisions. We don't have time to think in such situations. If we see the dog and the child, we don't swerve toward the child, but we don't think about it either. What about the robot? Does it calculate a response? Presumably any robot car built in the next decade would. In the end, is this just a brute force quantitative calculation? Does a robot drive a car the same way that it plays chess? Honestly, I think these kinds of split-second decisions, the kind we would hardly call ethical decisions for ourselves, are the hardest to model.
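For what it's worth, here is a minimal sketch of what that brute force calculation might look like: a hand-assigned cost for each thing the car could hit, plus a penalty for risk to the passengers, with the lowest total cost winning. Every number and category here is invented for the sake of illustration, not drawn from any actual autonomous-vehicle system:

```python
# Invented costs for whatever the car might strike. The ordering encodes the
# priorities from the paragraph above: child > adult > dog > wild animal > tree.
HARM_COST = {
    "child": 1000,
    "adult": 900,
    "domesticated_animal": 100,
    "wild_animal": 50,
    "tree": 10,
    "clear_road": 0,
}

PASSENGER_RISK_COST = 500  # penalty per unit of estimated risk to the passengers

def total_cost(option):
    """Score a candidate maneuver by what it hits plus the risk inside the car."""
    return HARM_COST[option["hits"]] + PASSENGER_RISK_COST * option["passenger_risk"]

def choose_maneuver(options):
    """Pick the maneuver with the lowest total cost."""
    return min(options, key=total_cost)

options = [
    {"maneuver": "brake_straight", "hits": "child", "passenger_risk": 0.0},
    {"maneuver": "swerve_left", "hits": "domesticated_animal", "passenger_risk": 0.02},
    {"maneuver": "swerve_right", "hits": "tree", "passenger_risk": 0.05},
]
print(choose_maneuver(options)["maneuver"])  # "swerve_right": the tree rather than the dog or the child
```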

Far easier are the typical ethical dilemmas, the "I found a wallet with $500" decisions. But then again, these aren't ethical dilemmas for the robot, unless the robot can benefit from having $500. One cannot act in a selfless manner without a sense of self; one can only simulate ethical behavior. However, from an object-oriented perspective one might argue that all objects have a sense of self and an autopoietic function, and that autopoiesis is the foundation of ethical relations. Autopoiesis begins with sustaining self-organization but extends to the recognition that such sustenance depends upon relations with others. DeLanda does a good job of investigating the game-theoretic mechanics that model such behaviors.
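The canonical example of those mechanics is the iterated prisoner's dilemma, in which a reciprocating "tit for tat" strategy sustains cooperation between self-interested players. The sketch below is my own rendering of that standard Axelrod-style model, not a reconstruction of DeLanda's argument:

```python
# Iterated prisoner's dilemma: payoffs (row player, column player)
# for cooperate (C) / defect (D).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then copy whatever the other player did last round."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []  # each side's record of the *other* player's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation sustains both players
print(play(tit_for_tat, always_defect))  # (9, 14): defection pays once, then both stagnate
```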

I will take this one step further and suggest that robots cannot have ethical relations with us unless we have ethical relations with them. And we can start with our non-robotic objects. What ethical obligation do you have to your car? Not an obligation concerning how your use of the car affects the environment, but an obligation to the car itself? The car is a classic example of an allopoietic machine put into the service of an autopoietic machine (you). I suppose car maintenance could be an example. How about driving your car with style, driving it the way it was meant to be driven?

If this were my project, I would begin by trying to understand what ethical relations already exist among non-human objects, and then use that fundamental ethical mechanic, in the same way that engineers use physics, to build the kinds of ethical relations we desire into new technologies.
