Abstract
As advances are made in artificial intelligence and machine learning, the distance between the activity of a system's designers and programmers and the behavior of the system grows. This gap between human action and its effects and consequences is not new, but emerging computing paradigms present the challenge with new urgency, revealing the poverty of our tools for reasoning about what human responsibility means in a world of ubiquitous artificial agents. This paper proposes a new addition to our existing collection of frameworks for considering this issue.