Ethics and Information Technology 6 (3):175-183 (2004)

Abstract
Traditionally, the manufacturer or operator of a machine is held morally and legally responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms, and agent architectures, create a new situation: the manufacturer or operator is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it. Society must choose between no longer using this kind of machine (which is not a realistic option) and facing a responsibility gap that cannot be bridged by traditional concepts of responsibility ascription.
Keywords: artificial intelligence, autonomous robots, learning machines, liability, moral responsibility
DOI 10.1007/s10676-004-3422-1
