Computers in control: Rational transfer of authority or irresponsible abdication of autonomy? [Book Review]

Ethics and Information Technology 1 (3):173-184 (1999)
To what extent should humans transfer, or abdicate, responsibility to computers? In this paper, I distinguish six different senses of responsible and then consider in which of these senses computers can, and in which they cannot, be said to be responsible for deciding various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater control of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer's larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in the ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their programming design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish background-oversight responsibility. They must be prepared, at least periodically, to review whether the expertise to which they defer is indeed functioning as he/she/it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom.
Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect with, and resonate to, others. Thus there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.
Keywords: Computer Science; Ethics; User Interfaces and Human Computer Interaction; Management of Computing and Information Systems; Library Science; Technology Management
Reprint years: 2004
DOI: 10.1023/A:1010087500508
Citations of this work
Killer Robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
