AI and Society 35 (1):103-111 (2020)
Abstract
Over the past decades, the intelligent robot has come to occupy a significant position in society and has given rise to new social issues. As we know, the primary aim of artificial intelligence or robotics research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence advocates is that there is no distinction between mind and machine, and thus they argue that machine ethics is possible, just as human ethics is. Unlike computer ethics, which has traditionally focused on ethical issues surrounding human use of machines, AI or machine ethics is concerned with the behaviour of machines towards human users and perhaps other machines as well, and with the ethicality of these interactions. The ultimate goal of machine ethics, according to AI scientists, is to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or these principles in the decisions it makes about possible courses of action it could take. The task of machine ethics is thus to ensure the ethical behaviour of an artificial agent. Although there are many philosophical issues related to artificial intelligence, our attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed. Second, if we ascribe mind to machines, this gives rise to ethical issues regarding machines; and if we do not draw a distinction between mind and machines, we are redefining not only the specifically human mind but also society as a whole. Having a mind is, among other things, having the capacity to make voluntary decisions and to act on them. The notion of mind is central to our ethical thinking because the human mind is self-conscious, and this is a property that machines lack, as yet.
DOI: 10.1007/s00146-017-0768-6
References found in this work
Philosophical Investigations. Ludwig Josef Johann Wittgenstein - 1953 - New York, NY, USA: Wiley-Blackwell.
The Conscious Mind: In Search of a Fundamental Theory. David J. Chalmers - 1996 - Oxford University Press.
View all 30 references
Citations of this work
Surveillance, Security, and AI as Technological Acceptance. Yong Jin Park & S. Mo Jones-Jang - forthcoming - AI and Society.
Conservative AI and Social Inequality: Conceptualizing Alternatives to Bias Through Social Theory. Mike Zajko - forthcoming - AI and Society:1-10.
AI Ethics and the Banality of Evil. Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
Moral Control and Ownership in AI Systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
View all 7 citations
Similar books and articles
Picturing Mind Machines, An Adaptation by Janneke van Leeuwen. Simon van Rysewyk & Janneke van Leeuwen - 2014 - In Simon Peter van Rysewyk & Matthijs Pontier (eds.), Machine Medical Ethics. Springer.
Universal Intelligence: A Definition of Machine Intelligence. Shane Legg & Marcus Hutter - 2007 - Minds and Machines 17 (4):391-444.
Out of Character: On the Creation of Virtuous Machines. [REVIEW] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW] Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
The Singularity: A Philosophical Analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
Machine Intelligence (MI), Competence and Creativity. Rajakishore Nath - 2009 - AI and Society 23 (3):441-458.
The Unacceptability of Asimov's Three Laws of Robotics as a Basis for Machine Ethics. Susan Leigh Anderson - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
Asimov's "Three Laws of Robotics" and Machine Metaethics. Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
Fundamental Issues in Social Robotics. Brian R. Duffy - 2006 - International Review of Information Ethics 6 (12).
Analytics
Added to PP index: 2017-10-20
Total views: 181 (#65,683 of 2,517,898)
Recent downloads (6 months): 13 (#58,488 of 2,517,898)