Ethics and Information Technology (forthcoming)
Abstract: Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.
Similar books and articles
Ludovic Marin & Olivier Oullier (2001). When Robots Fail: The Complex Processes of Learning and Development. Behavioral and Brain Sciences 24 (6):1067-1068.
Tatsuya Nomura, Takugo Tasaki, Takayuki Kanda, Masahiro Shiomi, Hiroshi Ishiguro & Norihiro Hagita (2006). Questionnaire-Based Social Research on Opinions of Japanese Visitors for Communication Robots at an Exhibition. AI and Society 21 (1-2):167-183.
Dylan Evans (2001/2003). Emotion: A Very Short Introduction. Oxford University Press.
Anthony F. Beavers. Between Angels and Animals: The Question of Robot Ethics, or is Kantian Moral Agency Desirable?
Min-Sun Kim & Eun-Joo Kim (forthcoming). Humanoid Robots as “The Cultural Other”: Are We Able to Love Our Creations? AI and Society.
Mark Coeckelbergh (2012). Can We Trust Robots? Ethics and Information Technology 14 (1):53-60.