
Unpredictable robots elicit responsibility attributions

Published online by Cambridge University Press:  05 April 2023

Matija Franklin
Affiliation:
Experimental Psychology Department, University College London, London WC1E 6BT, UK. matija.franklin@ucl.ac.uk; https://www.ucl.ac.uk/pals/research/experimental-psychology/person/matija-franklin/
Edmond Awad
Affiliation:
Economics Department, University of Exeter, Exeter EX4 4PU, UK e.awad@exeter.ac.uk; https://www.edmondawad.me
Hal Ashton
Affiliation:
Computer Science Department, University College London, 66-72 Gower Street, London WC1E 6EA, UK ucabha5@ucl.ac.uk; https://algointent.com/
David Lagnado
Affiliation:
Experimental Psychology Department, University College London, London WC1E 6BT, UK. d.lagnado@ucl.ac.uk; https://www.ucl.ac.uk/pals/research/experimental-psychology/person/david-lagnado/

Abstract

Do people hold robots responsible for their actions? While Clark and Fischer present a useful framework for interpreting social robots, we argue that it fails to account for people's willingness to assign responsibility to robots in certain contexts, such as when a robot performs actions that neither its user nor its programmer could have predicted.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

