Social robots and the risks to reciprocity

AI and Society 37 (2):479-485 (2022)

Abstract

A growing body of research shows roboticists designing for reciprocity as a key construct for successful human–robot interaction (HRI). Given the centrality of reciprocity to our moral lives (for moral development and for maintaining a just society), this paper confronts what things would look like if the benchmark of perceived reciprocity were actually achieved. An analysis of reciprocity in the care ethics tradition reveals its richness as an inherent value: on the micro-level, as mutual care for our immediate caregivers, and on the macro-level, as foundational for a just society. Taking this understanding of reciprocity into consideration, it becomes clear that HRI cannot achieve this bidirectional value: a robot must deceive users into believing it is capable of reciprocating to humans, or that it is deserving of reciprocation from humans. Moreover, on the macro-level, designing social robots for reciprocity threatens people's ability and willingness to reciprocate to human care workers across society. Because of these concerns, I suggest rethinking the goals of reciprocity in social robotics: designing for reciprocity should be dedicated to designing robots that enhance our ability to mutually care for those who provide us with care, rather than to reciprocity between human and robot.

Links

PhilArchive

Added to PP
2021-05-01

Downloads
74 (#220,746)

6 months
35 (#118,057)

Historical graph of downloads
How can I increase my downloads?

Author's Profile