Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?” [Book Review]

Ethics and Information Technology 13 (1):17-27 (2011)

Abstract

There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments, and we review important recent contributions to the e-trust literature in light of that model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, research in this area has focused primarily on artificial agents and the humans they encounter after deployment. We contend that the humans who design, implement, and deploy artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust, and face-to-face trust.


