  • Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Maj Tuomela & Solveig Hofmann - 2003 - Ethics and Information Technology 5 (3):163-176.
    A program for the simulation of rational social normative trust, predictive trust, and predictive reliance between agents is introduced. It offers a tool for social scientists, or a trust component for multi-agent simulations/multi-agent systems that need to include trust between agents to guide decisions about a course of action. It is based on an analysis of rational social normative trust (RSNTR) (a revised version of M. Tuomela 2002), which is presented and briefly argued. For collective agents, belief conditions for (...)
  • Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
  • Thinking otherwise: Ethics, technology and other subjects. David J. Gunkel - 2007 - Ethics and Information Technology 9 (3):165-177.
    Ethics is ordinarily understood as being concerned with questions of responsibility for and in the face of an other. This other is more often than not conceived of as another human being and, as such, necessarily excludes others – most notably animals and machines. This essay examines the ethics of such exclusivity. It is divided into three parts. The first part investigates the exclusive anthropocentrism of traditional forms of moral thinking and, following the example of recent innovations in animal rights philosophy, (...)
  • The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • Defining Trust and E-trust: Old Theories and New Problems. Mariarosaria Taddeo - 2009 - International Journal of Technology and Human Interaction (IJTHI) 5 (2):23-35.
    The paper provides a selective analysis of the main theories of trust and e-trust (that is, trust in digital environments) offered in the last twenty years, with the goal of preparing the ground for a new philosophical approach to solve the problems facing them. It is divided into two parts. The first part is preparatory to the analysis of e-trust: it focuses on trust, its definition and foundation, and describes the general background on which the analysis of e-trust rests. (...)
  • The Impact of the Internet on Our Moral Lives. [author unknown] - 2005.
  • Can We Trust Trust? Diego Gambetta - 1988 - In Trust: Making and Breaking Cooperative Relations. Blackwell. pp. 213-237.
  • Trust and Power. Niklas Luhmann - 1982 - Studies in Soviet Thought 23 (3):266-270.