What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents
Knowledge, Technology & Policy 23 (3):347-366 (2010)
Abstract
A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. Within this framework, the issue is to evaluate whether a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses. The first suggests that only reliance applies to artificial agents, because the predictability of agents’ digital interaction is viewed as an absolute value and human relation is judged to be a necessary requirement for trust. The second suggests that trust may apply to autonomous agents, because the predictability of agents’ interaction is viewed only as a relative value: the digital normativity that grows out of the communication process between interacting agents in MAS always has to deal with some unpredictable outcomes (_reduction of uncertainty_). Furthermore, the human touch is not judged to be a necessary requirement for trust. From this perspective, a different notion of trust is elaborated: trust is no longer conceived only as a relation between interacting agents but, rather, as a relation between cognitive states of control and lack of control (_double bind_).
DOI: 10.1007/s12130-010-9118-4
Similar books and articles
Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Maj Tuomela & Solveig Hofmann - 2003 - Ethics and Information Technology 5 (3):163-176.
Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
Cognitive science meets multi-agent systems: A prolegomenon. Ron Sun - 2001 - Philosophical Psychology 14 (1):5-28.
The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
Trust in scientific publishing. Harry Hummels & Hans E. Roosendaal - 2001 - Journal of Business Ethics 34 (2):87-100.
Norms in artificial decision making. Magnus Boman - 1999 - Artificial Intelligence and Law 7 (1):17-35.
The sales process and the paradoxes of trust. G. Oakes - 1990 - Journal of Business Ethics 9 (8):671-679.
Defining Trust and E-trust: Old Theories and New Problems. Mariarosaria Taddeo - 2009 - International Journal of Technology and Human Interaction (IJTHI) 5 (2):23-35.
Analytics
Added to PP: 2013-03-09
Downloads: 44 (#267,475)
6 months: 2 (#297,972)
Citations of this work
From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Jessica Morley, Luciano Floridi, Libby Kinsey & Anat Elhalal - 2020 - Science and Engineering Ethics 26 (4):2141-2168.
Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
Empowerment or Engagement? Digital Health Technologies for Mental Healthcare. Christopher Burr & Jessica Morley - 2020 - In Christopher Burr & Silvia Milano (eds.), The 2019 Yearbook of the Digital Ethics Lab. pp. 67-88.
Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
Organizational trust in a networked world. Luca Giustiniano & Francesco Bolici - 2012 - Journal of Information, Communication and Ethics in Society 10 (3):187-202.