Knowledge, Technology & Policy 23 (3):347-366 (2010)

Abstract
A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. Within this framework, the issue is to evaluate whether a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses. The first suggests that only reliance applies to artificial agents, because the predictability of agents’ digital interaction is viewed as an absolute value and human relation is judged to be a necessary requirement for trust. The second suggests that trust may apply to autonomous agents, because the predictability of agents’ interaction is viewed only as a relative value: the digital normativity that grows out of the communication process between interacting agents in MAS has always dealt with some unpredictable outcomes. Furthermore, the human touch is not judged to be a necessary requirement for trust. From this perspective, a different notion of trust is elaborated: trust is no longer conceived only as a relation between interacting agents but, rather, as a relation between cognitive states of control and lack of control.
Keywords: Trust · Multi-agent system · Uncertainty · Autonomous agent · Cognitive states · Normativity
DOI: 10.1007/s12130-010-9118-4
