Trusting the (ro)botic other

ACM SIGCAS Computers and Society 45 (3):255-260 (2015)

Abstract

How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process that depends on the outcomes of successive transactions. More options may soon become available, though. As debated in the literature, humans may encounter bots that are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines behind the scenes; as a consequence, they approach the robots involved as trustworthy. Properly speaking, users rely on the overall accountability of the institution. Besides this option, we explore some novel routes to trust development: trust becomes normatively laden, and thereby the mechanism of exclusive reliance on the normative force of trust may come into play - a mechanism whose efficacy has already been proven for persons meeting face-to-face or over the Internet. For one thing, machines may evolve into moral machines, or into machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, neither is to be taken seriously for now. For another, the new trend in robotics is toward coactivity between human and machine operators in a team. Inside the team, trust is a necessity for smooth operations; in particular, humans need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist for building relations of trust with outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to the team's human operators, who act as the linking pin between the team and the outside world. Since the robotic team has now been turned into an anthropomorphic team, users may well develop normative trust toward it; correspondingly, trusting the team in as-if fashion becomes feasible.


Similar books and articles

Trusting the (ro)botic other: By assumption? Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
Institutional Trust: A Less Demanding Form of Trust? Bernd Lahno - 2001 - Revista Latinoamericana de Estudios Avanzados 15:19-58.
Trusting virtual trust. Paul B. de Laat - 2005 - Ethics and Information Technology 7 (3):167-180.
On the emotional character of trust. Bernd Lahno - 2001 - Ethical Theory and Moral Practice 4 (2):171-189.

Analytics

Added to PP
2017-10-25

Downloads
43 (#566,870)

6 months
2 (#1,351,201)


Author's Profile

Paul B. de Laat
University of Groningen

Citations of this work

No citations found.


References found in this work

The Cunning of Trust. Philip Pettit - 1995 - Philosophy and Public Affairs 24 (3):202-225.
Trust, hope and empowerment. Victoria McGeer - 2008 - Australasian Journal of Philosophy 86 (2):237-254.
Accountability in a computerized society. Helen Nissenbaum - 1996 - Science and Engineering Ethics 2 (1):25-42.
Trust, Reliance and the Internet. Philip Pettit - 2004 - Analyse & Kritik 26 (1):108-121.
