Trusting the (ro)botic other: By assumption?

SIGCAS Computers and Society 45 (3):255-260 (2015)

Abstract

How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available, though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as a consequence, they approach the robots involved as being trustworthy ("zones of trust"). Properly speaking, users rely on the overall accountability of the institution. Besides this option, we explore some novel ways for trust development: trust becomes normatively laden, and thereby the mechanism of exclusive reliance on the normative force of trust (as-if trust) may come into play, the efficacy of which has already been proven for persons meeting face-to-face or over the Internet (virtual trust). For one thing, machines may evolve into moral machines, or machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously, being science fiction, immoral, or both. For another, the new trend in robotics is towards coactivity between human and machine operators in a team (away from making robots as autonomous as possible). Inside the team, trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust toward outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to the human operators of the team, who operate as the linking pin between the outside world and the team. Since the robotic team has now been turned into an anthropomorphic team, users may well develop normative trust towards it; correspondingly, trusting the team in as-if fashion becomes feasible.

Links

PhilArchive


Similar books and articles

Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
Deciding to trust, coming to believe. Richard Holton - 1994 - Australasian Journal of Philosophy 72 (1):63-76.
Creating Trust. Robert C. Solomon - 1998 - Business Ethics Quarterly 8 (2):205-232.
The attitude of trust is basic. Paul Faulkner - 2015 - Analysis 75 (3):424-429.
Trust of people, words, and God: a route for philosophy of religion. Joseph John Godfrey - 2012 - Notre Dame, Ind.: University of Notre Dame Press.
Trust. Carolyn McLeod - 2020 - Stanford Encyclopedia of Philosophy.
Public Trust. Cynthia Townley & Jay L. Garfield - 2013 - In Cynthia Townley & P. Maleka (eds.), Trust: Analytic and Applied Perspectives. Amsterdam: Rodopi.


Author's Profile

Paul B. De Laat
University of Groningen
