Trusting the (ro)botic other
ACM SIGCAS Computers and Society 45 (3):255-260 (2015)
Abstract
How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. More options may soon become available, though. As debated in the literature, humans may encounter bots embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as a consequence, they approach the robots involved as trustworthy. Properly speaking, users rely on the overall accountability of the institution. Besides this option we explore some novel ways for trust development: trust becomes normatively laden, and thereby the mechanism of exclusive reliance on the normative force of trust may come into play; the efficacy of this mechanism has already been proven for persons meeting face-to-face or over the Internet. For one thing, machines may evolve into moral machines, or into machines skilled in the art of deception. While both developments might seem to facilitate proper trust and turn as-if trust into a feasible option, they are hardly to be taken seriously. For another, the new trend in robotics is towards coactivity between human and machine operators in a team. Inside the team, trust is a necessity for smooth operations. In support of this, humans in particular need to be able to develop and maintain accurate mental models of their machine counterparts. Nevertheless, the trust involved is bound to remain non-normative. It is argued, though, that excellent opportunities exist to build relations of trust toward outside users who are pondering their reliance on the coactive team. The task of managing this trust has to be allotted to the team's human operators, who act as the linking pin between the team and the outside world. Since the robotic team has thus been turned into an anthropomorphic team, users may well develop normative trust towards it; correspondingly, trusting the team in as-if fashion becomes feasible.
Similar books and articles
Trust of People, Words, and God: A Route for Philosophy of Religion.Joseph John Godfrey - 2012 - University of Notre Dame Press.
Trusting Others, Trusting God: Concepts of Belief, Faith and Rationality.Sheela Pawar - 2009 - Ashgate.
Trusting patients, trusting nurses.Derek Sellman - 2007 - Nursing Philosophy 8 (1):28-36.
Trusting the (ro)botic other: By assumption?Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
Minimally Invasive Robotic Telesurgery.Frank Tendick & Shankar Sastry - 2001 - In Stefan N. Willich & Susanna Elm (eds.), Medical Challenges for the New Millennium: An Interdisciplinary Task. Kluwer Academic Publishers. pp. 89.
On John McClellan’s “Not Skeptical Theism, but Trusting Theism”.Klaus Ladstaetter - 2016 - Southwest Philosophy Review 32 (2):87-94.
Il circolo della fiducia e la struttura dell'affidarsi [The circle of trust and the structure of entrusting oneself].Riccardo Fanciullacci - 2012 - Etica e Politica 14 (1):277-303.
Making the Technological Trustworthy.S. D. Noam Cook - 2010 - Knowledge, Technology & Policy 23 (3):455-459.
Trusting Our Selves to Technology.Asle H. Kiran & Peter-Paul Verbeek - 2010 - Knowledge, Technology & Policy 23 (3):409-427.
On the emotional character of trust.Bernd Lahno - 2001 - Ethical Theory and Moral Practice 4 (2):171-189.