Minds and Machines 28 (4):735-774 (2018)

Authors
Christopher Burr
The Alan Turing Institute
Nello Cristianini
University of Bristol
James Ladyman
University of Bristol
Abstract
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to content, or controls some other aspect of the user experience, and is not designed to be neutral about the outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects, which include the possibility of adaptive interfaces inducing behavioural addiction or changes in user beliefs. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals, as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social, and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Keywords: Artificial intelligence; Autonomy; Machine learning; Human–computer interaction; Nudge; Persuasion; Recommender system; Intelligent system; Ethics
Reprint years 2018
DOI 10.1007/s11023-018-9479-0

References found in this work

Practical Ethics. Peter Singer - 1979 - Cambridge University Press.
Superintelligence: Paths, Dangers, Strategies. Nick Bostrom (ed.) - 2014 - Oxford University Press.
Creating the Kingdom of Ends. Christine M. Korsgaard - 1996 - Cambridge University Press.
Thinking, Fast and Slow. Daniel Kahneman - 2011 - New York: Farrar, Straus & Giroux.

View all 38 references

Citations of this work

Can Machines Read Our Minds? Christopher Burr & Nello Cristianini - 2019 - Minds and Machines 29 (3):461-494.

View all 17 citations

Similar books and articles

How Do Users Know What to Say? Nicole Yankelovich - 1996 - Interactions 3 (6):32-43.
Reader as User: Applying Interface Design Techniques to the Web. Karen McGrane Chauss - 1996 - Kairos: A Journal of Rhetoric, Technology, and Pedagogy 1 (2).
Agents of Alienation. Jaron Lanier - 1995 - Interactions 2 (3):76-81.
Spyware – the Ethics of Covert Software. Mathias Klang - 2004 - Ethics and Information Technology 6 (3):193-202.

Analytics

Added to PP index: 2018-09-25