Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to content, or controls some other aspect of the user experience, and is not designed to be neutral about the outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction and/or changes in user beliefs. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, in a direction that can undermine autonomy and cause further disparity between actions and goals, as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
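The feedback dynamic described above can be illustrated with a toy simulation (all names and numbers here are hypothetical, not taken from the paper): a learning agent whose reward is user engagement gradually steers its recommendations towards whatever the simulated user clicks most, regardless of whether that content benefits the user.

```python
import random

# Toy model (hypothetical values): an agent recommends one of two content
# types to a simulated user. The agent's reward is engagement (clicks),
# which is not the same thing as the user's benefit.
random.seed(0)

# Simulated user: clicks "sensational" content more often than "useful" content.
CLICK_PROB = {"useful": 0.3, "sensational": 0.7}

def user_clicks(item):
    return random.random() < CLICK_PROB[item]

# Epsilon-greedy agent that learns the average engagement per item.
counts = {"useful": 0, "sensational": 0}
values = {"useful": 0.0, "sensational": 0.0}

for step in range(2000):
    if random.random() < 0.1:                 # explore occasionally
        item = random.choice(list(values))
    else:                                     # otherwise exploit the best estimate
        item = max(values, key=values.get)
    reward = 1.0 if user_clicks(item) else 0.0
    counts[item] += 1
    values[item] += (reward - values[item]) / counts[item]  # running mean

# The agent converges on whatever maximises its own reward signal.
print(max(values, key=values.get))
```

The point of the sketch is that nothing in the agent's update rule refers to the user's goals: the misalignment discussed in the abstract arises purely from the choice of reward signal.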
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of “ethical debt” to describe the necessity of undertaking expensive rework in the future in order to address ethical problems created by a technical system.
Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of 'social machine' and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already well in place, we discuss possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.
As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms used in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and that the term 'bias' in machine learning has a different meaning from its common usage. An example of a real-world classifier, the Harm Assessment Risk Tool (HART), is examined through identification of its technical features: the classification method, the training data and the test data, the features and the labels, validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy; (b) fairness and equality before the law; (c) transparency and accountability; (d) informational privacy and freedom of expression, in order to demonstrate how its technical features have important normative dimensions that bear directly on the extent to which the system can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
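As a rough illustration of the performance measures mentioned above (this is not the actual HART system, whose data and methods are not reproduced here; the labels below are invented), the following sketch computes accuracy and error rates for a hypothetical binary risk classifier from its predicted and actual outcomes.

```python
# Illustrative sketch: basic performance measures for a hypothetical
# binary risk classifier (1 = predicted/actual high risk).
from collections import Counter

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # actual outcomes (invented)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]   # classifier's predictions (invented)

# Tally the four cells of the confusion matrix.
pairs = Counter(zip(y_true, y_pred))
tp, fp = pairs[(1, 1)], pairs[(0, 1)]
fn, tn = pairs[(1, 0)], pairs[(0, 0)]

accuracy = (tp + tn) / len(y_true)
false_positive_rate = fp / (fp + tn)   # low-risk individuals flagged as high risk
false_negative_rate = fn / (fn + tp)   # high-risk cases the classifier missed

print(accuracy, false_positive_rate, false_negative_rate)  # → 0.8 0.2 0.2
```

Note that the two error rates carry different normative weight in a criminal justice setting, which is exactly why the abstract treats performance measures as having normative, not merely technical, dimensions.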
Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence?
Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On the one side, it emphasises the role of Human-Computer Interaction in the design of intelligent systems, while on the other side it draws attention to the risks both for individual human beings and for a society relying on mechanisms that are not necessarily controllable. The difficulties faced by companies in regulating the spread of misinformation, as well as by authorities in protecting task-workers managed by a software infrastructure, could be just some of the effects of this technological paradigm.
Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social machines provide a new paradigm for the design of intelligent systems, marking a new phase in AI. After describing the characteristics of goal-driven social machines, we discuss the consequences of their adoption, for the practice of artificial intelligence as well as for its regulation.
The field of Artificial Intelligence (AI) has undergone many transformations, most recently the emergence of data-driven approaches centred on machine learning technology. The present article examines that paradigm shift by using the conceptual tools developed by Thomas Kuhn, and by analysing the contents of the longest running conference series in the field. A paradigm shift occurs when a new set of assumptions and values replaces the previous one within a given scientific community. These are often conveyed implicitly, by the choice of success stories that exemplify and define what a given field of research is about, demonstrating what kind of questions and answers are appropriate. The replacement of these exemplar stories corresponds to a shift in goals, methods, and expectations. We discuss the most recent such transition in the field of Artificial Intelligence, as well as commenting on some earlier ones.
The automated parsing of 130,213 news articles about the 2012 US presidential elections produces a network formed by the key political actors and issues, linked by relations of support and opposition. The nodes are formed by noun phrases and the links by verbs, directly expressing the action of one node upon the other. This network is studied by applying insights from several theories and techniques, and by combining existing tools in an innovative way, including: graph partitioning, centrality, assortativity, hierarchy and structural balance. The analysis yields various patterns. First, we observe that the fundamental split between the Republican and Democrat camps can be easily detected by network partitioning, which provides a strong validation check of the approach adopted, as well as a sound way to assign actors and topics to one of the two camps. Second, we identify the most central nodes of the political camps. We also find that Clinton played a more central role than Biden in the Democrat camp; the overall campaign focused largely on the economy and rights; the Republican Party was the most divisive subject in the campaign, and was portrayed more negatively than the Democrats; and, overall, the media reported positive statements more frequently for the Democrats than for the Republicans. This is the first study in which political positions are automatically extracted and derived from a very large corpus of online news, generating a network that goes well beyond traditional word-association networks by means of richer linguistic analysis of texts.
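The support/opposition network described above can be sketched in miniature (the triples below are invented for illustration, not drawn from the corpus): nodes are actors, edges carry a sign, and a two-colouring that respects the signs recovers the two camps whenever the network is structurally balanced.

```python
# Sketch with hypothetical data: build a signed network from
# (subject, relation, object) triples and split it into two camps.
from collections import defaultdict, deque

triples = [
    ("Obama", "support", "Democrats"),
    ("Romney", "support", "Republicans"),
    ("Obama", "oppose", "Romney"),
    ("Biden", "support", "Obama"),
    ("Republicans", "oppose", "Democrats"),
]

graph = defaultdict(list)               # node -> [(neighbour, +1 or -1)]
for subj, verb, obj in triples:
    sign = +1 if verb == "support" else -1
    graph[subj].append((obj, sign))
    graph[obj].append((subj, sign))

# Two-colour the network by breadth-first search: a support edge keeps
# the camp, an opposition edge flips it. This assignment is consistent
# exactly when the network is structurally balanced.
camp = {}
for start in list(graph):
    if start in camp:
        continue
    camp[start] = 0
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neigh, sign in graph[node]:
            expected = camp[node] if sign > 0 else 1 - camp[node]
            if neigh not in camp:
                camp[neigh] = expected
                queue.append(neigh)

print(camp)
```

On the real corpus the paper combines this kind of partitioning with centrality and assortativity measures; the sketch only shows why a signed, verb-derived network can recover the two-camp split at all.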
The road to artificial intelligence: A case of data over theory. Computers that could simulate human intelligence were once a futuristic dream. Now they are all around us, but not in the way their pioneers expected.
Intelligence rethought: AIs know us, but don't think like us. Processing and learning from millions of past cases allows machines to know what we want better than we do, even if they don't think as we do.
La scorciatoia (The Shortcut): How machines became intelligent without thinking in a human way. Our creations are different from us, and sometimes stronger; to live with them we must learn to know them. They screen CVs, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and a few more besides, but we cannot understand them or reason with them, because their behaviour is in fact driven by statistical relations extracted from superhuman quantities of data. Yet in some cases they can be more powerful than we are: they observe us continuously, and take decisions in our place. How, then, can we incorporate them into our society without risks and side effects? This book, rigorous, sharp, and original in its approach, explains how we got here, and points out the road ahead before we can trust these new "alien" agents. Technology alone is not enough; a dialogue between the natural sciences and the humanities is needed: it is the crucial step towards a safe coexistence with this new form of intelligence. (Il Mulino).
Book. From the publisher: An influential scientist in the field of artificial intelligence (AI) explains its fundamental concepts and how it is changing culture and society. A particular form of AI is now embedded in our tech, our infrastructure, and our lives. How did it get there? Where and why should we be concerned? And what should we do now? The Shortcut: Why Intelligent Machines Do Not Think Like Us provides an accessible yet probing exposure of AI in its prevalent form today, proposing a new narrative to connect and make sense of events that have happened in the recent tumultuous past, and enabling us to think soberly about the road ahead. The book is divided into ten carefully crafted and easily digestible chapters, each grappling with an important question for AI. Ranging from the scientific concepts that underpin the technology to wider implications for society, it develops a unified description using tools from different disciplines and avoiding unnecessary abstractions or words that end with -ism. The book uses real examples wherever possible, introducing the reader to the people who have created some of these technologies and to ideas shaping modern society that originate from the technical side of AI. It contains important practical advice about how we should approach AI in the future without promoting exaggerated hypes or fears. Entertaining and disturbing but always thoughtful, The Shortcut confronts the hidden logic of AI while preserving a space for human dignity. It is essential reading for anyone with an interest in AI, the history of technology, and the history of ideas. General readers will come away much more informed about how AI really works today and what we should do next. Table of contents: About the Author; Prologue; 1. The Search for Intelligence; 2. The Shortcut; 3. Finding Order in the World; 4. Lady Lovelace Was Wrong; 5. Unintended Behaviour; 6. Microtargeting and Mass Persuasion; 7. The Feedback Loop; 8. The Glitch; 9. Social Machines; 10. Regulating, Not Unplugging; Epilogue; Bibliography; Index.
The deep formal and conceptual link existing between artificial life and artificial intelligence can be highlighted using conceptual tools derived from Karl Popper's evolutionary epistemology. Starting from the observation that the structure itself of an organism embodies knowledge about the environment to which it is adapted, it is possible to regard evolution as a learning process. This process is subject to the same rules indicated by Popper for the growth of scientific knowledge: random conjectures (mutations) and successive refutations (extinction). In the field of machine learning such a paradigm is represented by genetic algorithms that, by simulating biological processes, emulate cognitive processes. From a practical viewpoint, that perspective allows us to identify the two different kinds of learning considered by artificial intelligence, knowledge acquisition and skill improvement, and to get a different view of the problem of heuristic knowledge in learning systems. From a theoretical point of view, these considerations can shed new light on an old epistemological problem: why do we live in a learnable world?
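The Popperian reading of evolution as conjecture and refutation can be made concrete with a minimal genetic algorithm (an illustrative sketch with an arbitrary target and parameters, not taken from the paper): random mutations play the role of conjectures, and selection against a fitness function plays the role of refutation.

```python
# Minimal genetic algorithm, illustrating evolution as a learning process:
# mutations are "conjectures", selection is "refutation" (extinction).
import random

random.seed(1)
TARGET = [1] * 20                         # stands in for a well-adapted organism

def fitness(genome):
    # Knowledge about the environment = agreement with the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Conjecture: flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population of 30 genomes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]           # refutation: unfit conjectures die out
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best))
```

The population accumulates knowledge about its environment (the fitness landscape) without any individual genome "understanding" it, which is the sense in which the abstract treats evolution and learning as one process.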