In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Luciano Floridi presents a book that will set the agenda for the philosophy of information (PI). PI is the philosophical field concerned with the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation, and sciences, and the elaboration and application of information-theoretic and computational methodologies to philosophical problems. This book lays down, for the first time, the conceptual foundations for this new area of research. It does so systematically, by pursuing three goals. Its metatheoretical goal is to describe what the philosophy of information is, its problems, approaches, and methods. Its introductory goal is to help the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to information. Its analytic goal is to answer several key theoretical questions of great philosophical interest, arising from the investigation of semantic information.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
The idea of artificial intelligence for social good (AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies are changing the answer to these fundamental human questions. As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed into our 'real' lives, so that we begin to live, as Floridi puts it, "onlife". Following the revolutions associated with Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution. "Onlife" defines more and more of our daily activity - the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces which are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? Floridi argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.
Luciano Floridi develops the first ethical framework for dealing with the new challenges posed by Information and Communication Technologies. He establishes the conceptual foundations of Information Ethics by exploring important metatheoretical and introductory issues, and answering key theoretical questions of great philosophical interest.
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
This article presents the first, systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative and policy steps that, if taken, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
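Since the criterion is explicit when the observables are numerical, a minimal sketch may help: the code below checks whether all of an agent's actions respect a moral threshold defined on the observables of a given LoA. The observable names ("harm", "privacy_loss") and the bounds are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """An action described at a fixed LoA by numerical observables;
    the observable names are illustrative assumptions."""
    observables: dict[str, float]

# Hypothetical bounds on each observable at this LoA; both the
# names and the numbers are invented for illustration.
THRESHOLD = {"harm": 0.1, "privacy_loss": 0.2}

def respects_threshold(action: Action) -> bool:
    """True iff every observable stays within the assumed bounds."""
    return all(action.observables.get(name, 0.0) <= bound
               for name, bound in THRESHOLD.items())

def morally_good(actions: list[Action]) -> bool:
    """The paper's criterion: an agent is morally good if all its
    actions respect the threshold, evil if some action violates it."""
    return all(respects_threshold(a) for a in actions)

print(morally_good([Action({"harm": 0.05, "privacy_loss": 0.1})]))  # True
```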
Modern digital technologies—from web-based services to Artificial Intelligence (AI) solutions—increasingly affect the daily lives of billions of people. Such innovation brings huge opportunities, but also concerns about design, development, and deployment of digital technologies. This article identifies and discusses five clusters of risk in the international debate about digital ethics: ethics shopping; ethics bluewashing; ethics lobbying; ethics dumping; and ethics shirking.
The capacity to collect and analyse data is growing exponentially. Referred to as ‘Big Data’, this scientific, social and technological trend has helped create destabilising amounts of information, which can challenge accepted social and ethical norms. Big Data remains a fuzzy idea, emerging across social, scientific, and business contexts sometimes seemingly related only by the gigantic size of the datasets being considered. As is often the case with the cutting edge of scientific and technological progress, understanding of the ethical implications of Big Data lags behind. In order to bridge such a gap, this article systematically and comprehensively analyses academic literature concerning the ethical implications of Big Data, providing a watershed for future ethical investigations and regulations. Particular attention is paid to biomedical Big Data due to the inherent sensitivity of medical information. By means of a meta-analysis of the literature, a thematic narrative is provided to guide ethicists, data scientists, regulators and other stakeholders through what is already known or hypothesised about the ethical risks of this emerging and innovative phenomenon. Five key areas of concern are identified: informed consent, privacy, ownership, epistemology and objectivity, and ‘Big Data Divides’ created between those who have or lack the necessary resources to analyse increasingly large datasets. Critical gaps in the treatment of these themes are identified with suggestions for future research. Six additional areas of concern are then suggested which, although related, have not yet attracted extensive debate in the existing literature. It is argued that they will require much closer scrutiny in the immediate future: the dangers of ignoring group-level ethical harms; the importance of epistemology in assessing the ethics of Big Data; the changing nature of fiduciary relationships that become increasingly data saturated; the need to distinguish between ‘academic’ and ‘commercial’ Big Data practices in terms of potential harm to data subjects; future problems with ownership of intellectual property generated from analysis of aggregated datasets; and the difficulty of providing meaningful access rights to individual data subjects that lack necessary resources. Considered together, these eleven themes provide a thorough critical framework to guide ethical assessment and governance of emerging Big Data practices.
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al., Big Data & Society, 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives.
AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.
'I love information upon all subjects that come in my way, and especially upon those that are most important.' Thus boldly declares Euphranor, one of the defenders of Christian faith in Berkeley's Alciphron (Berkeley 1732, Dialogue 1, Section 5, Paragraph 6/10). Evidently, information has been an object of philosophical desire for some time, well before the computer revolution, the Internet or the dot-com pandemonium (see for example Dunn (2001) and Adams (2003)). Yet what does Euphranor love, exactly? What is information? The question has received many answers in different fields. Unsurprisingly, several surveys do not even converge on a single, unified definition of information (see for example Braman (1989), Losee (1997), Machlup and Mansfield (1983), Debons and Cameron (1975), Larson and Debons (1983)).
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
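The trade-off mentioned above can be made concrete with the standard back-of-envelope estimate of training-run emissions: energy consumed, scaled by datacentre overhead and grid carbon intensity. This formula is a common approximation in the literature on AI's carbon footprint, and every number in the example is an illustrative assumption rather than a figure from the article.

```python
def training_emissions_kg(gpu_count: int, gpu_power_kw: float,
                          hours: float, pue: float,
                          grid_kgco2e_per_kwh: float) -> float:
    """Back-of-envelope GHG estimate for a training run: energy drawn
    by the accelerators, scaled by datacentre overhead (PUE) and the
    carbon intensity of the local grid. All inputs are assumptions."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Illustrative numbers only: 8 GPUs at 0.3 kW for two weeks,
# PUE of 1.5, grid intensity of 0.4 kgCO2e/kWh.
print(training_emissions_kg(8, 0.3, 24 * 14, 1.5, 0.4))  # ~484 kg
```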
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological method of levels of abstraction is clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical fruitfulness of the new method is shown by using Kant’s classic discussion of the “antinomies of pure reason” as an example. In section “The Philosophy of the Method of Abstraction”, the method is further specified and supported by distinguishing it from three other forms of “levelism”: (i) levels of organisation; (ii) levels of explanation and (iii) conceptual schemes. In that context, the problems of relativism and antirealism are also briefly addressed. The conclusion discusses some of the work that lies ahead, two potential limitations of the method and some results that have already been obtained by applying the method to some long-standing philosophical problems.
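On this method, a level of abstraction is a finite, non-empty set of typed observables through which a system is described. As a rough sketch (the car example is a standard illustration of the method; the particular field names and types below are assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    """A typed variable chosen to describe the system."""
    name: str
    typ: type

@dataclass(frozen=True)
class LevelOfAbstraction:
    """A finite, non-empty collection of observables: the interface
    through which a system is described, analysed and discussed."""
    observables: frozenset[Observable]

# Two LoAs for the same system (a car); which properties are visible
# to the analysis depends on the interface chosen. Fields are assumed.
buyer = LevelOfAbstraction(frozenset({
    Observable("price", float), Observable("colour", str)}))
mechanic = LevelOfAbstraction(frozenset({
    Observable("engine_wear", float), Observable("mileage", int)}))
```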
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the Level of Abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even the data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations — the interactions among hardware, software, and data — rather than on the variety of digital technologies that enables them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics will Data Ethics provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good AI society’; the role and responsibility of the government, the private sector, and the research community in pursuing such a development; and where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. In order to contribute to filling this gap, in the conclusion we suggest a two-pronged approach.
Luciano Floridi presents an innovative approach to philosophy, conceived as conceptual design. His starting-point is that reality provides the data which we transform into information. He explores how we make, transform, refine, and improve the objects of our knowledge, and defends the radical idea that knowledge is design.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. In order to frame these questions in a more substantive way, in this prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies. The development of laws, policies and best practices for seizing the opportunities and minimizing the risks posed by AI technologies would benefit from building on ethical frameworks such as the one offered here.
There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation is not a type of semantic information, but pseudo-information, that is, not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates some interesting areas of application of the revised definition.
This is the revised version of an invited keynote lecture delivered at the "1st Australian Computing and Philosophy Conference". The paper is divided into two parts. The first part defends an informational approach to structural realism. It does so in three steps. First, it is shown that, within the debate about structural realism, epistemic and ontic structural realism are reconcilable. It follows that a version of OSR is defensible from a structuralist-friendly position. Second, it is argued that a version of OSR is also plausible, because not all relata are logically prior to relations. Third, it is shown that a version of OSR is also applicable to both sub-observable and observable entities, by developing its ontology of structural objects in terms of informational objects. The outcome is informational structural realism, a version of OSR supporting the ontological commitment to a view of the world as the totality of informational objects dynamically interacting with each other. The paper has been discussed by several colleagues and, in the second half, ten objections that have been raised against the proposal are answered in order to clarify it further.
What is the relation between the ethics, the law, and the governance of the digital? In this article I articulate and defend what I consider the most reasonable answer.
The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the current and emerging uses of new data technologies and clarifies the promises and dangers of group profiling in real life situations. It then balances this with an analysis of how far the current legal paradigm grants group rights to privacy and data protection, and discusses possible routes to addressing these problems. Finally, an afterword gathers the conclusions reached by the different authors and discusses future perspectives on regulating new data technologies.
An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology, without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turns characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as one of the many solutions that human ingenuity has managed to devise.
The concept of distributed moral responsibility (DMR) has a long history. When it is understood as being entirely reducible to the sum of (some) human, individual and already morally loaded actions, then the allocation of DMR, and hence of praise and reward or blame and punishment, may be pragmatically difficult, but not conceptually problematic. However, in distributed environments, it is increasingly possible that a network of agents, some human, some artificial (e.g. a program) and some hybrid (e.g. a group of people working as a team thanks to a software platform), may cause distributed moral actions (DMAs). These are morally good or evil (i.e. morally loaded) actions caused by local interactions that are in themselves neither good nor evil (morally neutral). In this article, I analyse DMRs that are due to DMAs, and argue in favour of the allocation, by default and overridably, of full moral responsibility (faultless responsibility) to all the nodes/agents in the network causally relevant for bringing about the DMA in question, independently of intentionality. The mechanism proposed is inspired by, and adapts, three concepts: back propagation from network theory, strict liability from jurisprudence and common knowledge from epistemic logic.
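The default allocation rule lends itself to a simple computational sketch: find every node in the causal network from which the morally loaded outcome is reachable, and assign each full responsibility. The network and the agents' names below are hypothetical, and the paper's mechanism is richer (overridability, back propagation of adjustments), so this is only a first approximation.

```python
# Hypothetical causal network: an edge points from an agent to a
# node it influenced; names and structure are invented.
influences = {
    "alice": ["platform"],
    "bot_7": ["platform"],
    "platform": ["outcome"],
    "team_x": ["outcome"],
}

def causally_relevant(network: dict, outcome: str) -> set:
    """Every node from which the outcome is reachable, i.e. every
    agent causally relevant to bringing about the DMA."""
    def reaches(node, seen):
        if node == outcome:
            return True
        return any(reaches(nxt, seen | {node})
                   for nxt in network.get(node, []) if nxt not in seen)
    return {a for a in network if a != outcome and reaches(a, set())}

# Default, overridable allocation: full (faultless) responsibility
# to each relevant node, independently of intentionality.
responsibility = {a: 1.0 for a in causally_relevant(influences, "outcome")}
print(responsibility)
```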
In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, between the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather than of AI to the environment, (3) the increasing translation of difficult problems into complex problems, (4) the tension between regulative and constitutive rules underpinning areas of AI application, and (5) the push for synthetic data.
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on an optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
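The three-dimensional Pareto frontier at the heart of the game can be illustrated with a short sketch that filters a pool of candidate explanations down to the non-dominated set. The candidates and their scores are invented for illustration; the paper's game derives such trade-offs through iterated questions and answers rather than taking scores as given.

```python
from typing import NamedTuple

class Explanation(NamedTuple):
    label: str
    accuracy: float    # explanatory accuracy
    simplicity: float  # e.g. inverse description length
    relevance: float   # fit to the inquirer's question

def dominates(a: Explanation, b: Explanation) -> bool:
    """a is at least as good as b on every axis, strictly better on one."""
    axes = ("accuracy", "simplicity", "relevance")
    return (all(getattr(a, x) >= getattr(b, x) for x in axes)
            and any(getattr(a, x) > getattr(b, x) for x in axes))

def pareto_frontier(cands: list[Explanation]) -> list[Explanation]:
    """The non-dominated candidates: the trade-off surface on which
    the optimal explanations lie."""
    return [c for c in cands
            if not any(dominates(o, c) for o in cands if o is not c)]

# Invented candidates with invented scores:
cands = [Explanation("full model trace", 0.95, 0.20, 0.60),
         Explanation("two-feature rule", 0.80, 0.90, 0.80),
         Explanation("constant guess", 0.40, 0.85, 0.10)]
print(pareto_frontier(cands))  # the first two survive
```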
A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing should take the form of a continuous and constructive process, approach ethical alignment from a system perspective, and be aligned with public policies and incentives for ethically desirable behaviour. Third, we identify and discuss the constraints associated with ethics-based auditing. Only by understanding and accounting for these constraints can ethics-based auditing facilitate ethical alignment of AI, while enabling society to reap the full economic and social benefits of automation.
Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity’s present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
This Guide provides an ambitious state-of-the-art survey of the fundamental themes, problems, arguments and theories constituting the philosophy of computing.
This paper outlines a quantitative theory of strongly semantic information (TSSI) based on truth-values rather than probability distributions. The main hypothesis supported in the paper is that the classic quantitative theory of weakly semantic information (TWSI), based on probability distributions, assumes that truth-values supervene on factual semantic information, yet this principle is too weak and generates a well-known semantic paradox, whereas TSSI, according to which factual semantic information encapsulates truth, can avoid the paradox and is more in line with the standard conception of what generally counts as semantic information. After a brief introduction, section two outlines the semantic paradox implied by TWSI, analysing it in terms of an initial conflict between two requisites of a quantitative theory of semantic information. In section three, three criteria of semantic information equivalence are used to provide a taxonomy of quantitative approaches to semantic information and introduce TSSI. In section four, some further desiderata that should be fulfilled by a quantitative TSSI are explained. From section five to section seven, TSSI is developed on the basis of a calculus of truth-values and semantic discrepancy with respect to a given situation. In section eight, it is shown how TSSI succeeds in solving the paradox. Section nine summarises the main results of the paper and indicates some future developments.
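As a reconstruction of the core idea (the formulation below paraphrases the paper's approach and should not be read as a verbatim quotation): let $\vartheta(\sigma) \in [-1, 1]$ measure the discrepancy of $\sigma$ from the given situation, with negative values for inaccuracy, positive values for vacuity, and $0$ for complete conformity; the degree of informativeness can then be defined as

\[
\iota(\sigma) \;=\; 1 - \vartheta(\sigma)^{2},
\]

so that informativeness peaks when $\vartheta(\sigma) = 0$ and vanishes at both extremes: contradictions ($\vartheta = -1$) and tautologies ($\vartheta = 1$) carry no strongly semantic information, which is how TSSI dissolves the paradox.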
The essential difficulty about Computer Ethics' (CE) philosophical status is a methodological problem: standard ethical theories cannot easily be adapted to deal with CE-problems, which appear to strain their conceptual resources, and CE requires a conceptual foundation as an ethical theory. Information Ethics (IE), the philosophical foundational counterpart of CE, can be seen as a particular case of environmental ethics or ethics of the infosphere. What is good for an information entity and the infosphere in general? This is the ethical question asked by IE. The answer is provided by a minimalist theory of deserts: IE argues that there is something more elementary and fundamental than life and pain, namely being, understood as information, and entropy, and that any information entity is to be recognised as the centre of a minimal moral claim, which deserves recognition and should help to regulate the implementation of any information process involving it. IE can provide a valuable perspective from which to approach, with insight and adequate discernment, not only moral problems in CE, but also the whole range of conceptual and moral phenomena that form the ethical discourse.