In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
The idea of artificial intelligence for social good (AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives.
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the Level of Abstraction of ethical enquiries from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even the data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations — the interactions among hardware, software, and data — rather than on the variety of digital technologies that enable them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics will Data Ethics provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
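The scale of the trade-off can be illustrated with the standard back-of-envelope estimate: the energy drawn by the hardware over a training run, multiplied by the data centre's overhead and the grid's carbon intensity. A minimal sketch follows, with purely illustrative numbers that are not drawn from the article:

```python
# Back-of-envelope estimate of the CO2e emitted by a model training run,
# following the common energy x carbon-intensity approach. Every number
# below is an illustrative assumption, not a figure from the article.

def training_emissions_kg_co2e(
    gpu_count: int,
    gpu_power_kw: float,                 # average draw per GPU, in kW
    hours: float,                        # wall-clock training time
    pue: float = 1.5,                    # data-centre Power Usage Effectiveness
    grid_kg_co2e_per_kwh: float = 0.4,   # assumed grid carbon intensity
) -> float:
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Example: 8 GPUs drawing 0.3 kW each, trained for two weeks non-stop.
print(f"{training_emissions_kg_co2e(8, 0.3, 24 * 14):.0f} kg CO2e")  # ~484 kg
```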
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good AI society’; the role and responsibility of the government, the private sector, and the research community in pursuing such a development; and where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address various ethical, social, and economic topics adequately, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. In order to contribute to filling this gap, in the conclusion we suggest a two-pronged approach.
Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Important decisions that impact human lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity’s present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
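To make the definition concrete, here is a minimal sketch of one step such an audit might automate: checking an ADMS's logged decisions for consistency with a single candidate norm, demographic parity within a tolerance. The norm, data, and threshold are illustrative assumptions, not part of the article's framework:

```python
# Minimal illustrative ethics-based-audit check: assess logged ADMS decisions
# for consistency with one candidate norm (demographic parity within an
# assumed tolerance). A sketch of a single metric, not an audit process.

from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group_label, positive_outcome) pairs from an ADMS log."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical log: group label and whether the system granted the outcome.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
assert demographic_parity_gap(log) <= 0.4, "norm violated: flag for review"
```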
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
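One reason such trust can backfire is that ML-based security tools are themselves attack surfaces. The sketch below, a toy linear "detector" in NumPy with made-up weights, shows the shape of an evasion attack: a small, targeted input perturbation flips the model's decision. It illustrates the general class of attack, not any specific system discussed in the article:

```python
# Toy evasion attack on a linear "malware detector" (score > 0 => flagged).
# A small perturbation against the sign of the weights flips the decision.
# Weights and inputs are invented; this only illustrates the attack class.

import numpy as np

w, b = np.array([1.0, -2.0, 0.5]), 0.1   # detector weights (assumed known)
x = np.array([0.9, 0.1, 0.4])            # input currently flagged (score 1.0)

def score(v: np.ndarray) -> float:
    return float(w @ v + b)

eps = 0.5
x_adv = x - eps * np.sign(w)             # FGSM-style step for a linear model

print(score(x), "->", score(x_adv))      # 1.0 -> -0.75: detection evaded
```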
Online service providers (OSPs)—such as AOL, Facebook, Google, Microsoft, and Twitter—significantly shape the informational environment and influence users’ experiences and interactions within it. There is general agreement on the centrality of OSPs in information societies, but little consensus about what principles should shape their moral responsibilities and practices. In this article, we analyse the main contributions to the debate on the moral responsibilities of OSPs. By endorsing the method of the levels of abstraction (LoA), we first analyse the moral responsibilities of OSPs in the web. These concern the management of online information, which includes information filtering, Internet censorship, the circulation of harmful content, and the implementation and fostering of human rights. We then consider the moral responsibilities ascribed to OSPs on the web and focus on the existing legal regulation of access to users’ data. The overall analysis provides an overview of the current state of the debate and highlights two main results. First, topics related to OSPs’ public role—especially their gatekeeping function, their corporate social responsibilities, and their role in implementing and fostering human rights—have acquired increasing relevance in the specialised literature. Second, there is a lack of an ethical framework that can define OSPs’ responsibilities and provide the fundamental sharable principles necessary to guide OSPs’ conduct within the multicultural and international context in which they operate. This article contributes to the development of such an ethical framework by endorsing a LoA that enables the definition of the responsibilities of OSPs with respect to the well-being of the infosphere and of the entities inhabiting it.
This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that the second-order property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
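The structural claim, that trust is a second-order property qualifying a first-order relation and that its presence lowers the truster's effort, can be rendered as a toy model. This is only an illustration of the definition's shape, with invented names and costs, not the paper's formal analysis:

```python
# Toy rendering of the definitional structure: a first-order relation
# (delegation between two agents) may carry the second-order property
# "trusted"; when it does, the truster forgoes supervision, so its effort
# toward the goal drops. Names and costs are invented for illustration.

from dataclasses import dataclass

@dataclass
class Delegation:          # a first-order relation between two agents
    truster: str
    trustee: str
    trusted: bool          # the second-order property qualifying the relation

def truster_effort(rel: Delegation, base_cost: float, supervision_cost: float) -> float:
    # A trusted relation spares the truster the cost of supervising the trustee.
    return base_cost if rel.trusted else base_cost + supervision_cost

print(truster_effort(Delegation("A", "B", trusted=True), 1.0, 4.0))   # 1.0
print(truster_effort(Delegation("A", "B", trusted=False), 1.0, 4.0))  # 5.0
```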
This article analyses the ethical aspects of multistakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in an RS? How are their interests taken into account when formulating a recommendation? And what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multistakeholder approach offers the resources to understand the normative social dimension of RSs.
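The welfare-economic framing the article identifies can be sketched directly: each candidate recommendation is scored by a weighted sum of stakeholder utilities (a social welfare function), and the system recommends the welfare-maximising item. Stakeholders, weights, and utilities below are illustrative assumptions:

```python
# Sketch of a multistakeholder recommender scored by a weighted social
# welfare function, in the neoclassical framing the article identifies.
# Stakeholders, weights, and utilities are illustrative assumptions.

def welfare(utilities: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[s] * u for s, u in utilities.items())

weights = {"user": 0.5, "item_provider": 0.3, "platform": 0.2}

candidates = {
    "item_1": {"user": 0.9, "item_provider": 0.2, "platform": 0.4},  # 0.59
    "item_2": {"user": 0.6, "item_provider": 0.8, "platform": 0.7},  # 0.68
}

best = max(candidates, key=lambda c: welfare(candidates[c], weights))
print(best)  # item_2
```

The example also shows where the ethical questions bite: the welfare-maximising item need not be the one the user would most prefer.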
The modern abundance and prominence of data has led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry that it identifies; (iii) the kinds of knowledge that data science generates; (iv) the nature and epistemological significance of “black box” problems; and (v) the relationship between data science and the philosophy of science more generally.
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively, and map the key considerations for policymakers to each of the ethical concerns highlighted.
This paper focuses on Information Warfare—the warfare characterised by the use of information and communication technologies. This is a fast-growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical problems engendered by this kind of warfare. The conceptual analysis is developed in three parts. First, it delineates the relation between Information Warfare and the Information revolution. It then focuses attention on the effects that the diffusion of this phenomenon has on the concept of war. On the basis of this analysis, Information Warfare is defined as a phenomenon that is not necessarily sanguinary or violent, and that is transversal with respect to the environment in which it is waged, the way it is waged, and the ontological and social status of its agents. The paper concludes by taking into consideration Just War Theory and the problems arising from its application to the case of Information Warfare.
This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare. The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to address its ethical implications. The second part collects contributions focusing on Just War Theory and its application to the case of Information Warfare. The third part adopts alternative approaches to Just War Theory for analysing the ethical implications of this phenomenon. Finally, an afterword by Neelie Kroes, Vice President of the European Commission and European Digital Agenda Commissioner, concludes the volume. Her contribution describes the interests and commitments of the European Digital Agenda with respect to research for the development and deployment of robots in various circumstances, including warfare.
This article reviews eight proposed strategies for solving the Symbol Grounding Problem (SGP), which was given its classic formulation in Harnad (1990). After a concise introduction, we provide an analysis of the requirement that must be satisfied by any hypothesis seeking to solve the SGP, the zero semantical commitment condition. We then use it to assess the eight strategies, which are organised into three main approaches: representationalism, semi-representationalism and non-representationalism. The conclusion is that all the strategies are semantically committed and hence that none of them provides a valid solution to the SGP, which remains an open problem.
In this article, I analyse deterrence theory and argue that its applicability to cyberspace is limited and that these limits are not trivial. They are the consequence of fundamental differences between deterrence theory and the nature of cyber conflicts and cyberspace. The goals of this analysis are to identify the limits of deterrence theory in cyberspace, clear the ground of inadequate approaches to cyber deterrence, and define the conceptual space for a domain-specific theory of cyber deterrence, still to be developed.
The “struggle between liberties and authorities”, as described by Mill, refers to the tension between individual rights and the rules restricting them that are imposed by public authorities exerting their power over civil society. In this paper I argue that contemporary information societies are experiencing a new form of such a struggle, which now involves liberties and authorities in the cyber-sphere and, more specifically, refers to the tension between cyber-security measures and individual liberties. Ethicists, political philosophers and political scientists have long debated how to strike an ethically sound balance between security measures and individual rights. I argue that such a balance can only be reached once individual rights are clearly defined, and that such a definition cannot prescind from an analysis of individual well-being in the information age. Hence, I propose an analysis of individual well-being which rests on the capability approach, and I then identify a set of rights that individuals should claim for themselves. Finally, I consider a criterion for balancing the proposed set of individual rights with cyber-security measures in the information age.
The paper provides a selective analysis of the main theories of trust and e-trust (that is, trust in digital environments) offered in the last twenty years, with the goal of preparing the ground for a new philosophical approach to solve the problems facing them. It is divided into two parts. The first part lays the groundwork for the analysis of e-trust: it focuses on trust and its definition and foundation, and describes the general background on which the analysis of e-trust rests. The second part focuses on e-trust, its foundation and ethical implications. The paper ends by synthesising the analyses of the two parts.
The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also play a major role in spreading low-quality health information, such as that of anti-vaccine websites. This study investigates the relationship between search engines’ approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned when searching “vaccines autism” in English, Spanish, Italian, and French. The results show that not only “alternative” search engines but also other commercial engines often return more anti-vaccine pages (10–53%) than Google (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google.com. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information for making health-related decisions also impacts the ethical aspect represented by the right to informed consent. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter-bubble issues that can result from user tracking is necessary but insufficient; instead, mechanisms should be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
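The study's headline numbers come from a simple metric: the percentage of anti-vaccine pages among the first 30 results an engine returns for the query. A sketch of that computation, on hypothetical labelled results rather than the study's data:

```python
# Sketch of the study's metric: the share of anti-vaccine pages among the
# first 30 results an engine returns for "vaccines autism". The labelled
# results below are hypothetical, not the study's data.

def anti_vaccine_share(labelled_results: list[bool], k: int = 30) -> float:
    """labelled_results: True where a returned page is anti-vaccine, in rank order."""
    top_k = labelled_results[:k]
    return 100 * sum(top_k) / len(top_k)

engine_results = [False] * 24 + [True] * 6   # 6 anti-vaccine pages in top 30
print(f"{anti_vaccine_share(engine_results):.0f}%")  # 20%
```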
In this article I propose an ethical analysis of information warfare, the warfare waged in the cyber domain. The goal is twofold: filling the theoretical vacuum surrounding this phenomenon and providing the conceptual grounding for the definition of new ethical regulations for information warfare. I argue that Just War Theory is a necessary but not sufficient instrument for considering the ethical implications of information warfare and that a suitable ethical analysis of this kind of warfare is developed when Just War Theory is merged with Information Ethics. In the initial part of the article, I describe information warfare and its main features and highlight the problems that arise when Just War Theory is endorsed as a means of addressing ethical problems engendered by this kind of warfare. In the final part, I introduce the main aspects of Information Ethics and define three principles for a just information warfare resulting from the integration of Just War Theory and Information Ethics.
In this article, I offer an outline of the papers comprising the special issue. I also provide a brief overview of its topic, namely, the friction between cyber security measures and individual rights. I consider such friction to be a new and exacerbated version of what Mill called ‘the struggle between liberties and authorities,’ and I claim that the struggle arises because of the involvement of public authorities in the management of the cyber sphere, for technological and state power can put individual rights, such as privacy, anonymity and freedom of speech, under sharp devaluing pressure. Finally, I conclude by stressing the need to reach an ethical balance to fine-tune cyber security measures and individual rights.
This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution to the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agent, called the two-machine artificial agent (AM²). Thanks to their architecture, AM²s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition.
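The two-machine architecture can be caricatured in a few lines: one machine acts on the environment, and a second machine attaches symbols to the first machine's own action states, so meaning is grounded in performance rather than stipulated by an external designer. This is a deliberately crude illustration of the idea, with invented names, not the authors' formal AM² model:

```python
# Crude sketch of the two-machine idea behind Action-based Semantics: M1
# performs actions; M2 attaches symbols to M1's actions, so symbols acquire
# meaning from the agent's own performances rather than from an external
# designer. Illustrative only; not the authors' formal model.

class M1:
    """Acts on the environment; its actions are the raw material for meaning."""
    def act(self, stimulus: str) -> str:
        return f"action_for_{stimulus}"

class M2:
    """Grounds symbols in M1's actions (the Action-based Semantics step)."""
    def __init__(self, body: M1):
        self.body, self.lexicon = body, {}
    def ground(self, symbol: str, stimulus: str) -> None:
        self.lexicon[symbol] = self.body.act(stimulus)  # meaning := own action

agent = M2(M1())
agent.ground("S1", "heat")
print(agent.lexicon)  # {'S1': 'action_for_heat'}
```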
The gig economy is expanding rapidly, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview of the gig economy, its scope and scale, we map the key ethical problems that it gives rise to, as they are discussed in the relevant literature, onto three categories: the new organisation of work (what is done), the new nature of work (how it is done), and the new status of workers (who does it). We then evaluate a recent initiative from the EU that seeks to address the challenges of the gig economy. The 2019 report of the European High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets is a positive step in the right direction. However, we argue that ethical concerns relating to algorithmic systems as mechanisms of control, and the discrimination, exclusion and disconnectedness faced by gig workers, require further deliberation and policy response. A brief conclusion completes the analysis. The appendix presents the methodology underpinning our literature review.
Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control and reliable AI systems—and related recommendations to foster ethically sound uses of AI for national defence purposes.
This paper introduces a multi-modal polymorphic type theory to model epistemic processes characterized by trust, defined as a second-order relation affecting the communication process between sources and a receiver. In this language, a set of senders is expressed by a modal prioritized context, whereas the receiver is formulated in terms of a contextually derived modal judgement. Introduction and elimination rules for modalities are based on the polymorphism of terms in the language. This leads to a multi-modal non-homogeneous version of a type theory, in which we show the embedding of the modal operators into standard group knowledge operators.
Artificial Intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this chapter AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This chapter offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
This paper contributes to the debate on online trust by addressing the problem of whether an online environment satisfies the necessary conditions for the emergence of trust. The paper defends the thesis that online environments can foster trust, and it does so in three steps. Firstly, the arguments proposed by the detractors of online trust are presented and analysed. Secondly, it is argued that trust can emerge in uncertain and risky environments and that it is possible to trust online identities when they are diachronic and sufficient data are available to assess their reputation. Finally, a definition of trust as a second-order property of first-order relations is endorsed in order to present a new definition of online trust. According to such a definition, online trust is an occurrence of trust that specifically qualifies the relation of communication ongoing among individuals in digital environments. On the basis of this analysis, the paper concludes by arguing that online trust promotes the emergence of social behaviours rewarding honest and transparent communications.
The use of tools and artefacts is a distinctive and problematic phenomenon in the history of humanity, and as such it has been a topic of discussion since the beginning of Western culture, from the myths of the Ancient Greeks through Humanism and Romanticism to Heidegger. Several problematic aspects have been brought to the fore: the relation between technology and the arts, the effects of the use of technology both on the world and on the user, and the nature of the trust that users place in technology. This last topic is the subject of this special issue, which has the twofold goal of fostering a cross-disciplinary debate and, in doing so, of overcoming, at least in part, the fragmentation of the literature on this topic. The problematic nature of trust in technology becomes evident with the dissemination of information and communication technologies and the subsequent information revolution, with which artefacts cease to be used mainly to perform physical and fatiguing tasks…
This article argues that personal medical data should be made available for scientific research, by enabling and encouraging individuals to donate their medical records once deceased, similar to the way in which they can already donate organs or bodies. This research is part of a project on posthumous medical data donation developed by the Digital Ethics Lab at the Oxford Internet Institute at the University of Oxford. Ten arguments are provided to support the need to foster posthumous medical data donation. Two major risks are also identified—harm to others, and lack of control over the use of data—which could follow from unregulated donation of medical data. The argument that record-based medical research should proceed without the need to secure informed consent is rejected, and instead a voluntary and participatory approach to using personal medical data should be followed. The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, and offers five foundational principles for ethical medical data donation suggested as a draft code.
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.