Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can achieve and is a great loss to the AI field and its impacts on individuals and society. This article discusses these risks and then highlights the teeth of ethics and the essential value they can – and should – bring to AI ethics now.
The ethics of artificial intelligence is a growing field of research that deals with the ethical assessment of emerging AI applications and addresses the new kinds of moral questions that the advent of AI raises. The argument presented in this article is that, even though there exist different approaches and subfields within the ethics of AI, the field resembles a critical theory. Just like a critical theory, the ethics of AI aims to diagnose as well as change society and is fundamentally concerned with human emancipation and empowerment. This is shown through a power analysis that defines the most commonly addressed ethical principles and topics within the field of AI ethics as concerning either relational power or dispositional power. Moreover, it is concluded that recognizing AI ethics as a critical theory and borrowing insights from the tradition of critical theory can help move the field forward.
This study investigates the ethical use of Big Data and Artificial Intelligence technologies using an empirical approach. The paper categorises the current literature and presents a multi-case study of ‘on-the-ground’ ethical issues that uses qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces the identified singular ethical issues into clusters to offer a comparison with the proposed classification in the literature. The results show that despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of ethics in AI + BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature.
Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the “disruptive” potential of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems, and how the effectiveness of the demands of AI ethics can be improved.
By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?
In this paper, I draw on Hannah Arendt’s notion of ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed in them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with avoiding making the wrong decisions, and this requires hardwiring the thinking activity as a prerequisite for decision making.
With the increased use of Artificial Intelligence in wildlife conservation, issues around whether AI-based monitoring tools in wildlife conservation comply with standards regarding AI ethics are on the rise. This review aims to summarise current debates, identify gaps, and suggest future research by investigating current AI ethics issues and initiatives in wildlife conservation; stakeholders in AI for wildlife conservation should consider integrating AI ethics into their work. We find that the existing literature focuses only weakly on AI ethics in wildlife conservation and at the same time ignores the integration of AI ethics into AI systems for wildlife conservation. This paper formulates an ethically aligned AI system framework and discusses pre-eminent on-demand AI systems in wildlife conservation. The proposed framework uses an agile software life cycle methodology to implement guidelines towards the ethical upgrade of any existing AI system or the development of any new ethically aligned AI system. The guidelines enforce, among other things, the minimisation of intentional harm and bias, diversity in data collection, design compliance, auditing of all activities in the framework, and ease of code inspection. This framework will inform AI developers, users, conservationists, and policymakers on what to consider when integrating AI ethics into AI-based systems for wildlife conservation.
This paper assesses leading Japanese philosophical thought since the onset of Japan’s modernity: namely, from the Meiji Restoration onwards. It argues that there are lessons of global value for AI ethics to be found in examining leading Japanese philosophers of modernity and ethics, each of whom engaged closely with Western philosophical traditions. Turning to these philosophers allows us to advance from what are broadly individualistic and Western-oriented ethical debates regarding emergent technologies that function in relation to AI, by introducing notions of community, wholeness, sincerity, and heart. With reference to AI that profiles, judges, learns from, and interacts with human emotion, this paper contends that Japan itself may internally make better use of historic indigenous ethical thought, especially as it applies to questions of data and relationships with technology; but also that, externally, Western and global ethical discussion regarding emerging technologies will find valuable insights from Japan. The paper concludes by distilling from Japanese philosophers of modernity four ethical suggestions, or spices, in relation to emerging technological contexts for Japan’s national AI policies and international fora, such as standards development and global AI ethics policymaking.
From machine learning and computer vision to robotics and natural language processing, the application of data science and artificial intelligence is expected to transform health care (Ce...
Despite the growth of research on ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, and not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, the goal of the current study is to understand how companies approach ethical issues related to AI systems. A structured analysis of the transcripts brought out many actual practices and findings, which are presented around the following main research topics: ethics and principles, privacy, explainability, and fairness. The interviewees also raised issues of accountability and governance. Finally, some recommendations are suggested, such as developing sector-specific regulations, fostering a data-driven organisational culture, considering the algorithm’s complete life cycle, developing and using a specific code of ethics, and providing specific training on ethical issues. Despite some obvious limitations, such as the type and number of companies interviewed, this work identifies real examples and direct priorities to advance the research exploring the gap between practice and principles in AI ethics, with a specific focus on Spanish companies.
Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea that AI operates within an accountability gap arising from technical features of AI as well as the social context in which it is deployed. The chapter also evaluates various proposals for closing this gap. I conclude that the role of accountability in AI ethics and governance is vital but also more limited than some suggest. Accountability’s primary job description is to verify compliance with substantive normative principles—once those principles are settled. Theories of accountability cannot ultimately tell us what substantive standards to account for, especially when norms are contested or still emerging. Nonetheless, formal mechanisms of accountability provide a way of diagnosing and discouraging egregious wrongdoing even in the absence of normative agreement. Providing accounts can also be an important first step toward the development of more comprehensive regulatory standards for AI.
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science 132(3429):741–742, 1960; Wiener in Cybernetics: or Control and Communication in the Animal and the Machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by Neural Networks and Machine Learning techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Purpose: The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case for seeing how a policy document such as a guideline on AI ethics becomes part of those transformations. Looking at how the two are interrelated will help illuminate the political and cultural dynamics of Thailand as well as how the governance of ethics itself is conceptualized. Design/methodology/approach: The author looks at the history of how the National AI Ethics Guideline came to be and interprets its content, situating the Guideline within the contemporary history of the country as well as comparing the Guideline with some of the leading existing guidelines. Findings: It is found that the Guideline represents the ambivalent and paradoxical character of Thailand’s attempt at modernization. On the one hand, there is a desire to join the ranks of the more advanced economies, but, on the other hand, there is also a strong desire to maintain its own traditional values. Thailand has not yet been successful in resolving this tension, and this lack of success shows in the way the content of the AI Ethics Guideline is presented. Practical implications: The findings of the paper could be useful for further attempts at drafting and revising AI ethics guidelines in the future. Originality/value: The paper represents the first attempt, so far as the author is aware, to critically analyze the content of the Thai AI Ethics Guideline.
Artificial intelligence plays an important role in current discussions on information and communication technologies and new modes of algorithmic governance. It is an unavoidable dimension of what social mediations and modes of reproduction of our information societies will be in the future. While several works in artificial intelligence ethics address ethical issues specific to certain areas of expertise, these ethical reflections often remain confined to narrow areas of application, without considering the global ethical issues in which they are embedded. We therefore propose to clarify the main approaches to AI ethics, their philosophical assumptions and the specific characteristics of each one of them, in order to identify the most promising approach for developing an ethical reflection on the deployment of AI in our societies: the one based on information ethics as proposed by Luciano Floridi. We will identify the most important features of that approach to highlight areas that need further investigation.
Artificial intelligence ethics requires a united approach from policymakers, AI companies, and individuals in the development, deployment, and use of these technologies. However, discussions can sometimes become fragmented because of the different levels of governance involved or because of the different values, stakeholders, and actors involved. Recently, these conflicts have become highly visible, as in the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between the organisation’s economic and business interests and the morals of their employees. This paper will examine tensions between the ethics of AI organisations and the values of their employees, by providing an exploration of the AI ethics literature in this area, and a qualitative analysis of three workshops with AI developers and practitioners. Common ethical and social tensions will be discussed, along with proposals on how to avoid or reduce these conflicts in practice. Altogether, we suggest the following steps to help reduce ethical issues within AI organisations: improved and diverse ethics education and training within businesses; internal and external ethics auditing; the establishment of AI ethics ombudsmen, AI ethics review committees and an AI ethics watchdog; as well as access to trustworthy AI ethics whistle-blower organisations.
Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should trustworthy AI be sought through explainability or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing protections for users, or toward catalyzing innovation? Specific answers are less significant than the larger demonstration that AI ethics is currently unbalanced toward theoretical principles, and will benefit from increased exposure to grounded practices and dilemmas.
Against the backdrop of a recent history of ongoing efforts to institutionalize ethics in ways that also target corporate environments, we asked ourselves: how do company representatives at the automatica 2022 trade fair in Munich respond to questions about ethics? To this end, we conducted an exploratory survey at automatica 2022, asking 22 company representatives at various booths from various industrial sectors the basic question: “Is there somebody in your company working on ethics?” Most representatives responded positively and tried to connect the term to pre-existing practices, processes, or organizational entities in their respective companies. Mostly, they located ethics as being relevant to their organization on an institutional level, a cultural level, an inter-company level, or a product level. This exploratory investigation has also shown that the ongoing debates and regulatory efforts about ethics in AI have not yet become a major selling point for company representatives at the trade fair.
As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing theoretical grounding for a concept that has been termed ‘Ethics as a Service’.
Achieving the global benefits of artificial intelligence will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and priorities. There are many barriers to achieving this at present, including mistrust between cultures, and more practical challenges of coordinating across different locations. This paper focuses particularly on barriers to cooperation between Europe and North America on the one hand and East Asia on the other, as regions which currently have an outsized impact on the development of AI ethics and governance. We suggest that there is reason to be optimistic about achieving greater cross-cultural cooperation on AI ethics and governance. We argue that misunderstandings between cultures and regions play a more important role in undermining cross-cultural trust, relative to fundamental disagreements, than is often supposed. Even where fundamental differences exist, these may not necessarily prevent productive cross-cultural cooperation, for two reasons: cooperation does not require achieving agreement on principles and standards for all areas of AI; and it is sometimes possible to reach agreement on practical issues despite disagreement on more abstract values or principles. We believe that academia has a key role to play in promoting cross-cultural cooperation on AI ethics and governance, by building greater mutual understanding, and clarifying where different forms of agreement will be both necessary and possible. We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and development of research agendas on cross-cultural topics.
The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals.
This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the ‘what’ and the ‘how’ of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective as almost all existing translational tools and methods are either too flexible or too strict. This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, then how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited, and how these limitations can potentially be overcome by providing theoretical grounding for a concept that has been termed ‘Ethics as a Service’.
Principles of fairness and solidarity in AI ethics regularly overlap, creating obscurity in practice: acting in accordance with one can appear indistinguishable from deciding according to the rules of the other. However, there exist irregular cases where the two concepts split, and so reveal their disparate meanings and uses. This paper explores two cases in AI medical ethics – one that is irregular and the other more conventional – to fully distinguish fairness and solidarity. Then the distinction is applied to the frequently cited COMPAS versus ProPublica dispute in judicial ethics. The application provides a broader model for settling contemporary and topical debates about fairness and solidarity. It also implies a deeper and disorienting truth about AI ethics principles and their justification.
What counts as a good decision depends on the domain. In diagnostic imaging, for instance, a good decision involves diagnosing cancer if and only if the patient has cancer. In clinical ethics, good...
The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive and procedural challenges, and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure; including neglected issues of ethics and justice, such as structural background injustices, within the scope of AI ethics; and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures integrate the perspective of AI justice into AI ethics, strengthening its capacity to provide comprehensive normative orientation and guidance for the development and use of AI that actually improves human lives and living together.
In recent years, ethical questions related to the development of artificial intelligence have been increasingly discussed. However, there has not been a corresponding increase in research and development on AI technology that incorporates this ethical discussion. We therefore implemented an organic and dynamic tool, built on a knowledge base of AI ethics, to promote engineers’ practice of ethical AI design and thereby realize further social value. Here, “organic” means that the tool deals with complex relationships among different AI ethics. “Dynamic” means that the tool dynamically adopts new issues and helps engineers think in their own contexts. Data in the tool’s knowledge base are standardized based on an ethical design theory that consists of an extension of the hierarchical representation of artifacts, used to understand ethical considerations from the perspective of engineering, and a description method to express design ideas. In addition, we apply the dynamic knowledge management model called knowledge liquidization and crystallization. To discuss the effects, we introduce three cases: one clarifying differences in structure among AI ethics and design ideas, one presenting the semantic distance among them, and one recommending scenario paths that allow engineers to seamlessly use AI ethics in their own contexts. We discuss the effectiveness of the tool. We also show the possibility that engineers, together with professional ethicists, can reconstruct AI ethics into a more practical form.
Enacting an AI system typically requires three iterative phases in which AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices that raise ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different types of responsibility held by AI engineers and link them to concrete suggestions on how to improve professional practices. This paper contributes to the literature on AI and ethics by focusing on the work necessary to configure AI systems, thereby offering an input to better practices and an input for societal debates.
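To make the three phases concrete, the following is a minimal sketch of how they typically appear in everyday machine-learning work. It is purely illustrative and not taken from the paper: the dataset, model, and parameter grid are hypothetical choices, and each phase marks a point where the engineer decisions the abstract discusses (which data to include, which tool to configure, which metric to tune against) actually happen.

```python
# Illustrative sketch only: hypothetical dataset, model, and parameter choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Phase 1: selection and preparation of the data.
# (Which records and features are included is itself an ethically loaded choice.)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 2: selection and configuration of algorithmic tools.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Phase 3: fine-tuning of parameters on the basis of intermediate results.
# (The metric that guides tuning encodes what counts as a "good" system.)
search = GridSearchCV(model, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```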
In recent years, there has been a surge of high-profile publications on applications of artificial intelligence systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials—particularly randomised controlled trials (RCTs)—is gaining ground. However, an issue that has so far been overlooked in the debate is that, compared with drug RCTs, AI RCTs require methodological adjustments, which entail ethical challenges. This paper sets out to develop a systematic account of the ethics of AI RCTs by focusing on the moral principles of clinical equipoise, informed consent and fairness. In this way, the objective is to animate further debate on the ethics of medical AI.
As artificial intelligence deployment grows exponentially, questions have been raised as to whether the AI ethics discourse that has developed is apt to address the currently pressing questions in the field. Building on critical theory, this article aims to expand the scope of AI ethics by arguing that, in addition to ethical principles and design, the organizational dimension plays a pivotal role in the operationalization of ethics in AI development and deployment contexts. Through the prism of critical theory, and in particular the notions of underdetermination and technical code as developed by Feenberg, the organizational dimension is related to two general challenges in operationalizing ethical principles in AI: the challenge of ethical principles placing conflicting demands on an AI design that cannot be satisfied simultaneously, for which the term ‘inter-principle tension’ is coined, and the challenge of translating an ethical principle into a technological form, constraint or demand, for which the term ‘intra-principle tension’ is coined. Rather than discussing principles, methods or metrics, the notion of technical code precipitates a discussion of the subsequent questions of value decisions, governance and procedural checks and balances. It is held that including and interrogating the organizational context in AI ethics approaches allows for a more in-depth understanding of the current challenges concerning the formalization and implementation of ethical principles, as well as of the ways in which these challenges could be met.
A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: to be feasible and effective, ethics-based auditing should take the form of a continuous and constructive process, approach ethical alignment from a system perspective, and be aligned with public policies and incentives for ethically desirable behaviour. Third, we identify and discuss the constraints associated with ethics-based auditing. Only by understanding and accounting for these constraints can ethics-based auditing facilitate ethical alignment of AI, while enabling society to reap the full economic and social benefits of automation.
The emergence of ethical concerns surrounding artificial intelligence has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an ‘embedded ethics’ approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
In the original publication of this article, Table 1 was published at low resolution. A larger version of Table 1 is published in this correction. The publisher apologizes for the error made during production.
This paper argues that AI ethics has generally neglected issues related to the science communication of AI. In particular, the article focuses on visual communication about AI and, more specifically, on the use of certain stock images in science communication about AI: those characterized by an excessive use of the color blue and recurrent subjects, such as androgynous faces, half-flesh and half-circuit brains, and variations on Michelangelo’s The Creation of Adam. In the first section, the author appeals to a “referentialist” ethics of science communication for an ethical assessment of these images. From this perspective, these images are unethical: while the ethics of science communication generally promotes virtues like modesty and humility, such images are arrogant and overconfident. In the second section, the author uses the French philosopher Jacques Rancière’s concepts of “distribution of the sensible,” “disagreement,” and “pensive image.” Rancière’s thought paves the way to a deeper critique of these images of AI. The problem with such images is not their lack of reference to the “things themselves”; it lies rather in the way they stifle any possible form of disagreement about AI. However, the author argues that stock images and other popular images of AI are not a problem per se, and can also be a resource. This depends on the real possibility of these images supporting forms of pensiveness. In the conclusion, the question is asked whether the kind of ethics or politics of AI images proposed in this article can be applied to AI ethics tout court.
It is widely acknowledged that high-level AI principles are difficult to translate into practice via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to whether and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and governance. The first and most often discussed “start-point” function quickly succumbs to the complaints outlined above. I suggest, however, that a second “cultural influence” function is where the primary value of high-level principles lies.
Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to participate knowledgeably in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance (ESG) investing; however, this paper argues that conventional ESG frameworks are inadequate for AI-intensive companies. To fully account for contemporary technology, the following categories of evaluation will be developed and featured as vital investing criteria: autonomy, dignity, privacy, and performance. With these priorities established, the larger goal is a model for humanitarian investing in AI-intensive companies that is intellectually robust, manageable for analysts, useful for portfolio managers, and credible for investors.
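As a rough illustration of the summing step described above, the sketch below shows one way indicator scores could be converted into coarse guidance. It is a hypothetical rendering, not the paper’s actual model: the indicator names are stand-ins drawn from the four categories the abstract lists, and the scores and threshold are invented.

```python
# Hypothetical sketch of converting indicator scores into investment guidance.
# Indicator names, example scores, and the threshold are invented; the paper
# itself proposes nine indicators across these four categories.

def investment_signal(scores: dict[str, float], threshold: float = 0.6) -> str:
    """Sum per-indicator human-centering scores (each in [0, 1]) and
    convert the normalized total into coarse guidance."""
    total = sum(scores.values()) / len(scores)  # normalize to [0, 1]
    return "favour" if total >= threshold else "disfavour"

example = {"autonomy": 0.7, "dignity": 0.8, "privacy": 0.4, "performance": 0.9}
print(investment_signal(example))  # -> favour (mean 0.70 >= threshold 0.60)
```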
One of the main difficulties in assessing artificial intelligence is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-Level Expert Group on AI has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI. Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions—requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is, instead, a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using it.
Many ethics initiatives have stipulated sets of principles and standards for good technology development in the AI sector. However, several AI ethics researchers have pointed out a lack of practical realization of these principles. Following that, AI ethics underwent a practical turn, but without deviating from the principled approach. This paper proposes a complement to the principled approach that is based on virtue ethics. It defines four “basic AI virtues”, namely justice, honesty, responsibility and care, all of which represent specific motivational settings that constitute the very precondition for ethical decision making in the AI field. Moreover, it defines two “second-order AI virtues”, prudence and fortitude, that bolster achieving the basic virtues by helping to overcome bounded ethicality, or hidden psychological forces that can impair ethical decision making and that have hitherto been disregarded in AI ethics. Lastly, the paper describes measures for successfully cultivating the mentioned virtues in organizations dealing with AI research and development.
AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focuses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions, such as AI procurement guidelines, AI impact assessments and AI audit frameworks. We highlight the responsibilities of funding bodies to ensure investment is channelled towards trustworthy and safe AI systems, and provide case studies of how other ethical funding principles are managed. We offer a first sight of two proposals for funding bodies to consider regarding procedures they can employ. The first proposal is the inclusion of a ‘Trustworthy AI Statement’ section in the grant application form, with an example of the associated guidance. The second proposal outlines the wider management requirements of a funding body for the ethical review and monitoring of funded projects, to ensure adherence to the ethical strategies proposed in the applicant’s Trustworthy AI Statement. The anticipated outcome of adopting such proposals would be a ‘stop and think’ section during the project planning and application procedure, requiring applicants to implement methods for the ethically aligned design of AI. In essence, it asks funders to send the message: “if you want the money, then build trustworthy AI!”