913 entries found (showing 1–50)
  1. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
  2. The Debate on the Ethics of AI in Health Care: A Reconstruction and Critical Review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
  3. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies From the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem, dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  4. The Moral Impermissibility of Creating Artificial Intelligence.Matt Schuler - manuscript
  5. AI Alignment Problem: “Human Values” Don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
  6. Back to the Future: Curing Past Sufferings and S-Risks Via Indexical Uncertainty.Alexey Turchin - manuscript
    The long unbearable sufferings of the past and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks) are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  7. Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  8. Autonomous Reboot: The Challenges of Artificial Moral Agency and the Ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  9. Harmonizing Law and Innovations in Nanomedicine, Artificial Intelligence (AI) and Biomedical Robotics: A Central Asian Perspective.Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    Recent progress in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies impact the functionality of “law in action,” which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers on the harmonization of AI, nanomedicines and robotics in the Central Asian legal system.
  10. Anthropomorphism and the Impact on the Perception and Implementation of AI Systems.Marie Oldfield -
    Anthropomorphism has long been used as a way for humans to make sense of their surroundings. By converting abstract concepts into objects or concepts that we can relate to, we discover a common language with which we can communicate, i.e. one "by which one thing is described in terms of another". Anthropomorphism is grounded in multiple fields, such as sociology, psychology, neurology and philosophy. This technique has been seen across history in such fields as religion, fables and folk tales, where (...)
  11. Robot Ethics 2.0. From Autonomous Cars to Artificial Intelligence—Edited by Patrick Lin, Keith Abney, Ryan Jenkins. New York: Oxford University Press, 2017. Pp xiii + 421. [REVIEW]Agnė Alijauskaitė - forthcoming - Erkenntnis:1-4.
  12. Varieties of Transparency: Exploring Agency Within AI Systems.Gloria Andrada, Robert William Clowes & Paul Smart - forthcoming - AI and Society:1-11.
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater transparency from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires seeing through the artefact or device, widespread calls for transparency imply seeing into different aspects of AI systems. These two notions are in apparent tension with (...)
  13. Virtuous Vs. Utilitarian Artificial Moral Agents.William A. Bauer - forthcoming - AI and Society:1-9.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on (...)
  14. AI Ethics: how can information ethics provide a framework to avoid usual conceptual pitfalls? An Overview.Frédérick Bruneault & Andréane Sabourin Laflamme - forthcoming - AI and Society:1-10.
    Artificial intelligence plays an important role in current discussions on information and communication technologies and new modes of algorithmic governance. It is an unavoidable dimension of what social mediations and modes of reproduction of our information societies will be in the future. While several works in artificial intelligence ethics address ethical issues specific to certain areas of expertise, these ethical reflections often remain confined to narrow areas of application, without considering the global ethical issues in which they are embedded. We, (...)
  15. The Ethics of Digital Well-Being: A Multidisciplinary Perspective.Christopher Burr & Luciano Floridi - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective. Springer.
    This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they (...)
  16. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  17. Just Machines.Clinton Castro - forthcoming - Public Affairs Quarterly.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg (...)
  18. Anti-Natalism and the Creation of Artificial Minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  19. If Robots Are People, Can They Be Made for Profit? Commercial Implications of Robot Personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  20. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision Making Systems.Kathleen A. Creel & Deborah Hellman - forthcoming - Canadian Journal of Philosophy:1-18.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  21. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
  22. Two Arguments Against Human-Friendly AI.Ken Daley - forthcoming - AI and Ethics.
    The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI) which is an as-of-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at (...)
  23. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
  24. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  25. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  26. Sexuality.John Danaher - forthcoming - In Markus Dubber, Frank Pasquale & Sunit Das (eds.), Oxford Handbook of the Ethics of Artificial Intelligence. Oxford: Oxford University Press.
    Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes (...)
  27. Artificial Intelligence and Legal Disruption: A New Model for Analysis.John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
  28. Automation, Work and the Achievement Gap.John Danaher & Sven Nyholm - forthcoming - AI and Ethics.
    Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability (...)
  29. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - forthcoming - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: Oxford University Press.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the (...)
  30. Five Ethical Challenges for Data-Driven Policing.Jeremy Davis, Duncan Purves, Juan Gilbert & Schuyler Sturm - forthcoming - AI and Ethics.
    This paper synthesizes scholarship from several academic disciplines to identify and analyze five major ethical challenges facing data-driven policing. Because the term “data-driven policing” encompasses a broad swath of technologies, we first outline several data-driven policing initiatives currently in use in the United States. We then lay out the five ethical challenges. Certain of these challenges have received considerable attention already, while others have been largely overlooked. In many cases, the challenges have been articulated in the context of related discussions, (...)
  31. The Global Governance of Artificial Intelligence: Some Normative Concerns.Eva Erman & Markus Furendal - forthcoming - Moral Philosophy and Politics.
    The creation of increasingly complex artificial intelligence (AI) systems raises urgent questions about their ethical and social impact on society. Since this impact ultimately depends on political decisions about normative issues, political philosophers can make valuable contributions by addressing such questions. Currently, AI development and application are to a large extent regulated through non-binding ethics guidelines penned by transnational entities. Assuming that the global governance of AI should be at least minimally democratic and fair, this paper sets out three desiderata (...)
  32. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing.Blake Hereth & Nicholas Evans - forthcoming - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  33. Against “Democratizing AI”.Johannes Himmelreich - forthcoming - AI and Society:1-14.
    This paper argues against the call to democratize artificial intelligence. Several authors demand to reap purported benefits that rest in direct and broad participation: In the governance of AI, more people should be more involved in more decisions about AI—from development and design to deployment. This paper opposes this call. The paper presents five objections against broadening and deepening public participation in the governance of AI. The paper begins by reviewing the literature and carving out a set of claims that (...)
  34. Ethics of Artificial Intelligence in Brain and Mental Health.Marcello Ienca & Fabrice Jotterand (eds.) - forthcoming
  35. Rule by Automation: How Automated Decision Systems Promote Freedom and Equality.Athmeya Jayaram & Jacob Sparks - forthcoming - Moral Philosophy and Politics.
    Using automated systems to avoid the need for human discretion in government contexts – a scenario we call ‘rule by automation’ – can help us achieve the ideal of a free and equal society. Drawing on relational theories of freedom and equality, we explain how rule by automation is a more complete realization of the rule of law and why thinkers in these traditions have strong reasons to support it. Relational theories are based on the absence of human domination and (...)
  36. A Dilemma for Moral Deliberation in AI in Advance.Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
    Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decision-making to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision-making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are (...)
  37. Quantum of Wisdom.Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Toronto, ON, Canada: University of Toronto Press. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019), have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  38. Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research of its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not been sufficiently addressed yet. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
  39. (Online) Manipulation: Sometimes Hidden, Always Careless.Michael Klenk - forthcoming - Review of Social Economy.
    Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct (...)
  40. The Concept of Accountability in AI Ethics and Governance.Theodore M. Lechterman - forthcoming - In Justin Bullock, Y. C. Chen, Johannes Himmelreich, V. Hudson, M. Korinek, M. Young & B. Zhang (eds.), The Oxford Handbook of AI Governance. Oxford: Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
  41. Safety Requirements Vs. Crashing Ethically: What Matters Most for Policies on Autonomous Vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  42. Believing in Black Boxes: Must Machine Learning in Healthcare Be Explainable to Be Evidence-Based?Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provide the context for a focused narrative review of arguments presented in favour of and opposition to explainability in MLHC. Results: We find that concerns regarding explainability are (...)
  43. African Reasons Why AI Should Not Maximize Utility (Repr.).Thaddeus Metz - forthcoming - In Aribiah Attoe, Samuel Segun, Victor Nweke & John-Bosco Umezurike (eds.), Conversations on African Philosophy of Mind, Consciousness and AI. Springer.
    Reprint of a chapter first appearing in African Values, Ethics, and Technology: Questions, Issues, and Approaches (2021).
  44. History of Digital Ethics.Vincent C. Müller - forthcoming - In Oxford handbook of digital ethics. Oxford University Press. pp. 1-18.
    Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about and what were the developments that led to its existence? What are the traditions, the concerns, the technological and social developments that pushed digital ethics? How did ethical issues change with digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: ‘pre-modernity’ prior to (...)
  45. Automation, Basic Income and Merit.Katharina Nieswandt - forthcoming - In Keith Breen & Jean-Philippe Deranty (eds.), Whither Work? The Politics and Ethics of Contemporary Work.
    A recent wave of academic and popular publications say that utopia is within reach: Automation will progress to such an extent and include so many high-skill tasks that much human work will soon become superfluous. The gains from this highly automated economy, authors suggest, could be used to fund a universal basic income (UBI). Today's employees would live off the robots' products and spend their days on intrinsically valuable pursuits. I argue that this prediction is unlikely to come true. Historical (...)
  46. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - forthcoming - In Oxford Handbook of Digital Ethics. Oxford: Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
  47. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice.Duncan Purves & Jeremy Davis - forthcoming - Public Affairs Quarterly.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust in grounding (...)
  48. Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  49. Mapping the Stony Road Toward Trustworthy AI: Expectations, Problems, Conundrums.Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  50. Deepfakes, Deep Harms.Regina Rini & Leah Cohen - forthcoming - Journal of Ethics and Social Philosophy.
    Deepfakes are algorithmically modified video and audio recordings that project one person’s appearance on to that of another, creating an apparent recording of an event that never took place. Many scholars and journalists have begun attending to the political risks of deepfake deception. Here we investigate other ways in which deepfakes have the potential to cause deeper harms than have been appreciated. First, we consider a form of objectification, virtual domination, that occurs when deepfaked ‘frankenporn’ digitally fuses the parts of (...)