About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Consider that civil infrastructure, including energy grids and mass-transit systems, is increasingly managed by ever more intelligent machines. Ethical issues include the responsibility and blameworthiness of such systems, with implications for the engineers who must design them responsibly and for the philosophers who must interpret their impacts, both potential and actual, in order to advise those designers. For example, who or what is responsible when an accident results from an AI system error, from a design flaw, or from proper operation outside of anticipated constraints, say in a semi-autonomous automobile or an actuarial algorithm? Such issues fall under the heading of Ethics of AI, as well as under related categories, e.g. those dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values rather than those directly programmed by human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to Ethics of AI, and works focusing on them can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022, Jecker & Nakazawa 2022, Mao & Shi-Kupfer 2023, Dietrich et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Contents
2966 found
Material to categorize
  1. Über Möglichkeiten und Grenzen der Ethik der Künstlichen Intelligenz. Eine Bestandsaufnahme am Beispiel von Sprachverarbeitungssystemen.Elisa Orrù - 2021 - Positionen 35:50-64.
    On the possibilities and limits of the ethics of artificial intelligence. An overview of current developments and debates with a focus on language processing systems. -/- Driven by the success of artificial intelligence (AI), the ethics of AI is currently enjoying a boom. Advice from ethics experts is increasingly being sought by policymakers and industry to proactively identify the risks associated with new AI technologies and to propose solutions. But how realistic are the expectations placed on AI ethics to make (...)
  2. Towards a Unified List of Ethical Principles for Emerging Technologies. An Analysis of Four European Reports on Molecular Biotechnology and Artificial Intelligence.Elisa Orrù & Joachim Boldt - 2022 - Sustainable Futures 4:1-14.
    Artificial intelligence (AI) and molecular biotechnologies (MB) are among the most promising, but also ethically hotly debated emerging technologies. In both fields, several ethics reports, which invoke lists of ethics principles, have been put forward. These reports and the principles lists are technology specific. This article aims to contribute to the ongoing debate on ethics of emerging technologies by comparatively analysing four European ethics reports from the two technology fields. Adopting a qualitative and in-depth approach, the article highlights how ethics (...)
  3. Materialien zu “Ethische Fragen der Künstlichen Intelligenz” (Interview, Paper).Vincent C. Müller (ed.) - 2024 - Göttingen: Philovernetzt.
    1. I am, in fact, a person – the moral status of AI 2. Superintelligence – the end or the salvation of humanity? 3. Discrimination – AI as cause or as solution? -/- Authors: Dominik Balg, Larissa Bolte, Anne Burkard, Jan Constantin, Leonard Dung, Jürn Gottschalk, Kerstin Gregor-Gehrmann, Isabelle Guntermann, and Katharina Schulz.
  4. (1 other version)Biomimicry and AI-Enabled Automation in Agriculture. Conceptual Engineering for Responsible Innovation.Marco Innocenti - 2025 - Journal of Agricultural and Environmental Ethics 38 (2):1-17.
    This paper aims to engineer the concept of biomimetic design for its application in agricultural technology as an innovation strategy to sustain non-human species’ adaptation to today’s rapid environmental changes. By questioning the alleged intrinsic morality of biomimicry, a formulation of it is sought that goes beyond the sharp distinction between nature as inspiration and the human field of application of biomimetic technologies. After reviewing the main literature on Responsible Innovation, we support Vincent Blok’s “eco-centric” perspective on biomimicry, which considers (...)
  5. Más allá del algoritmo: oportunidades, retos y ética de la Inteligencia Artificial.Juan David Gutiérrez & Rubén Francisco Manrique (eds.) - forthcoming - Bogotá: Ediciones Uniandes.
  6. Regulating the Spread of Online Misinformation.Étienne Brown - 2021 - In Michael Hannon & Jeroen de Ridder (eds.), The Routledge Handbook of Political Epistemology. New York: Routledge. pp. 214-225.
    Attempts to influence people’s beliefs through misinformation have a long history. In the age of social media, however, there is a growing fear that the circulation of false or misleading claims will be more impactful than ever now that sophisticated technological means are available to those who desire to spread them. Should democratic societies worry about misinformation? If so, is it possible and desirable for them to control its spread by regulating it? This chapter offers an answer to these questions. (...)
  7. Rage Against the Authority Machines: How to Design Artificial Moral Advisors for Moral Enhancement.Ethan Landes, Cristina Voinea & Radu Uszkai - forthcoming - AI and Society:1-12.
    This paper aims to clear up the epistemology of learning morality from Artificial Moral Advisors (AMAs). We start with a brief consideration of what counts as moral enhancement and consider the risk of deskilling raised by machines that offer moral advice. We then shift focus to the epistemology of moral advice and show when and under what conditions moral advice can lead to enhancement. We argue that people’s motivational dispositions are enhanced by inspiring people to act morally, instead of merely (...)
  8. As máquinas podem cuidar?E. M. Carvalho - 2024 - O Que Nos Faz Pensar 31 (53):6-24.
    Applications and devices of artificial intelligence are increasingly common in the healthcare field. Robots fulfilling some caregiving functions are not a distant future. In this scenario, we must ask ourselves if it is possible for machines to care to the extent of completely replacing human care and if such replacement, if possible, is desirable. In this paper, I argue that caregiving requires know-how permeated by affectivity that is far from being achieved by currently available machines. I also maintain that the (...)
  9. Virtues for AI.Jakob Ohlhorst - manuscript
    Virtue theory is a natural approach towards the design of artificially intelligent systems, given that the design of artificial intelligence essentially aims at designing agents with excellent dispositions. This has led to a lively research programme to develop artificial virtues. However, this research programme has until now had a narrow focus on moral virtues in an Aristotelian mould. While Aristotelian moral virtue has played a foundational role for the field, it unduly constrains the possibilities of virtue theory for artificial intelligence. (...)
  10. AI and Democratic Equality: How Surveillance Capitalism and Computational Propaganda Threaten Democracy.Ashton Black - 2024 - In Bernhard Steffen (ed.), Bridging the Gap Between AI and Reality. Springer Nature. pp. 333-347.
    In this paper, I argue that surveillance capitalism and computational propaganda can undermine democratic equality. First, I argue that two types of resources are relevant for democratic equality: 1) free time, which entails time that is free from systemic surveillance, and 2) epistemic resources. In order for everyone in a democratic system to be equally capable of full political participation, it’s a minimum requirement that these two resources are distributed fairly. But AI that’s used for surveillance capitalism can undermine the (...)
  11. AI-enhanced nudging: A Risk-factors Analysis.Marianna Bergamaschi Ganapini & Enrico Panai - forthcoming - American Philosophical Quarterly.
    Artificial intelligence technologies are used to provide online personalized recommendations, suggestions, or prompts that can influence people's decision-making processes. We call this AI-enhanced nudging (or AI-nudging for short). Contrary to received wisdom, we claim that AI-enhanced nudging is not necessarily morally problematic. To start assessing the risks and moral import of AI-nudging, we believe that we should adopt a risk-factor analysis: we show that both the level of risk and possibly the moral value of adopting AI-nudging ultimately depend on (...)
  12. The case for human–AI interaction as system 0 thinking.Marianna Bergamaschi Ganapini - 2024 - Nature Human Behaviour 8.
    The rapid integration of artificial intelligence (AI) tools into our daily lives is reshaping how we think and make decisions. We propose that data-driven AI systems, by transcending individual artefacts and interfacing with a dynamic, multiartefact ecosystem, constitute a distinct psychological system. We call this ‘system 0’ and position it alongside Kahneman’s system 1 (fast, intuitive thinking) and system 2 (slow, analytical thinking). System 0 represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data (...)
  13. Emotional Cues and Misplaced Trust in Artificial Agents.Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to engage (...)
  14. Publishing Robots.Nick Hadsell, Rich Eva & Kyle Huitt - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    If AI can write an excellent philosophy paper, we argue that philosophy journals should strongly consider publishing that paper. After all, AI stands to make significant contributions to ongoing projects in some subfields, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We also propose the Sponsorship Model of AI journal refereeing to mitigate any costs associated with our view. This model requires (...)
  15. Review of Wild Wise Weird: The Kingfisher Story collection.Manh-Tung Ho - manuscript
    In this essay, I review one of my beloved fictional titles, Wild Wise Weird: The Kingfisher Story collection. The understated humor and satire of its storytelling are sure to bring readers smiles and, better yet, moments of quiet reflection, a much under-appreciated remedy in a world driven almost insane by the abundance of information co-created with AI technologies. I hope to do the book justice.
  16. Speciesism in Natural Language Processing Research.Masashi Takeshita & Rafal Rzepka - forthcoming - AI and Ethics.
    Natural Language Processing (NLP) research on AI Safety and social bias in AI has focused on safety for humans and social bias against human minorities. However, some AI ethicists have argued that the moral significance of nonhuman animals has been ignored in AI research. Therefore, the purpose of this study is to investigate whether there is speciesism, i.e., discrimination against nonhuman animals, in NLP research. First, we explain why nonhuman animals are relevant in NLP research. Next, we survey the findings (...)
  17. Multimodal Artificial Intelligence in Medicine.Joshua August Skorburg - forthcoming - Kidney360.
    Traditional medical Artificial Intelligence models, approved for clinical use, restrict themselves to single-modal data, e.g. images only, limiting their applicability in the complex, multimodal environment of medical diagnosis and treatment. Multimodal Transformer Models in healthcare can effectively process and interpret diverse data forms such as text, images, and structured data. They have demonstrated impressive performance on standard benchmarks like USMLE question banks and continue to improve with scale. However, the adoption of these advanced AI models is not without challenges. While (...)
  18. Artificial Intelligence, Creativity, and the Precarity of Human Connection.Lindsay Brainard - forthcoming - Oxford Intersections: Ai in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  19. Automated Influence and Value Collapse.Dylan J. White - 2024 - American Philosophical Quarterly 61 (4):369-386.
    Automated influence is one of the most pervasive applications of artificial intelligence in our day-to-day lives, yet a thoroughgoing account of its associated individual and societal harms is lacking. By far the most widespread, compelling, and intuitive account of the harms associated with automated influence follows what I call the control argument. This argument suggests that users are persuaded, manipulated, and influenced by automated influence in a way that they have little or no control over. Based on evidence about the (...)
  20. Memory and Mimesis in Our Relationships with Posthumous Avatars.Michael Cholbi - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    Critics have raised many moral and legal concerns about posthumous digital avatars. Here my focus instead falls on whether they are likely to enable the bonds with the dead that users apparently yearn for. I conclude that though posthumous avatars can have short-term therapeutic benefits in replicating “habits of intimacy” with the dead, users’ expectations for sustaining long-term bonds with the deceased via posthumous avatars are unlikely to be fulfilled. Posthumous avatars are unlikely to foster the construction of valued memories (...)
  21. Sự gia tăng của AI tạo sinh và những rủi ro tiềm ẩn cho con người.Hoang Tung-Duong, Dang Tuan-Dung & Manh-Tung Ho - 2024 - Tạp Chí Thông Tin Và Truyền Thông 9 (9/2024):66-73.
    The emergence of generative AI tools built on large language models (LLMs) has given people a new instrument, especially in fields such as education and journalism, but these tools also bring many problems. In this article, the authors identify newly emerging shortcomings, as well as pre-existing problems that risk being pushed even further (...)
  22. Artificial agents: responsibility & control gaps.Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  23. Algorithms Advise, Humans Decide: the Evidential Role of the Patient Preference Predictor.Nicholas Makins - forthcoming - Journal of Medical Ethics.
    An AI-based “patient preference predictor” (PPP) is a proposed method for guiding healthcare decisions for patients who lack decision-making capacity. The proposal is to use correlations between sociodemographic data and known healthcare preferences to construct a model that predicts the unknown preferences of a particular patient. In this paper, I highlight a distinction that has been largely overlooked so far in debates about the PPP–that between algorithmic prediction and decision-making–and argue that much of the recent philosophical disagreement stems from this (...)
  24. Will AI and Humanity Go to War?Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, (...)
  25. The Harm of Social Media to Public Reason.Paige Benton & Michael W. Schmidt - forthcoming - Topoi.
    It is commonly agreed that so-called echo chambers and epistemic bubbles, associated with social media, are detrimental to liberal democracies. Drawing on John Rawls’s political liberalism, we offer a novel explanation of why social media platforms amplifying echo chambers and epistemic bubbles are likely contributing to the violation of the democratic norms connected to the ideal of public reason. These norms are clarified with reference to the method of (full) reflective equilibrium, which we argue should be cultivated as a civic (...)
  26. The Ethics of Privacy and Surveillance.Carissa Veliz - 2024 - Oxford: Oxford University Press.
    Privacy matters because it shields us from possible abuses of power. Human beings need privacy just as much as they need community. Our need for socialization brings with it risks and burdens which in turn give rise to the need for spaces and time away from others. To impose surveillance upon someone is an act of domination. The foundations of democracy quiver under surveillance. -/- This book is intended to contribute to a better understanding of privacy from a philosophical point (...)
  27. Implicaciones de la tecnosecuritización en las relaciones internacionales contemporáneas.Felix D. Andueza Araque - 2022 - Dissertation, Pontificia Universidad Católica Del Ecuador
    In recent years there has been a transition within society towards governments much more permeated by technology. One of the areas where this technological increase has been seen is in security processes. Currently, under the justification of a safer world, perpetual surveillance is given to the population through facial recognition cameras and data collection. For this reason, this research work seeks to explain how securitisation processes have become more complex to give way to much broader subjective prevention of security problems. (...)
  28. Should You Trust Your Voice Assistant? It’s Complicated, but No.Filippos Stamatiou & Xenofon Karakonstantis - 2024 - In Florian Westphal, Einav Peretz-Andersson, Maria Riveiro, Kerstin Bach & Fredrik Heintz (eds.), 14th Scandinavian Conference on Artificial Intelligence SCAI 2024. Linköping, Sweden: Linköping Electronic Conference Proceedings 208.
  29. Deontologische Ansätze.Micha H. Werner - 2024 - In Petra Grimm, Kai Erik Trost & Oliver Zöllner (eds.), Digitale Ethik. Baden-Baden: Nomos | Verlag Karl Alber. pp. 25-36.
  30. 50 questions on Active Assisted Living technologies. Global edition.Francisco Florez-Revuelta, Alin Ake-Kob, Pau Climent-Perez, Paulo Coelho, Liane Colonna, Laila Dahabiyeh, Carina Dantas, Esra Dogru-Huzmeli, Hazim Kemal Ekenel, Aleksandar Jevremovic, Nina Hosseini-Kivanani, Aysegul Ilgaz, Mladjan Jovanovic, Andrzej Klimczuk, Maksymilian M. Kuźmicz, Petre Lameski, Ferlanda Luna, Natália Machado, Tamara Mujirishvili, Zada Pajalic, Galidiya Petrova, Nathalie G. S. Puaschitz, Maria Jose Santofimia, Agusti Solanas, Wilhelmina van Staalduinen & Ziya Ata Yazici - 2024 - Alicante: University of Alicante.
    This booklet on Active Assisted Living (AAL) technologies has been created as part of the GoodBrother COST Action, which has run from 2020 to 2024. COST Actions are European research programs that promote collaboration across borders, uniting researchers, professionals, and institutions to address key societal challenges. GoodBrother focused on ethical and privacy concerns surrounding video and audio monitoring in care settings. The aim was to ensure that while AAL technologies help older adults and vulnerable individuals, their privacy and data protection (...)
  31. “Der Mann mit Eigenschaften”, review of Joseph LeDoux: Im Netz der Persönlichkeit: Wie unser Selbst entsteht [Synaptic Self]. [REVIEW] Vincent C. Müller - 2004 - Süddeutsche Zeitung 2004 (14.01.2004):14.
    Review of Joseph LeDoux: Das Netz der Persönlichkeit. Wie unser Selbst entsteht. Walter Verlag, Düsseldorf 2003. 510 pages (with illustrations), 39.90 euros. - One person is distrustful, the next gullible; this one is warm-hearted, that one callous. Many people have character, some even personality. How does that come about? In his new book, the neuroscientist Joseph LeDoux investigates how our self arises. This very readable and pleasantly translated work gives a vivid and detailed account of how, in our brain, the characteristics of an individual (...)
Algorithmic Fairness
  1. Trustworthy use of artificial intelligence: Priorities from a philosophical, ethical, legal, and technological viewpoint as a basis for certification of artificial intelligence.Jan Voosholz, Maximilian Poretschkin, Frauke Rostalski, Armin B. Cremers, Alex Englander, Markus Gabriel, Hecker Dirk, Michael Mock, Julia Rosenzweig, Joachim Sicking, Julia Volmer, Angelika Voss & Stefan Wrobel - 2019 - Fraunhofer Institute for Intelligent Analysis and Information Systems Iais.
    This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges that can be tackled only through interdisciplinary dialogue between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for trustworthy use of artificial intelligence. They comprise fairness, transparency, autonomy (...)
  2. Vertrauenswürdiger Einsatz von Künstlicher Intelligenz.Jan Voosholz, Maximilian Poretschkin, Frauke Rostalski, Armin B. Cremers, Alex Englander, Markus Gabriel, Dirk Hecker, Michael Mock, Julia Rosenzweig, Joachim Sicking, Julia Volmer, Angelika Voss & Stefan Wrobel - 2019 - Fraunhofer-Institut Für Intelligente Analyse- Und Informationssysteme Iais.
    This publication serves as a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence, with disruptive and lasting consequences for the economy, society, and everyday life, it makes clear that the resulting challenges can only be met through interdisciplinary dialogue between computer science, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific fields of action for the trustworthy use of artificial intelligence: they comprise fairness, transparency, autonomy and control, data protection, as well as security and (...)
  3. Spanning in and Spacing out? A Reply to Eva.Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-4.
    We reply to Eva's comment on our "New Possibilities for Fair Algorithms," comparing and contrasting our Spanning criterion with his suggested Spacing criterion.
  4. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development.Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement in (...)
  5. Generative AI and the Future of Democratic Citizenship.Paul Formosa, Bhanuraj Kashyap & Siavosh Sahebi - 2024 - Digital Government: Research and Practice 2691 (2024/05-ART).
    Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background (...)
  6. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - forthcoming - Biolaw Journal.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  7. Fair equality of chances for prediction-based decisions.Michele Loi, Anders Herlitz & Hoda Heidari - 2024 - Economics and Philosophy 40 (3):557-580.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
  8. What’s Impossible about Algorithmic Fairness?Otto Sahlgren - 2024 - Philosophy and Technology 37 (4):1-23.
    The now well-known impossibility results of algorithmic fairness demonstrate that an error-prone predictive model cannot simultaneously satisfy two plausible conditions for group fairness apart from exceptional circumstances where groups exhibit equal base rates. The results sparked, and continue to shape, lively debates surrounding algorithmic fairness conditions and the very possibility of building fair predictive models. This article, first, highlights three underlying points of disagreement in these debates, which have led to diverging assessments of the feasibility of fairness in prediction-based decision-making. (...)
  9. Making a Murderer: How Risk Assessment Tools May Produce Rather Than Predict Criminal Behavior.Donal Khosrowi & Philippe van Basshuysen - 2024 - American Philosophical Quarterly 61 (4):309-325.
    Algorithmic risk assessment tools, such as COMPAS, are increasingly used in criminal justice systems to predict the risk of defendants to reoffend in the future. This paper argues that these tools may not only predict recidivism, but may themselves causally induce recidivism through self-fulfilling predictions. We argue that such “performative” effects can yield severe harms both to individuals and to society at large, which raise epistemic-ethical responsibilities on the part of developers and users of risk assessment tools. To meet these (...)
  10. The Ideals Program in Algorithmic Fairness.Rush T. Stewart - forthcoming - AI and Society.
    I consider statistical criteria of algorithmic fairness from the perspective of the _ideals_ of fairness to which these criteria are committed. I distinguish and describe three theoretical roles such ideals might play. The usefulness of this program is illustrated by taking Base Rate Tracking and its ratio variant as a case study. I identify and compare the ideals of these two criteria, then consider them in each of the aforementioned three roles for ideals. This ideals program may present a way (...)
  11. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  12. New Possibilities for Fair Algorithms.Michael Nielsen & Rush Stewart - 2024 - Philosophy and Technology 37 (4):1-17.
    We introduce a fairness criterion that we call Spanning. Spanning i) is implied by Calibration, ii) retains interesting properties of Calibration that some other ways of relaxing that criterion do not, and iii) unlike Calibration and other prominent ways of weakening it, is consistent with Equalized Odds outside of trivial cases.
  13. (1 other version)If the Difference Principle Won’t Make a Real Difference in Algorithmic Fairness, What Will? [REVIEW]Reuben Binns - manuscript
    In ‘Rawlsian algorithmic fairness and a missing aggregation property of the difference Principle’, the authors argue that there is a false assumption in algorithmic fairness interventions inspired by John Rawls’ theory of justice. They argue that applying the difference principle at the level of a local algorithmic decision-making context (what they term a ‘constituent situation’), is neither necessary nor sufficient for the difference principle to be upheld at the aggregate level of society at large. I find these arguments compelling. They (...)
  14. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?Joshua Hatherley - forthcoming - Journal of Medical Ethics.
    It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this ‘the disclosure thesis.’ Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue (...)
  15. Multiplicative Metric Fairness Under Composition.Milan Mossé - 2023 - Symposium on Foundations of Responsible Computing 4.
    Dwork, Hardt, Pitassi, Reingold, & Zemel [6] introduced two notions of fairness, each of which is meant to formalize the notion of similar treatment for similarly qualified individuals. The first of these notions, which we call additive metric fairness, has received much attention in subsequent work studying the fairness of a system composed of classifiers which are fair when considered in isolation [3, 4, 7, 8, 12] and in work studying the relationship between fair treatment of individuals and fair treatment (...)
  16. Artificial Intelligence in Higher Education in South Africa: Some Ethical Considerations.Tanya de Villiers-Botha - 2024 - Kagisano 15:165-188.
    There are calls from various sectors, including the popular press, industry, and academia, to incorporate artificial intelligence (AI)-based technologies in general, and large language models (LLMs) (such as ChatGPT and Gemini) in particular, into various spheres of the South African higher education sector. Nonetheless, the implementation of such technologies is not without ethical risks, notably those related to bias, unfairness, privacy violations, misinformation, lack of transparency, and threats to autonomy. This paper gives an overview of the more pertinent ethical concerns (...)
  17. Do AI systems Allow Online Advertisers to Control Others?Gabriel De Marco & T. Douglas - 2024 - In David Edmonds (ed.), AI Morality. Oxford: Oxford University Press USA.
  18. Equity, autonomy, and the ethical risks and opportunities of generalist medical AI.Reuben Sass - 2023 - AI and Ethics:1-11.
    This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of (...)
  19. (1 other version)Towards a Feminist Metaethics of AI.Anastasia Siapka - 2022 - Aies '22: Proceedings of the 2022 Aaai/Acm Conference on Ai, Ethics, and Society:665–674.
    The proliferation of Artificial Intelligence (AI) has sparked an overwhelming number of AI ethics guidelines, boards and codes of conduct. These outputs primarily analyse competing theories, principles and values for AI development and deployment. However, as a series of recent problematic incidents about AI ethics/ethicists demonstrate, this orientation is insufficient. Before proceeding to evaluate other professions, AI ethicists should critically evaluate their own; yet, such an evaluation should be more explicitly and systematically undertaken in the literature. I argue that these (...)