About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as machines engineered to perform tasks traditionally requiring biological intelligence become ubiquitous. Consider that civil infrastructure, including energy grids and mass-transit systems, is increasingly managed by increasingly intelligent machines. Ethical issues include the responsibility and blameworthiness of such systems, with implications for the engineers who must design them responsibly and for the philosophers who must interpret their impacts, both potential and actual, in order to advise ethical designers. For example, who or what is responsible when an accident results from an AI system error, from a design flaw, or from proper operation outside of anticipated constraints, as in a semi-autonomous automobile or an actuarial algorithm? Such issues fall under the heading of Ethics of AI, as well as under related categories, e.g. those dedicated to autonomous vehicles, algorithmic fairness, or artificial system safety. Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values rather than as directly programmed by human designers. These concerns call into question the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to Ethics of AI, and works focusing on them can be found here.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. 2020, Tasioulas 2019, Müller 2021
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020, Gordon et al. 2021, Müller 2022, Jecker & Nakazawa 2022, Mao & Shi-Kupfer 2023, Dietrich et al. 2021; see also https://plato.stanford.edu/entries/ethics-ai/
Contents
2516 found
Material to categorize
  1. Artificial Psychology.Jay Friedenberg - 2008 - Psychology Press.
    What does it mean to be human? Philosophers and theologians have been wrestling with this question for centuries. Recent advances in cognition, neuroscience, artificial intelligence and robotics have yielded insights that bring us even closer to an answer. There are now computer programs that can accurately recognize faces, engage in conversation, and even compose music. There are also robots that can walk up a flight of stairs, work cooperatively with each other and express emotion. If machines can do everything we (...)
Algorithmic Fairness
  1. Criteria for Assessing AI-Based Sentencing Algorithms: A Reply to Ryberg.Thomas Douglas - 2024 - Philosophy and Technology 37 (1):1-4.
  2. An Impossibility Theorem for Base Rate Tracking and Equalised Odds.Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
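The impossibility result that Stewart et al. build on (a calibrated score with unequal base rates cannot also equalise error rates, except in trivial cases) can be illustrated with a small numeric sketch. The two-bucket score, the threshold, and the base rates below are hypothetical numbers chosen for illustration, not taken from the paper.

```python
# Hypothetical two-group population: a two-bucket score {lo, hi} that is
# perfectly calibrated within each group, with different base rates per group.
# Calibration pins down how many people get the high score, which in turn
# forces the groups' false/true positive rates apart.

def high_score_share(base_rate, lo=0.2, hi=0.8):
    """Share of the group assigned score hi so that the score is calibrated:
    base_rate = hi * q + lo * (1 - q), solved for q."""
    return (base_rate - lo) / (hi - lo)

def error_rates(base_rate, lo=0.2, hi=0.8):
    """FPR and TPR when we predict 'positive' exactly for score == hi."""
    q = high_score_share(base_rate, lo, hi)   # P(score = hi)
    fpr = q * (1 - hi) / (1 - base_rate)      # P(score = hi | Y = 0)
    tpr = q * hi / base_rate                  # P(score = hi | Y = 1)
    return fpr, tpr

# Two groups with different (illustrative) base rates.
fpr_a, tpr_a = error_rates(0.50)  # group A: base rate 50%
fpr_b, tpr_b = error_rates(0.26)  # group B: base rate 26%

print(f"Group A: FPR={fpr_a:.3f}, TPR={tpr_a:.3f}")  # FPR=0.200, TPR=0.800
print(f"Group B: FPR={fpr_b:.3f}, TPR={tpr_b:.3f}")  # FPR=0.027, TPR=0.308
```

Both groups see a perfectly calibrated score, yet neither false positive rates nor true positive rates match across groups, so Equalised Odds fails; the point of the abstract above is that the same tension survives when Calibration is weakened to Base Rate Tracking.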
  3. Big Data as Tracking Technology and Problems of the Group and its Members.Haleh Asgarinia - 2023 - In Kevin Macnish & Adam Henschke (eds.), The Ethics of Surveillance in Times of Emergency. Oxford University Press. pp. 60-75.
    Digital data help data scientists and epidemiologists track and predict outbreaks of disease. Mobile phone GPS data, social media data, or other forms of information updates such as the progress of epidemics are used by epidemiologists to recognize disease spread among specific groups of people. Targeting groups as potential carriers of a disease, rather than addressing individuals as patients, risks causing harm to groups. While there are rules and obligations at the level of the individual, we have to reach a (...)
  4. Bare statistical evidence and the legitimacy of software-based judicial decisions.Eva Schmidt, Maximilian Köhl & Andreas Sesing-Wagenpfeil - 2023 - Synthese 201:1-27.
  5. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
  6. Algorithmic Transparency and Manipulation.Michael Klenk - 2023 - Philosophy and Technology 36 (4):1-20.
    A series of recent papers raises worries about the manipulative potential of algorithmic transparency (to wit, making visible the factors that influence an algorithm’s output). But while the concern is apt and relevant, it is based on a fraught understanding of manipulation. Therefore, this paper draws attention to the ‘indifference view’ of manipulation, which explains better than the ‘vulnerability view’ why algorithmic transparency has manipulative potential. The paper also raises pertinent research questions for future studies of manipulation in the context (...)
    2 citations
  7. An Epistemic Lens on Algorithmic Fairness.Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches to algorithmic (...)
  8. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. New York: Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered by conflations (...)
  9. Markets, market algorithms, and algorithmic bias.Philippe van Basshuysen - 2022 - Journal of Economic Methodology 30 (4):310-321.
    Where economists previously viewed the market as arising from a ‘spontaneous order’, antithetical to design, they now design markets to achieve specific purposes. This paper reconstructs how this change in what markets are and can do came about and considers some consequences. Two decisive developments in economic theory are identified: first, Hurwicz’s view of institutions as mechanisms, which should be designed to align incentives with social goals; and second, the notion of marketplaces – consisting of infrastructure and algorithms – which (...)
  10. Fair equality of chances for prediction-based decisions.Michele Loi, Anders Herlitz & Hoda Heidari - forthcoming - Economics and Philosophy:1-24.
    This article presents a fairness principle for evaluating decision-making based on predictions: a decision rule is unfair when the individuals directly impacted by the decisions who are equal with respect to the features that justify inequalities in outcomes do not have the same statistical prospects of being benefited or harmed by them, irrespective of their socially salient morally arbitrary traits. The principle can be used to evaluate prediction-based decision-making from the point of view of a wide range of antecedently specified (...)
  11. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with backgrounds in (...)
  12. ChatGPT’s Responses to Dilemmas in Medical Ethics: The Devil is in the Details.Lukas J. Meier - 2023 - American Journal of Bioethics 23 (10):63-65.
    In their Target Article, Rahimzadeh et al. (2023) discuss the virtues and vices of employing ChatGPT in ethics education for healthcare professionals. To this end, they confront the chatbot with a moral dilemma and analyse its response. In interpreting the case, ChatGPT relies on Beauchamp and Childress’ four prima-facie principles: beneficence, non-maleficence, respect for patient autonomy, and justice. While the chatbot’s output appears admirable at first sight, it is worth taking a closer look: ChatGPT not only misses the point when (...)
  13. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  14. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems.Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  15. ACROCPoLis: A Descriptive Framework for Making Sense of Fairness.Andrea Aler Tubella, Dimitri Coelho Mollo, Adam Dahlgren, Hannah Devinney, Virginia Dignum, Petter Ericson, Anna Jonsson, Tim Kampik, Tom Lenaerts, Julian Mendez & Juan Carlos Nieves Sanchez - 2023 - Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency:1014-1025.
    Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and (...)
  16. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US.Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  17. Informational richness and its impact on algorithmic fairness.Marcello Di Bello & Ruobin Gong - forthcoming - Philosophical Studies:1-29.
    The literature on algorithmic fairness has examined exogenous sources of biases such as shortcomings in the data and structural injustices in society. It has also examined internal sources of bias as evidenced by a number of impossibility theorems showing that no algorithm can concurrently satisfy multiple criteria of fairness. This paper contributes to the literature stemming from the impossibility theorems by examining how informational richness affects the accuracy and fairness of predictive algorithms. With the aid of a computer simulation, we (...)
    1 citation
  18. Dirty data labeled dirt cheap: epistemic injustice in machine learning systems.Gordon Hull - 2023 - Ethics and Information Technology 25 (3):1-14.
    Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic injustice. To substantiate this claim, I argue that (1) pretrial detention and physiognomic AI systems commit testimonial injustice because their (...)
    1 citation
  19. Bias Optimizers.Damien P. Williams - 2023 - American Scientist 111 (4):204-207.
  20. Algorithmic legitimacy in clinical decision-making.Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)
  21. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use.Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3):1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
    3 citations
  22. Predictive policing and algorithmic fairness.Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need to be (...)
    1 citation
  23. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  24. Using (Un)Fair Algorithms in an Unjust World.Kasper Lippert-Rasmussen - 2022 - Res Publica 29 (2):283-302.
    Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the (...)
  25. The Fairness in Algorithmic Fairness.Sune Holm - 2023 - Res Publica 29 (2):265-281.
    With the increasing use of algorithms in high-stakes areas such as criminal justice and health has come a significant concern about the fairness of prediction-based decision procedures. In this article I argue that a prominent class of mathematically incompatible performance parity criteria can all be understood as applications of John Broome’s account of fairness as the proportional satisfaction of claims. On this interpretation these criteria do not disagree on what it means for an algorithm to be _fair_. Rather they express (...)
    6 citations
  26. Correction: The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):339-340.
  27. Künstliche Intelligenz: Fluch oder Segen?Jens Kipper - 2020 - Metzler.
    Artificial intelligence (AI) is already a fixture of our lives, even if it often operates out of sight. Where will this development lead, and what will it mean for us? Jens Kipper explains how modern AI works, what it can already do, and what effects its use in weapons systems, in medicine and science, in working life, and elsewhere will have. Kipper argues that the development of AI will lead to major social upheavals. He also explains what determines whether (...)
  28. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
    2 citations
  29. Algorithmic fairness through group parities? The case of COMPAS-SAPMOC.Francesca Lagioia, Riccardo Rovatti & Giovanni Sartor - 2023 - AI and Society 38 (2):459-478.
    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to (...)
  30. From AI for people to AI for the world and the universe.Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  31. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  32. Reconciling Algorithmic Fairness Criteria.Fabian Beigang - 2023 - Philosophy and Public Affairs 51 (2):166-190.
    Philosophy & Public Affairs, Volume 51, Issue 2, Pages 166-190, Spring 2023.
    4 citations
  33. Bare statistical evidence and the legitimacy of software-based judicial decisions.Eva Schmidt, Andreas Sesing-Wagenpfeil & Maximilian A. Köhl - 2023 - Synthese 201 (4):1-27.
    Can the evidence provided by software systems meet the standard of proof for civil or criminal cases, and is it individualized evidence? Or, to the contrary, do software systems exclusively provide bare statistical evidence? In this paper, we argue that there are cases in which evidence in the form of probabilities computed by software systems is not bare statistical evidence, and is thus able to meet the standard of proof. First, based on the case of State v. Loomis, we investigate (...)
  34. Investigating gender and racial biases in DALL-E Mini Images.Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - ACM Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  35. Ethics and Artificial Intelligence in Public Health Social Work.David Gray Grant - 2018 - In Milind Tambe & Eric Rice (eds.), Artificial Intelligence and Social Work. Cambridge University Press.
  36. Equalized Odds is a Requirement of Algorithmic Fairness.David Gray Grant - 2023 - Synthese 201 (3).
    Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
    2 citations
  37. (Un)Fairness in AI: An Intersectional Feminist Analysis.Youjin Kong - 2022 - Blog of the American Philosophical Association, Women in Philosophy Series.
    Racial, Gender, and Intersectional Biases in AI; Dominant View of Intersectional Fairness in the AI Literature; Three Fundamental Problems with the Dominant View: 1. Overemphasis on Intersections of Attributes, 2. Dilemma between Infinite Regress and Fairness Gerrymandering, 3. Narrow Understanding of Fairness as Parity; Rethinking AI Fairness: from Weak to Strong Fairness.
  38. Are “Intersectionally Fair” AI Algorithms Really Fair to Women of Color? A Philosophical Analysis.Youjin Kong - 2022 - FAccT: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency:485-494.
    A growing number of studies on fairness in artificial intelligence (AI) use the notion of intersectionality to measure AI fairness. Most of these studies take intersectional fairness to be a matter of statistical parity among intersectional subgroups: an AI algorithm is “intersectionally fair” if the probability of the outcome is roughly the same across all subgroups defined by different combinations of the protected attributes. This paper identifies and examines three fundamental problems with this dominant interpretation of intersectional fairness in AI. (...)
  39. Correction to: Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness.Ben Green - 2023 - Philosophy and Technology 36 (1):1-1.
  40. Having Your Day in Robot Court.Benjamin Chen, Alexander Stremitzer & Kevin Tobia - 2023 - Harvard Journal of Law and Technology 36.
    Should machines be judges? Some say no, arguing that citizens would see robot-led legal proceedings as procedurally unfair because “having your day in court” is having another human adjudicate your claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of artificially intelligent (AI) judges could therefore undermine sentiments of justice and legal compliance if citizens intuitively take machine-adjudicated proceedings to be less fair than the human-adjudicated status quo. Two original (...)
  41. Measurement invariance, selection invariance, and fair selection revisited.Remco Heesen & Jan-Willem Romeijn - 2023 - Psychological Methods 28 (3):687-690.
    This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
  42. (Some) algorithmic bias as institutional bias.Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)
  43. Should Algorithms that Predict Recidivism Have Access to Race? Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there a (...)
  44. Egalitarianism and Algorithmic Fairness. Sune Holm - 2023 - Philosophy and Technology 36 (1):1-18.
    What does it mean for algorithmic classifications to be fair to different socially salient groups? According to classification parity criteria, what is required is equality across groups with respect to some performance measure such as error rates. Critics of classification parity object that classification parity entails that achieving fairness may require us to choose an algorithm that makes no group better off and some groups worse off than an alternative. In this article, I interpret the problem of algorithmic fairness as (...)
  45. Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination between groups, as opposed to against groups, and that it is (...)
  46. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  47. Algorithmic neutrality. Milo Phillips-Brown - manuscript.
    Bias infects the algorithms that wield increasing control over our lives. Predictive policing systems overestimate crime in communities of color; hiring algorithms dock qualified female candidates; and facial recognition software struggles to recognize dark-skinned faces. Algorithmic bias has received significant attention. Algorithmic neutrality, in contrast, has been largely neglected. Algorithmic neutrality is my topic. I take up three questions. What is algorithmic neutrality? Is algorithmic neutrality possible? When we have algorithmic neutrality in mind, what can we learn about algorithmic bias? (...)
  48. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Atoosa Kasirzadeh - 2022 - AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals (...)
  49. MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias. Flavia Barsotti & Rüya Gökhan Koçer - forthcoming - AI and Society:1-14.
    This paper presents an intuitive explanation about why and how Rawlsian Theory of Justice (Rawls in A theory of justice, Harvard University Press, Harvard, 1971) provides the foundations to a solution for algorithmic bias. The contribution of the paper is to discuss and show why Rawlsian ideas in their original form (e.g. the veil of ignorance, original position, and allowing inequalities that serve the worst-off) are relevant to operationalize fairness for algorithmic decision making. The paper also explains how this leads (...)