Results for 'algorithmic bias'

993 results found
  1. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic (...) are hampered by conflations of various understandings of bias, ranging from neutral deviations from a standard to morally problematic instances of injustice due to prejudice, discrimination, and disparate treatment. This terminological confusion impedes efforts to address clear cases of discrimination.
    In this paper, we examine the promises and challenges of different approaches to disambiguating bias and designing for justice. While both approaches aid in understanding and addressing clear algorithmic harms, we argue that they also risk being leveraged in ways that ultimately deflect accountability from those building and deploying these systems. Applying this analysis to recent examples of generative AI, our argument highlights unseen dangers in current methods of evaluating algorithmic bias and points to ways to redirect approaches to addressing bias in generative AI at its early stages in ways that can more robustly meet the demands of justice.
  2. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias (...)
    2 citations
  3. Algorithmic bias: on the implicit biases of social technology.Gabbrielle M. Johnson - 2020 - Synthese 198 (10):9941-9961.
    Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias (...)
    25 citations
  4. Algorithmic bias: Senses, sources, solutions.Sina Fazelpour & David Danks - 2021 - Philosophy Compass 16 (8):e12760.
    Data‐driven algorithms are widely used to make or assist decisions in sensitive domains, including healthcare, social services, education, hiring, and criminal justice. In various cases, such algorithms have preserved or even exacerbated biases against vulnerable communities, sparking a vibrant field of research focused on so‐called algorithmic biases. This research includes work on identification, diagnosis, and response to biases in algorithm‐based decision‐making. This paper aims to facilitate the application of philosophical analysis to these contested issues by providing an overview of (...)
    22 citations
  5. Algorithmic bias and the Value Sensitive Design approach.Judith Simon, Pak-Hang Wong & Gernot Rieder - 2020 - Internet Policy Review 9 (4).
    Recently, amid growing awareness that computer algorithms are not neutral tools but can cause harm by reproducing and amplifying bias, attempts to detect and prevent such biases have intensified. An approach that has received considerable attention in this regard is the Value Sensitive Design (VSD) methodology, which aims to contribute to both the critical analysis of (dis)values in existing technologies and the construction of novel technologies that account for specific desired values. This article provides a brief overview of the (...)
    3 citations
  6. Algorithmic Bias and Risk Assessments: Lessons from Practice.Ali Hasan, Shea Brown, Jovana Davidovic, Benjamin Lange & Mitt Regan - 2022 - Digital Society 1 (1):1-15.
    In this paper, we distinguish between different sorts of assessments of algorithmic systems, describe our process of assessing such systems for ethical risk, and share some key challenges and lessons for future algorithm assessments and audits. Given the distinctive nature and function of a third-party audit, and the uncertain and shifting regulatory landscape, we suggest that second-party assessments are currently the primary mechanisms for analyzing the social impacts of systems that incorporate artificial intelligence. We then discuss two kinds of (...)
    1 citation
  7. Algorithmic bias in anthropomorphic artificial intelligence: Critical perspectives through the practice of women media artists and designers.Caterina Antonopoulou - 2023 - Technoetic Arts 21 (2):157-174.
    Current research in artificial intelligence (AI) sheds light on algorithmic bias embedded in AI systems. The underrepresentation of women in the AI design sector of the tech industry, as well as in training datasets, results in technological products that encode gender bias, reinforce stereotypes and reproduce normative notions of gender and femininity. Biased behaviour is notably reflected in anthropomorphic AI systems, such as personal intelligent assistants (PIAs) and chatbots, that are usually feminized through various design parameters, such (...)
    1 citation
  8. (Some) algorithmic bias as institutional bias.Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias (...)
    1 citation
  9. Algorithmic bias: should students pay the price?Helen Smith - 2020 - AI and Society 35 (4):1077-1078.
  10. Disability, fairness, and algorithmic bias in AI recruitment.Nicholas Tilmes - 2022 - Ethics and Information Technology 24 (2).
    While rapid advances in artificial intelligence hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s (...)
    1 citation
  11. Evaluating causes of algorithmic bias in juvenile criminal recidivism.Marius Miron, Songül Tolan, Emilia Gómez & Carlos Castillo - 2020 - Artificial Intelligence and Law 29 (2):111-147.
    In this paper we investigate risk prediction of criminal re-offense among juvenile defendants using general-purpose machine learning algorithms. We show that in our dataset, containing hundreds of cases, ML models achieve better predictive power than a structured professional risk assessment tool, the Structured Assessment of Violence Risk in Youth, at the expense of not satisfying relevant group fairness metrics that SAVRY does satisfy. We explore in more detail two possible causes of this algorithmic bias that are related to (...)
  12. Markets, market algorithms, and algorithmic bias.Philippe van Basshuysen - 2022 - Journal of Economic Methodology 30 (4):310-321.
    Where economists previously viewed the market as arising from a ‘spontaneous order’, antithetical to design, they now design markets to achieve specific purposes. This paper reconstructs how this change in what markets are and can do came about and considers some consequences. Two decisive developments in economic theory are identified: first, Hurwicz’s view of institutions as mechanisms, which should be designed to align incentives with social goals; and second, the notion of marketplaces – consisting of infrastructure and algorithms – which (...)
  13. Assembled Bias: Beyond Transparent Algorithmic Bias.Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias _assembled bias._ Moreover, assembled biases are distinct from the much-discussed algorithmic (...), both in source (training data versus feature creation) and in content (mimics of extant societal bias versus reconfigured categories). As such, this problem is distinct from issues arising from bias-encoding training feature sets or proxy features. Assembled bias is not epistemically transparent in source or content. Hence, when these ML models are used as a basis for decision-making in social contexts, algorithmic fairness concerns are compounded.
  14. Artificial Intelligence and Healthcare: The Impact of Algorithmic Bias on Health Disparities.Natasha H. Williams - 2023 - Springer Verlag.
    This book explores the ethical problems of algorithmic bias and its potential impact on populations that experience health disparities by examining the historical underpinnings of explicit and implicit bias, the influence of the social determinants of health, and the inclusion of racial and ethnic minorities in data. Over the last twenty-five years, the diagnosis and treatment of disease have advanced at breakneck speeds. Currently, we have technologies that have revolutionized the practice of medicine, such as telemedicine, precision (...)
  15. The preliminary consideration for Discrimination by AI and the responsibility problem - On Algorithm Bias learning and Human agent. 허유선 - 2018 - Korean Feminist Philosophy 29:165-209.
    This article is a preliminary study, undertaken before a full philosophical investigation of discrimination by artificial intelligence and the discussion of responsibility for it. Its main aim is to raise discrimination by AI as a pressing ‘problem’ requiring philosophers’ attention and, to that end, to clarify the nature and causes of the problem of ‘discrimination by AI.’ AI can reproduce existing discrimination wholesale, reinforcing and perpetuating present discrimination, and this is not a matter of the distant future. The problem is occurring now and demands a collective response. For philosophers, however, it is not easy to address the related discussion of responsibility, largely because of the complex technical issues of AI and (...)
  16. MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias.Flavia Barsotti & Rüya Gökhan Koçer - forthcoming - AI and Society:1-14.
    This paper presents an intuitive explanation about why and how Rawlsian Theory of Justice (Rawls in A theory of justice, Harvard University Press, Harvard, 1971) provides the foundations to a solution for algorithmic bias. The contribution of the paper is to discuss and show why Rawlsian ideas in their original form (e.g. the veil of ignorance, original position, and allowing inequalities that serve the worst-off) are relevant to operationalize fairness for algorithmic decision making. The paper also explains (...)
    1 citation
  17. Towards a pragmatist dealing with algorithmic bias in medical machine learning.Georg Starke, Eva De Clercq & Bernice S. Elger - 2021 - Medicine, Health Care and Philosophy 24 (3):341-349.
    Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and (...)
    2 citations
  18. The Bias Dilemma: The Ethics of Algorithmic Bias in Natural-Language Processing.Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3).
    Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this (...)
  19. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives.Yves Saint James Aquino, Stacy M. Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang & Wendy A. Rogers - forthcoming - Journal of Medical Ethics.
    Background: There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives: Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results: Findings reveal considerable divergent views on (...)
  20. Algorithms are not neutral: Bias in collaborative filtering.Catherine Stinson - 2022 - AI and Ethics 2 (4):763-770.
    When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers, and scholars who study the social impact of technology. There has been a tendency to focus on examples, where the data set used to train the AI is biased, and denial (...)
     
  21. Algorithmic Political Bias in Artificial Intelligence Systems.Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic (...) against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
    4 citations
  22. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that (...) political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications. Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.
  23. Bias in algorithmic filtering and personalization.Engin Bozdag - 2013 - Ethics and Information Technology 15 (3):209-227.
    Online information intermediaries such as Facebook and Google are slowly replacing traditional media channels thereby partly becoming the gatekeepers of our society. To deal with the growing amount of information on the social web and the burden it brings on the average user, these gatekeepers recently started to introduce personalization features, algorithms that filter information per individual. In this paper we show that these online services that filter information are not merely algorithms. Humans not only affect the design of the (...)
    30 citations
  24. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over which jobs we get, whether we're granted loans, what information we're exposed to online, and so on. Algorithms can, and often do, wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has gone largely neglected. I investigate three questions about algorithmic neutrality: What is it? Is it possible? And when we have it in mind, what can we learn (...)
    1 citation
  25. Bias in algorithms of AI systems developed for COVID-19: A scoping review.Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419.
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health have been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. ​Studies mentioning biases on AI algorithms developed for contact (...)
    2 citations
  26. Review of The Information Manifold: Why Computers Can't Solve Algorithmic Bias and Fake News. [REVIEW]Jeff Pooley - 2022 - Spontaneous Generations 10 (1):138-139.
  27. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
    2 citations
  28. Algorithmic Political Bias—an Entrenchment Concern.Ulrik Franke - 2022 - Philosophy and Technology 35 (3):1-6.
    This short commentary on Peters identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan. Second, following Hacking, the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following (...)
    1 citation
  29. Bias in Information, Algorithms, and Systems.Alan Rubel, Clinton Castro & Adam Pham - 2018 - In Jo Bates, Paul D. Clough, Robert Jäschke & Jahna Otterbacher (eds.), Proceedings of the International Workshop on Bias in Information, Algorithms, and Systems (BIAS). pp. 9-13.
    We argue that an essential element of understanding the moral salience of algorithmic systems requires an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.
  30. Detecting racial bias in algorithms and machine learning.Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
    Purpose The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within (...)
    14 citations
  31. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in the (big) datasets, and predict outcomes based on those identified patterns and correlations with the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    26 citations
  32. A polynomial time algorithm for determining Dag equivalence in the presence of latent variables and selection bias.Peter Spirtes - unknown
    if and only if for every W in V, W is independent of the set of all its non-descendants conditional on the set of its parents. One natural question that arises with respect to DAGs is when two DAGs are “statistically equivalent”. One interesting sense of “statistical equivalence” is “d-separation equivalence” (explained in more detail below.) In the case of DAGs, d-separation equivalence is also corresponds to a variety of other natural senses of statistical equivalence (such as representing the same (...)
     
    2 citations
  33. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework.David Haussler - 1988 - Artificial Intelligence 36 (2):177-221.
  34. Algorithmic Fairness and the Situated Dynamics of Justice.Sina Fazelpour, Zachary C. Lipton & David Danks - 2022 - Canadian Journal of Philosophy 52 (1):44-60.
    Machine learning algorithms are increasingly used to shape high-stake allocations, sparking research efforts to orient algorithm design towards ideals of justice and fairness. In this research on algorithmic fairness, normative theorizing has primarily focused on identification of “ideally fair” target states. In this paper, we argue that this preoccupation with target states in abstraction from the situated dynamics of deployment is misguided. We propose a framework that takes dynamic trajectories as direct objects of moral appraisal, highlighting three respects in (...)
    3 citations
  35. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, (...)
  36. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions (...)
    2 citations
  37. Equal accuracy for Andrew and Abubakar—detecting and mitigating bias in name-ethnicity classification algorithms.Lena Hafner, Theodor Peter Peifer & Franziska Sofia Hafner - forthcoming - AI and Society:1-25.
    Uncovering the world’s ethnic inequalities is hampered by a lack of ethnicity-annotated datasets. Name-ethnicity classifiers (NECs) can help, as they are able to infer people’s ethnicities from their names. However, since the latest generation of NECs rely on machine learning and artificial intelligence (AI), they may suffer from the same racist and sexist biases found in many AIs. Therefore, this paper offers an algorithmic fairness audit of three NECs. It finds that the UK-Census-trained EthnicityEstimator displays large accuracy biases with (...)
    1 citation
  38. Predictive policing and algorithmic fairness.Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that need (...)
    1 citation
  39. Bias Dilemma.Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3/4).
    Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this (...)
  40. Algorithmic Racial Discrimination.Alysha Kassam & Patricia Marino - 2022 - Feminist Philosophy Quarterly 8 (3).
    This paper contributes to debates over algorithmic discrimination with particular attention to structural theories of racism and the problem of “proxy discrimination”—discriminatory effects that arise even when an algorithm has no information about socially sensitive characteristics such as race. Structural theories emphasize the ways that unequal power structures contribute to the subordination of marginalized groups: these theories thus understand racism in ways that go beyond individual choices and bad intentions. Our question is, how should a structural understanding of racism (...)
  41. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four (...)
  42. On statistical criteria of algorithmic fairness.Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
  43. Mitigating Racial Bias in Machine Learning.Kristin M. Kostick-Quenet, I. Glenn Cohen, Sara Gerke, Bernard Lo, James Antaki, Faezah Movahedi, Hasna Njah, Lauren Schoen, Jerry E. Estep & J. S. Blumenthal-Barby - 2022 - Journal of Law, Medicine and Ethics 50 (1):92-100.
    When applied in the health sector, AI-based applications raise not only ethical but legal and safety concerns, where algorithms trained on data from majority populations can generate less accurate or reliable results for minorities and other disadvantaged groups.
  44. The Bias–Variance Tradeoff in Cognitive Science.Shayan Doroudi & Seyed Ali Rastegar - 2023 - Cognitive Science 47 (1):e13241.
    The bias–variance tradeoff is a theoretical concept that suggests machine learning algorithms are susceptible to two kinds of error, with some algorithms tending to suffer from one more than the other. In this letter, we claim that the bias–variance tradeoff is a general concept that can be applied to human cognition as well, and we discuss implications for research in cognitive science. In particular, we show how various strands of research in cognitive science can be interpreted in light (...)
  45. Addressing bias in artificial intelligence for public health surveillance.Lidia Flores, Seungjun Kim & Sean D. Young - 2024 - Journal of Medical Ethics 50 (3):190-194.
    Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is described (...)
  46. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee (...)
  47. Algorithmic Decision-making, Statistical Evidence and the Rule of Law.Vincent Chiao - forthcoming - Episteme:1-24.
    The rapidly increasing role of automation throughout the economy, culture and our personal lives has generated a large literature on the risks of algorithmic decision-making, particularly in high-stakes legal settings. Algorithmic tools are charged with bias, shrouded in secrecy, and frequently difficult to interpret. However, these criticisms have tended to focus on particular implementations, specific predictive techniques, and the idiosyncrasies of the American legal-regulatory regime. They do not address the more fundamental unease about the prospect that we (...)
  48. Should Algorithms that Predict Recidivism Have Access to Race?Duncan Purves & Jeremy Davis - 2023 - American Philosophical Quarterly 60 (2):205-220.
    Recent studies have shown that recidivism scoring algorithms like COMPAS have significant racial bias: Black defendants are roughly twice as likely as white defendants to be mistakenly classified as medium- or high-risk. This has led some to call for abolishing COMPAS. But many others have argued that algorithms should instead be given access to a defendant's race, which, perhaps counterintuitively, is likely to improve outcomes. This approach can involve either establishing race-sensitive risk thresholds, or distinct racial ‘tracks’. Is there (...)
  49. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in (...)
  50. Automation Bias and Procedural Fairness: A Short Guide for the UK Civil Service.John Zerilli, Iñaki Goñi & Matilde Masetti Placci - forthcoming - Braid Reports.
    The use of advanced AI and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which, in turn, has underappreciated legal consequences. Automation bias is an attitude in which the operator of an autonomous system will defer to its outputs to the point where the operator overlooks or ignores evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either (...)
1 — 50 / 993