Results for 'Algorithmic Discrimination'

993 found
  1. Negligent Algorithmic Discrimination. Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in (...)
  2. What is “Race” in Algorithmic Discrimination on the Basis of Race? Lily Hu - 2023 - Journal of Moral Philosophy 21 (1-2):1-26.
    Machine learning algorithms bring out an under-appreciated puzzle of discrimination, namely figuring out when a decision made on the basis of a factor correlated with race is a decision made on the basis of race. I argue that prevailing approaches, which are based on identifying and then distinguishing among causal effects of race, in their metaphysical timidity, fail to get off the ground. I suggest, instead, that adopting a constructivist theory of race answers this puzzle in a principled manner. (...)
    3 citations
  3. Three Lessons For and From Algorithmic Discrimination. Frej Klem Thomsen - 2023 - Res Publica (2):1-23.
    Algorithmic discrimination has rapidly become a topic of intense public and academic interest. This article explores three issues raised by algorithmic discrimination: 1) the distinction between direct and indirect discrimination, 2) the notion of disadvantageous treatment, and 3) the moral badness of discriminatory automated decision-making. It argues that some conventional distinctions between direct and indirect discrimination appear not to apply to algorithmic discrimination, that algorithmic discrimination may often be discrimination (...)
  4. Algorithmic Racial Discrimination. Alysha Kassam & Patricia Marino - 2022 - Feminist Philosophy Quarterly 8 (3).
    This paper contributes to debates over algorithmic discrimination with particular attention to structural theories of racism and the problem of “proxy discrimination”—discriminatory effects that arise even when an algorithm has no information about socially sensitive characteristics such as race. Structural theories emphasize the ways that unequal power structures contribute to the subordination of marginalized groups: these theories thus understand racism in ways that go beyond individual choices and bad intentions. Our question is, how should a structural understanding (...)
  5. Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Tobias Matzner & Monique Mann - 2019 - Big Data and Society 6 (2).
    The potential for biases being built into algorithms has been known for some time, yet literature has only recently demonstrated the ways algorithmic profiling can result in social sorting and harm marginalised groups. We contend that with increased algorithmic complexity, biases will become more sophisticated and difficult to identify, control for, or contest. Our argument has four steps: first, we show how harnessing algorithms means that data gathered at a particular place and time relating to specific persons, can (...)
    9 citations
  6. Algorithmic Indirect Discrimination, Fairness, and Harm. Frej Klem Thomsen - 2023 - AI and Ethics.
    Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. (...)
  7. Challenging Disability Discrimination in the Clinical Use of PDMP Algorithms. Elizabeth Pendo & Jennifer Oliva - 2024 - Hastings Center Report 54 (1):3-7.
    State prescription drug monitoring programs (PDMPs) use proprietary, predictive software platforms that deploy algorithms to determine whether a patient is at risk for drug misuse, drug diversion, doctor shopping, or substance use disorder (SUD). Clinical overreliance on PDMP algorithm‐generated information and risk scores motivates clinicians to refuse to treat—or to inappropriately treat—vulnerable people based on actual, perceived, or past SUDs, chronic pain conditions, or other disabilities. This essay provides a framework for challenging PDMP algorithmic discrimination as disability (...) under federal antidiscrimination laws, including a new proposed rule interpreting section 1557 of the Affordable Care Act.
  8. Algorithmic Fairness and Statistical Discrimination. John W. Patty & Elizabeth Maggie Penn - 2022 - Philosophy Compass 18 (1):e12891.
    Algorithmic fairness is a new interdisciplinary field of study focused on how to measure whether a process, or algorithm, may unintentionally produce unfair outcomes, as well as whether or how the potential unfairness of such processes can be mitigated. Statistical discrimination describes a set of informational issues that can induce rational (i.e., Bayesian) decision-making to lead to unfair outcomes even in the absence of discriminatory intent. In this article, we provide overviews of these two related literatures and draw (...)
  9. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has not yet been sufficiently (...)
    3 citations
  10. Discrimination, Fairness, and the Use of Algorithms. Sune Hannibal Holm & Kasper Lippert-Rasmussen - 2023 - Res Publica 29 (2):177-183.
  11. Choosing how to discriminate: navigating ethical trade-offs in fair algorithmic design for the insurance sector. Michele Loi & Markus Christen - 2021 - Philosophy and Technology 34 (4):967-992.
    Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive (...)
    5 citations
  12. The preliminary consideration for Discrimination by AI and the responsibility problem - On Algorithm Bias learning and Human agent. 허유선 - 2018 - Korean Feminist Philosophy 29:165-209.
    This article is a preliminary study in advance of full philosophical research on discrimination by artificial intelligence and the attendant question of responsibility. Its main purpose is to raise discrimination by AI as an immediate 'problem' requiring philosophers' attention and, to that end, to clarify the nature and causes of the problem of 'discrimination by AI'. AI can simply repeat existing discrimination, reinforcing and perpetuating it, and this is not a matter of the distant future. The problem is occurring now and demands a collective response. For philosophers, however, it is not easy to address the associated discussion of responsibility. The reasons are, broadly, the complex technical issues of AI and (...)
  13. Disambiguating Algorithmic Bias: From Neutrality to Justice. Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  14. Algorithmic fairness and resentment. Boris Babic & Zoë Johnson King - forthcoming - Philosophical Studies:1-33.
    In this paper we develop a general theory of algorithmic fairness. Drawing on Johnson King and Babic’s work on moral encroachment, on Gary Becker’s work on labor market discrimination, and on Strawson’s idea of resentment and indignation as responses to violations of the demand for goodwill toward oneself and others, we locate attitudes to fairness in an agent’s utility function. In particular, we first argue that fairness is a matter of a decision-maker’s relative concern for the plight of (...)
    1 citation
  15. Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political (...)
    4 citations
  16. Discrimination in the age of artificial intelligence. Bert Heinrichs - 2022 - AI and Society 37 (1):143-154.
    In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand that (...)
    6 citations
  17. Predictive policing and algorithmic fairness. Tzu-Wei Hung & Chun-Ping Yen - 2023 - Synthese 201 (6):1-29.
    This paper examines racial discrimination and algorithmic bias in predictive policing algorithms (PPAs), an emerging technology designed to predict threats and suggest solutions in law enforcement. We first describe what discrimination is in a case study of Chicago’s PPA. We then explain their causes with Broadbent’s contrastive model of causation and causal diagrams. Based on the cognitive science literature, we also explain why fairness is not an objective truth discoverable in laboratories but has context-sensitive social meanings that (...)
    1 citation
  18. Algorithms are not neutral: Bias in collaborative filtering. Catherine Stinson - 2022 - AI and Ethics 2 (4):763-770.
    When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers, and scholars who study the social impact of technology. There has been a tendency to focus on examples, where the data set used to train the AI is biased, and denial on (...)
     
  19. Algorithms and values in justice and security. Paul Hayes, Ibo van de Poel & Marc Steen - 2020 - AI and Society 35 (3):533-555.
    This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...)
    4 citations
  20. Algorithmic injustice and human rights. Denis Coitinho & André Luiz Olivier da Silva - 2024 - Filosofia Unisinos 25 (1):1-17.
    The central goal of this paper is to investigate the injustices that can occur with the use of new technologies, especially Artificial Intelligence (AI), focusing on the issues concerning respect to human rights and the protection of victims and the most vulnerable. We aim to study the impacts of AI in daily life and the possible threats to human dignity imposed by it, such as discrimination based on prejudices, identity-oriented stereotypes, and unequal access to health services. We characterize such (...)
  21. Identity, profiling algorithms and a world of ambient intelligence. Katja de Vries - 2010 - Ethics and Information Technology 12 (1):71-85.
    The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies which combine ideas derived from the field of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon , even more important than it is today. I argue that the way in which we experience ourselves necessarily goes (...)
    9 citations
  22. From algorithmic governance to govern algorithm. Zichun Xu - forthcoming - AI and Society:1-10.
    The algorithm is the core category and basic method of the digital age, and advanced technologies such as big data, artificial intelligence, and blockchain all rely on various algorithm designs or take the algorithm as their underlying principle. However, owing to the characteristics of algorithm design, application, and the technology itself, the operation process harbors, to varying degrees, hidden worries such as algorithmic black boxes, algorithmic discrimination, and difficulty in accountability. This paper summarizes these problems into three (...)
  23. An Epistemic Lens on Algorithmic Fairness. Elizabeth Edenberg & Alexandra Wood - 2023 - EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization.
    In this position paper, we introduce a new epistemic lens for analyzing algorithmic harm. We argue that the epistemic lens we propose herein has two key contributions to help reframe and address some of the assumptions underlying inquiries into algorithmic fairness. First, we argue that using the framework of epistemic injustice helps to identify the root causes of harms currently framed as instances of representational harm. We suggest that the epistemic lens offers a theoretical foundation for expanding approaches (...)
  24. Identity, profiling algorithms and a world of ambient intelligence. Katja Vries - 2010 - Ethics and Information Technology 12 (1):71-85.
    The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies which combine ideas derived from the field of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon, even more important than it is today. I argue that the way in which we experience ourselves necessarily goes through (...)
    8 citations
  25. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits that (...)
    11 citations
  26. Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In (...)
    48 citations
  27. Detecting racial bias in algorithms and machine learning. Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
    Purpose: The online economy has not resolved the issue of racial bias in its applications. While algorithms are procedures that facilitate automated decision-making, or a sequence of unambiguous instructions, bias is a byproduct of these computations, bringing harm to historically disadvantaged populations. This paper argues that algorithmic biases explicitly and implicitly harm racial groups and lead to forms of discrimination. Relying upon sociological and technical research, the paper offers commentary on the need for more workplace diversity within high-tech (...)
    14 citations
  28. The Emerging Hazard of AI‐Related Health Care Discrimination. Sharona Hoffman - 2020 - Hastings Center Report 51 (1):8-9.
    Artificial intelligence holds great promise for improved health‐care outcomes. But it also poses substantial new hazards, including algorithmic discrimination. For example, an algorithm used to identify candidates for beneficial “high risk care management” programs routinely failed to select racial minorities. Furthermore, some algorithms deliberately adjust for race in ways that divert resources away from minority patients. To illustrate, algorithms have underestimated African Americans’ risks of kidney stones and death from heart failure. Algorithmic discrimination can violate Title (...)
  29. Biased Humans, (Un)Biased Algorithms? Florian Pethig & Julia Kroenung - 2022 - Journal of Business Ethics 183 (3):637-652.
    Previous research has shown that algorithmic decisions can reflect gender bias. The increasingly widespread utilization of algorithms in critical decision-making domains (e.g., healthcare or hiring) can thus lead to broad and structural disadvantages for women. However, women often experience bias and discrimination through human decisions and may turn to algorithms in the hope of receiving neutral and objective evaluations. Across three studies (N = 1107), we examine whether women’s receptivity to algorithms is affected by situations in which they (...)
    1 citation
  30. Causal models and algorithmic fairness. Fabian Beigang - unknown
    This thesis aims to clarify a number of conceptual aspects of the debate surrounding algorithmic fairness. The particular focus here is the role of causal modeling in defining criteria of algorithmic fairness. In Chapter 1, I argue that in the discussion of algorithmic fairness, two fundamentally distinct notions of fairness have been conflated. Subsequently, I propose that what is usually taken to be the problem of algorithmic fairness should be divided into two subproblems, the problem of (...)
  31. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect (...)
    17 citations
  32. Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing. [REVIEW] Peter Seele, Claus Dierksmeier, Reto Hofstetter & Mario D. Schultz - 2019 - Journal of Business Ethics 170 (4):697-719.
    Firms increasingly deploy algorithmic pricing approaches to determine what to charge for their goods and services. Algorithmic pricing can discriminate prices both dynamically over time and personally depending on individual consumer information. Although legal, the ethicality of such approaches needs to be examined as often they trigger moral concerns and sometimes outrage. In this research paper, we provide an overview and discussion of the ethical challenges germane to algorithmic pricing. As a basis for our discussion, we perform (...)
    10 citations
  33. Measuring the Biases that Matter: The Ethical and Causal Foundations for Measures of Fairness in Algorithms. Jonathan Herington & Bruce Glymour - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 2019:269-278.
    Measures of algorithmic bias can be roughly classified into four categories, distinguished by the conditional probabilistic dependencies to which they are sensitive. First, measures of "procedural bias" diagnose bias when the score returned by an algorithm is probabilistically dependent on a sensitive class variable (e.g. race or sex). Second, measures of "outcome bias" capture probabilistic dependence between class variables and the outcome for each subject (e.g. parole granted or loan denied). Third, measures of "behavior-relative error bias" capture probabilistic dependence (...)
    2 citations
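    The abstract above states each measure of bias as a probabilistic dependence between variables. As an illustrative sketch (not drawn from the paper; all data and function names here are invented for illustration), the first two categories can be read as group-rate gaps on a toy dataset: "procedural bias" as dependence of the algorithm's score on a sensitive class variable, and "outcome bias" as dependence of the final outcome on that variable.

    ```python
    # Illustrative sketch only. A gap of zero corresponds to probabilistic
    # independence between the measured quantity and the class variable
    # (for binary values; continuous scores would need a fuller test).

    def group_rate(values, groups, group):
        """Mean of `values` restricted to rows where `groups` equals `group`."""
        selected = [v for v, g in zip(values, groups) if g == group]
        return sum(selected) / len(selected)

    def dependence_gap(values, groups):
        """Max minus min of group means; zero when all groups have equal means."""
        rates = [group_rate(values, groups, g) for g in sorted(set(groups))]
        return max(rates) - min(rates)

    # Toy data: sensitive class, algorithm score in [0, 1], binary outcome.
    sensitive = ["a", "a", "a", "b", "b", "b"]
    scores    = [0.9, 0.8, 0.7, 0.4, 0.5, 0.3]
    outcomes  = [1,   1,   1,   0,   1,   0]

    # "Procedural bias" reading: score depends on the class variable.
    procedural_gap = dependence_gap(scores, sensitive)    # ≈ 0.4
    # "Outcome bias" reading: outcome depends on the class variable.
    outcome_gap    = dependence_gap(outcomes, sensitive)  # ≈ 0.667
    ```

    The remaining two categories in the abstract condition on further variables (behavior, error type), which a sketch like this would express as gaps computed within strata rather than over the whole sample.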
  34. 3D Face Modeling Algorithm for Film and Television Animation Based on Lightweight Convolutional Neural Network. Cheng Di, Jing Peng, Yihua Di & Siwei Wu - 2021 - Complexity 2021:1-10.
    Through the analysis of facial feature extraction technology, this paper designs a lightweight convolutional neural network. The LW-CNN model adopts a separable convolution structure, which can propose more accurate features with fewer parameters and can extract 3D feature points of a human face. In order to enhance the accuracy of feature extraction, a face detection method based on the inverted triangle structure is used to detect the face frame of the images in the training set before the model extracts the (...)
  35. Fairness Hacking: The Malicious Practice of Shrouding Unfairness in Algorithms. Kristof Meding & Thilo Hagendorff - 2024 - Philosophy and Technology 37 (1):1-22.
    Fairness in machine learning (ML) is an ever-growing field of research due to the manifold potential for harm from algorithmic discrimination. To prevent such harm, a large body of literature develops new approaches to quantify fairness. Here, we investigate how one can divert the quantification of fairness by describing a practice we call “fairness hacking” for the purpose of shrouding unfairness in algorithms. This impacts end-users who rely on learning algorithms, as well as the broader community interested in (...)
  36. A New Subject-Specific Discriminative and Multi-Scale Filter Bank Tangent Space Mapping Method for Recognition of Multiclass Motor Imagery. Fan Wu, Anmin Gong, Hongyun Li, Lei Zhao, Wei Zhang & Yunfa Fu - 2021 - Frontiers in Human Neuroscience 15.
    Objective: Tangent Space Mapping using the geometric structure of the covariance matrices is an effective method to recognize multiclass motor imagery. Compared with the traditional CSP method, the Riemann geometric method based on TSM takes into account the nonlinear information contained in the covariance matrix, and can extract more abundant and effective features. Moreover, the method is an unsupervised operation, which can reduce the time of feature extraction. However, EEG features induced by MI mental activities of different subjects are not (...)
  37. From human resources to human rights: Impact assessments for hiring algorithms. Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. (...)
    5 citations
  38. Expansions of Semi-Heyting Algebras I: Discriminator Varieties. H. P. Sankappanavar - 2011 - Studia Logica 98 (1-2):27-81.
    This paper is a contribution toward developing a theory of expansions of semi-Heyting algebras. It grew out of an attempt to settle a conjecture we had made in 1987. Firstly, we unify and extend strikingly similar results of [ 48 ] and [ 50 ] to the (new) equational class DHMSH of dually hemimorphic semi-Heyting algebras, or to its subvariety BDQDSH of blended dual quasi-De Morgan semi-Heyting algebras, thus settling the conjecture. Secondly, we give a criterion for a unary expansion (...)
    8 citations
  39. From pool to profile: Social consequences of algorithmic prediction in insurance. Elena Esposito & Alberto Cevolini - 2020 - Big Data and Society 7 (2).
    The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not (...)
    5 citations
  40. Bored Techies Being Casually Racist: Race as Algorithm. Sareeta Amrute - 2020 - Science, Technology, and Human Values 45 (5):903-933.
    Connecting corporate software work in the United States and Germany, this essay tracks the racialization of mostly male Indian software engineers through the casualization of their labor. In doing so, I show the connections between overt, anti-immigrant violence today and the ongoing use of race to sediment divisions of labor in the industry as a whole. To explain racialization in the tech industry, I develop the concept of race-as-algorithm as a device to unpack how race is made productive within digital (...)
    2 citations
  41. AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making. Hugo Cossette-Lefebvre & Jocelyn Maclure - 2022 - AI and Ethics.
    The use of predictive machine learning algorithms is increasingly common to guide or even take decisions in both public and private settings. Their use is touted by some as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral, objective, and can be evaluated in ways no human decisions can. By (fully or partly) outsourcing a decision process to an algorithm, it should allow human organizations to clearly define the parameters of the decision and to, in principle, (...)
  42. Rumor Situation Discrimination Based on Empirical Mode Decomposition Correlation Dimension. Yanwen Xin & Fengming Liu - 2021 - Complexity 2021:1-12.
    To effectively identify network rumors and block their spread, this paper uses fractal theory to analyze a network rumor spreading situation time series, reveal its inner regularity, extract features, and establish a network rumor recognition model. The model is based on an empirical mode decomposition correlation dimension and K-nearest neighbor approach. Firstly, a partition function is used to determine if the time series of the rumor spreading situation is a uniform fractal process. Secondly, the rumor spreading situation is subjected to (...)
  43. A Risk Assessment Algorithm for College Student Entrepreneurship Based on Big Data Analysis. Chengjun Zhou & DuanXu Wang - 2021 - Complexity 2021:1-12.
    College student entrepreneurship is a complex and dynamic process, in which the potential risks faced by entrepreneurial enterprises are interactive and diverse. The changes in risk assessment for college student entrepreneurship are also dynamic and nonlinear and are affected by many factors, which make the risk assessment process quite complicated. Big data analysis is a new technology that emerged against the background of cloud computing and Internet technology and is characterized by large data scale, multiple (...)
  44. Social context of the issue of discriminatory algorithmic decision-making systems. Daniel Varona & Juan Luis Suarez - forthcoming - AI and Society:1-13.
    Algorithmic decision-making systems have the potential to amplify existing discriminatory patterns and negatively affect perceptions of justice in society. There is a need to revise mechanisms for addressing discrimination in light of the unique challenges presented by these systems, which are not easily auditable or explainable. Research efforts to bring fairness to ADM solutions should be viewed as a matter of justice, and trust among actors should be ensured through technology design. Ideas that move us to (...)
  45. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. Justyna Stypinska - 2023 - AI and Society 38 (2):665-677.
    In the last few years, we have witnessed a surge in scholarly interest and scientific evidence of how algorithms can produce discriminatory outcomes, especially with regard to gender and race. However, the analysis of fairness and bias in AI, important for the debate of AI for social good, has paid insufficient attention to the category of age and older people. Ageing populations have been largely neglected during the turn to digitality and AI. In this article, the concept of AI ageism (...)
    4 citations
  46. How I Would Have Been Differently Treated. Discrimination Through the Lens of Counterfactual Fairness. Michele Loi, Francesco Nappo & Eleonora Vigano - 2023 - Res Publica 29 (2):185-211.
    The widespread use of algorithms for prediction-based decisions urges us to consider the question of what it means for a given act or practice to be discriminatory. Building upon work by Kusner and colleagues in the field of machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the philosophical relevance of the proposed condition, we consider two prominent accounts of discrimination in the recent literature, by Lippert-Rasmussen and Hellman respectively, that do not (...)
    1 citation
  47. Patterned Inequality, Compounding Injustice, and Algorithmic Prediction. Benjamin Eidelson - 2021 - American Journal of Law and Equality 1 (1):252-276.
    If whatever counts as merit for some purpose is unevenly distributed, a decision procedure that accurately sorts people on that basis will “pick up” and reproduce the pre-existing pattern in ways that more random, less merit-tracking procedures would not. This dynamic is an important cause for concern about the use of predictive models to allocate goods and opportunities. In this article, I distinguish two different objections that give voice to that concern in different ways. First, decision procedures may contribute to (...)
    1 citation
  48. Counterfactual fairness: The case study of a food delivery platform’s reputational-ranking algorithm. Marco Piccininni - 2022 - Frontiers in Psychology 13.
    Data-driven algorithms are currently deployed in several fields, rapidly increasing the importance of algorithms in decision-making processes. Over the last few years, several instances of discrimination by algorithms have been observed. A new branch of research has emerged to examine the concept of “algorithmic fairness.” No consensus currently exists on a single operationalization of fairness, although causality-based definitions are arguably more aligned with the human conception of fairness. The aim of this article is to investigate the degree (...)
  49. Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models. Indrė Žliobaitė & Bart Custers - 2016 - Artificial Intelligence and Law 24 (2):183-201.
    Increasing numbers of decisions about everyday life are made using algorithms. By algorithms we mean predictive models captured from historical data using data mining. Such models often decide the prices we pay, select the ads we see and the news we read online, match job descriptions with candidate CVs, and decide who gets a loan, who goes through an extra airport security check, or who gets released on parole. Yet growing evidence suggests that decision making by algorithms may discriminate against people, even if the computing (...)
    4 citations
  50. DefogNet: A Single-Image Dehazing Algorithm with Cyclic Structure and Cross-Layer Connections. Suting Chen, Wenhao Fan, Shaw Peter, Chuang Zhang, Kui Chen & Yong Huang - 2021 - Complexity 2021:1-13.
    Inspired by the application of CycleGAN networks to the image style-conversion problem (Zhu et al.), this paper proposes an end-to-end network, DefogNet, for solving the single-image dehazing problem. It treats dehazing as a style-conversion problem from a fogged image to a fog-free image, without the need to estimate a priori information from an atmospheric scattering model. DefogNet improves on CycleGAN by adding a cross-layer connection structure in the generator to enhance the network’s multiscale feature-extraction capability. (...)
1 — 50 / 993