Results for 'AI ethics · Machine learning · Procedural justice · Relational theory · Bail decisions · Trustworthiness'

989 found
  1. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral (...)
  2. “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease. Angeliki Kerasidou, Christoffer Nellåker, Aurelia Sauerbrei, Shirlene Badger & Nina Hallowell - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. Methods: Semi-structured qualitative interviews with stakeholders who design and/or work with computational phenotyping systems. The (...)
  3. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented (...) because it helps to maximize patients’ benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its absence of explainability threatens core commitments of procedural fairness such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resources in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations for when considering the use of unexplainable algorithms in the distribution of health-related resources.
  4. Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine. Mark Henderson Arnold - 2021 - Journal of Bioethical Inquiry 18 (1):121-139.
    The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments (...)
  5. The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. Sabine Salloch & Nils B. Heyen - 2021 - BMC Medical Ethics 22 (1):1-9.
    Background: Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care aiming at supporting clinicians’ practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians’ competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation (...)
  6. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal (...)
  7. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. Benedetta Giovanola & Simona Tiribelli - 2023 - AI and Society 38 (2):549-563.
    The increasing implementation of and reliance on machine-learning (ML) algorithms to perform tasks, deliver services and make decisions in health and healthcare have made the need for fairness in ML, and more specifically in healthcare ML algorithms (HMLA), a very important and urgent task. However, while the debate on fairness in the ethics of artificial intelligence (AI) and in HMLA has grown significantly over the last decade, the very concept of fairness as an ethical value has (...)
  8. Automatisierte Ungleichheit: Ethik der Künstlichen Intelligenz in der biopolitischen Wende des Digitalen Kapitalismus [Automated Inequality: The Ethics of Artificial Intelligence in the Biopolitical Turn of Digital Capitalism]. Rainer Mühlhoff - 2020 - Deutsche Zeitschrift für Philosophie 68 (6):867-890.
    This paper sets out the notion of a current “biopolitical turn of digital capitalism” resulting from the increasing deployment of AI and data analytics technologies in the public sector. With applications of AI-based automated decisions currently shifting from the domain of business-to-customer (B2C) relations to government-to-citizen (G2C) relations, a new form of governance arises that operates through “algorithmic social selection”. Moreover, the paper describes how the ethics of AI is at an impasse concerning these (...)
  9. Redesigning Relations: Coordinating Machine Learning Variables and Sociobuilt Contexts in COVID-19 and Beyond. Hannah Howland, Vadim Keyser & Farzad Mahootian - 2022 - In Sepehr Ehsani, Patrick Glauner, Philipp Plugmann & Florian M. Thieringer (eds.), The Future Circle of Healthcare: AI, 3D Printing, Longevity, Ethics, and Uncertainty Mitigation. Springer. pp. 179–205.
    We explore multi-scale relations in artificial intelligence (AI) use in order to identify difficulties with coordinating relations between users, machine learning (ML) processes, and “sociobuilt contexts”—specifically in terms of their applications to medical technologies and decisions. We begin by analyzing a recent COVID-19 machine learning case study in order to present the difficulty of traversing the detailed causal topography of “sociobuilt contexts.” We propose that the adequate representation of the interactions between social and built processes (...)
  10. Justice, injustice, and artificial intelligence: Lessons from political theory and philosophy. Lucia M. Rafanelli - 2022 - Big Data and Society 9 (1).
    Some recent uses of artificial intelligence for facial recognition, evaluating resumes, and sorting photographs by subject matter have revealed troubling disparities in performance or impact based on the demographic traits of subject populations. These disparities raise pressing questions about how using artificial intelligence can work to promote justice or entrench injustice. Political theorists and philosophers have developed nuanced vocabularies and theoretical frameworks for understanding and adjudicating disputes about what justice requires and what constitutes injustice. The interdisciplinary community committed (...)
  11. Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI). Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) (...)
  12. Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms. Lily Morse, Mike Horia M. Teodorescu, Yazeed Awwad & Gerald C. Kane - 2021 - Journal of Business Ethics 181 (4):1083-1095.
    Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness evaluations: distributive fairness and (...)
  13. Fair machine learning under partial compliance. Jessica Dai, Sina Fazelpour & Zachary Lipton - 2021 - In Jessica Dai, Sina Fazelpour & Zachary Lipton (eds.), Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. pp. 55–65.
    Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the (...)
  14. Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a (...)
  15. Enter the metrics: critical theory and organizational operationalization of AI ethics. Joris Krijger - 2022 - AI and Society 37 (4):1427-1437.
    As artificial intelligence (AI) deployment is growing exponentially, questions have been raised whether the developed AI ethics discourse is apt to address the currently pressing questions in the field. Building on critical theory, this article aims to expand the scope of AI ethics by arguing that in addition to ethical principles and design, the organizational dimension (i.e. the background assumptions and values influencing design processes) plays a pivotal role in the operationalization of ethics in AI development (...)
  16. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth (...)
  17. Machine learning’s limitations in avoiding automation of bias. Daniel Varona, Yadira Lizama-Mue & Juan Luis Suárez - 2021 - AI and Society 36 (1):197-203.
    The use of predictive systems has become wider with the development of related computational methods, and the evolution of the sciences in which these methods are applied (Solon and Selbst; Pedreschi et al.). The referred methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others, within the artificial intelligence domain. These techniques are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and (...)
  18. The Deception of Certainty: how Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A deliberative-relational Approach. Florian Funer - 2022 - Medicine, Health Care and Philosophy 25 (2):167-178.
    Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields to improve medical practice and the benefit of patients. Particularly, this should be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism exists primarily in the fact that the physician, as the person responsible for diagnosis, (...)
  19. On the Site of Predictive Justice. Seth Lazar & Jake Stone - forthcoming - Noûs.
    Optimism about our ability to enhance societal decision‐making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, as well as more obvious concerns about the downstream effects of ML‐based decision‐making, there can be moral grounds for the criticism of these predictions themselves. (...)
  20. Artificial Intelligent Systems and Ethical Agency. Reena Cheruvalath - 2023 - Journal of Human Values 29 (1):33-47.
    The article examines the challenges involved in the process of developing artificial ethical agents. The process involves the creators or designing professionals, the procedures to develop an ethical agent and the artificial systems. There are two possibilities available to create artificial ethical agents: (a) programming ethical guidance in the artificial Intelligence (AI)-equipped machines and/or (b) allowing AI-equipped machines to learn ethical decision-making by observing humans. However, it is difficult to fulfil these possibilities due to the subjective nature of ethical decision-making. (...)
  22. Ethical Considerations in the Application of Artificial Intelligence to Monitor Social Media for COVID-19 Data. Lidia Flores & Sean D. Young - 2022 - Minds and Machines 32 (4):759-768.
    The COVID-19 pandemic and its related policies (e.g., stay at home and social distancing orders) have increased people’s use of digital technology, such as social media. Researchers have, in turn, utilized artificial intelligence to analyze social media data for public health surveillance. For example, through machine learning and natural language processing, they have monitored social media data to examine public knowledge and behavior. This paper explores the ethical considerations of using artificial intelligence to monitor social media to understand (...)
  23. The ethics of algorithms from the perspective of the cultural history of consciousness: first look. Carlos Andres Salazar Martinez & Olga Lucia Quintero Montoya - 2023 - AI and Society 38 (2):763-775.
    Theories related to cognitive sciences, Human-in-the-loop Cyber-physical systems, data analysis for decision-making, and computational ethics make clear the need to create transdisciplinary learning, research, and application strategies to bring coherence to the paradigm of a truly human-oriented technology. Autonomous objects assume more responsibilities for individual and collective phenomena, they have gradually filtered into routines and require the incorporation of ethical practice into the professions related to the development, modeling, and design of algorithms. To make this possible, it is (...)
  24. Citizens’ data afterlives: Practices of dataset inclusion in machine learning for public welfare. Helene Friis Ratner & Nanna Bonde Thylstrup - forthcoming - AI and Society:1-11.
    Public sector adoption of AI techniques in welfare systems recasts historic national data as resource for machine learning. In this paper, we examine how the use of register data for development of predictive models produces new ‘afterlives’ for citizen data. First, we document a Danish research project’s practical efforts to develop an algorithmic decision-support model for social workers to classify children’s risk of maltreatment. Second, we outline the tensions emerging from project members’ negotiations about which datasets to include. (...)
  25. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine (...) tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process means also making transparent and motivating (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing how an explanation of the training processes of ML tools in medicine should look like.
  26. AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet (...)
  27. The Explanation Game: A Formal Framework for Interpretable Machine Learning. David S. Watson & Luciano Floridi - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 109-143.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players (...)
  28. AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’1, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) (...)
  29. Building machines that learn and think about morality. Christopher Burr & Geoff Keeling - 2018 - In Christopher Burr & Geoff Keeling (eds.), Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour.
    Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We (...)
  30. Machine learning and power relations. Jonne Maas - forthcoming - AI and Society.
    There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of said power dynamics according to Cristiano Castelfranchi’s conceptual (...)
  31. Can AI Weapons Make Ethical Decisions? Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will (...)
  32. Who and what gets recognized in digital agriculture: agriculture 4.0 at the intersectionality of (Dis)Ableism, labor, and recognition justice. [REVIEW] Michael Carolan - forthcoming - Agriculture and Human Values:1-16.
    This paper builds on prior critical scholarship on Agriculture 4.0—an umbrella term to reference the utilization of robotics and automation, AI, remote sensing, big data, and the like in agriculture—especially the literature focusing on issues relating to equity and social sustainability. Critical agrifood scholarship has spent considerable energy interrogating who gets what, how decisions get made, and who counts as a “stakeholder” in the context of decision making, questions relating to distributive justice, procedural justice, and representative (...)
  33. Practicing trustworthy machine learning: consistent, transparent, and fair AI pipelines. Yada Pruksachatkun - 2022 - Boston: O'Reilly. Edited by Matthew McAteer & Subhabrata Majumdar.
    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable. Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets (...)
  34. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  35. The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed (...)
    3 citations
  36. AI led ethical digital transformation: framework, research and managerial implications.Kumar Saurabh, Ridhi Arora, Neelam Rani, Debasisha Mishra & M. Ramkumar - 2022 - Journal of Information, Communication and Ethics in Society 20 (2):229-256.
    Purpose Digital transformation leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience and operational processes. Artificial intelligence plays a significant role in achieving DT. As DT is touching each sphere of humanity, AI led DT is raising many fundamental questions. These questions raise concerns for the systems deployed, how they should behave, what risks they carry, the monitoring and evaluation control we have in hand, etc. These issues call for the need (...)
    3 citations
  37. Manager Trustworthiness or Interactional Justice? Predicting Organizational Citizenship Behaviors.Dan S. Chiaburu & Audrey S. Lim - 2008 - Journal of Business Ethics 83 (3):453-467.
    Organizational citizenship behaviors (OCBs) are essential for effective organizational functioning. Decisions by employees to engage in these important discretionary behaviors are based on how they make sense of the organizational context. Using fairness heuristic theory, we tested two important OCB predictors: manager trustworthiness and interactional justice. In the process, we control for the effects of dispositional factors (propensity to trust) and for system-based organizational fairness (procedural and distributive justice). Results, based on surveys collected from (...)
    10 citations
  38. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In Atoosa Kasirzadeh & Andrew Smart (eds.), ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness (...)
    3 citations
  39. Knowledge representation and acquisition for ethical AI: challenges and opportunities.Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there (...)
  40. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted (...)
    12 citations
  41. Predictive Policing and the Ethics of Preemption.Daniel Susser - 2021 - In Ben Jones & Eduardo Mendieta (eds.), The Ethics of Policing: New Perspectives on Law Enforcement. New York: NYU Press.
    The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. (...)
    5 citations
  42. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids.Sabine Salloch & Andreas Eriksen - forthcoming - American Journal of Bioethics:1-12.
    Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to (...)
  43. Fairness: Theory & Practice of Distributive Justice.Nicholas Rescher - 2002 - Transaction.
    In theory and practice, the notion of fairness is far from simple. The principle is often elusive and subject to confusion, even in institutions of law, usage, and custom. In Fairness, Nicholas Rescher aims to liberate this concept from misunderstandings by showing how its definitive characteristics prevent it from being absorbed by such related conceptions as paternalistic benevolence, radical egalitarianism, and social harmonization. Rescher demonstrates that equality before the state is an instrument of justice, not of social utility (...)
    5 citations
  44. Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions.David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and based on what reasons, it is achieved. There are consistent technical efforts for making systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complement existing ones. Leaving aside technical, statistical, (...)
    2 citations
  45. Algorithmic legitimacy in clinical decision-making.Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for (...)
  46. Indigenous, feminine and technologist relational philosophies in the time of machine learning.Troy A. Richardson - 2023 - Ethics and Education 18 (1):6-22.
    Machine Learning (ML) and Artificial Intelligence (AI) are for many the defining features of the early twenty-first century. With such a provocation, this essay considers how one might understand the relational philosophies articulated by Indigenous learning scientists, Indigenous technologists and feminine philosophers of education as co-constitutive of an ensemble mediating or regulating an educative philosophy interfacing with ML/AI. In these mediations, differing vocabularies – kin, the one caring, cooperative – are recognized for their ethical commitments, yet (...)
  47. What's Wrong with Machine Bias.Clinton Castro - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt (...)
    11 citations
  48. Islamic Perspectives on Polygenic Testing and Selection of IVF Embryos (PGT-P) for Optimal Intelligence and Other Non–Disease-Related Socially Desirable Traits.A. H. B. Chin, Q. Al-Balas, M. F. Ahmad, N. Alsomali & M. Ghaly - forthcoming - Journal of Bioethical Inquiry:1-8.
    In recent years, the genetic testing and selection of IVF embryos, known as preimplantation genetic testing (PGT), has gained much traction in clinical assisted reproduction for preventing transmission of genetic defects. However, a more recent ethically and morally controversial development in PGT is its possible use in selecting IVF embryos for optimal intelligence quotient (IQ) and other non–disease-related socially desirable traits, such as tallness, fair complexion, athletic ability, and eye and hair colour, based on polygenic risk scores (PRS), in what (...)
    1 citation
  49. Live Like Nobody Is Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring by Anita Ho.Tina Nguyen - 2024 - International Journal of Feminist Approaches to Bioethics 17 (1):101-105.
    In lieu of an abstract, here is a brief excerpt of the content:Reviewed by:Live Like Nobody Is Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring by Anita HoTina Nguyen (bio)Live Like Nobody Is Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring by Anita Ho New York: Oxford University Press, 2023As the reach of artificial intelligence (AI)- and machine learning (ML)-enabled technologies continues to expand in the healthcare field, bioethicists have examined (...)
  50. Justice and Surgical Innovation: The Case of Robotic Prostatectomy.Katrina Hutchison, Jane Johnson & Drew Carter - 2016 - Bioethics 30 (7):536-546.
    Surgical innovation promises improvements in healthcare, but it also raises ethical issues including risks of harm to patients, conflicts of interest and increased injustice in access to health care. In this article, we focus on risks of injustice, and use a case study of robotic prostatectomy to identify features of surgical innovation that risk introducing or exacerbating injustices. Interpreting justice as encompassing matters of both efficiency and equity, we first examine questions relating to government decisions about whether to (...)
    3 citations
1 — 50 / 989