Results for 'Artificial intelligence, Machine learning, Transparency, Interpretability, Opacity, Decision-making, Explanation, Right to explanation'

999 found
  1. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to (...)
  2. Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions.David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and on what grounds that result is reached. There are consistent technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
    2 citations
  3. Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is being hindered. However, there are alternatives to prevent the (...)
  4. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - International Data Privacy Law 7 (2):76-99.
    Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that the GDPR will legally mandate a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a (...)
    63 citations
  5. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach.Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed (...)
    7 citations
  6. Making Artificial Intelligence Transparent: Fairness and the Problem of Proxy Variables.Richard Warner & Robert H. Sloan - 2021 - Criminal Justice Ethics 40 (1):23-39.
    AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if one (...)
    1 citation
  7. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms (...)
    29 citations
  8. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the (...)
    31 citations
  9. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to (...)
    27 citations
  10. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms (...)
    27 citations
  11. The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the (...)
    1 citation
  12. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress made on specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (whether the output is successful or not). This is due to the fact that the (...)
  13. Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability.Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation (...)
    69 citations
  14. Explaining Machine Learning Decisions.John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
    10 citations
  15. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model.Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications (e.g., intrusion detection systems (IDS)), most of these models are perceived as black boxes. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management, by allowing human experts to understand the underlying data evidence and causal reasoning. In IDS, the critical role of trust management is to understand the impact of malicious data so as to detect any intrusion in the system. (...)
    1 citation
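    A note on the decision-tree approach named in entry 15: decision trees are one of the few inherently interpretable model families, since their learned rules can be printed verbatim. Below is a minimal sketch of that idea, assuming scikit-learn and synthetic data rather than the authors' IDS dataset.

```python
# Hypothetical sketch of an interpretable decision-tree classifier of the kind
# the paper evaluates for intrusion detection. Synthetic data stands in for the
# authors' IDS dataset, which is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for network-traffic features.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the model human-readable.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# The learned rules can be printed verbatim -- this readability is the basis of
# the trust-management argument, in contrast to a black-box model.
print(export_text(clf, feature_names=[f"feat_{i}" for i in range(6)]))
```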
  16. The virtues of interpretable medical artificial intelligence.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    3 citations
  17. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI), both in health care and in academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have enabled rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular epistemic and ethical ones. In this paper the authors, with (...)
  18. Artificial intelligence, transparency, and public decision-making.Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily (...)
    30 citations
  19. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, (...)
    4 citations
  20. The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the (...)
    30 citations
  21. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus (...)
    42 citations
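    The "simplified models that approximate the true criteria used to make decisions" in entry 21 are commonly called surrogate models. A minimal sketch of a global surrogate follows, assuming scikit-learn and synthetic data; neither the models nor the data come from the paper.

```python
# Illustrative global surrogate: a simple, inspectable model trained to mimic a
# black-box model's outputs (an assumption-laden sketch, not the paper's method).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The opaque model whose decisions we want to approximate.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train an interpretable surrogate on the black box's predictions, not the labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. In the spirit of
# Box's maxim, the surrogate is wrong (fidelity < 1) but may still be useful.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity to black box: {fidelity:.2f}")
```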
  22. Machine Decisions and Human Consequences.Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, (...)
    3 citations
  23. Using machine learning to predict decisions of the European Court of Human Rights.Masha Medvedeva, Michel Vols & Martijn Wieling - 2020 - Artificial Intelligence and Law 28 (2):237-266.
    When courts started publishing judgements, big data analysis within the legal domain became possible. By taking data from the European Court of Human Rights as an example, we investigate how natural language processing tools can be used to analyse texts of the court proceedings in order to automatically predict judicial decisions. With an average accuracy of 75% in predicting the violation of 9 articles of the European Convention on Human Rights our approach highlights the potential of machine learning approaches (...)
    17 citations
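    As a rough illustration of the judgment-prediction pipeline in entry 23: the paper analyzes ECtHR case texts with natural language processing; the toy documents, labels, and model choice below are placeholders, not the authors' setup.

```python
# Minimal text-classification sketch for judgment prediction. The two toy
# documents and their labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "the applicant complained of inhuman treatment in detention",
    "the court found the search of the home lawful and proportionate",
]
labels = [1, 0]  # 1 = violation found, 0 = no violation (illustrative)

# Word n-grams weighted by TF-IDF feeding a linear classifier is a common
# baseline for predicting case outcomes from proceedings text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(docs, labels)
print(model.predict(["the detention conditions amounted to degrading treatment"]))
```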
  24. Using machine learning to predict decisions of the European Court of Human Rights.Masha Medvedeva, Michel Vols & Martijn Wieling - 2020 - Artificial Intelligence and Law 28 (2):237-266.
    When courts started publishing judgements, big data analysis within the legal domain became possible. By taking data from the European Court of Human Rights as an example, we investigate how natural language processing tools can be used to analyse texts of the court proceedings in order to automatically predict judicial decisions. With an average accuracy of 75% in predicting the violation of 9 articles of the European Convention on Human Rights our approach highlights the potential of machine learning approaches (...)
    16 citations
  25. Algorithmic decision-making employing profiling: will trade secrecy protection render the right to explanation toothless?Paul B. de Laat - 2022 - Ethics and Information Technology 24 (2).
    Algorithmic decision-making based on profiling may significantly affect people’s destinies. As a rule, however, explanations for such decisions are lacking. What are the chances for a “right to explanation” to be realized soon? After an exploration of the regulatory efforts that are currently pushing for such a right, it is concluded that, at the moment, the GDPR stands out as the main force to be reckoned with. In cases of profiling, data subjects are granted the right to receive meaningful information about the functionality of the system in use; for fully automated profiling decisions even an explanation has to be given. However, the trade secrets and intellectual property rights (IPRs) involved must be respected as well. These conflicting rights must be balanced against each other; what will be the outcome? Looking back to 1995, when a similar kind of balancing had been decreed in Europe concerning the right of access (DPD), Wachter et al. (2017) find that according to judicial opinion only generalities of the algorithm had to be disclosed, not specific details. This hardly augurs well for a future right of access, let alone to explanation. Thereupon the landscape of IPRs for machine learning (ML) is analysed. Spurred by new USPTO guidelines that clarify when inventions are eligible to be patented, the number of patent applications in the US related to ML in general, and to “predictive analytics” in particular, has soared since 2010—and Europe has followed. I conjecture that in such a climate of intensified protection of intellectual property, companies may legitimately claim that the more their application combines several ML assets that, in addition, are useful in multiple sectors, the more value is at stake when confronted with a call for explanation by data subjects. Consequently, the right to explanation may be severely crippled.
    1 citation
  26. Explainable machine learning practices: opening another black box for reliable medical AI.Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools (...)
    5 citations
  27. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of (...)
    53 citations
  28. Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine.Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems (...)
    1 citation
  29. A machine learning approach to detecting fraudulent job types.Marcel Naudé, Kolawole John Adebayo & Rohan Nanda - 2023 - AI and Society 38 (2):1013-1024.
    Job seekers find themselves increasingly duped and misled by fraudulent job advertisements, posing a threat to their privacy, security and well-being. There is a clear need for solutions that can protect innocent job seekers. Existing approaches to detecting fraudulent jobs do not scale well, function like black boxes, and lack interpretability, which is essential to guide applicants’ decision-making. Moreover, commonly used lexical features may be insufficient, as the representation does not capture the contextual semantics of the underlying document. Hence, this (...)
  30. Criminal Justice and Artificial Intelligence: How Should we Assess the Performance of Sentencing Algorithms?Jesper Ryberg - 2024 - Philosophy and Technology 37 (1):1-15.
    Artificial intelligence is increasingly permeating many types of high-stake societal decision-making such as the work at the criminal courts. Various types of algorithmic tools have already been introduced into sentencing. This article concerns the use of algorithms designed to deliver sentence recommendations. More precisely, it is considered how one should determine whether one type of sentencing algorithm (e.g., a model based on machine learning) would be ethically preferable to another type of sentencing algorithm (e.g., a model based (...)
    2 citations
  31. Sentencing and Artificial Intelligence.Jesper Ryberg & Julian V. Roberts - 2022 - Oxford: OUP.
    Is it morally acceptable to use artificial intelligence (AI) in the form of computer-driven algorithms in the determination of sentences on those who have broken the law? If so, how should such algorithms be used? This book is the first collective work devoted exclusively to the ethical and penal theoretical considerations of the use of AI at sentencing. It deals with a wide range of highly pertinent issues, such as the following: Should algorithmic-based decision-making be transparent? If so, (...)
  32. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of (...)
    9 citations
  33. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can (...)
    8 citations
  34. Artificial intelligence, public control, and supply of a vital commodity like COVID-19 vaccine.Vladimir Tsyganov - 2023 - AI and Society 38 (6):2619-2628.
    The article examines the problem of ensuring the political stability of a democratic social system under a shortage of a vital commodity (such as a vaccine against COVID-19). In such a system, members of society (citizens) assess the authorities. Thus, actions by the authorities to increase the supply of this commodity can contribute to citizens' approval and hence to political stability. However, this supply is influenced by random factors, the actions of competitors, etc. Therefore, citizens do not have sufficient information about all the (...)
    1 citation
  35. Explanation and the Right to Explanation.Elanor Taylor - 2023 - Journal of the American Philosophical Association 1:1-16.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as (...)
    1 citation
  36. Scientific Exploration and Explainable Artificial Intelligence.Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for (...)
    10 citations
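    One concrete example of the post-hoc analytic techniques entry 36 refers to is permutation feature importance. A minimal sketch, assuming scikit-learn and synthetic data in place of any real scientific dataset:

```python
# Permutation importance: shuffle one feature at a time and measure the drop in
# model score, revealing which inputs an opaque model actually relies on --
# candidate starting points for scientific exploration.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=500, n_features=5, n_informative=2,
                       random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```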
  37. The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective.Dorine Eva van Norren - 2023 - Journal of Information, Communication and Ethics in Society 21 (1):112-128.
    Purpose: This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is (...)
  38. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both (...)
    7 citations
  39. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns.Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss (...)
    14 citations
  40. Big data and algorithmic decision-making.Paul B. de Laat - 2017 - ACM SIGCAS Computers and Society 47 (3):39-53.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Can transparency contribute to restoring accountability for such systems? Several objections are examined: the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are inherently opaque. It is concluded that (...)
    1 citation
  41. Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence.Athina Sachoulidou - forthcoming - Artificial Intelligence and Law:1-54.
    This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal (...)
  42. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. (...)
    3 citations
  43. Algorithmic and human decision making: for a double standard of transparency.Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We (...)
    16 citations
  44. Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges.Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In (...)
    48 citations
  45. The Explanation Game: A Formal Framework for Interpretable Machine Learning.David S. Watson & Luciano Floridi - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 109-143.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players (...)
    10 citations
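    The three-dimensional Pareto frontier in entry 45 can be made concrete with a small sketch. The candidate explanations and scores below are invented for illustration and are not drawn from the paper's formalism.

```python
# Toy Pareto-frontier computation over candidate explanations scored on the
# three dimensions the paper names: accuracy, simplicity, and relevance.
from typing import NamedTuple

class Explanation(NamedTuple):
    name: str
    accuracy: float
    simplicity: float
    relevance: float

def dominates(a: Explanation, b: Explanation) -> bool:
    """a dominates b if a is at least as good everywhere and better somewhere."""
    at_least = (a.accuracy >= b.accuracy and a.simplicity >= b.simplicity
                and a.relevance >= b.relevance)
    better = (a.accuracy > b.accuracy or a.simplicity > b.simplicity
              or a.relevance > b.relevance)
    return at_least and better

candidates = [
    Explanation("full decision rules", 0.95, 0.20, 0.60),
    Explanation("top-3 feature attribution", 0.70, 0.90, 0.80),
    Explanation("single counterfactual", 0.60, 0.95, 0.90),
    Explanation("random rationale", 0.30, 0.80, 0.20),  # dominated
]

# The frontier holds every explanation no other candidate dominates; players in
# the explanation game would negotiate trade-offs along it.
frontier = [c for c in candidates if not any(dominates(o, c) for o in candidates)]
print([c.name for c in frontier])
```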
  46. The ethical use of artificial intelligence in human resource management: a decision-making framework.Sarah Bankins - 2021 - Ethics and Information Technology 23 (4):841-854.
    Artificial intelligence is increasingly inputting into various human resource management functions, such as sourcing job applicants and selecting staff, allocating work, and offering personalized career coaching. While the use of AI for such tasks can offer many benefits, evidence suggests that without careful and deliberate implementation its use also has the potential to generate significant harms. This raises several ethical concerns regarding the appropriateness of AI deployment to domains such as HRM, which directly deal with managing sometimes sensitive aspects (...)
    2 citations
  47. Humanistic interpretation and machine learning.Juho Pääkkönen & Petri Ylikoski - 2021 - Synthese 199:1461–1497.
    This paper investigates how unsupervised machine learning methods might make hermeneutic interpretive text analysis more objective in the social sciences. Through a close examination of the uses of topic modeling—a popular unsupervised approach in the social sciences—it argues that the primary way in which unsupervised learning supports interpretation is by allowing interpreters to discover unanticipated information in larger and more diverse corpora and by improving the transparency of the interpretive process. This view highlights that unsupervised modeling does not eliminate (...)
    2 citations
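    Topic modeling, the unsupervised approach examined in entry 47, is commonly run via Latent Dirichlet Allocation. A minimal sketch, assuming scikit-learn and toy documents in place of a social-science corpus:

```python
# LDA on toy documents: the top words per discovered topic are the kind of
# unanticipated starting points for interpretation the paper discusses.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "court ruling judge appeal legal rights",
    "model training data neural network accuracy",
    "judge court legal verdict appeal",
    "network layers training data gradient",
]

# Bag-of-words counts are the standard input to LDA.
vec = CountVectorizer()
counts = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the four highest-weight words for each topic.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```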
  48. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  49. Algorithmic decision-making: the right to explanation and the significance of stakes.Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - forthcoming - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed (...)
  50. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) (...)
    1 citation
1 — 50 / 999