Results for 'interpretable AI'

980 found
  1. Toleration and Justice in the Laozi: Engaging with Tao Jiang's Origins of Moral-Political Philosophy in Early China.Ai Yuan - 2023 - Philosophy East and West 73 (2):466-475.
    In lieu of an abstract, here is a brief excerpt of the content: Toleration and Justice in the Laozi: Engaging with Tao Jiang's Origins of Moral-Political Philosophy in Early China. Ai Yuan (bio). Introduction. This review article engages with Tao Jiang's ground-breaking monograph on the Origins of Moral-Political Philosophy in Early China, with particular focus on the articulation of toleration and justice in the Laozi (otherwise called the Daodejing). Jiang discusses a naturalistic turn and the re-alignment of values in the Laozi, resulting in a naturalization (...)
  2. What Science Fiction Can Demonstrate About Novelty in the Context of Discovery and Scientific Creativity.Clarissa Ai Ling Lee - 2019 - Foundations of Science 24 (4):705-725.
    Four instances of how science fiction contributes to the elucidation of novelty in the context of discovery are considered by extending existing discussions on temporal and use-novelty. In the first instance, science fiction takes an already well-known theory and produces its own re-interpretation; in the second instance, the scientific account is usually straightforward and whatever novelty that may occur would be more along the lines of how the science is deployed to extra-scientific matters; in the third instance, science fiction takes (...)
  3. Matched design for marginal causal effect on restricted mean survival time in observational studies.Bo Lu, Ai Ni & Zihan Lin - 2023 - Journal of Causal Inference 11 (1).
    Investigating the causal relationship between exposure and time-to-event outcome is an important topic in biomedical research. Previous literature has discussed the potential issues of using hazard ratio (HR) as the marginal causal effect measure due to noncollapsibility. In this article, we advocate using restricted mean survival time (RMST) difference as a marginal causal effect measure, which is collapsible and has a simple interpretation as the difference of area under survival curves over a certain time horizon. To address both measured and (...)
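    The "difference of area under survival curves" in the abstract above has a compact standard form. As a sketch in standard notation (not drawn from the paper itself), with S_a(t) the survival function under exposure level a and \(\tau\) the chosen time horizon:

    \[ \mathrm{RMST}_a(\tau) = \int_0^{\tau} S_a(t)\,dt, \qquad \Delta(\tau) = \mathrm{RMST}_1(\tau) - \mathrm{RMST}_0(\tau) = \int_0^{\tau} \left[ S_1(t) - S_0(t) \right] dt. \]

    Unlike the hazard ratio, this contrast is collapsible: RMST is the expectation of survival time truncated at \(\tau\), and a difference of expectations averages coherently over subgroups.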
  4. Interpreting AI-Generated Art: Arthur Danto’s Perspective on Intention, Authorship, and Creative Traditions in the Age of Artificial Intelligence.Raquel Cascales - 2023 - Polish Journal of Aesthetics 71 (4):17-29.
    Arthur C. Danto did not live to witness the proliferation of AI in artistic creation. However, his philosophy of art offers key ideas about art that can provide an interesting perspective on artwork generated by artificial intelligence (AI). In this article, I analyze how his ideas about contemporary art, intention, interpretation, and authorship could be applied to the ongoing debate about AI and artistic creation. At the same time, it is also interesting to consider whether the incorporation of AI into (...)
  5. Interpreting ordinary uses of psychological and moral terms in the AI domain.Hyungrae Noh - 2023 - Synthese 201 (6):1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will be discussed: the technical (...)
    1 citation
  6. «J'ai un corps» Les enjeux missionnaires de la traduction et de l'interprétation chez Maurice Leenhardt.Michel Naepels - 2002 - Philosophia Scientiae 6 (2):15-30.
  7. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
  8. The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics.Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. It is essential to overcome the challenges of sharing an understanding of the details of the various structures of diverse AI systems, to facilitate effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  9. Artificial explanations: the epistemological interpretation of explanation in AI.Andrés Páez - 2009 - Synthese 170 (1):131-146.
    In this paper I critically examine the notion of explanation used in Artificial Intelligence in general, and in the theory of belief revision in particular. I focus on two of the best known accounts in the literature: Pagnucco’s abductive expansion functions and Gärdenfors’ counterfactual analysis. I argue that both accounts are at odds with the way in which this notion has historically been understood in philosophy. They are also at odds with the explanatory strategies used in actual scientific practice. At (...)
    5 citations
  10. Making AI Intelligible: Philosophical Foundations.Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
    7 citations
  11. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind.Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
    8 citations
  12. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  13. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    42 citations
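    The "simplified models that approximate the true criteria used to make decisions" mentioned in the entry above are typically built as post-hoc surrogates. Below is a minimal sketch of that idea, assuming scikit-learn and purely illustrative model choices; it is not Mittelstadt's own method (the paper is a critique of such models):

    # Sketch: approximate a black-box classifier with a small, readable
    # surrogate trained on the black box's own predictions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Train the surrogate to mimic the black box, not the ground truth:
    # it approximates the criteria the complex system actually applies.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # "Fidelity": how often the simple model agrees with the complex one.
    fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
    print(f"surrogate fidelity to the black box: {fidelity:.2f}")
    print(export_text(surrogate))  # human-readable decision rules

    Box's maxim applies to the surrogate as well: it is wrong wherever it disagrees with the black box, and the fidelity score quantifies by how much.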
  14. Using AI to detect panic buying and improve products distribution amid pandemic.Yossiri Adulyasak, Omar Benomar, Ahmed Chaouachi, Maxime C. Cohen & Warut Khern-Am-Nuai - forthcoming - AI and Society:1-30.
    The COVID-19 pandemic has triggered panic-buying behavior around the globe. As a result, many essential supplies were consistently out-of-stock at common point-of-sale locations. Even though most retailers were aware of this problem, they were caught off guard and are still lacking the technical capabilities to address this issue. The primary objective of this paper is to develop a framework that can systematically alleviate this issue by leveraging AI models and techniques. We exploit both internal and external data sources and show (...)
    2 citations
  15. Understanding, Idealization, and Explainable AI.Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
    8 citations
  16. Excavating AI: the politics of images in machine learning training sets.Kate Crawford & Trevor Paglen - forthcoming - AI and Society:1-12.
    By looking at the politics of classification within machine learning systems, this article demonstrates why the automated interpretation of images is an inherently social and political project. We begin by asking what work images do in computer vision systems, and what is meant by the claim that computers can “recognize” an image? Next, we look at the method for introducing images into computer systems and look at how taxonomies order the foundational concepts that will determine how a system interprets the (...)
    13 citations
  17. Indexical AI.Leif Weatherby & Brian Justie - 2022 - Critical Inquiry 48 (2):381-415.
    This article argues that the algorithms known as neural nets underlie a new form of artificial intelligence that we call indexical AI. Contrasting with the once dominant symbolic AI, large-scale learning systems have become a semiotic infrastructure underlying global capitalism. Their achievements are based on a digital version of the sign-function index, which points rather than describes. As these algorithms spread to parse the increasingly heavy data volumes on platforms, it becomes harder to remain skeptical of their results. We call (...)
    2 citations
  18. Eros Roberto Grau: Pourquoi j'ai peur des juges. L'interprétation du droit et les principes juridiques: Avant-propos d'Antoine Jeammaud, Paris, Kimé, 2014, 199 pp.Jérémy Mercier - 2015 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 28 (4):879-885.
    Do judges create law? Eros Roberto Grau, a lawyer, former professor at the prestigious Faculty of Law of the University of São Paulo, and a member of the Brazilian Supreme Court from 2004 to 2010, could no doubt have written an inaccessible book on this question, given how much his career, his strong commitments, and his prolific reflections would entitle him to. His biography is available in Brazilian Portuguese, in particular on the website of the Brazilian Supreme Court and on his personal website. But (...)
  19. Why AI will never rule the world (interview).Luke Dormehl, Jobst Landgrebe & Barry Smith - 2022 - Digital Trends.
    Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI — specifically of the machine learning type that's able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. (...)
  20. AI recommendations' impact on individual and social practices of Generation Z on social media: a comparative analysis between Estonia, Italy, and the Netherlands.Daria Arkhipova & Marijn Janssen - forthcoming - Semiotica.
    Social media (SM) influence young adults’ communication practices. Artificial Intelligence (AI) is increasingly used for making recommendations on SM. Yet, its effects on different generations of SM users are unknown. SM can use AI recommendations to sort texts and prioritize them, shaping users’ online and offline experiences. Current literature primarily addresses technological or human-user perspectives, overlooking cognitive perspectives. This research aims to propose methods for mapping users’ interactions with AI recommendations (AiRS) and analyzes how embodied interactions mediated by a digital (...)
  21. Is AI case that is explainable, intelligible or hopeless?Łukasz Mścisławski - 2022 - Zagadnienia Filozoficzne W Nauce 73:357-369.
    This article is a review of the book _Making AI Intelligible. Philosophical Foundations_, written by Herman Cappelen and Josh Dever, and published in 2021 by Oxford University Press. The authors of the reviewed book address the difficult issue of interpreting the results provided by AI systems and the links between human-specific content handling and the internal mechanisms of these systems. Considering the potential usefulness of various frameworks developed in philosophy to solve the problem, (...)
  22. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    26 citations
  23. Should AI allocate livers for transplant? Public attitudes and ethical considerations.Max Drezga-Kleiminger, Joanna Demaree-Cotton, Julian Koplin, Julian Savulescu & Dominic Wilkinson - 2023 - BMC Medical Ethics 24 (1):1-11.
    Background: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation, however the ethics of this remains unexplored and the view of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. Methods: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses (...)
  24. Contestable AI by Design: Towards a Framework.Kars Alfrink, Ianus Keller, Gerd Kortuem & Neelke Doorn - 2023 - Minds and Machines 33 (4):613-639.
    As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is (...)
    1 citation
  25. A Study on the Multiple Models of the Interpretation of 'Li(理)' with the Advent of the AI Era - Focusing on Toegye -. 김승영 - 2021 - Journal of the Daedong Philosophical Association 94:63-84.
    One term that has become familiar to us, and one of the topics attracting the most attention, is the fourth industrial revolution (4IR). Among the features of this fourth industrial revolution is the emergence of artificial intelligence (AI). This paper aims to interpret artificial intelligence in connection with 'li' (理), the core philosophy of Yi Hwang (Toegye), the synthesizer of Joseon-dynasty Neo-Confucianism. The key idea of artificial intelligence is the creation of computer programs or machines capable of behavior that people would accept as intelligent if a human performed it. Yi Hwang critically accepted Zhu Xi's philosophy while also inheriting and developing it. Its core theses are li-dong (理動), li-bal (理發), and li-do (理到). In li-dong, Yi Hwang (...)
  26. The Debate on the Wu-Chi and T'ai-Chi in the early period of Chosun Dynasty: An Educational Interpretation.Jae-Mun Park - 2007 - Journal of Moral Education 18 (2):1.
  27. Apprehending AI moral purpose in practical wisdom.Mark Graves - 2022 - AI and Society:1-14.
    Practical wisdom enables moral decision-making and action by aligning one’s apprehension of proximate goods with a distal, socially embedded interpretation of a more ultimate Good. A focus on purpose within the overall process mutually informs human moral psychology and moral AI development in their examinations of practical wisdom. AI practical wisdom could ground an AI system’s apprehension of reality in a sociotechnical moral process committed to orienting AI development and action in light of a pluralistic, diverse interpretation of that Good. (...)
  28. Emergent Models for Moral AI Spirituality.Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  29. Superhuman AI.Gabriele Gramelsberger - 2023 - Philosophisches Jahrbuch 130 (2):81-91.
    The modern program of operationalizing the mind, from Descartes to Kant, in the form of the externalization of human mind functions in logic and calculations, and its continuation in the program of formalization from the middle of the 19th century with Boole, Peirce and Turing, have led to the form of rationality that has become machine rationality: the digital computer as a logical-mathematical machine and algorithms as machine-rational interpretations of human thinking in the form of problem solving and decision making. (...)
  30. From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain.Christopher Thomas, Alexander Blanchard & Mariarosaria Taddeo - 2024 - Philosophy and Technology 37 (1):1-21.
    This article provides a methodology for the interpretation of AI ethics principles to specify ethical criteria for the development and deployment of AI systems in high-risk domains. The methodology consists of a three-step process deployed by an independent, multi-stakeholder ethics board to: (1) identify the appropriate level of abstraction for modelling the AI lifecycle; (2) interpret prescribed principles to extract specific requirements to be met at each step of the AI lifecycle; and (3) define the criteria to inform purpose- and (...)
  31. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    5 citations
  32. Open AI meets open notes: surveillance capitalism, patient privacy and online record access.Charlotte Blease - 2024 - Journal of Medical Ethics 50 (2):84-89.
    Patient online record access (ORA) is spreading worldwide, and in some countries, including Sweden, and the USA, access is advanced with patients obtaining rapid access to their full records. In the UK context, from 31 October 2023 as part of the new NHS England general practitioner (GP) contract it will be mandatory for GPs to offer ORA to patients aged 16 and older. Patients report many benefits from reading their clinical records including feeling more empowered, better understanding and remembering their (...)
    1 citation
  33. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
    5 citations
  34. Ai Ssu-ch'i: The Apostle of Chinese Communism.Ignatius J. H. Ts'ao - 1972 - Studies in East European Thought 12 (1):2-36.
    Ai Ssu-ch'i is a little-known but very important figure in the introduction of Marxism-Leninism into China. This first article provides a brief biography of Ai Ssu-ch'i as well as a detailed account of his activities as teacher, author, and propagandist. Among his other services to the cause of Marxism-Leninism in China, one has to stress Ai Ssu-ch'i's systematic opposition to Yeh Ch'ing and to the non-Communist interpretation of Dr. Sun Yat-sen's Three Principles of the People. (cf. SST 10 (1970), 138–166.)
  35. Interpretability and Unification.Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.
    In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. It is claimed that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, Prasetya argues that unificationist explanations are defeasible to increasing complexity, and thus, we may not be able to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our (...)
    2 citations
  36. AI Development and the 'Fuzzy Logic' of Chinese Cyber Security and Data Laws.Max Parasol - 2021 - Cambridge University Press.
    The book examines the extent to which Chinese cyber and network security laws and policies act as a constraint on the emergence of Chinese entrepreneurialism and innovation. Specifically, how the contradictions and tensions between data localisation laws affect innovation in artificial intelligence. The book surveys the globalised R&D networks, and how the increasing use of open-source platforms by leading Chinese AI firms during 2017–2020, exacerbated the apparent contradiction between Network Sovereignty and Chinese innovation. The drafting of the Cyber Security Law (...)
  37. An AI model of case-based legal argument from a jurisprudential viewpoint.Kevin D. Ashley - 2002 - Artificial Intelligence and Law 10 (1-3):163-218.
    This article describes recent jurisprudential accounts of analogical legal reasoning and compares them in detail to the computational model of case-based legal argument in CATO. The jurisprudential models provide a theory of relevance based on low-level legal principles generated in a process of case-comparing reflective adjustment. The jurisprudential critique focuses on the problems of assigning weights to competing principles and dealing with erroneously decided precedents. CATO, a computerized instructional environment, employs Artificial Intelligence techniques to teach law students how to make basic legal arguments with cases. The computational model helps students test legal hypotheses against a database of (...)
    4 citations
  38. What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. (...)
    16 citations
  39. Embodied AI, Creation, and Cog.Anne Foerst - 1998 - Zygon 33 (3):455-461.
    This is a reply to comments on my paper "Cog, a Humanoid Robot, and the Questions of the Image of God"; one was written by Mary Gerhart and Allan Melvin Russell, and another one by Helmut Reich. I will start with the suggested analogy between the relationship between God and us and the one between us and the humanoid robot Cog, and will show why this analogy is not helpful for the dialogue between theology and artificial intelligence (AI). Such a (...)
    4 citations
  40. The Thick Machine: Anthropological AI between explanation and explication.Mathieu Jacomy, Asger Gehrt Olesen & Anders Kristian Munk - 2022 - Big Data and Society 9 (1).
    According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a neural network (...)
    2 citations
  41. Transparent AI: reliabilist and proud.Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al. argue in 'Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI' that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach (...)
  42. Ai confini dell'anima: I greci e la follia.Cecilia Josefina Perczyk - 2012 - Argos (Universidad Simón Bolívar) 35 (2):104-106.
    This article addresses the connotations and foundations of the paraphrase cum canere vellem in Serv. Ecl. 6. 3. Analysis of the meaning of the verb volo in this context, and a comparison of the passage with Serv. Ecl. 6. 5, reveal that Servius interprets the phrase cum canerem reges et proelia as a reference to an early effort by Virgil to compose epic poetry, which he soon abandoned. This interpretation is conditioned by the idea that the chronological sequence Eclogues - (...)
  43. The virtues of interpretable medical artificial intelligence.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    3 citations
  44. The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability.Zsófia Tóth, Robert Caruana, Thorsten Gruber & Claudia Loebbecke - 2022 - Journal of Business Ethics 178 (4):895-916.
    Business, management, and business ethics literature pay little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, and in the military, such as Lethal Autonomous Weapon Systems. However, there is a scarcity of in-depth theoretical, methodological, or empirical studies that address these ethical issues, for instance, the impact of morality and where accountability resides in AI robots’ use. To address this dearth, this study offers a (...)
    3 citations
  45. Explanations in AI as Claims of Tacit Knowledge.Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
    As AI systems become increasingly complex it may become unclear, even to the designer of a system, why exactly a system does what it does. This leads to a lack of trust in AI systems. To solve this, the field of explainable AI has been working on ways to produce explanations of these systems’ behaviors. Many methods in explainable AI, such as LIME, offer only a statistical argument for the validity of their explanations. However, some methods instead study the internal (...)
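    The entry above notes that methods such as LIME offer "only a statistical argument" for their explanations. A minimal sketch of what such a method does may make the contrast concrete; the black box, kernel width, and models here are illustrative assumptions, not the method of any cited paper:

    # Sketch of a LIME-style local explanation: perturb one input, query
    # the black box, and fit a proximity-weighted linear model whose
    # coefficients serve as local feature attributions.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    def black_box(X):  # stand-in for an opaque model's probability output
        return 1 / (1 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1])))

    x0 = np.array([0.5, -0.2, 0.1])                   # instance to explain
    Z = x0 + rng.normal(scale=0.3, size=(500, 3))     # perturbed neighbours
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)  # proximity kernel

    local = Ridge(alpha=1.0).fit(Z, black_box(Z), sample_weight=w)
    print("local feature attributions:", local.coef_)

    The attributions are justified only by how well the weighted linear fit tracks the black box near x0, which is the statistical argument the entry contrasts with claims about the system's internal (tacit) knowledge.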
  46. How AI can be surprisingly dangerous for the philosophy of mathematics— and of science.Walter Carnielli - 2021 - Circumscribere: International Journal for the History of Science 27:1-12.
    In addition to the obvious social and ethical risks, there are philosophical hazards behind artificial intelligence and machine learning. I try to raise here some critical points that might counteract some naive optimism, and warn against the possibility that synthetic intelligence may surreptitiously influence the agenda of science before we can realize it.
  47. The Thailand national AI ethics guideline: an analysis.Soraj Hongladarom - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):480-491.
    Purpose The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation has made it a good case to see how a policy document such as a guideline in AI ethics becomes part of the transformations. Looking at how the two are interrelated will help illuminate the political and cultural dynamics of Thailand as well as how governance of ethics itself is conceptualized. Design/methodology/approach The author looks at the (...)
    2 citations
  48. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction?Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  49. Explainable machine learning practices: opening another black box for reliable medical AI.Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be (...)
    5 citations
  50. Risposte ai miei critici.Maurizio Ferraris - 2012 - Rivista di Estetica 50:391-409.
    In this paper I discuss the commentaries and criticisms that my friends and colleagues have made of the theory of social objects that I put forward in my book Documentalità. Perché è necessario lasciar tracce. In particular, I have articulated my responses along the following main lines: realism; truth (and falsity); ontology vs. epistemology and facts vs. interpretations; textualism and writing; politics; intentionality; consciousness.
1 — 50 / 980