Philosophy of Artificial Intelligence

Edited by Eric Dietrich (State University of New York at Binghamton)
Assistant editor: Michelle Thomas (University of Western Ontario)
About this topic
Summary

The philosophy of artificial intelligence is a collection of issues primarily concerned with whether AI is possible -- with whether it is possible to build an intelligent thinking machine. Also of concern is whether humans and other animals are best thought of as machines (computational robots, say) themselves. The most important of the "whether-possible" problems lie at the intersection of theories of the semantic contents of thought and the nature of computation. A second suite of problems surrounds the nature of rationality. A third suite revolves around the seemingly “transcendent” reasoning powers of the human mind; these problems derive from Kurt Gödel's famous Incompleteness Theorem. A fourth collection of problems concerns the architecture of an intelligent machine: should a thinking computer use discrete or continuous modes of computing and representing, is having a body necessary, and is being conscious necessary? This takes us to the final set of questions. Can a computer be conscious? Can a computer have a moral sense? Would we have duties to thinking computers, to robots? For example, is it moral for humans even to attempt to build an intelligent machine? If we did build such a machine, would turning it off be the equivalent of murder? If we had a race of such machines, would it be immoral to force them to work for us?

Key works Probably the most important attack on whether AI is possible is John Searle's famous Chinese Room Argument: Searle 1980. This attack focuses on the semantic aspects (mental semantics) of thoughts, thinking, and computing. For some replies to this argument, see the same 1980 journal issue as Searle's original paper. For the problem of the nature of rationality, see Pylyshyn 1987. An especially strong attack on AI from this angle is Jerry Fodor's work on the frame problem: Fodor 1987. On the frame problem in general, see McCarthy & Hayes 1969. For some replies to Fodor and advances on the frame problem, see Ford & Pylyshyn 1996. For the transcendent-reasoning issue, a central and important paper is Hilary Putnam's Putnam 1960, arguably the source of the computational turn in 1960s-70s philosophy of mind. For architecture-of-mind issues, see, for starters, M. Spivey's The Continuity of Mind (Oxford), which argues against the notion of discrete representations; see also Gelder & Port 1995. For an argument for discrete representations, see Dietrich & Markman 2003. For an argument that the mind's boundaries do not end at the body's boundaries, see Clark & Chalmers 1998. For a statement of and argument for computationalism -- the thesis that the mind is a kind of computer -- see Shimon Edelman's excellent book Edelman 2008, as well as Chapter 9 of Chalmers 1996.
Introductions Chinese Room Argument: Searle 1980. Frame problem: Fodor 1987. Computationalism and Gödelian-style refutation: Putnam 1960. Architecture: M. Spivey's The Continuity of Mind (Oxford) and Shimon Edelman's Edelman 2008. Ethical issues: Anderson & Anderson 2011 and Müller 2020. Conscious computers: Chalmers 2011.

Contents
Material to categorize
  1. Development of a scale for capturing psychological aspects of physical–digital integration: relationships with psychosocial functioning and facial emotion recognition.Daiana Colledani, Pasquale Anselmi & Egidio Robusto - forthcoming - AI and Society:1-13.
    The present work aims at developing a scale for the assessment of a construct that we called “physical–digital integration”, which refers to the tendency of some individuals not to perceive a clear differentiation between feelings and perceptions that pertain to the physical or digital environment. The construct is articulated in four facets: identity, social relationships, time–space perception, and sensory perception. Data from a sample of 369 participants were collected to evaluate factor structure (unidimensional model, bifactor model, correlated four-factor model), internal (...)
  2. Computers as Interactive Machines: Can We Build an Explanatory Abstraction?Alice Martin, Mathieu Magnaudet & Stéphane Conversy - forthcoming - Minds and Machines:1-30.
    In this paper, we address the question of what current computers are from the point of view of human-computer interaction. In the early days of computing, the Turing machine (TM) has been the cornerstone of the understanding of computers. The TM defines what can be computed and how computation can be carried out. However, in the last decades, computers have evolved and increasingly become interactive systems, reacting in real-time to external events in an ongoing loop. We argue that the TM (...)
  3. Autonomous Force Beyond Armed Conflict.Alexander Blanchard - forthcoming - Minds and Machines:1-10.
    Proposals by the San Francisco Police Department (SFPD) to use bomb disposal robots for deadly force against humans have met with widespread condemnation. Media coverage of the furore has tended, incorrectly, to conflate these robots with autonomous weapon systems (AWS), the AI-based weapons used in armed conflict. These two types of systems should be treated as distinct since they have different sets of social, ethical, and legal implications. However, the conflation does raise a pressing question: what _if_ the SFPD had (...)
  4. Technological grandparents: how communication technologies can improve the well-being of the elderly?Laura Corti, Maria Rosaria Brizi, Maddalena Pennacchini & Marta Bertolaso - forthcoming - AI and Society:1-8.
    The ageing of the population is one of the most significant social transformations that the twenty-first century is showcasing and a challenge that impacts society at large. The elderly, as much as everybody else, are confronted with continuous transformations that are induced by technology, although they seldom benefit from the opportunities that technology entails. The digital divide amongst various segments of the population is often age-related and due to different reasons, including biological, psychological, social and financial ones. There is an (...)
  5. Special Issue of Minds and Machines on Causality, Uncertainty and Ignorance.Stephan Hartmann & Rolf Haenni - 2006 - Minds and Machines 16 (3):237-238.
  6. Asking Chatbase to learn about academic retractions.Aisdl Team - 2023 - Sm3D Science Portal.
    It is noteworthy that Chatbase has the capability to identify notable authors writing about the topic, including the co-founders of Retraction Watch, Ivan Oransky and Adam Marcus.
  7. Chatting with Chatbase over the rationality issue of the cost of science.Aisdl Team - 2023 - Sm3D Science Portal.
    In this article, we present the outcome of our first experiment with Chatbase, a chatbot built on chatGPT’s functioning model(s). Our idea is to try instructing Chatbase to perform a reading, digesting, and summarizing task for a specifically formatted academic document.
  8. Explanation and the Right to Explanation.Elanor Taylor - forthcoming - Journal of the American Philosophical Association.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the (...)
  9. When facial recognition does not ‘recognise’: erroneous identifications and resulting liabilities.Vera Lúcia Raposo - forthcoming - AI and Society:1-13.
    Facial recognition is an artificial intelligence-based technology that, like many other forms of artificial intelligence, suffers from an accuracy deficit. This paper focuses on one particular use of facial recognition, namely identification, both as authentication and as recognition. Despite technological advances, facial recognition technology can still produce erroneous identifications. This paper addresses algorithmic identification failures from an upstream perspective by identifying the main causes of misidentifications (in particular, the probabilistic character of this technology, its ‘black box’ nature and its algorithmic (...)
  10. Lorenzo Magnani: Discoverability—the urgent need of an ecology of human creativity. [REVIEW]Jeffrey White - 2023 - AI and Society:1-2.
    Discoverability: the urgent need of an ecology of human creativity from the prolific Lorenzo Magnani is worthy of direct attention. The message may be of special interest to philosophers, ethicists and organizing scientists involved in the development of AI and related technologies which are increasingly directed at reinforcing conditions against which Magnani directly warns, namely the “overcomputationalization” of life marked by the gradual encroachment of technologically “locked strategies” into everyday decision-making until “freedom, responsibility, and ownership of our destinies” are ceded (...)
  11. On the Foundations of Computing. Computing as the Fourth Great Domain of Science. [REVIEW]Gordana Dodig-Crnkovic - 2023 - Global Philosophy 33 (1):1-12.
    This review essay analyzes the book by Giuseppe Primiero, On the foundations of computing. Oxford: Oxford University Press (ISBN 978-0-19-883564-6/hbk; 978-0-19-883565-3/pbk). xix, 296 p. (2020). It gives a critical view from the perspective of physical computing as a foundation of computing and argues that the neglected pillar of material computation (Stepney) should be brought centerstage and computing recognized as the fourth great domain of science (Denning).
  12. Three Early Formal Approaches to the Verification of Concurrent Programs.Cliff B. Jones - forthcoming - Minds and Machines:1-20.
    This paper traces a relatively linear sequence of early research approaches to the formal verification of concurrent programs. It does so forwards and then backwards in time. After briefly outlining the context, the key insights from three distinct approaches from the 1970s are identified (Ashcroft/Manna, Ashcroft (solo) and Owicki). The main technical material in the paper focuses on a specific program taken from the last published of the three pieces of research (Susan Owicki’s): her own verification of her _Findpos_ example (...)
  13. Grounding the Vector Space of an Octopus: Word Meaning from Raw Text.Anders Søgaard - forthcoming - Minds and Machines:1-22.
    Most, if not all, philosophers agree that computers cannot learn what words refer to from raw text alone. While many attacked Searle’s Chinese Room thought experiment, no one seemed to question this most basic assumption. For how can computers learn something that is not in the data? Emily Bender and Alexander Koller (2020) recently presented a related thought experiment—the so-called Octopus thought experiment, which replaces the rule-based interlocutor of Searle’s thought experiment with a neural language model. The Octopus (...)
  14. The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction.Emmie Hine & Luciano Floridi - forthcoming - Minds and Machines:1-8.
    The US is promoting a new vision of a “Good AI Society” through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights violations. Furthermore, it may have some federal impact, but it is non-binding, and without concrete legislation, the private sector is likely to ignore it.
  15. Deepfakes, Fake Barns, and Knowledge from Videos.Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given (...)
  16. A Dilemma for Dispositional Answers to Kripkenstein’s Challenge.Andrea Guardo - forthcoming - Minds and Machines:1-18.
    Kripkenstein’s challenge is usually described as being essentially about the use of a word in new kinds of cases ‒ the old kinds of cases being commonly considered as non-problematic. I show that this way of conceiving the challenge is neither true to Kripke’s intentions nor philosophically defensible: the Kripkean skeptic can question my answering “125” to the question “What is 68 plus 57?” even if that problem is one I have already encountered and answered. I then argue that once (...)
  17. Hey Alexa, why are you called intelligent? An empirical investigation on definitions of AI.Lucas Caluori - forthcoming - AI and Society:1-15.
    This paper seeks to examine the questions of what criteria definitions of Artificial Intelligence (AI) use to define AI, what the disagreements that revolve around the term AI are based on, and what correlations can be drawn to other parameters. Framed as a problem of classification, a random sample of 45 definitions from various text sources was subjected to a qualitative content analysis. The criteria found are concluded in five dimensions, namely (1) learning ability, (2) human likeness, (3) state of (...)
  18. Correction to: The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - forthcoming - Minds and Machines:1-1.
  19. Moral distance, AI, and the ethics of care.Carolina Villegas-Galaviz & Kirsten Martin - forthcoming - AI and Society:1-12.
    This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted (...)
  20. Enactivism Meets Mechanism: Tensions & Congruities in Cognitive Science.Jonny Lee - forthcoming - Minds and Machines:1-32.
    Enactivism advances an understanding of cognition rooted in the dynamic interaction between an embodied agent and their environment, whilst new mechanism suggests that cognition is explained by uncovering the organised components underlying cognitive capacities. On the face of it, the mechanistic model’s emphasis on localisable and decomposable mechanisms, often neural in nature, runs contrary to the enactivist ethos. Despite appearances, this paper argues that mechanistic explanations of cognition, being neither narrow nor reductive, and compatible with plausible iterations of ideas like (...)
  21. When stigmatization does not work: over-securitization in efforts of the Campaign to Stop Killer Robots.Anzhelika Solovyeva & Nik Hynek - forthcoming - AI and Society:1-23.
    This article reflects on securitization efforts with respect to ‘killer robots’, known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Marking exactly a decade of its activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda (...)
  22. Selective Abduction in the Selection of Hypotheses and its Relationship with Inference to the Best Explanation (IBE).Seyed Ahmad Mirsanei - 2022 - Analytic Philosophy 19 (41):325-344.
    Selective abduction is in contrast with creative abduction as well as Inference to the Best Explanation (IBE). There are two types of selective abduction: either hypotheses are selected among new and conjectural hypotheses without any prior knowledge (Peirce's selective abduction), or the selection of the best hypotheses and explanations is among a large number of possible hypotheses and explanations already known (L. Magnani's selective abduction and G. Schurz's factual abduction). According to both views, as well as an alternative (...)
  23. Robots among us: ordinary but significant human–robot interactions in the city.Jeffrey Kok Hui Chan & Yixiao Wang - forthcoming - AI and Society:1-2.
  24. Informational Equivalence but Computational Differences? Herbert Simon on Representations in Scientific Practice.David Waszek - forthcoming - Minds and Machines:1-24.
    To explain why, in scientific problem solving, a diagram can be “worth ten thousand words,” Jill Larkin and Herbert Simon (1987) relied on a computer model: two representations can be “informationally” equivalent but differ “computationally,” just as the same data can be encoded in a computer in multiple ways, more or less suited to different kinds of processing. The roots of this proposal lay in cognitive psychology, more precisely in the “imagery debate” of the 1970s on whether there are image-like (...)
  25. The future of ethics in AI: challenges and opportunities.Angelo Trotta, Marta Ziosi & Vincenzo Lomonaco - forthcoming - AI and Society:1-3.
  26. Artificial understanding: a step toward robust AI.Erez Firt - forthcoming - AI and Society:1-13.
    In recent years, state-of-the-art artificial intelligence systems have started to show signs of what might be seen as human level intelligence. More specifically, large language models such as OpenAI’s GPT-3, and more recently Google’s PaLM and DeepMind’s GATO, are performing amazing feats involving the generation of texts. However, it is acknowledged by many researchers that contemporary language models, and more generally, learning systems, still lack important capabilities, such as understanding, reasoning and the ability to employ knowledge of the world and (...)
  27. Omission and commission errors underlying AI failures.Sasanka Sekhar Chanda & Debarag Narayan Banerjee - forthcoming - AI and Society:1-24.
    In this article we investigate origins of several cases of failure of Artificial Intelligence (AI) systems employing machine learning and deep learning. We focus on omission and commission errors in (a) the inputs to the AI system, (b) the processing logic, and (c) the outputs from the AI system. Our framework yields a set of 28 factors that can be used for reconstructing the path of AI failures and for determining corrective action. Our research helps identify emerging themes of inquiry (...)
  28. Arthur Bispo do Rosário: lunacy, art and second-order cybernetics.Carlos Senna Figueiredo - forthcoming - AI and Society:1-4.
    Arthur Bispo do Rosário created separate realities inspired by the objects of his surroundings. He intended to summon up everything and report to God. The objects he found or got from other inmates were waste of the Juliano Moreira Colony where he lived in seclusion because the lords of order categorised him as mentally ill. Bispo began by unravelling the uniforms of his seafaring days and Colony clothing and with the threads he wove maps and banners. He collected old shoes, (...)
  29. Disillusioned with artificial intelligence: a book review. [REVIEW]Manh-Tung Ho - forthcoming - AI and Society:1-2.
  30. Might artificial intelligence become part of the person, and what are the key ethical and legal implications?Jan Christoph Bublitz - forthcoming - AI and Society:1-12.
    This paper explores and ultimately affirms the surprising claim that artificial intelligence (AI) can become part of the person, in a robust sense, and examines three ethical and legal implications. The argument is based on a rich, legally inspired conception of persons as free and independent rightholders and objects of heightened protection, but it is construed so broadly that it should also apply to mainstream philosophical conceptions of personhood. The claim is exemplified by a specific technology, devices that connect human (...)
  32. “I Am Not Your Robot:” the metaphysical challenge of humanity’s AIS ownership.Tyler L. Jaynes - 2022 - AI and Society 37 (4):1689-1702.
    Despite the reality that self-learning artificial intelligence systems (SLAIS) are gaining in sophistication, humanity’s focus regarding SLAIS-human interactions is unnervingly centred upon transnational commercial sectors and, most generally, around issues of intellectual property law. But as SLAIS gain greater environmental interaction capabilities in digital spaces, or the ability to self-author code to drive their development as algorithmic models, a concern arises as to whether a system that displays a “deceptive” level of human-like engagement with users in our physical world ought (...)
  33. How China’s cognitive warfare works: A frontline perspective of Taiwan’s anti-disinformation wars.Tzu-Chieh Hung & Tzu-wei Hung - 2022 - Journal of Global Security Studies 7 (4):1-18.
    Cognitive warfare—controlling others’ mental states and behaviors by manipulating environmental stimuli—is a significant and ever-evolving issue in global conflict and security, especially during the COVID-19 crisis. In this article, we aim to contribute to the field by proposing a two-dimensional framework to evaluate China's cognitive warfare and explore promising ways of counteracting it. We first define the problem by clarifying relevant concepts and then present a case study of China's attack on Taiwan. Next, based on predictive coding theory from the (...)
  34. Musical Jabberwocky? [REVIEW]Timothy Justus - 2002 - Trends in Cognitive Sciences 6:144–145.
    In this book review essay, Justus discusses Virtual Music: Computer Synthesis of Musical Style (2001) by David Cope. The review begins by drawing a parallel between the Turing Test and evaluating the compositions of Cope’s Experiments in Musical Intelligence (EMI) before providing an overview of how this computer programme works and the commentaries included in the book (by Douglas Hofstadter, Eleanor Selfridge-Field, Bernard Greenberg, Steve Larson, Jonathan Berger, and Daniel Dennett). The essay then raises questions of absolute music versus music (...)
  35. Recipient design in human–robot interaction: the emergent assessment of a robot’s competence.Sylvaine Tuncer, Christian Licoppe, Paul Luff & Christian Heath - forthcoming - AI and Society:1-16.
    People meeting a robot for the first time do not know what it is capable of and therefore how to interact with it—what actions to produce, and how to produce them. Despite social robotics’ long-standing interest in the effects of robots’ appearance and conduct on users, and efforts to identify factors likely to improve human–robot interaction, little attention has been paid to how participants evaluate their robotic partner in the unfolding of actual interactions. This paper draws from qualitative analyses of (...)
  36. How a Minimal Learning Agent can Infer the Existence of Unobserved Variables in a Complex Environment.Benjamin Eva, Katja Ried, Thomas Müller & Hans J. Briegel - forthcoming - Minds and Machines:1-35.
    According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is amongst the most characteristic indicators of meaningful deliberative thought in an organism or agent. In this article, we show how the ability to develop and utilise abstract conceptual structures can be achieved by a particular kind of learning agent. More specifically, we provide and motivate a concrete operational definition of what it means for these agents to be in possession of abstract concepts, (...)
  37. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - forthcoming - Minds and Machines:1-28.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical (...)
  38. Auditing the impact of artificial intelligence on the ability to have a good life: using well-being measures as a tool to investigate the views of undergraduate STEM students.Brielle Lillywhite & Gregor Wolbring - forthcoming - AI and Society:1-16.
    AI/ML increasingly impacts the ability of humans to have a good life. Various sets of indicators exist to measure well-being/the ability to have a good life. Students play an important role in AI/ML discussions. The purpose of our study using an online survey was to learn about the perspectives of undergraduate STEM students on the impact of AI/ML on well-being/the ability to have a good life. Our study revealed that many of the abilities participants perceive to be needed for having (...)
  39. Think Differently We Must! An AI Manifesto for the Future.Emma Dahlin - forthcoming - AI and Society:1-4.
    There is a problematic tradition of dualistic and reductionist thinking in artificial intelligence (AI) research, which is evident in AI storytelling and imaginations as well as in public debates about AI. Dualistic thinking is based on the assumption of a fixed reality and a hierarchy of power, and it simplifies the complex relationships between humans and machines. This commentary piece argues that we need to work against the grain of such logics and instead develop a thinking that acknowledges AI–human interconnectedness (...)
  40. Artificial intimacy: virtual friends, digital lovers, algorithmic matchmakers.Linda Hamrick - forthcoming - AI and Society:1-2.
  41. Gianluigi Negro (2017): “The Internet in China. From Infrastructure to a Nascent Civil Society” (Palgrave Macmillan).Yao Han - forthcoming - AI and Society:1-2.
  42. Does the paradox of choice exist in theory? A behavioral search model and pareto-improving choice set reduction algorithm.Shane Sanders - forthcoming - AI and Society:1-11.
    A growing body of empirical evidence suggests the existence of a Paradox of Choice, whereby a larger choice set leads to a lower expected payoff to the decision-maker. These empirical findings contradict traditional choice-theoretic results in microeconomics and even social psychology, suggesting profound yet unexplained aspects of choice settings/behavior. The Paradox remains largely a theoretically rootless empirical phenomenon. We neither understand the mechanisms and conditions that generate it, nor whether the Paradox stems from choice behavior, choice setting, or both. We (...)
  43. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocations.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - forthcoming - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  44. The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
  45. MinMax fairness: from Rawlsian Theory of Justice to solution for algorithmic bias.Flavia Barsotti & Rüya Gökhan Koçer - forthcoming - AI and Society:1-14.
    This paper presents an intuitive explanation about why and how Rawlsian Theory of Justice (Rawls in A theory of justice, Harvard University Press, Harvard, 1971) provides the foundations to a solution for algorithmic bias. The contribution of the paper is to discuss and show why Rawlsian ideas in their original form (e.g. the veil of ignorance, original position, and allowing inequalities that serve the worst-off) are relevant to operationalize fairness for algorithmic decision making. The paper also explains how this leads (...)
  46. Street surface condition of wealthy and poor neighborhoods: the case of Los Angeles.Pooyan Doozandeh, Limeng Cui & Rui Yu - forthcoming - AI and Society:1-8.
    Are wealthy neighborhoods visually more attractive than poorer neighborhoods? Past studies provided a positive answer to this question for characteristics such as green space and visible pollution. The condition of streets is one of the characteristics that can not only contribute to neighborhoods’ aesthetics, but can also affect residents’ health and mobility. In this study, we investigate whether street condition of wealthy neighborhoods is different from poorer neighborhoods. We resolved the difficulty of data collection using a dataset that utilized artificial (...)
  47. The worst mistake 2.0? The digital revolution and the consequences of innovation.Matthew O’Lemmon - forthcoming - AI and Society:1-10.
    The invention of agriculture 12,000 years ago has been called the worst mistake in human history. Alongside the social, political, and technological innovations that stemmed from it, there came a litany of drawbacks ranging from social inequality, a decline in human health, to the concentration of power in the hands of a few. Millennia after the invention of agriculture, another revolution—the digital revolution—is having a similar impact on humanity, albeit at a scale and speed measured in decades. Despite the tremendous (...)
  48. Artificial intelligence in support of the circular economy: ethical considerations and a path forward.Huw Roberts, Joyce Zhang, Ben Bariach, Josh Cowls, Ben Gilburt, Prathm Juneja, Andreas Tsamados, Marta Ziosi, Mariarosaria Taddeo & Luciano Floridi - forthcoming - AI and Society:1-14.
    The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, considerations of the ethical implications of (...)
  49. Disposable culture, posthuman affect, and artificial human in Kazuo Ishiguro’s Klara and the Sun (2021).Om Prakash Sahu & Manali Karmakar - forthcoming - AI and Society:1-9.
    Kazuo Ishiguro’s novel Klara and the Sun (2021) philosophizes on how in the current technologically saturated culture, the gradual evolution of the empathetic humanoids has, on one hand, problematized our normative notions of cognitive and affective categories, and on the other, has triggered an order of emotional uncanniness due to our reliance on hyperreal real objects for receiving solace and companionship. The novel may be conceived to be a commentary on the emerging discourse in the domain of cognitive and emotional (...)
  50. AI ethics: from principles to practice.Jianlong Zhou & Fang Chen - forthcoming - AI and Society:1-11.
    Much of the current work on AI ethics has lost its connection to the real-world impact by making AI ethics operable. There exist significant limitations of hyper-focusing on the identification of abstract ethical principles, lacking effective collaboration among stakeholders, and lacking the communication of ethical principles to real-world applications. This position paper presents challenges in making AI ethics operable and highlights key obstacles to AI ethics impact. A preliminary practice example is provided to initiate practical implementations of AI ethics. We (...)