About this topic
Summary

The philosophy of artificial intelligence is a collection of issues primarily concerned with whether AI is possible -- with whether it is possible to build an intelligent thinking machine. Also of concern is whether humans and other animals are best thought of as machines (computational robots, say) themselves. The most important of the "whether-possible" problems lie at the intersection of theories of the semantic contents of thought and the nature of computation. A second suite of problems surrounds the nature of rationality. A third suite revolves around the seemingly “transcendent” reasoning powers of the human mind; these problems derive from Kurt Gödel's famous Incompleteness Theorem. A fourth collection of problems concerns the architecture of an intelligent machine: should a thinking computer use discrete or continuous modes of computing and representing, is having a body necessary, and is being conscious necessary? This takes us to the final set of questions. Can a computer be conscious? Can a computer have a moral sense? Would we have duties to thinking computers, to robots? For example, is it moral for humans even to attempt to build an intelligent machine? If we did build such a machine, would turning it off be the equivalent of murder? If we had a race of such machines, would it be immoral to force them to work for us?

Key works Probably the most important attack on whether AI is possible is John Searle's famous Chinese Room Argument: Searle 1980. This attack focuses on the semantic aspects (mental semantics) of thoughts, thinking, and computing. For some replies to this argument, see the same 1980 journal issue as Searle's original paper. For the problem of the nature of rationality, see Pylyshyn 1987. An especially strong attack on AI from this angle is Jerry Fodor's work on the frame problem: Fodor 1987. On the frame problem in general, see McCarthy & Hayes 1969. For some replies to Fodor and advances on the frame problem, see Ford & Pylyshyn 1996. For the transcendent reasoning issue, a central and important paper is Hilary Putnam's Putnam 1960; this paper is arguably the source for the computational turn in 1960s-70s philosophy of mind. For architecture-of-mind issues, see, for starters, M. Spivey's The Continuity of Mind (Oxford), which argues against the notion of discrete representations; see also Gelder & Port 1995. For an argument for discrete representations, see Dietrich & Markman 2003. For an argument that the mind's boundaries do not end at the body's boundaries, see Clark & Chalmers 1998. For a statement of and argument for computationalism -- the thesis that the mind is a kind of computer -- see Shimon Edelman's excellent book Edelman 2008, and see also Chapter 9 of Chalmers 1996.
Introductions Chinese Room Argument: Searle 1980. Frame problem: Fodor 1987. Computationalism and Gödelian-style refutation: Putnam 1960. Architecture: M. Spivey's The Continuity of Mind (Oxford) and Shimon Edelman's Edelman 2008. Ethical issues: Anderson & Anderson 2011. Conscious computers: Chalmers 2011.
Material to categorize
  1. Verification and Validation of Simulations Against Holism.Julie Jebeile & Vincent Ardourel - forthcoming - Minds and Machines:1-20.
    It has been argued that the Duhem problem is renewed with computational models since model assumptions having a representational aim and computational assumptions cannot be tested in isolation. In particular, while the Verification and Validation methodology is supposed to prevent such holism, Winsberg argues that verification and validation cannot be separated in practice. Morrison replies that Winsberg overstates the entanglement between the steps. The paper aims at arbitrating these two positions, by stressing their respective validity in relation to domains of (...)
  2. Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems.Antonio Lieto - forthcoming - Isonomía. Revista de Teoría y Filosofía Del Derecho.
    In this paper I will present an analysis of the impact that the notion of “bounded rationality”, introduced by Herbert Simon in his book “Administrative Behavior”, produced in the field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated Decision Making (ADM), I will show how the introduction of the cognitive dimension into the study of choice of a rational (natural) agent, indirectly determined - in the AI field - the development of a line of research (...)
  3. Risk Management Standards and the Active Management of Malicious Intent in Artificial Superintelligence.Patrick Bradley - forthcoming - AI and Society:1-10.
    The likely near future creation of artificial superintelligence carries significant risks to humanity. These risks are difficult to conceptualise and quantify, but malicious use of existing artificial intelligence by criminals and state actors is already occurring and poses risks to digital security, physical security and integrity of political systems. These risks will increase as artificial intelligence moves closer to superintelligence. While there is little research on risk management tools used in artificial intelligence development, the current global standard for risk management, (...)
  4. Singularität und Uploading – Säkulare Mythen [Singularity and Uploading – Secular Myths].Jan-Hendrik Heinrichs - 2015 - Aufklärung und Kritik 22 (3):185-197.
  5. Black-Box Artificial Intelligence: An Epistemological and Critical Analysis.Manuel Carabantes - forthcoming - AI and Society:1-9.
    The artificial intelligence models with machine learning that exhibit the best predictive accuracy, and are therefore the most powerful, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem for AI that we analyze in this paper by dividing the issue into two. First, we carry out an (...)
  6. AI and the Path to Envelopment: Knowledge as a First Step Towards the Responsible Regulation and Use of AI-Powered Machines.Scott Robbins - forthcoming - AI and Society:1-10.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI and the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  7. Reply to “Prayer-Bots and Religious Worship on Twitter: A Call for a Wider Research Agenda Islamic”.Yasser Qureshy - forthcoming - Minds and Machines:1-2.
  8. Do People with Social Anxiety Feel Anxious About Interacting with a Robot?Tatsuya Nomura, Takayuki Kanda, Tomohiro Suzuki & Sachie Yamada - forthcoming - AI and Society:1-10.
    To investigate whether people with social anxiety have less actual and “anticipatory” anxiety when interacting with a robot compared to interacting with a person, we conducted a 2 × 2 psychological experiment with two factors: social anxiety and interaction partner. The experiment was conducted in a counseling setting where a participant played the role of a client and the robot or the confederate played the role of a counselor. First, we measured the participants’ social anxiety using the Social Avoidance and (...)
  9. Delegating Religious Practices to Autonomous Machines, A Reply to “Prayer-Bots and Religious Worship on Twitter: A Call for a Wider Research Agenda”.Yaqub Chaudhary - forthcoming - Minds and Machines:1-7.
  10. The Epistemic Importance of Technology in Computer Simulation and Machine Learning.Michael Resch & Andreas Kaminski - forthcoming - Minds and Machines:1-9.
    Scientificity is essentially methodology. The use of information technology as methodological instruments in science has been increasing for decades; this raises the question: does this transform science? This question is the subject of the Special Issue in Minds and Machines “The epistemological significance of methods in computer simulation and machine learning”. We show that there is a technological change in this area that has three methodological and epistemic consequences: methodological opacity, reproducibility issues, and altered forms of justification.
  11. Prayer-Bots and Religious Worship on Twitter: A Call for a Wider Research Agenda.Carl Öhman, Robert Gorwa & Luciano Floridi - forthcoming - Minds and Machines:1-8.
    The automation of online social life is an urgent issue for researchers and the public alike. However, one of the most significant uses of such technologies seems to have gone largely unnoticed by the research community: religion. Focusing on Islamic Prayer Apps, which automatically post prayers from their users’ accounts, we show that even one such service is already responsible for millions of tweets daily, constituting a significant portion of Arabic-language Twitter traffic. We argue that the fact that a phenomenon (...)
  12. 15 Challenges for AI: Or What AI Can't Do.Thilo Hagendorff & Katharina Wezel - forthcoming - AI and Society:1-11.
    The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of applications (...)
  13. Culture, the Process of Knowledge, Perception of the World and Emergence of AI.Badrudin Amershi - forthcoming - AI and Society:1-14.
    Considering the technological development today, we are facing an emerging crisis. We are in the midst of a scientific revolution, which promises to radically change not only the way we live and work—but beyond that challenge the stability of the very foundations of our civilization and the international political order. All our attention and effort is thus focused on cushioning its impacts on life and society. Looking back in history, it would be pertinent to ask whether this process is a (...)
  14. Computer Modeling and Simulation: Increasing Reliability by Disentangling Verification and Validation.Vitaly Pronskikh - forthcoming - Minds and Machines:1-18.
    Verification and validation of computer codes and models used in simulations are two aspects of the scientific practice of high importance that recently have been discussed widely by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model’s relation to the real world and its intended use. Because complex simulations are generally opaque to a practitioner, the Duhem problem can (...)
  15. From Judgment to Calculation: The Phenomenology of Embodied Skill.Karamjit S. Gill - forthcoming - AI and Society:1-11.
  16. Augmented Learning, Smart Glasses and Knowing How.Wulf Loh & Catrin Misselhorn - forthcoming - AI and Society:1-12.
    While recent studies suggest that augmented learning employing smart glasses increases overall learning performance, in this paper we are more interested in the question of which repercussions ALSG will have on the type of knowledge that is acquired. Drawing from the theoretical discussion within epistemology about the differences between Knowledge-How and Knowledge-That, we will argue that ALSG furthers understanding as a series of epistemic and non-epistemic Knowing-Hows. Focusing on academic knowledge acquisition, especially with respect to early curriculum experiments in various STEM (...)
  17. Collective Bread Diaries: Cultural Identities in an Artificial Intelligence Framework.Haytham Nawar - forthcoming - AI and Society:1-8.
    The complex relationship between the current advancement of technology, including the wide scope of settings at which machinery plays substantial roles, and the cultural, historical, and political realities that have long existed across the history of mankind, is one that deserves absolute attention and exploration. This interconnection has been investigated in light of bread, and the meaning it signifies to people from all over the world. Drawing on the commonly unnoticed value of bread, and the everlasting impregnable imprint it has (...)
  18. Natural Language Understanding: Methodological Conceptualization.Vitalii Shymko - forthcoming - Psycholinguistics.
    This article contains the results of a theoretical analysis of the phenomenon of natural language understanding (NLU), as a methodological problem. The combination of structural-ontological and informational-psychological approaches provided an opportunity to describe the subject matter field of NLU, as a composite function of the mind, which systemically combines the verbal and discursive structural layers. In particular, the idea of NLU is presented, on the one hand, as the relation between the discourse of a specific speech message and the meta-discourse (...)
  19. Phronesis and Automated Science: The Case of Machine Learning and Biology.Emanuele Ratti - forthcoming - In Fabio Sterpetti & M. Bertolaso (eds.), Will Science Remain Human? Springer.
    The applications of machine learning and deep learning to the natural sciences have fostered the idea that the automated nature of algorithmic analysis will gradually dispense human beings from scientific work. In this paper, I will show that this view is problematic, at least when ML is applied to biology. In particular, I will claim that ML is not independent of human beings and cannot form the basis of automated science. Computer scientists conceive their work as being a case of (...)
  20. Pragmatism and Purism in Artificial Intelligence and Legal Reasoning.Richard Susskind - 1989 - AI and Society 3 (1):28-38.
  21. Law, Liability and Expert Systems.Joseph A. Cannataci - 1989 - AI and Society 3 (3):169-183.
  22. Why Computers Are Never Likely to Be Smarter Than People.Peter J. Marcer - 1989 - AI and Society 3 (2):142-145.
  23. The Civic Role of Online Service Providers.Mariarosaria Taddeo - forthcoming - Minds and Machines:1-7.
  24. Artificial Intelligence: Consciousness and Conscience.Gunter Meissner - forthcoming - AI and Society.
  25. Why Computers Can't Feel Pain.Mark Bishop - 2009 - Minds and Machines 19 (4):507-516.
  26. The Role of Robotics and AI in Technologically Mediated Human Evolution: A Constructive Proposal.Jeffrey White - forthcoming - AI and Society.
  27. Announcing the Professor Cooley Archive at Waterford Institute of Technology, Ireland: A Celebration of the Legacy of Mike Cooley.Larry Stapleton, Brenda O’Neill, Kieran Cronin & Matthew Kendrick - forthcoming - AI and Society.
  28. Reconsidering Buber, Educational Technology, and the Expansion of Dialogic Space.Vikas Baniwal - 2019 - AI and Society 34 (1):121-127.
    This paper is an attempt to further the conversation about the possibilities of dialogue with technology that Wegerif and Major have initiated. In their paper Wegerif and Major have argued that “constructive dialogue with technology is possible, even essential, and that this takes the form of opening a dialogic space” and they also “argue against Buber that dialogic spaces do not all take the same form, but that they take a multitude of forms depending, to a large extent, on the (...)
  29. The Human Relationship in the Ethics of Robotics: A Call to Martin Buber’s I and Thou.Kathleen Richardson - 2019 - AI and Society 34 (1):75-82.
    Artificially Intelligent robotic technologies increasingly reflect a language of interaction and relationship and this vocabulary is part and parcel of the meanings now attached to machines. No longer are they inert, but interconnected, responsive and engaging. As machines become more sophisticated, they are predicted to be a “direct object” of an interaction for a human, but what kinds of human would that give rise to? Before robots, animals played the role of the relational other, what can stories of feral children (...)
  30. Towards a Unified Framework for Developing Ethical and Practical Turing Tests.Balaji Srinivasan & Kushal Shah - 2019 - AI and Society 34 (1):145-152.
    Since Turing proposed the first test of intelligence, several modifications have been proposed with the aim of making Turing’s proposal more realistic and applicable in the search for artificial intelligence. In the modern context, it turns out that some of these definitions of intelligence and the corresponding tests merely measure computational power. Furthermore, in the framework of the original Turing test, for a system to prove itself to be intelligent, a certain amount of deceit is implicitly required which can have (...)
  31. Is It Possible to Grow an I–Thou Relation with an Artificial Agent? A Dialogistic Perspective.Stefan Trausan-Matu - 2019 - AI and Society 34 (1):9-17.
    The paper analyzes if it is possible to grow an I–Thou relation in the sense of Martin Buber with an artificial, conversational agent developed with Natural Language Processing techniques. The requirements for such an agent, the possible approaches for the implementation, and their limitations are discussed. The relation of the achievement of this goal with the Turing test is emphasized. Novel perspectives on the I–Thou and I–It relations are introduced according to the sociocultural paradigm and Mikhail Bakhtin’s dialogism, polyphony inter-animation, (...)
  32. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  33. AI and Education: The Importance of Teacher and Student Relations.Alex Guilherme - 2019 - AI and Society 34 (1):47-54.
    A defining aspect of our modern age is our tenacious belief in technology in all walks of life, not least in education. It could be argued that this infatuation with technology or ‘techno-philia’ in education has had a deep impact in the classroom changing the relationship between teacher and student, as well as between students; that is, these relations have become increasingly more I–It than I–Thou based because the capacity to form bonds, the level of connectedness between teacher and students, (...)
  34. Encountering Bloody Others in Mined Reality.Nika Mahnič - 2019 - AI and Society 34 (1):153-160.
    This article explores interpersonal and human–computer interaction in the era of big data through the lens of Martin Buber’s relational ethics. Doing theory otherwise, it analyses the importance of other voices and speech through the case of digital assistants, questioning the implications of naming them ‘companions’. Following recent proposals to ascribe legal subjectivity to synthetic agents, the article explores the effects on agency, interaction with flesh-and-blood others and democracy in an attention economy enmeshed with technologies of behavioural manipulation powered by (...)
  35. Robot Use Self-Efficacy in Healthcare Work: Development and Validation of a New Measure.Tuuli Turja, Teemu Rantanen & Atte Oksanen - 2019 - AI and Society 34 (1):137-143.
    The aim of this study was to develop and validate a measure of robot use self-efficacy in healthcare work based on social cognitive theory and the theory of planned behavior. This article provides a briefing on technology-specific self-efficacy and discusses the development, validation, and implementation of an instrument that measures care workers’ self-efficacy in working with robots. The validity evaluation of the Finnish-language measure was based on representative survey samples gathered in 2016. The respondents included practical and registered nurses, homecare (...)
  36. Buber, Educational Technology, and the Expansion of Dialogic Space.Rupert Wegerif & Louis Major - 2019 - AI and Society 34 (1):109-119.
    Buber’s distinction between the ‘I-It’ mode and the ‘I-Thou’ mode is seminal for dialogic education. While Buber introduces the idea of dialogic space, an idea which has proved useful for the analysis of dialogic education with technology, his account fails to engage adequately with the role of technology. This paper offers an introduction to the significance of the I-It/I-Thou duality of technology in relation with opening dialogic space. This is followed by a short schematic history of educational technology which reveals (...)
  37. The Synthetization of Human Voices.Oliver Bendel - 2019 - AI and Society 34 (1):83-89.
    The synthetization of voices, or speech synthesis, has been an object of interest for centuries. It is mostly realized with a text-to-speech system, an automaton that interprets and reads aloud. This system refers to text available for instance on a website or in a book, or entered via popup menu on the website. Today, just a few minutes of samples are enough to be able to imitate a speaker convincingly in all kinds of statements. This article abstracts from actual products (...)
  38. Cultivating Mindfulness Through Technology in Higher Education: A Buberian Perspective.Linor L. Hadar & Oren Ergas - 2019 - AI and Society 34 (1):99-107.
    One of the most fundamental concepts within Martin Buber’s philosophy concerns two modes of being: I–it, which reflects an egocentric instrumental existence, and I–thou, which reflects dialogical encounter and interrelatedness. On the face of it, technology seems to be the ultimate example of that which engenders an I–it consciousness. Indeed, a recurrent concern in contemporary times suggests that the increase in our technology use is slowly but surely depriving us of meaningful encounters with the other. In this paper we propose (...)
  39. I–Thou Dialogical Encounters in Adolescents’ WhatsApp Virtual Communities.Arie Kizel - 2019 - AI and Society 34 (1):19-27.
    The use of WhatsApp as a means of communication is widespread amongst today's youth, many of whom spend hours in virtual space, in particular during the evenings and nighttime in the privacy of their own homes. This article seeks to contribute to the discussion of the dialogical language and "conversations" conducted in virtual-space encounters and the way in which young people perceive this space, its effect on them, and their interrelations within it. It presents the findings of a study based (...)
  40. Primacy of I–You Connectedness Revisited: Some Implications for AI and Robotics.Beata Stawarska - 2019 - AI and Society 34 (1):3-8.
    In this essay, I challenge the egocentric tradition which privileges the standpoint of an isolated individual, and propose a speech-based dialogical approach as an alternative. Considering that the egocentric tradition can be deciphered in part by analyzing the distortions undergone by pronominal discourse in the language of classical philosophy, I reexamine the pragmatics of ordinary language featuring the pronoun I in an effort to recover a more relational understanding of persons. I develop such an analysis of the deep grammar of (...)
  41. E. M. Forster’s ‘The Machine Stops’: Humans, Technology and Dialogue.Ana Cristina Zimmermann & W. John Morgan - 2019 - AI and Society 34 (1):37-45.
    The article explores E.M. Forster’s story The Machine Stops as an example of dystopian literature and its possible associations with the use of technology and with today’s cyber culture. Dystopian societies are often characterized by dehumanization and Forster’s novel raises questions about how we live in time and space; and how we establish relationships with the Other and with the world through technology. We suggest that the fear of technology depicted in dystopian literature indicates a fear that machines are mimicking (...)
  42. Humans as Relational Selves.Nicole Dewandre - 2019 - AI and Society 34 (1):95-98.
    Instead of wondering about the nature of robots, as if our thinking about humans was stable and straightforward, we should dig deeper in thinking about how we think about humans. Indeed, the emotions embedded in the ethical approaches to robots and artificial intelligence, are rooted in a long tradition of thinking about humans, either in an instrumental or in a pseudo-divine way. Both perspectives miss humanness, and are misleading when it comes to thinking about robots and their relationships with humans. (...)
  43. The Vitruvian Robot.Cathrine Hasse - 2019 - AI and Society 34 (1):91-93.
    Robots are simultaneously real machines and technical images that challenge our sense of self. I discuss the movie Ex Machina by director Alex Garland. The robot Ava, played by Alicia Vikander, is a rare portrait of what could be interpreted as a feminist robot. Though she apparently is created as the dream of the ‘perfect woman’, sexy and beautiful, she also develops an urge to free herself from the slavery of her creator, Nathan Bateman. She is a robot created along (...)
  44. Burning Down the House: Bitcoin, Carbon-Capitalism, and the Problem of Trustless Systems.David Morris - 2019 - AI and Society 34 (1):161-162.
  45. Why Being Dialogical Must Come Before Being Logical: The Need for a Hermeneutical–Dialogical Approach to Robotic Activities.John Shotter - 2019 - AI and Society 34 (1):29-35.
    Currently, our official rationality is still of a Cartesian kind; we are still embedded in a mechanistic order that takes it that separate, countable entities, related logically to each other, are the only ‘things’ that matter to us—an order clearly suited to advances in robotics. Unfortunately, it is an order that renders invisible ‘relational things’, non-objective things that exist in time, in the transitions from one state of affairs to another, things that ‘point’ toward possibilities in the future, which mean (...)
  46. S. P. Gill: Tacit Engagement: Beyond Interaction.Kathleen Richardson - 2019 - AI and Society 34 (1):163-163.
  47. Ontologies, Mental Disorders and Prototypes.Maria Cristina Amoretti, Marcello Frixione, Antonio Lieto & Greta Adamo - 2019 - In Matteo Vincenzo D'Alfonso & Don Berkich (eds.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence. Berlin, Germany: Springer Verlag.
  48. Information Processing Artifacts.Neal G. Anderson - forthcoming - Minds and Machines:1-33.
    What is a computer? What distinguishes computers from other artificial or natural systems with alleged computational capacities? What does use of a physical system for computation entail, and what distinguishes such use from otherwise identical transformation of that same system when it is not so used? This paper addresses such questions through a theory of information processing artifacts, the class of technical artifacts with physical capacities that enable agents to use them as means to their computational ends. Function ascription, use (...)
  49. Robotic Simulations, Simulations of Robots.Edoardo Datteri & Viola Schiaffonati - forthcoming - Minds and Machines:1-17.
    Simulation studies have been carried out in robotics for a variety of epistemic and practical purposes. Here it is argued that two broad classes of simulation studies can be identified in robotics research. The first one is exemplified by the use of robotic systems to acquire knowledge on living systems in so-called biorobotics, while the second class of studies is more distinctively connected to cases in which artificial systems are used to acquire knowledge about the behaviour of autonomous mobile robots. (...)
  50. Reproducibility and the Concept of Numerical Solution.Johannes Lenhard & Uwe Küster - forthcoming - Minds and Machines:1-18.
    In this paper, we show that reproducibility is a severe problem that concerns simulation models. The reproducibility problem challenges the concept of numerical solution and hence the conception of what a simulation actually does. We provide an expanded picture of simulation that makes visible those steps of simulation modeling that are numerically relevant, but often escape notice in accounts of simulation. Examining these steps and analyzing a number of pertinent examples, we argue that numerical solutions are importantly different from usual (...)