About this topic
Summary

The philosophy of artificial intelligence is a collection of issues primarily concerned with whether AI is possible -- with whether it is possible to build an intelligent thinking machine. Also of concern is whether humans and other animals are best thought of as machines (computational robots, say) themselves. The most important of the "whether-possible" problems lie at the intersection of theories of the semantic contents of thought and the nature of computation. A second suite of problems surrounds the nature of rationality. A third suite revolves around the seemingly “transcendent” reasoning powers of the human mind; these problems derive from Kurt Gödel's famous Incompleteness Theorem. A fourth collection of problems concerns the architecture of an intelligent machine: should a thinking computer use discrete or continuous modes of computing and representing, is having a body necessary, and is being conscious necessary? This takes us to the final set of questions. Can a computer be conscious? Can a computer have a moral sense? Would we have duties to thinking computers, to robots? For example, is it moral for humans to even attempt to build an intelligent machine? If we did build such a machine, would turning it off be the equivalent of murder? If we had a race of such machines, would it be immoral to force them to work for us?

Key works Probably the most important attack on whether AI is possible is John Searle's famous Chinese Room Argument: Searle 1980. This attack focuses on the semantic aspects (mental semantics) of thoughts, thinking, and computing. For some replies to this argument, see the same 1980 journal issue as Searle's original paper. For the problem of the nature of rationality, see Pylyshyn 1987. An especially strong attack on AI from this angle is Jerry Fodor's work on the frame problem: Fodor 1987. On the frame problem in general, see McCarthy & Hayes 1969. For some replies to Fodor and advances on the frame problem, see Ford & Pylyshyn 1996. For the transcendent reasoning issue, a central and important paper is Putnam 1960; this paper is arguably the source of the computational turn in 1960s-70s philosophy of mind. For architecture-of-mind issues, see, for starters, M. Spivey's The Continuity of Mind (Oxford), which argues against the notion of discrete representations; see also van Gelder & Port 1995. For an argument for discrete representations, see Dietrich & Markman 2003. For an argument that the mind's boundaries do not end at the body's boundaries, see Clark & Chalmers 1998. For a statement of and argument for computationalism -- the thesis that the mind is a kind of computer -- see Shimon Edelman's excellent book Edelman 2008. See also Chapter 9 of Chalmers's book Chalmers 1996.
Introductions Chinese Room Argument: Searle 1980. Frame problem: Fodor 1987. Computationalism and Gödelian-style refutation: Putnam 1960. Architecture: M. Spivey's The Continuity of Mind (Oxford) and Shimon Edelman's Edelman 2008. Ethical issues: Anderson & Anderson 2011. Conscious computers: Chalmers 2011.
Showing 1 — 50 of 10769 entries.
Material to categorize
  1. Digital Akrasia: A Qualitative Study of Phubbing.Jesper Aagaard - forthcoming - AI and Society:1-8.
    The present article focuses on the issue of ignoring conversational partners in favor of one’s phone, or what has also become known as phubbing. Prior research has shown that this behavior is associated with a host of negative interpersonal consequences. Since phubbing by definition entails adverse effects, however, it is interesting to explore why people continue to engage in this hurtful behavior: Are they unaware that phubbing is hurtful to others? Or do they simply not care? Building on interviews with (...)
  2. ‘X’ Means X: Fodor/Warfield Semantics.Fred Adams & Kenneth Aizawa - 1994 - Minds and Machines 4 (2):215-231.
  3. A Minimalist Epistemology for Agent-Based Simulations in the Artificial Sciences.Giuseppe Primiero - forthcoming - Minds and Machines:1-22.
    The epistemology of computer simulations has become a mainstream topic in the philosophy of technology. Within this large area, significant differences hold between the various types of models and simulation technologies. Agent-based and multi-agent systems simulations introduce a specific constraint on the types of agents and systems modelled. We argue that such difference is crucial and that simulation for the artificial sciences requires the formulation of its own specific epistemological principles. We present a minimally committed epistemology which relies on the (...)
  4. Epistemic Entitlements and the Practice of Computer Simulation.John Symons & Ramón Alvarado - forthcoming - Minds and Machines:1-24.
    What does it mean to trust the results of a computer simulation? This paper argues that trust in simulations should be grounded in empirical evidence, good engineering practice, and established theoretical principles. Without these constraints, computer simulation risks becoming little more than speculation. We argue against two prominent positions in the epistemology of computer simulation and defend a conservative view that emphasizes the difference between the norms governing scientific investigation and those governing ordinary epistemic practices.
  5. Artifictional Intelligence: Against Humanity’s Surrender to Computers.Karamjit S. Gill - forthcoming - AI and Society:1-2.
  6. Delinquent Genius: The Strange Affair of Man and His Technology.Karamjit S. Gill - forthcoming - AI and Society:1-3.
  7. A Machine is Cheaper Than a Human for the Same Task.Luís Moniz Pereira - forthcoming - AI and Society:1-3.
  8. The Bit (and Three Other Abstractions) Define the Borderline Between Hardware and Software.Russ Abbott - forthcoming - Minds and Machines:1-47.
    Modern computing is generally taken to consist primarily of symbol manipulation. But symbols are abstract, and computers are physical. How can a physical device manipulate abstract symbols? Neither Church nor Turing considered this question. My answer is that the bit, as a hardware-implemented abstract data type, serves as a bridge between materiality and abstraction. Computing also relies on three other primitive—but more straightforward—abstractions: Sequentiality, State, and Transition. These physically-implemented abstractions define the borderline between hardware and software and between physicality and (...)
  9. Simulation, Epistemic Opacity, and ‘Envirotechnical Ignorance’ in Nuclear Crisis.Tudor B. Ionescu - forthcoming - Minds and Machines:1-26.
    The Fukushima nuclear accident from 2011 provided an occasion for the public display of radiation maps generated using decision-support systems for nuclear emergency management. Such systems rely on computer models for simulating the atmospheric dispersion of radioactive materials and estimating potential doses in the event of a radioactive release from a nuclear reactor. In Germany, as in Japan, such systems are part of the national emergency response apparatus and, in case of accidents, they can be used by emergency task forces (...)
  10. The Posthuman Abstract: AI, Dronology & “Becoming Alien”.Louis Armand - forthcoming - AI and Society:1-6.
    This paper is addressed to recent theoretical discussions of the Anthropocene, in particular Bernard Stiegler’s Neganthropocene, which argues: “As we drift past tipping points that put future biota at risk, while a post-truth regime institutes the denial of ‘climate change’, and as Silicon Valley assistants snatch decision and memory, and as gene-editing and a financially-engineered bifurcation advances over the rising hum of extinction events and the innumerable toxins and conceptual opiates that Anthropocene Talk fascinated itself with—in short, as ‘the Anthropocene’ (...)
  11. Computers Are Syntax All the Way Down: Reply to Bozşahin.William J. Rapaport - forthcoming - Minds and Machines:1-11.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  12. Burning Down the House: Bitcoin, Carbon-Capitalism, and the Problem of Trustless Systems.David Morris - forthcoming - AI and Society:1-2.
  13. Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism.Juan M. Durán & Nico Formanek - 2018 - Minds and Machines 28 (4):645-666.
    Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (e.g., Morrison in Philos Stud 143:33–57, 2009), the nature of computer data (e.g., Humphreys, in: Durán and Arnold (eds), Computer Simulations and the Changing Face of Scientific Experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of (...)
  14. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  15. Computational Functionalism for the Deep Learning Era.Ezequiel López-Rubio - 2018 - Minds and Machines 28 (4):667-688.
    Deep learning is a kind of machine learning which happens in a certain type of artificial neural networks called deep networks. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This poses the question whether this performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important reasons for the success of deep learning, namely the extraction of successively (...)
  16. Info-Metrics for Modeling and Inference.Amos Golan - 2018 - Minds and Machines 28 (4):787-793.
    Info-metrics is a framework for rational inference based on insufficient information. The complete info-metric framework, accompanied by many interdisciplinary examples and case studies, as well as graphical representations of the theory, appears in the new book “Foundations of Info-Metrics: Modeling, Inference and Imperfect Information,” Oxford University Press, 2018. In this commentary, I describe that framework in general terms, demonstrate some of the ideas via simple examples, and provide arguments for using it to transform information into useful knowledge.
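    To make the flavor of this framework concrete, here is a minimal Python sketch of maximum-entropy inference, one of the ideas info-metrics builds on: given nothing but a mean constraint on a six-sided die (Jaynes's classic example), it recovers the least-informative distribution consistent with that constraint. The example and variable names are illustrative and are not taken from Golan's book.

      # Maximum-entropy inference sketch: the least-biased distribution on die
      # faces 1..6 consistent with a known mean is an exponential tilting.
      import numpy as np
      from scipy.optimize import brentq

      faces = np.arange(1, 7)      # possible outcomes
      target_mean = 4.5            # the only information available

      def tilted(lam):
          """Distribution with p_i proportional to exp(lam * x_i)."""
          w = np.exp(lam * faces)
          return w / w.sum()

      # Choose the multiplier so the implied mean matches the constraint.
      lam = brentq(lambda l: tilted(l) @ faces - target_mean, -5.0, 5.0)
      p = tilted(lam)
      print(np.round(p, 4), "mean:", round(float(p @ faces), 3))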
  17. Syntactical Informational Structural Realism.Majid Davoody Beni - 2018 - Minds and Machines 28 (4):623-643.
    Luciano Floridi’s informational structural realism takes a constructionist attitude towards the problems of epistemology and metaphysics, but the question of the nature of the semantical component of his view remains vexing. In this paper, I propose to dispense with the semantical component of ISR completely. I outline a Syntactical version of ISR. The unified entropy-based framework of information has been adopted as the groundwork of SISR. To establish its realist component, SISR should be able to dissolve the latching problem. We (...)
  18. Retracted Article: Habits, Priming and the Explanation of Mindless Action.Ezio Di Nucci - 2018 - Minds and Machines 28 (4):795-795.
  19. Killer Robot Arms: A Case-Study in Brain–Computer Interfaces and Intentional Acts.David Gurney - 2018 - Minds and Machines 28 (4):775-785.
    I use a hypothetical case study of a woman who replaces her biological arms with prostheses controlled through a brain–computer interface to explore how a BCI might interpret and misinterpret intentions. I define pre-veto intentions and post-veto intentions and argue that a failure of a BCI to differentiate between the two could lead to some troubling legal and ethical problems.
  20. Ontologies, Mental Disorders and Prototypes.Maria Cristina Amoretti, Marcello Frixione, Antonio Lieto & Greta Adamo - forthcoming - In M. V. D’Alfonso and D. Berkich (ed.), On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence -- IACAP 2016. Berlin, Germany:
  21. Peeking Inside the Black Box: A New Kind of Scientific Visualization.Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines:1-21.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
  22. “It’s Like Holding a Human Heart”: The Design of Vital + Morph, a Shape-Changing Interface for Remote Monitoring.Alberto Boem & Hiroo Iwata - 2018 - AI and Society 33 (4):599-619.
    Based on the concept of data physicalization, we developed Vital + Morph, an interactive surface for remote connection and awareness of clinical data. It enables users located in remote places to monitor and feel the vital signs measured from a hospitalized person through shape-change. We propose shape-changing interfaces as a way of making data physicalization a richer, intriguing and memorable experience that communicates complex information and insights about data. To demonstrate and validate our proposed concept, we developed an exploratory study (...)
  23. Will Big Data Algorithms Dismantle the Foundations of Liberalism?Daniel First - 2018 - AI and Society 33 (4):545-556.
    In Homo Deus, Yuval Noah Harari argues that technological advances of the twenty-first century will usher in a significant shift in how humans make important life decisions. Instead of turning to the Bible or the Quran, to the heart or to our therapists, parents, and mentors, people will turn to Big Data recommendation algorithms to make these choices for them. Much as we rely on Spotify to recommend music to us, we will soon rely on algorithms to decide our careers, (...)
  24. Assistive Device Art: Aiding Audio Spatial Location Through the Echolocation Headphones.Aisen C. Chacin, Hiroo Iwata & Victoria Vesna - 2018 - AI and Society 33 (4):583-597.
    Assistive Device Art derives from the integration of Assistive Technology and Art, involving the mediation of sensorimotor functions and perception from both, psychophysical methods and conceptual mechanics of sensory embodiment. This paper describes the concept of ADA and its origins by observing the phenomena that surround the aesthetics of prosthesis-related art. It also analyzes one case study, the Echolocation Headphones, relating its provenience and performance to this new conceptual and psychophysical approach of tool design. This ADA tool is designed to (...)
  25. Marx’s Concept of Distributive Justice: An Exercise in the Formal Modeling of Political Principles.Antônio Carlos da Rocha Costa - 2018 - AI and Society 33 (4):487-500.
    This paper presents an exercise in the formalization of political principles, by taking as its theme the concept of distributive justice that Karl Marx advanced in his Critique of the Gotha Programme. We first summarize the content of the Critique of the Gotha Programme. Next, we transcribe the core of Marx’s presentation of the concept of distributive justice. Following, we present our formalization of Marx’s conception. Then, we make use of that formal analysis to confront Marx’s principle of distributive justice (...)
  26. Artificial Intelligence: Looking Through the Pygmalion Lens.Karamjit S. Gill - 2018 - AI and Society 33 (4):459-465.
  27. The Quest for Appropriate Models of Human-Likeness: Anthropomorphism in Media Equation Research.Nils Klowait - 2018 - AI and Society 33 (4):527-536.
    Nass’ and Reeves’ media equation paradigm within human–computer interaction challenges long-held assumptions about how users approach computers. Given a rudimentary set of cues present in the system’s design, users are said to unconsciously treat computers as genuine interactants—extending rules of politeness, biases and human interactive conventions to machines. Since the results have wide-ranging implications for HCI research methods, interface design and user experiences, researchers are hard-pressed to experimentally verify the paradigm. This paper focuses on the methodology of attributing the necessary (...)
  28. The Art, Poetics, and Grammar of Technological Innovation as Practice, Process, and Performance.Mark Coeckelbergh - 2018 - AI and Society 33 (4):501-510.
    Usually technological innovation and artistic work are seen as very distinctive practices, and innovation of technologies is understood in terms of design and human intention. Moreover, thinking about technological innovation is usually categorized as “technical” and disconnected from thinking about culture and the social. Drawing on work by Dewey, Heidegger, Latour, and Wittgenstein and responding to academic discourses about craft and design, ethics and responsible innovation, transdisciplinarity, and participation, this essay questions these assumptions and examines what kind of knowledge and (...)
  29. Risk Analysis and Prediction in Welfare Institutions Using a Recommender System.Maayan Zhitomirsky-Geffet & Avital Zadok - 2018 - AI and Society 33 (4):511-525.
    Recommender systems are recently developed computer-assisted tools that support social and informational needs of various communities and help users exploit huge amounts of data for making optimal decisions. In this study, we present a new recommender system for assessment and risk prediction in child welfare institutions in Israel. The system exploits a large diachronic repository of manually completed questionnaires on functioning of welfare institutions and proposes two different rule-based computational models. The system accepts users’ requests via a simple graphical interface, (...)
  30. Eight Legs Good, Two Legs Bad?Richard Ennals - 2018 - AI and Society 33 (4):645-646.
  31. Is Artificial Intelligence Associated with Chemist’s Creativity Represents a Threat to Humanity?Jean-Louis Kraus - 2018 - AI and Society 33 (4):641-643.
  32. Rethinking the Experiment: Necessary Evolution.Mihai Nadin - 2018 - AI and Society 33 (4):467-485.
    The current assumptions of knowledge acquisition brought about the crisis in the reproducibility of experiments. A complementary perspective should account for the specific causality characteristic of life by integrating past, present, and future. A “second Cartesian revolution,” informed by and in awareness of anticipatory processes, should result in scientific methods that transcend the theology of determinism and reductionism. In our days, science, itself an expression of anticipatory activity, makes possible alternative understandings of reality and its dynamics. For this purpose, the (...)
  33. Reflections on James Bond of AI.Urjit A. Yajnik - 2018 - AI and Society 33 (4):637-640.
  34. “Brexit for Beginners”, or “The Young Gentlemen of Etona”.Richard Ennals - 2018 - AI and Society 33 (4):633-635.
  35. Artificial Intelligence and Collective Intelligence: The Emergence of a New Field.Geoff Mulgan - 2018 - AI and Society 33 (4):631-632.
  36. Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence.Seth D. Baum - 2018 - AI and Society 33 (4):565-572.
    Artificial intelligence experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons and a “societalist faction” that seeks to develop AI (...)
  37. Games Between Humans and AIs.Stephen J. DeCanio - 2018 - AI and Society 33 (4):557-564.
    Various potential strategic interactions between a “strong” Artificial intelligence and humans are analyzed using simple 2 × 2 order games, drawing on the New Periodic Table of those games developed by Robinson and Goforth. Strong risk aversion on the part of the human player leads to shutting down the AI research program, but alternative preference orderings by the human and the AI result in Nash equilibria with interesting properties. Some of the AI-Human games have multiple equilibria, and in other cases (...)
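    As an illustration of the kind of analysis described here, the short Python sketch below enumerates the pure-strategy Nash equilibria of a 2 × 2 ordinal game. The payoff ordering (a hypothetical human "continue or shut down" choice against an AI that can cooperate or defect) is an invented stand-in, not taken from DeCanio's paper; with these payoffs the game has two equilibria, echoing the multiple-equilibria cases mentioned in the abstract.

      # Enumerate pure-strategy Nash equilibria of a 2x2 game.
      # payoffs[(human_move, ai_move)] = (human_payoff, ai_payoff); higher is better.
      from itertools import product

      human_moves = ["continue", "shut_down"]
      ai_moves = ["cooperate", "defect"]
      payoffs = {
          ("continue", "cooperate"): (4, 4),
          ("continue", "defect"): (1, 3),
          ("shut_down", "cooperate"): (3, 1),
          ("shut_down", "defect"): (2, 2),
      }

      def is_nash(h, a):
          """Neither player gains by unilaterally switching strategies."""
          best_h = all(payoffs[(h, a)][0] >= payoffs[(alt, a)][0] for alt in human_moves)
          best_a = all(payoffs[(h, a)][1] >= payoffs[(h, alt)][1] for alt in ai_moves)
          return best_h and best_a

      print([cell for cell in product(human_moves, ai_moves) if is_nash(*cell)])
      # -> [('continue', 'cooperate'), ('shut_down', 'defect')]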
  38. A Glance of Cultural Differences in the Case of Interactive Device Art Installation idMirror.Maša Jazbec, Floris Erich Arden & Hiroo Iwata - 2018 - AI and Society 33 (4):573-582.
    The idMirror project consists of a tablet computer, specially equipped with a small mirror and a newly developed Android app. The Android application uses face recognition to detect the location of the user’s face in relation to the device and, based on this, renders a computer graphic at the location of his or her reflection. The goal of the idMirror project, set up as a research tool, was to conduct an exploratory study of cultural differences at exhibition venues. For this study, (...)
  39. The Problem of Self in Nāgārjuna’s Philosophy: A Contemporary Perspective.Rajakishore Nath - 2018 - AI and Society 33 (4):537-543.
    In this paper, I would like to examine Nāgārjuna’s idea of the self and its contemporary interpretations in philosophy. As we know, Nāgārjuna examines the emptiness of various things, among which the emptiness of the self occupies an important position in the Buddhist philosophical tradition. The main aim of this paper is to understand the meaning of emptiness in order to explain the nature of the self and to show how it is different from the substantial notion of self. However, Nāgārjuna’s idea (...)
  40. EEG Efficient Classification of Imagined Right and Left Hand Movement Using RBF Kernel SVM and the Joint CWT_PCA.Rihab Bousseta, Salma Tayeb, Issam El Ouakouak, Mourad Gharbi, Fakhita Regragui & Majid Mohamed Himmi - 2018 - AI and Society 33 (4):621-629.
    Brain–machine interfaces are systems that allow the control of a device such as a robot arm through a person’s brain activity; such devices can be used by disabled persons to enhance their life and improve their independence. This paper is an extended version of a work that aims at discriminating between left and right imagined hand movements using a support vector machine classifier to control a robot arm in order to help a person to find an object in the environment. (...)
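    The classifier described here is a standard supervised-learning pipeline. The sketch below shows, in schematic form, a scikit-learn version of that kind of pipeline (feature scaling, PCA reduction, RBF-kernel SVM, cross-validation) run on synthetic random features; it stands in for, and does not reproduce, the authors' CWT-based feature extraction or their EEG data.

      # Schematic pipeline: scaling -> PCA -> RBF-kernel SVM, evaluated by
      # cross-validation. Random features stand in for CWT-derived EEG features.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 64))       # 120 trials x 64 placeholder features
      y = rng.integers(0, 2, size=120)     # 0 = imagined left, 1 = imagined right

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=10),
                          SVC(kernel="rbf", C=1.0, gamma="scale"))
      print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # ~ chance on random data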
  41. An Invitation to Critical Social Science of Big Data: From Critical Theory and Critical Research to Omniresistance.Ulaş Başar Gezgin - forthcoming - AI and Society:1-9.
    What would a social science of big data look like? In this article, we exemplify such a social science through a number of cases. We start our discussion with the epistemic qualities of big data. We point out that, contrary to the claims of its champions, big data is neither new, nor free of error, nor as reliable and rigorous as its cheerleaders assume. Secondly, we identify three types of big data: natural big data, artificial (...)
  42. Cognitive Computation Sans Representation.Paul Schweizer - 2017 - In Thomas M. Powers (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics,. Cham, Switzerland: Springer. pp. 65-84.
    The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is (...)
  43. Artificial Brains and Hybrid Minds.Paul Schweizer - 2018 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Cham, Switzerland: Springer. pp. 81-91.
    The paper develops two related thought experiments exploring variations on an ‘animat’ theme. Animats are hybrid devices with both artificial and biological components. Traditionally, ‘components’ have been construed in concrete terms, as physical parts or constituent material structures. Many fascinating issues arise within this context of hybrid physical organization. However, within the context of functional/computational theories of mentality, demarcations based purely on material structure are unduly narrow. It is abstract functional structure which does the key work in characterizing the respective (...)
  44. AI & Society: In Memoriam.Karamjit S. Gill - forthcoming - AI and Society:1-2.
  45. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
  46. Super Artifacts: Personal Devices as Intrinsically Multifunctional, Meta-Representational Artifacts with a Highly Variable Structure.Marco Fasoli - 2018 - Minds and Machines 28 (3):589-604.
    The computer is one of the most complex artifacts ever built. Given its complexity, it can be described from many different points of view. The aim of this paper is to investigate the representational structure and multifunctionality of a particular subset of computers, namely personal devices from a user-centred perspective. The paper also discusses the concept of “cognitive task”, as recently employed in some definitions of cognitive artifacts, and investigates the metaphysical properties of such artifacts. From a representational point of (...)
  47. Computing Mechanisms Without Proper Functions.Joe Dewhurst - 2018 - Minds and Machines 28 (3):569-588.
    The aim of this paper is to begin developing a version of Gualtiero Piccinini’s mechanistic account of computation that does not need to appeal to any notion of proper functions. The motivation for doing so is a general concern about the role played by proper functions in Piccinini’s account, which will be evaluated in the first part of the paper. I will then propose a potential alternative approach, where computing mechanisms are understood in terms of Carl Craver’s perspectival account of (...)
  48. The Role of Observers in Computations.Peter Leupold - 2018 - Minds and Machines 28 (3):427-444.
    John Searle raised the question whether all computation is observer-relative. Indeed, all of the common views of computation, be they semantical, functional or causal, rely on mapping something onto the states of a physical or abstract process. In order to effectively execute such a mapping, this process would have to be observed in some way. Thus a probably syntactical analysis by an observer seems to be essential for judging whether a given process implements some computation or not. In order to (...)
  49. A Computational Conundrum: “What is a Computer?” A Historical Overview.Istvan S. N. Berkeley - 2018 - Minds and Machines 28 (3):375-383.
    This introduction begins by posing the question that this Special Issue addresses and briefly considers historical precedents and why the issue is important. The discussion then moves on to the consideration of important milestones in the history of computing, up until the present time. A brief specification of the essential components of computational systems is then offered. The final section introduces the papers that are included in this volume.
  50. Virtual Machines and Real Implementations.Tyler Millhouse - 2018 - Minds and Machines 28 (3):465-489.
    What does it take to implement a computer? Answers to this question have often focused on what it takes for a physical system to implement an abstract machine. As Joslin observes, this approach neglects cases of software implementation—cases where one machine implements another by running a program. These cases, Joslin argues, highlight serious problems for mapping accounts of computer implementation—accounts that require a mapping between elements of a physical system and elements of an abstract machine. The source of these problems (...)
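    A toy example may help fix the idea of software implementation: in the Python sketch below, the Python runtime (one machine) implements a small stack machine (another machine) by running a program. The instruction set is invented for the illustration and is not drawn from Millhouse's paper.

      # A minimal stack-machine interpreter: one machine implemented in software
      # on another by running a program.
      def run(program):
          stack = []
          for op, arg in program:
              if op == "PUSH":
                  stack.append(arg)
              elif op == "ADD":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a + b)
              elif op == "MUL":
                  b, a = stack.pop(), stack.pop()
                  stack.append(a * b)
              else:
                  raise ValueError(f"unknown opcode: {op}")
          return stack

      # (2 + 3) * 4, evaluated by the virtual machine
      print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]))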