In the 70 years since Alan Turing’s ‘Computing Machinery and Intelligence’ appeared in Mind, there have been two widely accepted interpretations of the Turing test: the canonical behaviourist interpretation and the rival inductive or epistemic interpretation. These readings are based on Turing’s Mind paper; few seem aware that Turing described two other versions of the imitation game. I have argued that both readings are inconsistent with Turing’s 1948 and 1952 statements about intelligence, and fail to explain the design of his game. I argue instead for a response-dependence interpretation. This interpretation has implications for Turing’s view of free will: I argue that Turing’s writings suggest a new form of free will compatibilism, which I call response-dependence compatibilism. The philosophical implications of rethinking Turing’s test go yet further. It is assumed by numerous theorists that Turing anticipated the computational theory of mind. On the contrary, I argue, his remarks on intelligence and free will lead to a new objection to computationalism.
The canonical version of possible worlds semantics for story prefixes is due to David Lewis. This paper reassesses Lewis's theory and draws attention to some novel problems for his account.
Anthropomorphism is the human tendency to see human-like shapes in the environment. It has considerable consequences for people’s choices and beliefs. With the increased presence of robots, it is important to investigate the optimal design for this technology. In this paper we discuss the potential benefits and challenges of building anthropomorphic robots, both from a philosophical perspective and from the viewpoint of empirical research in the fields of human–robot interaction and social psychology. We believe that this broad investigation of anthropomorphism will not only help us to understand the phenomenon better, but can also indicate solutions for facilitating the integration of human-like machines in the real world.
It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks 'unorganised machines'. By the application of what he described as 'appropriate interference, mimicking education', an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of 'neurons' is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
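To make the architecture concrete, here is a minimal sketch, not drawn from Turing's own notation, of the kind of randomly wired network described above: a handful of two-state units, each reading from two randomly chosen units and all updated synchronously. Following standard reconstructions of the 1948 A-type machines, each unit is assumed to compute NAND of its inputs; the unit count, seeds, and function names are illustrative only.

```python
import random

def random_wiring(n_units, seed=0):
    """For each unit, pick two source units at random (the wiring is 'largely random')."""
    rng = random.Random(seed)
    return [(rng.randrange(n_units), rng.randrange(n_units)) for _ in range(n_units)]

def step(states, wiring):
    """Synchronous update: each unit outputs NAND of its two inputs' previous states."""
    return [1 - (states[a] & states[b]) for (a, b) in wiring]

if __name__ == "__main__":
    n = 8
    wiring = random_wiring(n, seed=42)
    init_rng = random.Random(1)
    states = [init_rng.randrange(2) for _ in range(n)]  # random initial unit states
    for t in range(5):
        print(t, states)
        states = step(states, wiring)
```

Simulating the training process mentioned in the abstract would add modifiable 'interfering' connections of the kind Turing attributes to his B-type machines; that extension is not shown here.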
Turing’s analysis of computability has recently been challenged; it is claimed that it is circular to analyse the intuitive concept of numerical computability in terms of the Turing machine. This claim threatens the view, canonical in mathematics and cognitive science, that the concept of a systematic procedure or algorithm is to be explicated by reference to the capacities of Turing machines. We defend Turing’s analysis against the challenge of ‘deviant encodings’. Keywords: Systematic procedure; Turing machine; Church–Turing thesis; Deviant encoding; Acceptable encoding; Turing’s analysis of computability; Turing’s Notational Thesis.
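The claim that a systematic procedure is to be explicated by the capacities of Turing machines is usually illustrated with a concrete machine acting on an agreed numeral encoding, and the deviant-encoding challenge targets precisely that choice of encoding. The following is a minimal sketch, with an illustrative transition table and function names of my own, of a machine computing the successor function under the standard unary encoding (an 'acceptable' encoding in the paper's terms).

```python
def run_tm(tape, transitions, start="q0", halt="qH", blank="_"):
    """Run a one-tape Turing machine until it enters the halting state."""
    state, head = start, 0
    cells = dict(enumerate(tape))
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Successor in unary: scan right over the 1s, write a 1 on the first blank, halt.
SUCC = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "qH"),
}

print(run_tm("111", SUCC))  # '1111': 3 maps to 4 under the encoding n -> '1' * n
```

Roughly, the circularity charge arises at just this point: specifying which encodings are acceptable can seem to appeal to the very notion of computability being analysed.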
The widespread tendency, even within AI, to anthropomorphize machines makes it easier to convince us of their intelligence. How can any putative demonstration of intelligence in machines be trusted if the AI researcher readily succumbs to make-believe? This is (what I shall call) the forensic problem of anthropomorphism. I argue that the Turing test provides a solution. This paper illustrates the phenomenon of misplaced anthropomorphism and presents a new perspective on Turing's imitation game. It also examines the role of the Turing test in relation to the current dispute between human-level AI and ‘mindless intelligence’.
According to the early futurist Julian Huxley, human life as we know it is ‘a wretched makeshift, rooted in ignorance’. With modern science, however, ‘the present limitations and miserable frustrations of our existence could be in large measure surmounted’ and human life could be ‘transcended by a state of existence based on the illumination of knowledge’ (1957b, p. 16).
Recent work in social robotics, which is aimed both at creating an artificial intelligence and providing a test-bed for psychological theories of human social development, involves building robots that can learn from ‘face-to-face’ interaction with human beings — as human infants do. The building-blocks of this interaction include the robot’s ‘expressive’ behaviours, for example facial expression and head-and-neck gesture. There is here an ideal opportunity to apply Wittgensteinian conceptual analysis to current theoretical and empirical work in the sciences. Wittgenstein’s philosophical psychology is sympathetic to embodied and situated Artificial Intelligence (see Proudfoot, 2002, 2004b), and his discussion of facial expression is remarkably modern. In this chapter, I explore his approach to facial expression, using smiling as a representative example, and apply it to the canonical interactive face robot, Cynthia Breazeal’s Kismet (see e.g. Breazeal, 2002, 2009). I assess the claim that Kismet has expressive behaviours, with the aim of generating philosophical insights for AI.
In this article the central philosophical issues concerning human-level artificial intelligence (AI) are presented. AI largely changed direction in the 1980s and 1990s, concentrating on building domain-specific systems and on sub-goals such as self-organization, self-repair, and reliability. Computer scientists aimed to construct intelligence amplifiers for human beings, rather than imitation humans. Turing based his test on a computer-imitates-human game, describing three versions of this game in 1948, 1950, and 1952. The famous version appears in a 1950 article in Mind, ‘Computing Machinery and Intelligence’ (Turing 1950). On the standard interpretation, Turing's test provides an operational definition of intelligence (or thinking) in machines, in terms of behavior. ‘Intelligent Machinery’ sets out the thesis that whether an entity is intelligent is determined in part by our responses to the entity's behavior. Wittgenstein frequently employed the idea of a human being acting like a reliable machine. A ‘living reading-machine’ is a human being or other creature who is given written signs, for example Chinese characters, arithmetical symbols, logical symbols, or musical notation, and who produces text spoken aloud, solutions to arithmetical problems, and proofs of logical theorems. Wittgenstein maintains that an entity that manipulates symbols genuinely reads only if he or she has a particular history, involving learning and training, and participates in a social environment that includes normative constraints and further uses of the symbols.
According to Richard Routley, a comprehensive theory of fiction is impossible, since almost anything is in principle imaginable. In my view, Routley is right: for any purported logic of fiction, there will be actual or imaginable fictions that provide counterexamples to that logic. Using the example of ‘impossible’ fictions, I test this claim against theories proposed by Routley’s Meinongian contemporaries and also by Routley himself and his 21st-century heirs. I argue that the phenomenon of impossible fictions challenges even today’s modal Meinongians.
In 1948 Turing claimed that the concept of intelligence is an “emotional concept”. An emotional concept is a response-dependent concept, and Turing’s remarks in his 1948 and 1952 papers suggest a response-dependence approach to the concept of intelligence. On this view, whether or not an object is intelligent is determined, as Turing said, “as much by our own state of mind and training as by the properties of the object”. His discussion of free will suggests a similar approach. Turing said, for example, that if a machine’s program “results in its doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something”. This points to a new form of free will compatibilism, which I call response-dependence compatibilism and explore here.
We set the Turing Test in the historical context of the development of machine intelligence, describe the different forms of the test and its rationale, and counter common misinterpretations and objections. Recently published material by Turing casts fresh light on his thinking.
Turing used the expression “emotional” in three distinct ways: to state his philosophical theory of the concept of intelligence, to classify arguments for and against the possibility of machine intelligence, and to describe the education of a “child machine”. The remarks on emotion include several of Turing’s most important philosophical claims. This paper analyses these remarks and their significance for current research in Artificial Intelligence.
In this paper I argue that Turing proposed a new approach to the concept of thinking, based on his claim that intelligence is an ‘emotional concept’; and that the response-dependence interpretation of Turing’s ‘criterion for “thinking”’ is a better fit with his writings than orthodox interpretations. The aim of this paper is to clarify the response-dependence interpretation, by addressing such questions as: What did Turing mean by the expression ‘emotional’? Is Turing’s criterion subjective? Are ‘emotional’ judgements decided by social consensus? Turing’s take on these issues impacts current philosophical debates on response-dependent concepts and on the nature of artificial intelligence.
This paper explores the relevance of Wittgenstein’s philosophical psychology for the two major contemporary approaches to the relation between language and cognition. As Pinker describes it, on the ‘Standard Social Science Model’ language is ‘an insidious shaper of thought’. According to Pinker’s own widely shared alternative view, ‘Language is the magnificent faculty that we use to get thoughts from one head to another’. I investigate Wittgenstein’s powerful challenges to the hypothesis that language is a device for communicating independently constituted (or individuated) thoughts. I argue that Wittgenstein offers instead a subtle version of the thesis that language determines thought.
Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, even all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision.
Turing was probably the first person to advocate the pursuit of robotics as a route to Artificial Intelligence and Wittgenstein the first to argue that, without the appropriate history, no machine could be intelligent. Wittgenstein anticipated much recent theorizing about the mind, including aspects of connectionist theories of mind and the situated cognition approach in AI. Turing and Wittgenstein had a wary respect for each other and there is significant overlap in their work, in both the philosophy of mathematics and the philosophy of AI. Both took (what would now be called) an externalist stance with respect to machine intelligence. But whereas Turing was concerned only with behaviour, Wittgenstein emphasized in addition history and environment. I show that Wittgenstein's externalist analysis of psychological capacities entails that most, even all, future "artificially intelligent" computers and robots will not use language, possess concepts, or reason. The argument tells, not against AI, but only against AI's traditional and romantic goal of building an artificial "res cogitans" - as first embraced by Turing and now exemplified in the work of Brooks and others on cognitive robotics. This argument supports the stance of the growing number of AI researchers whose aim is to produce, not thinking and understanding machines, but high-performance "advanced information processing systems".