Logic and Reality is a collection of essays by philosophers, logicians, mathematicians, and computer scientists, celebrating the work of the late distinguished philosopher Arthur Prior on the eightieth anniversary of his birth. Topics range from philosophical discussions of the nature of time and of the nature of logic itself, to descriptions of computer systems that can reason and take account of the fact that they exist in a temporal world.
Do the dynamics of a physical system determine what function the system computes? Except in special cases, the answer is no: it is often indeterminate what function a given physical system computes. Accordingly, care should be taken when the question ‘What does a particular neuronal system do?’ is answered by hypothesising that the system computes a particular function. The phenomenon of the indeterminacy of computation has important implications for the development of computational explanations of biological systems. Additionally, the phenomenon lends some support to the idea that a single neuronal structure may perform multiple cognitive functions, each subserved by a different computation. We provide an overarching conceptual framework in order to further the philosophical debate on the nature of computational indeterminacy and computational explanation.
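A toy illustration of the indeterminacy at issue (my own, not drawn from the paper; the gate, voltage levels, and labelling schemes are hypothetical): one and the same pattern of physical input-output behaviour, read under two different schemes for labelling its voltage levels, realises two different Boolean functions.

    # Hypothetical two-input gate: its output voltage is HIGH exactly when both inputs are HIGH.
    def physical_gate(in1: str, in2: str) -> str:
        return "HIGH" if in1 == "HIGH" and in2 == "HIGH" else "LOW"

    # Two labelling schemes mapping voltage levels onto truth values.
    labelling_a = {"HIGH": 1, "LOW": 0}   # HIGH codes for 1
    labelling_b = {"HIGH": 0, "LOW": 1}   # HIGH codes for 0

    def computed_function(labelling):
        """Truth table the gate realises under a given labelling of its voltages."""
        voltage_for = {value: level for level, value in labelling.items()}
        return {(x, y): labelling[physical_gate(voltage_for[x], voltage_for[y])]
                for x in (0, 1) for y in (0, 1)}

    print(computed_function(labelling_a))  # AND: only (1, 1) yields 1
    print(computed_function(labelling_b))  # OR:  only (0, 0) yields 0

Under the first labelling the device computes conjunction; under the second, disjunction. Nothing in the physics privileges one reading over the other, which is the sense in which what the system computes is indeterminate.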
Alan M. Turing, pioneer of computing and WWII codebreaker, is one of the most important and influential thinkers of the twentieth century. In this volume for the first time his key writings are made available to a broad, non-specialist readership. They make fascinating reading both in their own right and for their historic significance: contemporary computational theory, cognitive science, artificial intelligence, and artificial life all spring from this ground-breaking work, which is also rich in philosophical and logical insight. An introduction by leading Turing expert Jack Copeland provides the background and guides the reader through the selection. About Alan Turing: Alan Turing FRS OBE (1912-1954) studied mathematics at King's College, Cambridge. He was elected a Fellow of King's in March 1935, at the age of only 22. In the same year he invented the abstract computing machines - now known simply as Turing machines - on which all subsequent stored-program digital computers are modelled. During 1936-1938 Turing continued his studies, now at Princeton University. He completed a PhD in mathematical logic, analysing the notion of 'intuition' in mathematics and introducing the idea of oracular computation, now fundamental in mathematical recursion theory. An 'oracle' is an abstract device able to solve mathematical problems too difficult for the universal Turing machine. In the summer of 1938 Turing returned to his Fellowship at King's. When WWII started in 1939 he joined the wartime headquarters of the Government Code and Cypher School (GC&CS) at Bletchley Park, Buckinghamshire. Building on earlier work by Polish cryptanalysts, Turing contributed crucially to the design of electro-mechanical machines ('bombes') used to decipher Enigma, the code by means of which the German armed forces sought to protect their radio communications. Turing's work on the version of Enigma used by the German navy was vital to the battle for supremacy in the North Atlantic. He also contributed to the attack on the cyphers known as 'Fish'. Based on binary teleprinter code, Fish was used during the latter part of the war in preference to morse-based Enigma for the encryption of high-level signals, for example messages from Hitler and other members of the German High Command. It is estimated that the work of GC&CS shortened the war in Europe by at least two years. Turing received the Order of the British Empire for the part he played. In 1945, the war over, Turing was recruited to the National Physical Laboratory (NPL) in London, his brief to design and develop an electronic computer - a concrete form of the universal Turing machine. Turing's report setting out his design for the Automatic Computing Engine (ACE) was the first relatively complete specification of an electronic stored-program general-purpose digital computer. Delays beyond Turing's control resulted in NPL's losing the race to build the world's first working electronic stored-program digital computer - an honour that went to the Royal Society Computing Machine Laboratory at Manchester University in June 1948. Discouraged by the delays at NPL, Turing took up the Deputy Directorship of the Royal Society Computing Machine Laboratory in that year.
Turing was a founding father of modern cognitive science and a leading early exponent of the hypothesis that the human brain is in large part a digital computing machine, theorising that the cortex at birth is an 'unorganised machine' which through 'training' becomes organised 'into a universal machine or something like it'. He also pioneered Artificial Intelligence. Turing spent the rest of his short career at Manchester University, being appointed to a specially created Readership in the Theory of Computing in May 1953. He was elected a Fellow of the Royal Society of London in March 1951 (a high honour).
Presupposing no familiarity with the technical concepts of either philosophy or computing, this clear introduction reviews the progress made in AI since the inception of the field in 1956. Copeland goes on to analyze what those working in AI must achieve before they can claim to have built a thinking machine and appraises their prospects of succeeding. There are clear introductions to connectionism and to the language of thought hypothesis which weave together material from philosophy, artificial intelligence and neuroscience. John Searle's attacks on AI and cognitive science are countered and close attention is given to foundational issues, including the nature of computation, Turing machines, the Church-Turing thesis and the difference between classical symbol processing and parallel distributed processing. The book also explores the possibility of machines having free will and consciousness and concludes with a discussion of the sense in which the human brain may be a computer.
To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase 'of a certain kind'. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
There are various equivalent formulations of the Church-Turing thesis. A common one is that every effective computation can be carried out by a Turing machine. The Church-Turing thesis is often misunderstood, particularly in recent writing in the philosophy of mind.
Turing's test has been much misunderstood. Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.
Accelerating Turing machines are Turing machines of a sort able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turing-machine halting problem; and decide the predicate calculus. Are accelerating Turing machines, then, logically impossible devices? I argue that they are not. There are implications concerning the nature of effective procedures and the theoretical limits of computability. Contrary to a recent paper by Bringsjord, Bello and Ferrucci, however, the concept of an accelerating Turing machine cannot be used to shore up Searle's Chinese room argument.
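For readers unfamiliar with the construction, the acceleration is purely temporal: on the usual presentation in this literature, the machine performs its first operation in half a unit of time, its second in a quarter of a unit, and in general its nth operation in 2^{-n} units, so that infinitely many operations are completed within a single unit of time:

    t_n = 2^{-n}, \qquad \sum_{n=1}^{\infty} t_n = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1.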
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation”. But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.
It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks 'unorganised machines'. By the application of what he described as 'appropriate interference, mimicking education', an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of 'neurons' is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
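A rough sketch of the kind of network Turing described (the size, wiring, and initial state below are arbitrary, and no training is shown): two-state units, each fed by two other units and computing NAND, all updated in step.

    # A toy A-type unorganised machine in the spirit of Turing's 1948 report:
    # two-state units, each taking input from two randomly chosen units and
    # computing NAND, all updated synchronously. Size and wiring are arbitrary.
    import random

    random.seed(0)
    N = 8
    wiring = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
    state = [random.randint(0, 1) for _ in range(N)]

    def step(state):
        return [1 - (state[a] & state[b]) for a, b in wiring]   # NAND of the two inputs

    for t in range(5):
        print(t, state)
        state = step(state)

In Turing's B-type machines each connection passes through a modifier whose setting can be changed, and it is by adjusting those modifiers that the 'appropriate interference' of training is applied.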
Turing’s analysis of computability has recently been challenged; it is claimed that it is circular to analyse the intuitive concept of numerical computability in terms of the Turing machine. This claim threatens the view, canonical in mathematics and cognitive science, that the concept of a systematic procedure or algorithm is to be explicated by reference to the capacities of Turing machines. We defend Turing’s analysis against the challenge of ‘deviant encodings’. Keywords: Systematic procedure; Turing machine; Church–Turing thesis; Deviant encoding; Acceptable encoding; Turing’s analysis of computability; Turing’s Notational Thesis.
A myth has arisen concerning Turing's paper of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine - a myth that has passed into cognitive science and the philosophy of mind, to wide and pernicious effect. This supposed principle, sometimes incorrectly termed the 'Church-Turing thesis', is the claim that the class of functions that can be computed by machines is identical to the class of functions that can be computed by Turing machines. In point of fact Turing himself nowhere endorses, nor even states, this claim (nor does Church). I describe a number of notional machines, both analogue and digital, that can compute more than a universal Turing machine. These machines are exemplars of the class of 'nonclassical' computing machines. Nothing known at present rules out the possibility that machines in this class will one day be built, nor that the brain itself is such a machine. These theoretical considerations undercut a number of foundational arguments that are commonly rehearsed in cognitive science, and gesture towards a new class of cognitive models.
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
What are the limits of physical computation? In his ‘Church’s Thesis and Principles for Mechanisms’, Turing’s student Robin Gandy proved that any machine satisfying four idealised physical ‘principles’ is equivalent to some Turing machine. Gandy’s four principles in effect define a class of computing machines (‘Gandy machines’). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.
A universal Turing machine consists of a read/write head moving over a tape. The tape is divided into squares, each square bearing a single symbol—'0' or '1', for example. This tape is the machine's general-purpose storage medium: the machine is set in motion with its input inscribed on the tape, output is written onto the tape by the head, and the tape serves as a short-term working memory for the results of intermediate steps of the computation. The program governing the particular computation that the machine is to perform is also stored on the tape. A small, fixed program that is 'hard-wired' into the head enables the head to read and execute the instructions of whatever program is on the tape. The machine's atomic operations are very simple—for example, 'move left one square', 'move right one square', 'identify the symbol currently beneath the head', 'write 1 on the square that is beneath the head', and 'write 0 on the square that is beneath the head'. Complexity of operation is achieved by the chaining together of large numbers of these simple atoms. Any universal Turing machine can be programmed to carry out any calculation that can be performed by a human mathematician working with paper and pencil in accordance with some algorithmic method. This is what is meant by calling these machines 'universal'.
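As a minimal sketch of the machinery just described (the simulator, its symbols, and the three-instruction example program are my own invention, not Turing's): the tape is stored as a mapping from squares to symbols, and an instruction table is executed using atomic operations of exactly the kinds listed above.

    # Minimal single-tape Turing machine simulator (illustrative only).
    # The instruction table maps (state, scanned symbol) -> (symbol to write, move, next state).
    from collections import defaultdict

    def run(program, tape, state="start", max_steps=10_000):
        cells = defaultdict(lambda: "_", enumerate(tape))   # '_' marks a blank square
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells[head]                     # identify the symbol beneath the head
            write, move, state = program[(state, symbol)]
            cells[head] = write                      # write a symbol on the current square
            head += 1 if move == "R" else -1         # move one square right or left
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # A three-instruction program that inverts a binary string, square by square.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run(invert, "10110"))   # -> 01001

A universal machine is then just one particular instruction table of this kind: its tape holds both a suitably encoded program and that program's input, and its fixed table reads and executes the encoded instructions.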
Famous examples of conceivability arguments include (i) Descartes’ argument for mind-body dualism, (ii) Kripke's ‘modal argument’ against psychophysical identity theory, (iii) Chalmers’ ‘zombie argument’ against materialism, and (iv) modal versions of the ontological argument for theism. In this paper, we show that for any such conceivability argument, C, there is a corresponding ‘mirror argument’, M. M is deductively valid and has a conclusion that contradicts C's conclusion. Hence, a proponent of C—henceforth, a ‘conceivabilist’—can be warranted in holding that C's premises are conjointly true only if she can find fault with one of M's premises. But M's premises are modelled on a pair of C's premises. The same reasoning that supports the latter supports the former. For this reason, a conceivabilist can repudiate M's premises only on pain of severely undermining C's premises. We conclude on this basis that all conceivability arguments, including each of (i)–(iv), are fallacious.
Searle has recently used two adaptations of his Chinese room argument in an attack on connectionism. I show that these new forms of the argument are fallacious. First I give an exposition of and rebuttal to the original Chinese room argument, and then a brief introduction to the essentials of connectionism.
In this article the central philosophical issues concerning human-level artificial intelligence (AI) are presented. AI largely changed direction in the 1980s and 1990s, concentrating on building domain-specific systems and on sub-goals such as self-organization, self-repair, and reliability. Computer scientists aimed to construct intelligence amplifiers for human beings, rather than imitation humans. Turing based his test on a computer-imitates-human game, describing three versions of this game in 1948, 1950, and 1952. The famous version appears in a 1950 article in Mind, ‘Computing Machinery and Intelligence’ (Turing 1950). The interpretation of Turing's test is that it provides an operational definition of intelligence (or thinking) in machines, in terms of behavior. ‘Intelligent Machinery’ sets out the thesis that whether an entity is intelligent is determined in part by our responses to the entity's behavior. Wittgenstein frequently employed the idea of a human being acting like a reliable machine. A ‘living reading-machine’ is a human being or other creature that is given written signs, for example Chinese characters, arithmetical symbols, logical symbols, or musical notation, and who produces text spoken aloud, solutions to arithmetical problems, and proofs of logical theorems. Wittgenstein mentions that an entity that manipulates symbols genuinely reads only if he or she has a particular history, involving learning and training, and participates in a social environment that includes normative constraints and further uses of the symbols.
In his PhD thesis (1938) Turing introduced what he described as 'a new kind of machine'. He called these 'O-machines'. The present paper employs Turing's concept against a number of currently fashionable positions in the philosophy of mind.
This volume celebrates the various facets of Alan Turing (1912–1954), the British mathematician and computing pioneer, widely considered the father of computer science. It is aimed at the general reader, with additional notes and references for those who wish to explore the life and work of Turing more deeply.

The book is divided into eight parts, covering different aspects of Turing’s life and work.

Part I presents various biographical aspects of Turing, some from a personal point of view.

Part II presents Turing’s universal machine (now known as a Turing machine), which provides a theoretical framework for reasoning about computation. His 1936 paper on this subject is widely seen as providing the starting point for the field of theoretical computer science.

Part III presents Turing’s work on codebreaking during World War II. While the War was a disastrous interlude for many, for Turing it provided a nationally important outlet for his creative genius. It is not an overstatement to say that without Turing, the War would probably have lasted longer, and may even have been lost by the Allies. The sensitive nature of Turing’s wartime work meant that much of this has been revealed only relatively recently.

Part IV presents Turing’s post-War work on computing, both at the National Physical Laboratory and at the University of Manchester. He made contributions to both hardware design, through the ACE computer at the NPL, and software, especially at Manchester.

Part V covers Turing’s contribution to machine intelligence (now known as Artificial Intelligence or AI). Although Turing did not coin the term, he can be considered a founder of this field, which is still active today; he authored a seminal paper on the subject in 1950.

Part VI covers morphogenesis, Turing’s last major scientific contribution, on the generation of seemingly random patterns in biology and on the mathematics behind such patterns. Interest in this area has increased rapidly in recent times in the field of bioinformatics, with Turing’s 1952 paper on this subject being frequently cited.

Part VII presents some of Turing’s mathematical influences and achievements. Turing was remarkably free of external influences, with few co-authors – Max Newman was an exception and acted as a mathematical mentor in both Cambridge and Manchester.

Part VIII considers Turing in a wider context, including his influence and legacy to science and in the public consciousness.

Reflecting Turing’s wide influence, the book includes contributions by authors from a wide variety of backgrounds. Contemporaries provide reminiscences, while there are perspectives by philosophers, mathematicians, computer scientists, historians of science, and museum curators. Some of the contributors gave presentations at Turing Centenary meetings in 2012 at Bletchley Park, King’s College Cambridge, and Oxford University, and several of the chapters in this volume are based on those presentations – some through transcription of the original talks, especially for Turing’s contemporaries, now aged in their 90s. Sadly, some contributors died before the publication of this book, hence its dedication to them.

For those interested in personal recollections, Chapters 2, 3, 11, 12, 16, 17, and 36 will be of interest. For philosophical aspects of Turing’s work, see Chapters 6, 7, 26–31, and 41. Mathematical perspectives can be found in Chapters 35 and 37–39. Historical perspectives can be found in Chapters 4, 8, 9, 10, 13–15, 18, 19, 21–25, 34, and 40. With respect to Turing’s body of work, the treatment in Parts II–VI is broadly chronological. We have attempted to be comprehensive with respect to all the important aspects of Turing’s achievements, and the book can be read cover to cover, or the chapters can be tackled individually if desired. There are cross-references between chapters where appropriate, and some chapters will inevitably overlap.

We hope that you enjoy this volume as part of your library and that you will dip into it whenever you wish to enter the multifaceted world of Alan Turing.
Accelerated Turing machines are Turing machines that perform tasks commonly regarded as impossible, such as computing the halting function. The existence of these notional machines has obvious implications concerning the theoretical limits of computability.
This paper charts some early history of the possible worlds semantics for modal logic, starting with the pioneering work of Prior and Meredith. The contributions of Geach, Hintikka, Kanger, Kripke, Montague, and Smiley are also discussed.
The revolution in semantics in the late 1960s and 1970s overturned an earlier competing paradigm, ‘translational’ semantics. I revive and defend Prior’s translational semantics for modals and tense-modals. I also show how to extend Prior’s propositional modal semantics to quantificational modal logic, and use the resulting semantics to formalize Prior’s own counterexample to the Barcan Formula.