Accelerated Turing machines are Turing machines that perform tasks commonly regarded as impossible, such as computing the halting function. The existence of these notional machines has obvious implications concerning the theoretical limits of computability.
Fifty years ago this month (June), in the Computing Machine Laboratory at Manchester University, the world's first electronic stored-program computer performed its first calculation. The tiny program, stored on the face of a cathode ray tube, was just 17 instructions long. Electronic engineers Freddie Williams and Tom Kilburn built the Manchester computer in accordance with fundamental ideas explained to them by Max Newman, professor of mathematics at Manchester. The computer fell sideways out of research that nobody could have guessed would have any practical application. The initial idea germinated thirteen years earlier in the head of Alan Turing, who was working on a recherché problem in mathematical logic. While thinking about this problem Turing dreamed up an abstract machine, nowadays known simply as the 'universal Turing machine', which, as he put it, would compute 'all numbers which could naturally be regarded as computable'. The machine consisted of a memory in…
The tense tree method extends Jeffrey's well-known formulation of classical propositional logic to tense logic (Jeffrey 1991). Tense trees combine pure tense logic with features of Prior's U-calculi (where 'U' is the earlier-than relation; see Prior 1967 and the Introduction to this volume). The tree method has a number of virtues: trees are well suited to computational applications; semantically, the tree systems presented here are no less illuminating than model theory; the metatheory associated with tree formulations is often more tractable than that required in a model-theoretic setting; and last but not least the tree method is ideal for pedagogical purposes.
By focusing on the caloric composition of hunted prey species, optimal foraging research has shown that hunters usually make economically rational prey choice decisions. However, research by meat scientists suggests that the gustatory appeal of wildlife meats may vary dramatically. In this study, behavioral research indicates that Mayangna and Miskito hunters in Nicaragua inconsistently pursue multiple prey types in the optimal diet set. We use cognitive methods, including unconstrained pile sorts and cultural consensus analysis, to investigate the hypothesis that these partial preferences are influenced by considerations of meat flavor. Native informants exhibit high agreement on the relative appeal of different meats. Given the absence of other noteworthy differences between spider monkeys (Ateles geoffroyi) and howler monkeys (Alouatta palliata), the unappealing flavor of howler monkeys seems to be a factor in the partial preference for this species.
There are various equivalent formulations of the Church-Turing thesis. A common one is that every effective computation can be carried out by a Turing machine. The Church-Turing thesis is often misunderstood, particularly in recent writing in the philosophy of mind.
What are the limits of physical computation? In his ‘Church’s Thesis and Principles for Mechanisms’, Turing’s student Robin Gandy proved that any machine satisfying four idealised physical ‘principles’ is equivalent to some Turing machine. Gandy’s four principles in effect define a class of computing machines (‘Gandy machines’). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.
This paper charts some early history of the possible worlds semantics for modal logic, starting with the pioneering work of Prior and Meredith. The contributions of Geach, Hintikka, Kanger, Kripke, Montague, and Smiley are also discussed.
The aim of this study was to examine the predictions of three theories of human logical reasoning, (a) mental model theory, (b) formal rules theory (e.g., PSYCOP), and (c) the probability heuristics model, regarding the inferences people make for extended categorical syllogisms. Most research with extended syllogisms has been restricted to the quantifier “All” and to an asymmetrical presentation. This study used three-premise syllogisms with the additional quantifiers that are used for traditional categorical syllogisms as well as additional syllogistic figures. The predictions of the theories were examined using overall accuracy as well as a multinomial tree modelling technique. The results demonstrated that all three theories were able to predict response selections at high levels. However, the modelling analyses showed that the probability heuristics model did the best in both Experiments 1 and 2.
The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer: the idea of controlling a computing machine's operations by means of a program of coded instructions stored in the machine's 'memory'. In 1945 Turing drew up his revolutionary design for an electronic computing machine, his Automatic Computing Engine ('ACE'). A pilot model of the ACE ran its first program in 1950 and the production version, the 'DEUCE', went on to become a cornerstone of the fledgling British computer industry. The first 'personal' computer was based on Turing's ACE.

Alan Turing's Automatic Computing Engine describes Turing's struggle to build the modern computer. The first detailed history of Turing's contributions to computer science, this text is essential reading for anyone interested in the history of the computer and the history of mathematics. It contains first-hand accounts by Turing and by the pioneers of computing who worked with him. As well as relating the story of the invention of the computer, the book clearly describes the hardware and software of the ACE, including the very first computer programs. The book is intended to be accessible to everyone with an interest in computing, and contains numerous diagrams and illustrations as well as original photographs.

The book contains chapters describing Turing's path-breaking research in the fields of Artificial Intelligence (AI) and Artificial Life (A-Life). It has an extensive system of hyperlinks to The Turing Archive for the History of Computing, an on-line library of digital facsimiles of typewritten documents by Turing and the other scientists who pioneered the electronic computer.
Accelerating Turing machines are Turing machines able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turing-machine halting problem; and decide the predicate calculus. Are accelerating Turing machines, then, logically impossible devices? I argue that they are not. There are implications concerning the nature of effective procedures and the theoretical limits of computability. Contrary to a recent paper by Bringsjord, Bello and Ferrucci, however, the concept of an accelerating Turing machine cannot be used to shore up Searle's Chinese room argument.
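The temporal trick behind such machines can be illustrated numerically. On the standard presentation (an assumption here, not spelled out in the abstract itself), the machine's n-th step takes 2^-n time units, so infinitely many steps fit inside a finite interval:

```python
# Sketch of an accelerating ("Zeno") machine's timing schedule.
# Assumption for illustration: step n consumes 2**-n time units, so the
# partial sums of step times converge to 2 even as the step count grows
# without bound. The machine thus completes infinitely many steps within
# any interval longer than 2 time units.

def total_time(steps: int) -> float:
    """Total time consumed by the first `steps` steps."""
    return sum(2.0 ** -n for n in range(steps))

if __name__ == "__main__":
    for n in (1, 2, 10, 50):
        print(n, total_time(n))
```

The partial sums 1, 1.5, 1.998…, approaching 2, show why an accelerating machine can, in principle, survey every case of an unbounded search (such as the halting problem) in finite time.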
Ignoring the temporal dimension, an object such as a railway tunnel or a human body is a three-dimensional whole composed of three-dimensional parts. The four-dimensionalist holds that a physical object exhibiting identity across time—Descartes, for example—is a four-dimensional whole composed of 'briefer' four-dimensional objects, its temporal parts. Peter van Inwagen (1990) has argued that four-dimensionalism cannot be sustained, or at best can be sustained only by a counterpart theorist. We argue that different schemes of individuation of temporal parts are available, which undermines van Inwagen's argument.
Turing's test has been much misunderstood. Recently published material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.
Alan Turing anticipated many areas of current research in computer and cognitive science. This article outlines his contributions to Artificial Intelligence, connectionism, hypercomputation, and Artificial Life, and also describes Turing's pioneering role in the development of electronic stored-program digital computers. It locates the origins of Artificial Intelligence in postwar Britain. It examines the intellectual connections between the work of Turing and of Wittgenstein in respect of their views on cognition, on machine intelligence, and on the relation between provability and truth. We criticise widespread and influential misunderstandings of the Church–Turing thesis and of the halting theorem. We also explore the idea of hypercomputation, outlining a number of notional machines that compute the uncomputable.
An exploration of the governmental policy, 'prison works', and its attendant recidivism provides the general opening. The 1944 Education Act is then taken as furnishing the medical model of personal handicap and deficiency which informed special education at an early stage. The Warnock Report's attempt to shift considerations to educational grounds is examined with a particular focus upon the ensuing definition of special needs and its legacy in legislation following the 1981 Act to the present. Foucault's concept of normalisation is the basis for analysis of the normative elements in the main features of the national curriculum and testing. The latter, together with aspects of the 1988 Education Reform Act, is examined with regard to its impact upon special educational needs. The conclusions are that 'education works' has emerged, alongside a rediscovery of the medical model of personal deficiency.
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
The tape is divided into squares, each square bearing a single symbol—'0' or '1', for example. This tape is the machine's general-purpose storage medium: the machine is set in motion with its input inscribed on the tape, output is written onto the tape by the head, and the tape serves as a short-term working memory for the results of intermediate steps of the computation. The program governing the particular computation that the machine is to perform is also stored on the tape. A small, fixed program that is 'hard-wired' into the head enables the head to read and execute the instructions of whatever program is on the tape. The machine's atomic operations are very simple—for example, 'move left one square', 'move right one square', 'identify the symbol currently beneath the head', 'write 1 on the square that is beneath the head', and 'write 0 on the square that is beneath the head'. Complexity of operation is achieved by the chaining together of large numbers of these simple atoms. Any universal Turing machine can be programmed to carry out any calculation that can be performed by a human mathematician working with paper and pencil in accordance with some algorithmic method. This is what is meant by calling these machines 'universal'.
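The mechanism described above—a tape of symbols, a head that reads and writes, and atomic move-left/move-right operations chained by a program—can be sketched as a minimal simulator. The function names and the example program below are our own illustrative assumptions, not anything from the original text:

```python
# Minimal sketch of the machine described above: an unbounded tape of
# symbols, a head position, and a program table mapping (state, symbol)
# to (symbol to write, direction to move, next state). '_' marks a blank.

from collections import defaultdict

def run(program, tape, state="start", max_steps=1000):
    """Run `program` on input `tape` (a string) and return the output."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank beyond input
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]                          # identify the symbol
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol                      # write on the square
        head += 1 if move == "R" else -1              # move one square
    return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# Hypothetical example program: invert a binary string, scanning rightward
# and halting at the first blank square.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(FLIP, "1011"))  # -> 0100
```

A universal machine differs only in that the program table itself is encoded on the tape, and the hard-wired program in the head interprets it—which is exactly the chaining of simple atoms the passage describes.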
In his PhD thesis (1938) Turing introduced what he described as 'a new kind of machine'. He called these 'O-machines'. The present paper employs Turing's concept against a number of currently fashionable positions in the philosophy of mind.
A myth has arisen concerning Turing's paper of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine - a myth that has passed into cognitive science and the philosophy of mind, to wide and pernicious effect. This supposed principle, sometimes incorrectly termed the 'Church-Turing thesis', is the claim that the class of functions that can be computed by machines is identical to the class of functions that can be computed by Turing machines. In point of fact Turing himself nowhere endorses, nor even states, this claim (nor does Church). I describe a number of notional machines, both analogue and digital, that can compute more than a universal Turing machine. These machines are exemplars of the class of 'nonclassical' computing machines. Nothing known at present rules out the possibility that machines in this class will one day be built, nor that the brain itself is such a machine. These theoretical considerations undercut a number of foundational arguments that are commonly rehearsed in cognitive science, and gesture towards a new class of cognitive models.