Accelerating Turing machines are Turing machines of a sort able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turing-machine halting problem; and decide the predicate calculus. Are accelerating Turing machines, then, logically impossible devices? I argue that they are not. There are implications concerning the nature of effective procedures and the theoretical limits of computability. Contrary to a recent paper by Bringsjord, Bello and Ferrucci, however, the concept of an accelerating Turing machine cannot be used to shore up Searle's Chinese room argument.
Accelerated Turing machines are Turing machines that perform tasks commonly regarded as impossible, such as computing the halting function. The existence of these notional machines has obvious implications concerning the theoretical limits of computability.
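For concreteness, the acceleration schedule usually attached to these notional machines (a sketch; individual authors vary the details) has the machine carry out its n-th step in $2^{-n}$ units of external time, so that infinitely many steps fit inside a single finite interval:

$$\sum_{n=1}^{\infty} 2^{-n} \;=\; \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \;=\; 1.$$

On this picture, an observer who inspects a designated 'flag' square after one unit of time has elapsed can read off whether a simulated ordinary Turing machine ever halted, which is how such devices are said to compute the halting function.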
The tape is divided into squares, each square bearing a single symbol—'0' or '1', for example. This tape is the machine's general-purpose storage medium: the machine is set in motion with its input inscribed on the tape, output is written onto the tape by the head, and the tape serves as a short-term working memory for the results of intermediate steps of the computation. The program governing the particular computation that the machine is to perform is also stored on the tape. A small, fixed program that is 'hard-wired' into the head enables the head to read and execute the instructions of whatever program is on the tape. The machine's atomic operations are very simple—for example, 'move left one square', 'move right one square', 'identify the symbol currently beneath the head', 'write 1 on the square that is beneath the head', and 'write 0 on the square that is beneath the head'. Complexity of operation is achieved by the chaining together of large numbers of these simple atoms. Any universal Turing machine can be programmed to carry out any calculation that can be performed by a human mathematician working with paper and pencil in accordance with some algorithmic method. This is what is meant by calling these machines 'universal'.
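The atomic operations just listed are enough to drive a complete, if toy, simulation. Below is a minimal sketch in Python, assuming a standard quintuple presentation (state, scanned symbol) → (symbol to write, head move, next state); the bit-flipping program and all names are illustrative, not drawn from the text.

```python
def run_tm(program, tape, state="start", max_steps=1000):
    """Simulate a single-tape Turing machine for at most max_steps steps."""
    cells = dict(enumerate(tape))  # sparse tape: square index -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, "_")            # unvisited squares read as blank
        if (state, symbol) not in program:       # no applicable instruction: halt
            break
        write, move, state = program[(state, symbol)]
        cells[head] = write                      # 'write ... on the square beneath the head'
        head += {"L": -1, "R": 1, "N": 0}[move]  # 'move left/right one square'
    return "".join(cells[i] for i in sorted(cells))

# Illustrative program: flip every bit of the input, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_tm(flip, "10110"))  # -> 01001
```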
Since the beginning of the twenty-first century there has been an increasing awareness that software represents a blind spot in new media theory. The growing interest in software also influences the argument in this paper, which sets out from the assumption that Alan M. Turing's concept of the universal machine, the first theoretical description of a computer program, is a kind of bachelor machine. Previous writings based on a similar hypothesis have focused either on a comparison of the universal machine and the bachelor machine in terms of the similarities of their structural features, or they have taken the bachelor machine as a metaphor for a man or a computer. Unlike them, this paper stresses the importance of the context as a key to interpreting the universal Turing machine as a bachelor machine and, potentially, as a self-portrait.
Infinite time Turing machines extend the operation of ordinary Turing machines into transfinite ordinal time. By doing so, they provide a natural model of infinitary computability, a theoretical setting for the analysis of the power and limitations of supertask algorithms.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation”. But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.
The common view that the notion of a Turing machine is directly relevant to AI is criticised. It is argued that computers are the result of a convergence of two strands of development with a long history: development of machines for automating various physical processes and machines for performing abstract operations on abstract entities, e.g. doing numerical calculations. Various aspects of these developments are analysed, along with their relevance to AI, and the similarities between computers viewed in this way and animal brains. This comparison depends on a number of distinctions: between energy requirements and information requirements of machines, between ballistic and online control, between internal and external operations, and between various kinds of autonomy and self-awareness. The ideas are all intuitively familiar to software engineers, though rarely made fully explicit. Most of this has nothing to do with Turing machines or most of the mathematical theory of computation. But it has everything to do with both the scientific task of understanding, modelling or replicating human or animal intelligence and the engineering applications of AI, as well as other applications of computers.
We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations.
We extend in a natural way the operation of Turing machines to infinite ordinal time, and investigate the resulting supertask theory of computability and decidability on the reals. Every $\Pi^1_1$ set, for example, is decidable by such machines, and the semi-decidable sets form a portion of the $\Delta^1_2$ sets. Our oracle concept leads to a notion of relative computability for sets of reals and a rich degree structure, stratified by two natural jump operators.
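As a sketch of the model behind these results, in the standard presentation (the lim sup rule also described in the final abstract of this section): at successor stages the machine computes exactly as an ordinary Turing machine, while at a limit ordinal $\lambda$ the head returns to the start of the tape, the machine enters a distinguished limit state, and each cell $i$ takes the lim sup of its earlier values,

$$c_\lambda(i) \;=\; \limsup_{\alpha \to \lambda} c_\alpha(i) \;=\; \begin{cases} 1 & \text{if } c_\alpha(i) = 1 \text{ for cofinally many } \alpha < \lambda,\\ 0 & \text{otherwise.} \end{cases}$$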
Can mind be modeled as a Turing machine? If you find such questions irrelevant, e.g. because the subject is already exhausted, then you need not read the book Mind versus Computer (Gams et al., 1991). If, on the other hand, you do find such questions relevant, then perhaps you need not read Dunlop's review of the book (Dunlop, 2000).
This work investigates some of the issues and consequences for the field of artificial intelligence and cognitive science which are related to the perceived limits of computation with current digital equipment. The Church-Turing thesis and the specific properties of Turing machines are examined, and some of the philosophical 'in principle' objections, such as the application of Gödel's incompleteness theorem, are discussed. It is argued that the misinterpretation of the Church-Turing thesis has led to unfounded assumptions about the limitations of computing machines in general. Modern digital computers, which are based on the von Neumann architecture, can typically be programmed so that they interact effectively with the real world. It is argued that digital computing machines are supersets of Turing machines if they are, for example, programmed to interact with the real world. Moreover, computing is not restricted to the domain of discrete state machines. Analog computers and real or simulated neural nets exhibit properties that may not be accommodated in a definition of computing that is based on Turing machines. Consequently, some of the philosophical 'in principle' objections to artificial intelligence may not apply in reference to engineering efforts in artificial intelligence.
In an earlier paper I used Turing machine arguments to show that computers can recognize humanly recognizable patterns in principle. In 1978 James D. Heffernan expressed some doubts about such arguments. He does not question the propositions that I defend in the paper, nor the specific arguments in their support. What he does criticize are certain background assumptions.
Infinite time Turing machines with only one tape are in many respects fully as powerful as their multi-tape cousins. In particular, the two models of machine give rise to the same class of decidable sets, the same degree structure and, at least for partial functions f : ℝ → ℕ, the same class of computable functions. Nevertheless, there are infinite time computable functions f : ℝ → ℝ that are not one-tape computable, and so the two models of infinitary computation are not equivalent. Surprisingly, the class of one-tape computable functions is not closed under composition; but closing it under composition yields the full class of all infinite time computable functions. Finally, every ordinal that is clockable by an infinite time Turing machine is clockable by a one-tape machine, except certain isolated ordinals that end gaps in the clockable ordinals.
We characterise explicitly the decidable predicates on integers of Infinite Time Turing machines, in terms of admissibility theory and the constructible hierarchy. We do this by pinning down $\zeta$, the least ordinal not the length of any eventual output of an Infinite Time Turing machine (halting or otherwise); using this, the Infinite Time Turing Degrees are considered, and it is shown how the jump operator coincides with the production of mastercodes for the constructible hierarchy; further, that the natural ordinals associated with the jump operator satisfy a Spector criterion, and correspond to the $L_\zeta$-stables. It also implies that the machines devised are "$\Sigma_2$ Complete" amongst all such other possible machines. It is shown that least upper bounds of an "eventual jump" hierarchy exist on an initial segment.
Storrs McCall appeals to a particular true but unprovable sentence of formal arithmetic to argue, by appeal to its irrefutability, that human minds transcend Turing machines. Metamathematical oversights in McCall's discussion of the Gödel phenomena, however, render invalid his philosophical argument for this transcendentalist conclusion.
We first discuss Michael Dummett’s philosophy of mathematics and Robert Brandom’s philosophy of language to demonstrate that inferentialism entails the falsity of Church’s Thesis and, as a consequence, of the Computational Theory of Mind. This amounts to an entirely novel critique of mechanism in the philosophy of mind, one we show to have tremendous advantages over the traditional Lucas-Penrose argument.
Vann McGee has argued that, given certain background assumptions and an ought-implies-can thesis about norms of rationality, Bayesianism conflicts globally with computationalism due to the fact that Robinson arithmetic is essentially undecidable. I show how to sharpen McGee's result using an additional fact from recursion theory—the existence of a computable sequence of computable reals with an uncomputable limit. In conjunction with the countable additivity requirement on probabilities, such a sequence can be used to construct a specific proposition to which Bayesianism requires an agent to assign uncomputable credence—yielding a local conflict with computationalism.
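For illustration, one standard construction of such a sequence is a Specker sequence (the paper's own example may differ): fix a computably enumerable but non-computable set $W$, write $W_n$ for the finite part of $W$ enumerated within $n$ steps, and set

$$q_n \;=\; \sum_{k \in W_n} 2^{-(k+1)}.$$

The map $n \mapsto q_n$ is computable and each $q_n$ is rational, hence a computable real; the sequence is nondecreasing and bounded, so it converges; yet its limit $\sum_{k \in W} 2^{-(k+1)}$ is not a computable real, since any procedure approximating it to arbitrary precision would decide membership in $W$.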
The properties of Turing’s famous ‘universal machine’ have long sustained functionalist intuitions about the nature of cognition. Here, I show that there is a logical problem with standard functionalist arguments for multiple realizability. These arguments rely essentially on Turing’s powerful insights regarding computation. In addressing a possible reply to this criticism, I further argue that functionalism is not a useful approach for understanding what it is to have a mind. In particular, I show that the difficulties involved in distinguishing implementation from function make multiple realizability claims untestable and uninformative. As a result, I conclude that the role of Turing machines in philosophy of mind needs to be reconsidered.
Lucas and Penrose have contended that, by displaying how any characterisation of arithmetical proof programmable into a machine allows of diagonalisation, generating a humanly recognisable proof which eludes that characterisation, Gödel's incompleteness theorem rules out any purely mechanical model of the human intellect. The main criticisms of this argument have been that the proof generated by diagonalisation (i) will not be humanly recognisable unless humans can grasp the specification of the object-system (Benacerraf); and (ii) counts as a proof only on the (unproven) hypothesis that the object system is consistent (Putnam). The present paper argues that criticism (ii) may be met head-on by an intuitionistic proponent of the anti-mechanist argument; and that criticism (i) is simply mistaken. However the paper concludes by questioning the sufficiency of the situation for an interesting anti-mechanist conclusion.
Earlier, we have studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we have analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter modelled as a Turing machine performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and start to classify the computational power of this experimental oracle using non-uniform complexity classes. Secondly, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential time protocol; this we formulate as a general conjecture. Our theory proposes that measurability in physics is subject to laws which are collateral effects of the limits of computability and computational complexity.
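As a toy illustration of how such a query protocol extracts information (this is not the authors' formal model; the simulated resistance, oracle, and names below are invented for the sketch), each balanced comparison of the unknown resistance against a chosen test value yields one further bit of its binary expansion:

```python
# Toy sketch of an experimental oracle queried by bisection.
# In the physical setting each query is an experiment (balancing a
# Wheatstone bridge against a test resistance) that takes real time;
# here the experiment is simulated by a hidden constant.

UNKNOWN_R = 0.6180339887  # the 'physical' quantity, hidden from the algorithm

def bridge_balances_below(r_test: float) -> bool:
    """Simulated experiment: is the unknown resistance below the test value?"""
    return UNKNOWN_R < r_test

def measure_bits(n_bits: int) -> str:
    """Bisection protocol: one oracle query per extracted bit."""
    lo, hi, bits = 0.0, 1.0, []
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if bridge_balances_below(mid):
            hi, bit = mid, "0"
        else:
            lo, bit = mid, "1"
        bits.append(bit)
    return "".join(bits)

print(measure_bits(16))  # first 16 binary digits of UNKNOWN_R
```

The point of the protocol in the abstract is that the physical time each such query takes cannot be idealised away; the exponential-time conjecture concerns exactly this cost, and it is what bounds how much of the unknown value is effectively measurable.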
Recent advances in neuroscience lead to a wider realm for philosophy, one that includes the science of the Darwinian-evolved computational brain, our inner-world-producing organ, a non-recursive super-Turing machine combining 100B synapsing-neuron DNA-computers based on the genetic code. The whole system is a logos machine offering a world map for global context, essential for our intentional grasp of opportunities. We start from the observable contrast between the chaotic universe and our orderly inner world, the noumenal cosmos. So far, philosophy has been rehearsing our thoughts, our human-internal world, a grand painting of the outer world, how we comprehend our experience subjectively, worked up by the logos machine; but now we seek a wider horizon: how humans understand the world thanks to Darwinian evolution, which adapts in response to the metaphysical gap, the chasm between the human animal and its environment, shaping the organism so it can deal with its variable world. This new horizon embraces global context coded in neural structures that support the noumenal cosmos, our inner mental world, for us as denizens of the outer environment. Kant’s inner and outer senses are fundamental ingredients of scientific philosophy. Several sections are devoted to Heidegger: his lizard example is debunked, but his version of the metaphysical gap and his doctrine of the logos are praised. Rorty and others of the behaviorist school are also discussed.
In his article “On Mechanical Recognition” R. J. Nelson brings to bear a branch of mathematical logic called automata theory on problems of artificial intelligence. Specifically he attacks the anti-mechanist claim that “[i]nasmuch as human recognition to a very great extent relies on context and on the ability to grasp wholes with some independence of the quality of the parts, even to fill in the missing parts on the basis of expectations, it follows that computers cannot in principle be programmed to recognize or learn to recognize all patterns” (p. 24). Nelson proposes, contrary to this claim, that “gestalt recognition is not beyond digital automata”. In particular, he claims, he will establish by what he calls “Turing machine arguments” that the following four theses are true: (1) automata can recognize different pattern types in one and the same set of instances; (2) automata can recognize the “same” pattern in different sets of instances; (3) automata can recognize incomplete, degraded patterns having missing or indeterminate parts; and (4) automata can recognize family resemblances.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which allows comparisons of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines.
Infinite time Turing machines represent a model of computability that extends the operations of Turing machines to transfinite ordinal time by defining the content of each cell at limit steps to be the lim sup of the sequence of previous contents of that cell. In this paper, we study a computational model obtained by replacing the lim sup rule with an ‘eventually constant’ rule: at each limit step, the value of each cell is defined if and only if the content of that cell has stabilized before that limit step, and is then equal to this constant value. We call these machines weak infinite time Turing machines (wITTMs). We study different variants of wITTMs, adding multiple tapes, heads, or bidimensional tapes. We show that some of these models are equivalent to each other concerning their computational strength. We show that wITTMs decide exactly the arithmetic relations on natural numbers.
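In symbols, writing $c_\alpha(i)$ for the content of cell $i$ at stage $\alpha$ (a sketch of the two limit rules just described): at a limit stage $\lambda$ the ordinary infinite time Turing machine sets

$$c_\lambda(i) \;=\; \limsup_{\alpha \to \lambda} c_\alpha(i),$$

whereas the weak, ‘eventually constant’ rule sets $c_\lambda(i) = v$ just in case there is some $\beta < \lambda$ with $c_\alpha(i) = v$ for all $\beta \le \alpha < \lambda$, and leaves $c_\lambda(i)$ undefined otherwise.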