Both Putnam and Searle have argued that every abstract automaton is realized by every physical system, a claim that leads to a reductio argument against Cognitivism or Strong AI: if it is possible for a computer to be conscious by virtue of realizing some abstract automaton, then by Putnam’s theorem every physical system also realizes that automaton, and so every physical system is conscious—a conclusion few supporters of Strong AI would be willing to accept. Dennett has suggested a criterion of reverse engineering for identifying “real patterns,” and I argue that this approach is also very effective at identifying “real realizations.” I focus on examples of real-world implementations of complex automata because previous attempts at answering Putnam’s challenge have been overly restrictive, ruling out some realizations that are in fact paradigmatic examples of practical automaton realization. I also argue that some previous approaches have at the same time been overly lenient in accepting counter-intuitive realizations of trivial automata. I argue that the reverse engineering approach avoids both of these flaws. Moreover, Dennett’s approach allows us to recognize that some realizations are better than others, and that the line between real realizations and non-realizations is not sharp.
We suggest that developing automata theoretic foundations is relevant for knowledge theory, so that we study not only what is known by agents, but also the mechanisms by which such knowledge is arrived at. We define a class of epistemic automata, in which agents’ local states are annotated with abstract knowledge assertions about others. These are finite state agents who communicate synchronously with each other and information exchange is ‘perfect’. We show that the class of recognizable languages has good closure properties, leading to a Kleene-type theorem using what we call regular knowledge expressions. These automata model distributed causal knowledge in the following way: each agent in the system has a partial knowledge of the temporal evolution of the system, and every time agents synchronize, they update each other’s knowledge, resulting in a more up-to-date view of the system state. Hence we show that these automata can be used to solve the satisfiability problem for a natural epistemic temporal logic for local properties. Finally, we characterize the class of languages recognized by epistemic automata as the regular consistent languages studied in concurrency theory.
In this paper it is argued that certain stimulus-response learning models which are adequate to represent finite automata (acceptors) are not adequate to represent noninitial state input-output automata (transducers). This circumstance suggests the question whether or not the behavior of animals, if satisfactorily modelled by automata, is predictive. It is argued in partial answer that there are automata which can be explained in the sense that their transition and output functions can be described (roughly, Hempel-type covering law explanation) while their behaviors are in principle not predictable short of possession of their complete histories or of information concerning present internal states obtained by indirect observation.
The semantic automata framework, developed originally in the 1980s, provides computational interpretations of generalized quantifiers. While recent experimental results have associated structural features of these automata with neuroanatomical demands in processing sentences with quantifiers, the theoretical framework has remained largely unexplored. In this paper, after presenting some classic results on semantic automata in a modern style, we present the first application of semantic automata to polyadic quantification, exhibiting automata for iterated quantifiers. We also discuss the role of semantic automata in linguistic theory and offer new empirical predictions for sentence processing with embedded quantifiers.
The objective of this paper is to analyze to what extent the multiverse hypothesis provides a real explanation of the peculiarities of the laws and constants in our universe. First we argue in favor of the thesis that all multiverses except Tegmark’s “mathematical multiverse” are too small to explain the fine tuning, so that they merely shift the problem up one level. But the “mathematical multiverse” is surely too large. To prove this assessment, we have performed a number of experiments with cellular automata of complex behavior, which can be considered as universes in the mathematical multiverse. The analogy between what happens in some automata (in particular Conway’s “Game of Life”) and the real world is very strong. But if the results of our experiments can be extrapolated to our universe, we should expect to inhabit—in the context of the multiverse—a world in which at least some of the laws and constants of nature should show a certain time dependence. Actually, the probability of our existence in a world such as ours would be mathematically equal to zero. In consequence, the results presented in this paper can be considered as an inkling that the hypothesis of the multiverse, whatever its type, does not offer an adequate explanation for the peculiarities of the physical laws in our world.
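The experiments described above treat cellular automata such as Conway’s “Game of Life” as toy universes. A minimal sketch of one Life update step is given below; the function name, the small finite grid, and the treatment of cells beyond the edge as dead are illustrative assumptions, not the paper’s actual experimental setup.

```python
# Minimal sketch of one update step of Conway's "Game of Life".
# Cells outside the grid are treated as dead (a simplifying assumption).

def life_step(grid):
    """Return the next generation of a 2D grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight Moore neighbours inside the grid.
            n = sum(grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c))
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return nxt

# A "blinker" oscillates with period 2.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

Even this tiny rule set already produces the long-lived, law-like structures that the paper compares with physical regularities.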
As interpreted by Pattee, von Neumann’s Theory of Self-Reproducing Automata has proved to be a useful tool for understanding some of the difficulties and paradoxes of molecular biosemiotics. But is its utility limited to molecular systems or is it more generally applicable within biosemiotics? One way of answering that question is to look at the Theory as a model for one particular high-level biosemiotic activity, human language. If the model is not useful for language, then it certainly cannot be generally useful to biosemiotics. Beginning with the Universal Turing Machine and continuing with von Neumann’s Theory and Pattee’s interpretation, the properties of universality, programmability, underspecification, complementarity of description/construction, and open-ended evolutionary potential are shown to be usefully applicable to language, thus opening a new line of inquiry in biosemiotics.
Saul Kripke has proposed an argument to show that there is a serious problem with many computational accounts of physical systems and with functionalist theories in the philosophy of mind. The problem with computational accounts is roughly that they provide no noncircular way to maintain that any particular function with an infinite domain is realized by any physical system, and functionalism has a similar problem because of the character of the functional systems that are supposed to be realized by organisms. This paper shows that the standard account of what it is for a physical system to compute a function can avoid Kripke's criticisms without being reduced to circularity; a very minor and natural elaboration of the standard account suffices to save both functionalist theories and computational accounts generally.
Cellular Automata (CA) based simulations are widely used in a great variety of domains, from statistical physics to social science. They allow for spectacular displays and numerical predictions. Are they for all that a revolutionary modeling tool, allowing for “direct simulation”, or for the simulation of “the phenomenon itself”? Or are they merely models “of a phenomenological nature rather than of a fundamental one”? How do they compare to other modeling techniques? In order to answer these questions, we present a systematic exploration of CA’s various uses.
Cellular automata (henceforth: CA) are discrete, abstract computational systems that have proved useful both as general models of complexity and as more specific representations of non-linear dynamics in a variety of scientific fields. Firstly, CA are (typically) spatially and temporally discrete: they are composed of a finite or denumerable set of homogeneous, simple units, the atoms or cells. At each time unit, the cells instantiate one of a finite set of states. They evolve in parallel at discrete time steps, following state update functions or dynamical transition rules: the update of a cell state obtains by taking into account the states of cells in its local neighborhood (there are, therefore, no actions at a distance). Secondly, CA are abstract, as they can be specified in purely mathematical terms and implemented in physical structures. Thirdly, CA are computational systems: they can compute functions and solve algorithmic problems. Despite functioning in a different way from traditional, Turing machine-like devices, CA with suitable rules can emulate a universal Turing machine, and therefore compute, given Turing's Thesis, anything computable....
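The local, synchronous update rule described above can be sketched with a one-dimensional elementary cellular automaton. The function name, the wrap-around boundary, and the binary encoding are illustrative assumptions; rule 110 is chosen because it is one of the rules known to emulate a universal Turing machine.

```python
# One synchronous update of a 1D binary CA: each cell's next state
# depends only on itself and its two immediate neighbours, so there
# is no action at a distance.

def eca_step(cells, rule=110):
    """Apply an elementary CA rule to a ring of 0/1 cells in parallel."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The 3-cell neighbourhood selects one bit of the rule number.
        idx = (left << 2) | (centre << 1) | right
        out.append((rule >> idx) & 1)
    return out

cells = [0, 0, 0, 1, 0, 0, 0]
cells = eca_step(cells)  # every cell updated simultaneously
```

Iterating `eca_step` from a single live cell already produces the intricate structures that make such rules computationally universal.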
In this paper I explore the question of how artificial life might be used to get a handle on philosophical issues concerning the mind-body problem. I focus on questions concerning what the physical precursors were to the earliest evolved versions of intelligent life. I discuss how cellular automata might constitute an experimental platform for the exploration of such issues, since cellular automata offer a unified framework for the modeling of physical, biological, and psychological processes. I discuss what it would take to implement in a cellular automaton the evolutionary emergence of cognition from non-cognitive artificial organisms. I review work on the artificial evolution of minimally cognitive organisms and discuss how such projects might be translated into cellular automata simulations.
This introduction to certain mathematical topics central to theoretical computer science treats computability and recursive functions, formal languages and automata, computational complexity, and cryptography. The presentation is essentially self-contained with detailed proofs of all statements provided. Although it begins with the basics, it proceeds to some of the most important recent developments in theoretical computer science.
One of the most important features of living beings, and perhaps a universal one, is their ability to be modified in a functional way. In order to model this characteristic, we designed automata with a finite number of instantaneous internal descriptions, with input(s) and output(s), which are able to be functionally modified.
A continuous spatial automaton is analogous to a cellular automaton, except that the cells form a continuum, as do the possible states of the cells. After an informal mathematical description of spatial automata, we describe in detail a continuous analog of Conway’s “Life,” and show how the automaton can be implemented using the basic operations of field computation.
A structure has a (finite-string) automatic presentation if the elements of its domain can be named by finite strings in such a way that the coded domain and the coded atomic operations are recognised by synchronous multitape automata. Consequently, every structure with an automatic presentation has a decidable first-order theory. The problems surveyed here include the classification of classes of structures with automatic presentations, the complexity of the isomorphism problem, and the relationship between definability and recognisability.
A cellular automaton that is related to the "mosaic cycle concept" is considered. We explain why such automata very often, but not always, sustain n-periodic trajectories (n being the number of states of the automaton). Our work is a first step in the direction of a theory of this type of automaton, which might be useful in modeling mosaic successions.
Very often, living beings seem able to change their functioning when external conditions vary. In order to study this property, we have devised abstract machines whose internal organisation changes whenever the external conditions vary. The internal organisations of these machines (or programs) are as simple as possible: functions of discrete variables. We call such machines self-modifying automata. These machines stabilise after a number of transient steps, when they go indefinitely through a loop called a p-cycle or limit cycle of length p. More often than not, the p in the cycle is equal to one and the cycle reduces to a fixed point.
We define a class of finite state automata acting on transfinite sequences, and use these automata to prove that no singular cardinal can be defined by a monadic second order formula over the ordinals.
This paper develops a theory of quantum automata and their slightly more general versions, q-automata. Quantum languages and η-quantum languages, 0≤η<1, are studied. Functions that can be realized as probability maps for q-automata are characterized. Quantum grammars are discussed and it is shown that quantum languages are precisely those languages that are induced by a quantum grammar. A quantum pumping lemma is employed to show that there are regular languages that are not η-quantum, 0≤η<1.
In this paper we present a negative solution of counting problems for some classes slightly different from bounded arithmetic (Δ₀ sets). To get the results we study properties of chains of finite automata.
Hilary Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the class of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind.
Despite holding to the essential distinction between mind and body, Descartes did not adopt a life-body dualism. Though humans are the only creatures which can reason, as they are the only creatures whose body is in an intimate union with a soul, they are not the only finite beings who are alive. In the present note, I attempt to determine Descartes’ criteria for something to be “living.” Though certain passages associate such a principle with the presence of a properly functioning heart, I show that there are important reasons for also understanding life in terms of a degree of complexity of design.
The central argument for functionalism is the so-called argument from multiple realizations. According to this argument, because a functionally characterized system admits a potential infinity of structurally diverse physical realizations, the functional organization of such systems cannot be captured in a law-like manner at the level of physical description (and, thus, must be treated as a principally autonomous domain of inquiry). I offer a rebuttal of this argument based on formal modeling of its premises in the framework of automata theory. In this formal model I exploit the so-called minimal (universal) realizations of automata behaviors to show that the argument from multiple realizations is not just invalid but is refutable, in the sense that its premises (when made formally precise) entail the very opposite of the functionalist's conclusion.
We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and pushdown automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence for the claim that human linguistic abilities are constrained by computational complexity.
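The finite-automaton/pushdown distinction invoked above can be sketched concretely. Under the standard semantic-automata encoding, a model is read as a string over a two-letter alphabet; the function names and the choice of 'a'/'b' symbols here are illustrative assumptions, not the study's materials.

```python
# Encoding: 'a' = an A that is B, 'b' = an A that is not B.

def every(word):
    """'Every A is B' needs only a two-state finite automaton:
    reject as soon as a 'b' is seen, otherwise accept."""
    state = 'accept'
    for ch in word:
        if ch == 'b':
            state = 'reject'
    return state == 'accept'

def most(word):
    """'Most As are B' requires comparing two unbounded counts,
    which exceeds finite memory — here simulated with one counter,
    the kind of resource a pushdown automaton provides."""
    count = 0
    for ch in word:
        count += 1 if ch == 'a' else -1
    return count > 0

most("aab")   # 2 of 3 As are B -> True
every("aab")  # one counterexample -> False
```

No bound on the input length lets a fixed-state machine verify `most`, which is exactly the computational gap the reaction-time comparison targets.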
I shall argue that software agents can be attributed cognitive states, since their behaviour can be best understood by adopting the intentional stance. These cognitive states are legally relevant when agents are delegated by their users to engage, without users’ review, in choices based on the agents’ own knowledge. Consequently, both with regard to torts and to contracts, legal rules designed for humans can also be applied to software agents, even though the latter do not have rights and duties of their own. The implications of this approach in different areas of the law are then discussed, in particular with regard to contracts, torts, and personality.
We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence.
Szymanik (2007) suggested that the distinction between first-order and higher-order quantifiers does not coincide with the computational resources required to compute the meaning of quantifiers. The cognitive difficulty of quantifier processing might be better assessed on the basis of the complexity of the minimal corresponding automata. For example, both logical and numerical quantifiers are first-order. However, computational devices recognizing logical quantifiers have a fixed number of states, while the number of states in automata corresponding to numerical quantifiers grows with the rank of the quantifier. This observation partially explains the differences in processing between those two types of quantifiers (Troiani et al. 2009) and links them to the computational model. Taking this perspective, we suggest below an experimental setting extending those of McMillan et al. (2005) and Troiani et al. (2009).
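The fixed-versus-growing state-count contrast can be sketched as follows. The input encoding ('a' = an A that is B) and function names are illustrative assumptions; the point is only that the minimal automaton for "some" is the same size for every model, while the one for "at least k" has k+1 states.

```python
# Logical quantifier: "some" — two states suffice, whatever the model size.
def some_automaton(word):
    """Accept as soon as one 'a' is seen; two states regardless of input."""
    return any(ch == 'a' for ch in word)

# Numerical quantifier: "at least k" — the DFA needs states 0..k,
# so its size grows with the rank k of the quantifier.
def at_least_k_automaton(k, word):
    """Count 'a's up to k; accept iff the counter reaches k."""
    state = 0
    for ch in word:
        if ch == 'a' and state < k:
            state += 1
    return state == k
```

On this picture, the processing cost tracked in the experiments corresponds to the number of DFA states traversed, not to the first-order/higher-order divide.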
Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle no longer capable of predicting the future machine behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a realistic option), or facing a responsibility gap, which cannot be bridged by traditional concepts of responsibility ascription.
While classical temporal logics lose track of a state as soon as a temporal operator is applied, several branching-time logics able to repeatedly refer to a state have been introduced in the literature. We study such logics by introducing a new formalism, hybrid branching-time logics, subsuming the other approaches and making the ability to refer to a state more explicit by assigning a name to it. We analyze the expressive power of hybrid branching-time logics and the complexity of their satisfiability problem. As the main result, the satisfiability problem for the hybrid versions of several branching-time logics is proved to be 2EXPTIME-complete. To prove the upper bound, the automata-theoretic approach to branching-time logics is extended to hybrid logics. As a result of independent interest, the nonemptiness problem for alternating one-pebble Büchi tree automata is shown to be 2EXPTIME-complete. A common property of the logics studied is that they refer to only one state. This restriction is crucial: the ability to refer to more than one state causes a nonelementary blow-up in complexity. In particular, we prove that satisfiability for NCTL* has nonelementary complexity.
In the spatialized Prisoner's Dilemma, players compete against their immediate neighbors and adopt a neighbor's strategy should it prove locally superior. Fields of strategies evolve in the manner of cellular automata (Nowak and May, 1993; Mar and St. Denis, 1993a,b; Grim 1995, 1996). Often a question arises as to what the eventual outcome of an initial spatial configuration of strategies will be: Will a single strategy prove triumphant in the sense of progressively conquering more and more territory without opposition, or will an equilibrium of some small number of strategies emerge? Here it is shown, for finite configurations of Prisoner's Dilemma strategies embedded in a given infinite background, that such questions are formally undecidable: there is no algorithm or effective procedure which, given a specification of a finite configuration, will in all cases tell us whether that configuration will or will not result in progressive conquest by a single strategy when embedded in the given field. The proof introduces undecidability into decision theory in three steps: by (1) outlining a class of abstract machines with familiar undecidability results, by (2) modelling these machines within a particular family of cellular automata, carrying over undecidability results for these, and finally by (3) showing that spatial configurations of Prisoner's Dilemma strategies will take the form of such cellular automata.
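The imitation dynamics described above ("adopt a neighbor's strategy should it prove locally superior") can be sketched on a one-dimensional ring of unconditional cooperators and defectors. The payoff values, ring topology, and tie-breaking are illustrative assumptions, not the papers' exact settings.

```python
# Standard PD payoff matrix (row player, column player); values illustrative.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def scores(strats):
    """Each player's total payoff against its two ring neighbours."""
    n = len(strats)
    return [PAYOFF[(strats[i], strats[i - 1])] +
            PAYOFF[(strats[i], strats[(i + 1) % n])]
            for i in range(n)]

def step(strats):
    """Cellular-automaton-style update: each player adopts the strategy
    of the best-scoring player in its neighbourhood (itself included)."""
    n = len(strats)
    sc = scores(strats)
    nxt = []
    for i in range(n):
        nbhd = [i - 1, i, (i + 1) % n]
        best = max(nbhd, key=lambda j: sc[j])  # ties: first in list order
        nxt.append(strats[best])
    return nxt
```

Iterating `step` shows a lone defector spreading through a field of cooperators, which is exactly the kind of "progressive conquest" whose long-run outcome the paper proves undecidable in general.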