We extend previous work by modeling the evolution of communication using a spatialized genetic algorithm which recombines strategies purely locally. Here cellular automata are used as a spatialized environment in which individuals gain points by capturing drifting food items and are 'harmed' if they fail to hide from migrating predators. Our individuals are capable of making one of two arbitrary sounds, heard only locally by their immediate neighbors. They can respond to sounds from their neighbors by opening their mouths or by hiding. By opening their mouths in the presence of food they maximize gains; by hiding when a predator is present they minimize losses. We consider the result a 'natural' template for benefits from communication; unlike a range of other studies, it is here only the recipient of communicated information that immediately benefits.
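A minimal sketch of the scoring idea this abstract describes, in Python. The strategy encoding and all names here are hypothetical illustrations, not the paper's actual model; the point is only the asymmetry the abstract emphasizes: the hearer, not the signaler, is the one scored.

```python
import random

# A strategy: which of two sounds to make on seeing food or a predator,
# and how to act on hearing each sound. (Hypothetical encoding; the
# paper's own genetic coding differs.)
def make_strategy(rng):
    return {
        'sound_food': rng.choice([0, 1]),
        'sound_pred': rng.choice([0, 1]),
        'act': {0: rng.choice(['open', 'hide']),
                1: rng.choice(['open', 'hide'])},
    }

def receiver_score(signaler, receiver, event):
    """Score the hearer only: it gains by opening its mouth on food and
    loses if it fails to hide from a predator."""
    sound = signaler['sound_food'] if event == 'food' else signaler['sound_pred']
    action = receiver['act'][sound]
    if event == 'food':
        return 1.0 if action == 'open' else 0.0
    return 0.0 if action == 'hide' else -1.0
```

A pair of matched strategies (distinct sounds for food and predator, with the hearer opening on the food-sound and hiding on the predator-sound) maximizes the hearer's score; the signaler gains nothing directly, which is the 'natural' template for communication benefits described above.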
A version of this paper was presented at the IEEE International Conference on Computational Intelligence, combined meeting of ICNN, FUZZ-IEEE, and ICEC, Orlando, June-July 1994, and an earlier form of the result is to appear as "The Undecidability of the Spatialized Prisoner's Dilemma" in Theory and Decision. An interactive form of the paper, in which figures are called up as evolving arrays of cellular automata, is available on DOS disk as Research Report #94-04i. An expanded version appears as chapter 6 of The Philosophical Computer.
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
‘The problem with simulations is that they are doomed to succeed.’ So runs a common criticism of simulations—that they can be used to ‘prove’ anything and are thus of little or no scientific value. While this particular objection represents a minority view, especially among those who work with simulations in a scientific context, it raises a difficult question: what standards should we use to differentiate a simulation that fails from one that succeeds? In this paper we build on a structural analysis of simulation developed in previous work to provide an evaluative account of the variety of ways in which simulations do fail. We expand the structural analysis in terms of the relationship between a simulation and its real-world target, emphasizing the important role both of aspects intended to correspond to reality and of those specifically intended not to correspond. The result is an outline both of the ways in which simulations can fail and of the scientific importance of those various forms of failure.
Robustness has long been recognized as an important parameter for evaluating game-theoretic results, but talk of ‘robustness’ generally remains vague. What we offer here is a graphic measure for a particular kind of robustness (‘matrix robustness’), using a three-dimensional display of the universe of 2 × 2 game theory. In such a measure specific games appear as specific volumes (Prisoner’s Dilemma, Stag Hunt, etc.), allowing a graphic image of the extent of particular game-theoretic effects in terms of those games. The measure also allows for an easy comparison between different effects in terms of matrix robustness. Here we use the measure to compare the robustness of Tit for Tat’s well-known success in spatialized games (Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books; Grim, P., et al. (1998). The philosophical computer: Exploratory essays in philosophical computer modeling. Cambridge, Mass.: MIT Press) with the robustness of a recent game-theoretic model of the contact hypothesis regarding prejudice reduction (Grim, P., et al. (2005). Public Affairs Quarterly, 19, 95–125).
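The idea of games as volumes in payoff space can be illustrated with a crude Monte Carlo sketch: sample random payoff quadruples and classify them by strict ordering (Prisoner's Dilemma as T > R > P > S, and so on). This is a toy under stated assumptions, not the paper's actual three-dimensional display; with four i.i.d. uniform payoffs, each strict ordering occupies 1/24 of the space.

```python
import random

def classify_game(R, S, T, P):
    """Classify a symmetric 2x2 game by the strict ordering of its payoffs.
    (Standard ordering definitions; ties have probability zero here.)"""
    if T > R > P > S:
        return "Prisoner's Dilemma"
    if R > T > P > S:
        return "Stag Hunt"
    if T > R > S > P:
        return "Chicken"
    return "other"

def volume_estimate(n=100_000, seed=0):
    """Estimate the fraction of payoff space each named game occupies."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        R, S, T, P = (rng.random() for _ in range(4))
        g = classify_game(R, S, T, P)
        counts[g] = counts.get(g, 0) + 1
    return {g: c / n for g, c in counts.items()}
```

Since each of the three named games corresponds to exactly one of the 24 strict orderings, each should occupy roughly 1/24 ≈ 4.2% of the sampled space, giving a rough quantitative sense of how much 'room' a game-theoretic effect defined over such a region has.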
The goal of philosophy of information is to understand what information is, how it operates, and how to put it to work. But unlike 'information' in the technical sense of information theory, what we are interested in is meaningful information. To understand the nature and dynamics of information in this sense we have to understand meaning. What we offer here are simple computational models that show the emergence of meaning and information transfer in randomized arrays of neural nets. These we take to be formal instantiations of a tradition of theories of meaning as use. What they offer, we propose, is a glimpse into the origin and dynamics of at least simple forms of meaning and information transfer as properties inherent in behavioral coordination across a community.
Any behavior belongs to innumerable overlapping types. Any adequate theory of emergence and retention of behavior, whether psychological or biological, must give us not only a general mechanism – reinforcement or selection, for example – but a reason why that mechanism applies to a particular behavior in terms of one of its types rather than others. Why is it as this type that the behavior is reinforced or selected?
We extend previous work on cooperation to some related questions regarding the evolution of simple forms of communication. The evolution of cooperation within the iterated Prisoner's Dilemma has been shown to follow different patterns, with significantly different outcomes, depending on whether the features of the model are classically perfect or stochastically imperfect (Axelrod 1980a, 1980b, 1984, 1985; Axelrod and Hamilton, 1981; Nowak and Sigmund, 1990, 1992; Sigmund 1993). Our results here show that the same holds for communication. Within a simple model, the evolution of communication seems to require a stochastically imperfect world.
Formal systems are standardly envisaged in terms of a grammar specifying well-formed formulae together with a set of axioms and rules. Derivations are ordered lists of formulae each of which is either an axiom or is generated from earlier items on the list by means of the rules of the system; the theorems of a formal system are simply those formulae for which there are derivations. Here we outline a set of alternative and explicitly visual ways of envisaging and analyzing at least simple formal systems using fractal patterns of infinite depth. Progressively deeper dimensions of such a fractal can be used to map increasingly complex wffs or increasingly complex 'value spaces', with tautologies, contradictions, and various forms of contingency coded in terms of color. This and related approaches, it turns out, offer not only visually immediate and geometrically intriguing representations of formal systems as a whole but also promising formal links (1) between standard systems and classical patterns in fractal geometry, (2) between quite different kinds of value spaces in classical and infinite-valued logics, and (3) between cellular automata and logic. It is hoped that pattern analysis of this kind may open possibilities for a geometrical approach to further questions within logic and metalogic.
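The color-coding step described above (tautology, contradiction, contingency) can be illustrated with a toy classifier over two-variable formulas. This shows only the truth-table evaluation underlying the coloring, not the fractal mapping itself, and the formula syntax (Python's or/and/not over variables p and q) is an illustrative choice.

```python
from itertools import product

def classify_formula(formula):
    """Classify a propositional formula in variables p, q by its full
    truth table: tautology, contradiction, or contingent."""
    values = [bool(eval(formula, {"p": p, "q": q}))
              for p, q in product([False, True], repeat=2)]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingent"
```

In the fractal displays the abstract describes, each such classification would fix the color of a region at some depth of the pattern; here it simply labels the formula.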
happy face, in my view, is this. It starts with two simple claims about our language that I think just have to be right. On the basis of essentially those two claims alone it offers what I think is a very plausible account of both (1) what really is wrong with the argument and (2) why there doesn't seem to be anything wrong with the argument.
In the spatialized Prisoner's Dilemma, players compete against their immediate neighbors and adopt a neighbor's strategy should it prove locally superior. Fields of strategies evolve in the manner of cellular automata (Nowak and May, 1993; Mar and St. Denis, 1993a,b; Grim 1995, 1996). Often a question arises as to what the eventual outcome of an initial spatial configuration of strategies will be: Will a single strategy prove triumphant in the sense of progressively conquering more and more territory without opposition, or will an equilibrium of some small number of strategies emerge? Here it is shown, for finite configurations of Prisoner's Dilemma strategies embedded in a given infinite background, that such questions are formally undecidable: there is no algorithm or effective procedure which, given a specification of a finite configuration, will in all cases tell us whether that configuration will or will not result in progressive conquest by a single strategy when embedded in the given field. The proof introduces undecidability into decision theory in three steps: by (1) outlining a class of abstract machines with familiar undecidability results, by (2) modelling these machines within a particular family of cellular automata, carrying over undecidability results for these, and finally by (3) showing that spatial configurations of Prisoner's Dilemma strategies will take the form of such cellular automata.
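The cellular-automaton dynamics described here, in which each cell plays its neighbors and adopts a locally superior strategy, can be sketched in the Nowak-May style. The payoff values (T = b, R = 1, P = S = 0) and the eight-neighbor toroidal update below are assumptions for illustration, not the paper's exact specification.

```python
def payoff(me, other, b=1.85):
    """One Prisoner's Dilemma round: Nowak-May style payoffs
    (T=b, R=1, P=S=0; an illustrative assumption)."""
    if me == 'C' and other == 'C':
        return 1.0
    if me == 'D' and other == 'C':
        return b
    return 0.0

def step(grid, b=1.85):
    """One generation: each cell sums payoffs against its 8 neighbors,
    then adopts the strategy of its highest-scoring neighbor (keeping
    its own strategy on ties)."""
    n = len(grid)
    def neighbors(i, j):
        return [((i + di) % n, (j + dj) % n)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]
    score = [[sum(payoff(grid[i][j], grid[x][y], b)
                  for x, y in neighbors(i, j))
              for j in range(n)] for i in range(n)]
    new = [[grid[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(n):
            best, best_s = grid[i][j], score[i][j]
            for x, y in neighbors(i, j):
                if score[x][y] > best_s:
                    best, best_s = grid[x][y], score[x][y]
            new[i][j] = best
    return new
```

Uniform fields are fixed points of this update, while a single defector in a cooperative field spreads locally; the undecidability result concerns precisely such questions about the long-run fate of finite configurations under rules of this kind.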
Predicates are term-to-sentence devices, and operators are sentence-to-sentence devices. What Kaplan and Montague's Paradox of the Knower demonstrates is that necessity and other modalities cannot be treated as predicates, consistent with arithmetic; they must be treated as operators instead. Such is the current wisdom. A number of previous pieces have challenged such a view by showing that a predicative treatment of modalities need not raise the Paradox of the Knower. This paper attempts to challenge the current wisdom in another way as well: to show that mere appeal to modal operators in the sense of sentence-to-sentence devices is insufficient to escape the Paradox of the Knower. A family of systems is outlined in which closed formulae can encode other formulae and in which the diagonal lemma and Paradox of the Knower are thereby demonstrable for operators in this sense.
Suppose there were a set T of all truths, and consider all subsets of T -- all members of the power set of T. To each element of this power set will correspond a truth. To each set of the power set, for example, a particular truth T1 either will or will not belong as a member. In either case we will have a..
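The abstract breaks off mid-sentence, but the Cantorian step it sets up can be sketched as follows. This is a reconstruction of the standard argument, not the paper's own wording:

```latex
% Suppose $T$ is a set of all truths, and fix a particular truth $t_1 \in T$.
\begin{align*}
  &\text{For each subset } S \in \mathcal{P}(T),\quad
    \text{either } t_1 \in S \text{ or } t_1 \notin S; \\
  &\text{each case yields a distinct truth about } S,
    \text{ giving an injection } f : \mathcal{P}(T) \to T. \\
  &\text{But by Cantor's theorem } |\mathcal{P}(T)| > |T|, \\
  &\text{so no such injection exists. Contradiction: there is no set of all truths.}
\end{align*}
```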
Let us sum up. The paradox of the Knower poses a direct and formal challenge to the coherence of common notions of knowledge and truth. We've considered a number of ways one might try to meet that challenge: propositional views of truth and knowledge, redundancy or operator views, and appeals to hierarchy of various sorts. Mere appeal to propositions or operators, however, seems inadequate to the task of the Knower, at least if unsupplemented by an auxiliary recourse to hierarchy. But the cost of hierarchy appears to be an abandonment of any notion of all truth or of omniscience. What the contradictions of the Knower seem to demand, then, is at least an abandonment of these. As noted in the introduction, the argument is complicated enough that one must be wary of dogmatic and precipitate conclusions. One may legitimately wonder whether some new response, or some variation on an old one, will yet offer a way out. Far too often, however, it is asked what has gone wrong with paradox rather than what paradox may have to teach us. What the Knower may have to teach us, I think, is that there really can be no coherent notion of all truth and really can be no coherent notion of omniscience. In its own way that conclusion is perhaps as humbling as any traditional notion of God.
R. L. Purtill has claimed that the ontological argument that Plantinga presents in "The Nature of Necessity" is basically the same as that offered in Hartshorne's "The Logic of Perfection" and that it falls victim to the same criticisms. I argue that Plantinga's ontological argument is different enough not to fall victim to Purtill's criticisms. What makes Plantinga's argument different, however, also makes it vulnerable to a different criticism: the God of Plantinga's conclusion is not a being greater than which none can be conceived.