
On the Coevolution of Basic Arithmetic Language and Knowledge

  • Original Article, published in Erkenntnis

Abstract

Skyrms-Lewis sender-receiver games with invention allow one to model how a simple mathematical language might be invented and become meaningful as its use coevolves with the basic arithmetic competence of primitive mathematical inquirers. Such models provide sufficient conditions for the invention and evolution of a very basic sort of arithmetic language and practice, and, in doing so, provide insight into the nature of a correspondingly basic sort of mathematical knowledge in an evolutionary context. Given traditional philosophical reflections concerning the nature and preconditions of mathematical knowledge, these conditions are strikingly modest.

Fig. 1

Fig. 2

Notes

  1. Such games were first used by Lewis (1969) to account for the formation of convention. Brian Skyrms later described how such games might be formulated in an evolutionary context without any assumptions of ideal rationality or common knowledge (Skyrms 2006). The type of invention-learning dynamics we will use for the arithmetic games was proposed by Skyrms (2010). Some of its formal properties are further considered by Alexander et al. (2011). It is an extension of the sort of simple reinforcement learning proposed by Roth and Erev (1995).

  2. Skyrms has shown how dispositions that track basic logical truths might evolve in simple Skyrms-Lewis games (Skyrms 2000). Here we consider rather more involved games that allow for the invention of mathematical language and the evolution of mathematical dispositions.

  3. In particular, the agents start with no language whatsoever and only random first-order dispositions to act on signals. As we will see, the subtlety of the evolved practices of such agents depends on the precise nature of their second-order dispositions and the world they inhabit.

  4. For discussions of standard Skyrms-Lewis sender-receiver games and other variants, see Argiento et al. (2009), Barrett (2007, 2009, 2012b), Lewis (1969), Skyrms (2006), and Skyrms (2010). While we will restrict our attention here to evolution in the context of learning models, one should expect similar results in the context of population models as there is, for example, a close formal relationship between evolution under reinforcement learning and evolution under the replicator dynamics.

  5. While useful, the distinction between the agents’ first-order dispositions (their dispositions to signal and to act) and their second-order dispositions (the learning dynamics that updates their first-order dispositions) is at best a rough one. It would begin to unravel a bit if, as is certainly the case in the evolution of real languages, the agents’ second-order dispositions were themselves allowed to evolve in response to the evolution of their first-order dispositions. In particular, one should expect real inquirers to better learn how to learn as they evolve better descriptive language and the practices that this better language supports. The rough distinction between first- and second-order dispositions, then, is perhaps best understood as one between faster-evolving dispositions that primarily concern signaling and acting and slower-evolving dispositions that primarily concern updating the faster-evolving dispositions.

  6. More specifically, for N_max = 1,000 and 1,000 runs each with 10^6 plays, the sender and receiver nearly always (0.993) evolve a set of nearly optimal (0.994) dispositions. Similar results are obtained for a wide range of parameter values for this general sort of bounded reinforcement learning with punishment. With unbounded reinforcement and no punishment, the agents in this game evolve an optimal signaling language only about 0.78 of the time and end up in a suboptimal mixed equilibrium the other 0.22 of the time (Barrett 2007). What leads to the very high degree of success in the case above is that (1) there is a maximum number of each type of ball in each urn (so bad habits, if they arise, do not get too strongly ingrained), (2) no type of signal or action ever goes to extinction in any urn (so there is always a possible escape from suboptimal dispositions), and (3) both reinforcement and punishment, or weakening, of first-order dispositions are possible (without reinforcement, the agents would be unable to learn; without punishment, they would be unable to forget suboptimal dispositions and hence would lack the means of escape). With this learning dynamics, the fact that no action goes to extinction allows for the possibility of the agents' first-order dispositions randomly wandering away from success; but, if they do, they quickly return and spend most of their time almost ideally successful in their actions.
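The bounded reinforcement-with-punishment dynamics described in this note can be given a small toy implementation. The following Python sketch is illustrative only, not the simulation code behind the reported numbers; the function names, parameter values, and the exact cap-and-floor update rule are assumptions made for the sketch.

```python
import random

def play_signaling_game(plays=50000, n_states=2, n_max=1000, seed=0):
    """A two-agent Lewis signaling game with bounded reinforcement and punishment.

    Sender and receiver dispositions are urns of ball counts (Roth-Erev style).
    A successful play adds a ball of the type just used, capped at n_max per
    type, so habits never get too strongly ingrained; a failed play removes
    one ball, but never the last, so no signal or act ever goes extinct.
    """
    rng = random.Random(seed)
    # sender[state][signal] and receiver[signal][act] are ball counts
    sender = [[1] * n_states for _ in range(n_states)]
    receiver = [[1] * n_states for _ in range(n_states)]

    def draw(urn):
        # Draw an index with probability proportional to its ball count
        r = rng.randrange(sum(urn))
        for i, count in enumerate(urn):
            r -= count
            if r < 0:
                return i

    successes = 0
    for _ in range(plays):
        state = rng.randrange(n_states)
        signal = draw(sender[state])
        act = draw(receiver[signal])
        if act == state:
            # Reinforce, bounded above by n_max
            successes += 1
            sender[state][signal] = min(sender[state][signal] + 1, n_max)
            receiver[signal][act] = min(receiver[signal][act] + 1, n_max)
        else:
            # Punish, bounded below by 1 (no extinction)
            sender[state][signal] = max(sender[state][signal] - 1, 1)
            receiver[signal][act] = max(receiver[signal][act] - 1, 1)
    return successes / plays

rate = play_signaling_game()
```

With the cap, the extinction floor, and punishment all in place, the cumulative success rate in this two-state toy game typically climbs close to 1, mirroring points (1)-(3) above.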

  7. Note that the king is not an agent in the game; rather, he is simply part of the world that the agents inhabit. Of course, how nature responds to the acts of the agents determines in part what stable first-order dispositions, if any, will evolve in the game.

  8. Since the agents in this game are both evolving a language and evolving a simple theory of arithmetic, as represented in their evolved dispositions, this game might alternatively be characterized as a sender-predictor game. See Barrett (2012b) for a description of such games.

  9. This simple rule for inventing terms was proposed by Skyrms (2010). There are, at least in the short run, more effective ways to introduce new terms. One might, for example, tie the likelihood of introducing a new term to the failure of old terms in promoting successful action. Such a rule might also more strongly discourage the unnecessary invention and evolution of synonyms. That said, if agents can evolve successful behavior using Skyrms' very simple invention rule and reinforcement without punishment, this only serves to illustrate the robust nature of the evolution toward successful practice in such games.

  10. More precisely, when he constructs a new urn, he populates it with one ball of each act type allowed in the game as it is currently being played. Here this always means one ball for each of the eleven possible actions: making 0 to 10 rings. The qualification "as it is currently being played" will matter later when we consider games where the king changes the structure of the game during play.
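Skyrms' black-ball invention rule and the urn construction described in this note can be sketched as follows. This is a minimal illustrative fragment, not the original model's code; the variable names and the use of Python's random module are assumptions of the sketch.

```python
import random

rng = random.Random(1)
N_ACTS = 11        # one act for each of the eleven options: making 0 to 10 rings
BLACK = "black"    # drawing the black ball invents a brand-new signal

sender_urn = [BLACK]      # the sender starts with only the black ball
receiver_urns = {}        # one receiver urn per signal, built on first use
next_signal = 0

def send():
    """Draw from the sender's urn; a black draw mints a fresh signal type."""
    global next_signal
    ball = rng.choice(sender_urn)
    if ball == BLACK:
        ball = next_signal
        next_signal += 1
    return ball

signal = send()
if signal not in receiver_urns:
    # Populate the new urn with one ball per act type allowed in the game
    # as it is currently being played
    receiver_urns[signal] = list(range(N_ACTS))
act = rng.choice(receiver_urns[signal])
```

Because the black ball is never removed, a new signal can always be invented, though with ever smaller probability as the urns fill with reinforced balls.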

  11. While the agents do not need prior pure concepts, forms of sensible intuition, intentions, or such to play this game, they do need second-order dispositions that allow them to learn. In the case of urn learning by simple reinforcement, among other things, this means that the agents must be able to match states to urns and signals to urns consistently. Having such second-order dispositions is simply one of the preconditions for the game. To be sure, one might expect that such second-order dispositions are themselves often the product of prior evolutionary processes. In particular, one might imagine that the ability to match objects in a one-to-one onto way is a good candidate for an evolved ability. And, indeed, the four-state game in the last section shows how it is possible for agents to evolve dispositions that jointly represent a one-to-one onto map between states, terms, and acts where there was no such map to begin with. The general point here, however, is more important: what language and practice, if any, evolve in a particular game depends on the precise second-order dispositions of the agents playing the game and the nature of the world they inhabit.

  12. Note that a new type of term is only kept if the first play of the term was successful, and that it then becomes available for any purpose.

  13. In particular, on 1,000 runs of 8.0 × 10^6 plays each, the cumulative success rate is greater than 0.95 in 0.67 of the runs, greater than 0.90 in 0.83 of the runs, and greater than 0.85 in 0.95 of the runs. Because there is no punishment, it is invention that does the work here in avoiding suboptimal equilibria. When a modest degree of punishment or forgetting is added to invention, the simulated agents typically evolve yet more quickly and surely toward an optimal equilibrium between their first- and second-order dispositions.

  14. More specifically, in this particular game, the number terms are invented, then coevolve with the arithmetic competence of the agents to represent the cardinalities of concrete collections.

  15. Since it is always possible to draw a black ball, the number of terms used by the advisors increases over time, but at an ever slower rate. Since there is no cost for keeping terms in use and no mechanism for forgetting in this model, numerous synonyms evolve over time. Further, since there is nothing in the model to coordinate the meanings of the terms used by the two advisors, each advisor evolves his own number language. The jeweler, then, must not only learn to add the numbers signaled by the advisors but also to coordinate between the synonyms in each advisor's language and between the advisors' different languages. The jeweler typically evolves to do precisely this, and the three agents together coevolve to exhibit basic arithmetic competence. When successful, the agents have coevolved the ability to represent the cardinalities of two finite collections of concrete objects and to represent the cardinality of the collection that would be formed by combining those two collections.
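The three-agent addition game sketched in this note can be given a toy implementation. The sketch below simplifies deliberately: signals come from a fixed finite repertoire rather than being invented by a black-ball rule, and learning is simple reinforcement without punishment; all names, repertoire sizes, and play counts are assumptions of the illustration, not details of the original model.

```python
import random
from collections import defaultdict

rng = random.Random(2)
MAX_FINGERS = 5          # the king raises 0 to 5 fingers on each hand

class Agent:
    """An urn learner: one urn per context, simple reinforcement on success."""
    def __init__(self, options):
        self.urns = defaultdict(lambda: list(options))
    def choose(self, context):
        return rng.choice(self.urns[context])
    def reinforce(self, context, choice):
        self.urns[context].append(choice)   # add a ball of the type just used

signals = range(10)                          # fixed signal repertoire (a simplification)
advisor1 = Agent(signals)                    # sees one of the king's hands
advisor2 = Agent(signals)                    # sees the other hand
jeweler = Agent(range(2 * MAX_FINGERS + 1))  # acts on the pair of signals

plays, successes = 200000, 0
for _ in range(plays):
    n, m = rng.randint(0, MAX_FINGERS), rng.randint(0, MAX_FINGERS)
    s1, s2 = advisor1.choose(n), advisor2.choose(m)
    rings = jeweler.choose((s1, s2))
    if rings == n + m:                       # success iff rings match total fingers
        successes += 1
        advisor1.reinforce(n, s1)
        advisor2.reinforce(m, s2)
        jeweler.reinforce((s1, s2), rings)
rate = successes / plays
```

Since each advisor's urns evolve independently, synonyms and two distinct advisor "languages" can emerge even in this toy version, and it is the jeweler who learns to coordinate them, as the note describes.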

  16. Note that this game has no equilibria since nature keeps changing as the agents evolve; or, equivalently, one might put it that the game that the agents are playing itself keeps changing, preventing the agents from reaching equilibrium. The agents here are nevertheless typically able to track the moving target of successful action. This tracking provides a more general notion of success than the notion of approaching a stationary equilibrium between first- and second-order dispositions.

  17. On 1,000 runs of 1.0 × 10^6 plays each, the mean difficulty k of the sums the agents are able to do with accuracy 0.95 is 8.03; on 2.0 × 10^6 plays, they are at a mean difficulty of 10.27; and with 4.0 × 10^6 plays, they are at a mean difficulty of 13.43. When they have played as long as the agents in the last game, 8.0 × 10^6 plays, they are typically successfully expressing and adding numbers up to 17 + 17. And after 1.6 × 10^7 plays, the agents have typically evolved the language to express and the ability to compute sums reliably up to 22 + 22, and after 3.2 × 10^7 plays, up to 28 + 28. In short, the arithmetic competence of the agents appears to grow significantly faster than linearly against the log of the number of plays. This provides modest reason to believe that the agents' arithmetic competence may be unbounded even in this relatively simple game. Of course, one would much prefer having a theorem. While Alexander, Skyrms, and Zabell have formal results that characterize the behavior of simple reinforcement games with invention (Alexander et al. 2011), there is, so far, no proof that the agents in a game like the one described here would almost always evolve to exhibit arithmetic competence better than any given finite bound.

  18. Note that such models illustrate how it is possible for a very basic sort of mathematical competence to evolve given relatively weak preconditions, not how such competence evolved for human agents. Providing a how-in-fact explanation for even the simplest arithmetic competences exhibited by human agents would require one to know details about our evolutionary history, including features of our linguistic environment, that we will likely never be able to determine. If so, we are limited to considering only those philosophical purposes that may be served with much weaker how-possible explanations. Of course, providing a full how-possible explanation for the sophisticated sort of arithmetic competence exhibited by human agents would also be extraordinarily difficult. Since capturing more sophisticated mathematical practice requires more sophisticated evolutionary models, the thought is that the project best proceeds in clear, concrete steps. This approach also allows more sophisticated models to use abilities evolved in the simpler models.

  19. Instead of learning how to match n rings and m rings to n + m fingers and discovering standard addition, equipped with different payoffs, the agents might have learned how to match n rings to each of m fingers and discovered multiplication. The details of such evolutionary stories vary depending on the second-order dispositions one considers. Modular addition and absolute difference evolve, on simulation, very much as standard addition above. Multiplication evolves similarly but significantly more slowly since the number of possible outcomes of the operation, on at least one natural understanding, grows quickly with the number of possible arguments. In a slightly different type of game, the advisors might observe the king's fingers and the receiver might act on the signals and only be successful if the number of fingers on each hand were the same. Here the agents coevolve number terms and the relation of equality rather than an arithmetic operation. The type of number terms evolved might be thought of as correspondingly less subtle.

  20. While the agents modeled here learn a rule for addition and follow it, a more sophisticated sort of mathematical knowledge might require that agents learn a rule that would allow them to add numbers they may never have considered before (Kripke 1982). In order to provide an evolutionary account of this sort of arithmetic competence, one would need to show how it is possible to coevolve first-order dispositions that represent a more general algorithm for addition than the direct concrete dispositions that evolve for the agents discussed here. See Barrett (2012a) for a discussion of the coevolution of language and more subtle sorts of rule-following behaviors that complement those discussed here. One would like to have a story, for example, of how terms evolved in one context might evolve to be used more generally. Two such evolutionary mechanisms are discussed in Barrett (2012a). The simpler of the two is substitution. On substitution, when the task changes, say from adding the number of fingers on two hands to adding the number of marbles in two piles, rather than evolving a new coordinated descriptive language and dispositions for adding, it can be evolutionarily more efficient to evolve to use the old evolved dispositions in a new way. What one needs for the appropriation of the old evolved dispositions to a new task is for the new stimuli to come to trigger the old dispositions in a way that leads to successful action. A pile of five marbles and a pile of two marbles might then evolve to be treated the same way for the purposes of addition as five fingers on one hand and two on another.

  21. Providing a full account of how we give and evaluate proof would also require one to account for proofs that rely, for example, on geometric construction, which would require a very different sort of evolutionary model than those discussed here.

  22. The simple models discussed here clearly do not capture a rich notion of intentionality; rather, they show that a rich notion of intentionality is not required to account for the evolution of at least one very basic sort of arithmetic practice.

  23. More sophisticated types of evolved mathematical language and practice can be expected to involve more subtle evolutionary games. The thought, again, is that the investigation of such games best proceeds in clear, concrete steps, where one may use the first-order dispositions evolved in simpler games to characterize the second-order dispositions of the agents in increasingly more subtle games.

References

  • Alexander, J. M., Skyrms, B., & Zabell, S. (2011). Inventing new signals. Dynamic Games and Applications, 1–17.

  • Argiento, R., Pemantle, R., Skyrms, B., & Volkov, S. (2009). Learning to signal: Analysis of a micro-level reinforcement model. Stochastic Processes and their Applications, 119(2), 373–390.


  • Barrett, J. A. (2012a). The evolution of simple rule following. Forthcoming in a volume edited by Brian Skyrms and Simon Huttegger.

  • Barrett, J. A. (2012b). On the coevolution of theory and language and the nature of successful inquiry. Forthcoming in Erkenntnis.

  • Barrett, J. A. (2009). Faithful description and the incommensurability of evolved languages. Philosophical Studies, 147(1), 123–137.


  • Barrett, J. A. (2007). Dynamic partitioning and the conventionality of kinds. Philosophy of Science, 74, 527–546.


  • Kripke, S. (1982). Wittgenstein on rules and private language: An elementary exposition. Cambridge, MA: Harvard University Press.


  • Lewis, D. (1969). Convention. Cambridge, MA: Harvard University Press.


  • Roth, A. E., & Erev, I. (1995). Learning in extensive form games: Experimental data and simple dynamical models in the intermediate term. Games and Economic Behavior, 8, 164–212.


  • Skyrms, B. (2010). Signals: Evolution, learning, and information. New York: Oxford University Press.


  • Skyrms, B. (2006). Signals. Philosophy of Science, 75(5), 489–500.


  • Skyrms, B. (2000). Evolution of inference. In T. Kohler & G. Gumerman (Eds.), Dynamics of human and primate societies (pp. 77–88). New York: Oxford University Press.



Acknowledgments

I would like to thank Brian Skyrms, Penelope Maddy, Jim Weatherall, Cailin O'Connor, Martha Barrett, and two anonymous referees for helpful comments. I would also like to thank the Zukunftskolleg at the University of Konstanz for supporting this project.

Author information

Correspondence to Jeffrey A. Barrett.


About this article

Cite this article

Barrett, J.A. On the Coevolution of Basic Arithmetic Language and Knowledge. Erkenn 78, 1025–1036 (2013). https://doi.org/10.1007/s10670-012-9398-z
