1 Introduction

Ludwig Wittgenstein was concerned with how meaningful language was interwoven with action.Footnote 1 As he put it, in learning a language “children are brought up to perform these actions, to use these words as they do so, and to react in this way to the words of others” (1958, §6).Footnote 2 Learning a language involves establishing an association between words and actions. To illustrate how meaningful language is interwoven with action, Wittgenstein described a simple language meant to serve for communication between a builder A and her assistant B:

A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, and that in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar”, “slab”, “beam”. A calls them out;—B brings the stone which he has learnt to bring at such-and-such a call. Conceive this as a complete primitive language. (1958, §2)

In using the language, one agent calls out the words and the other acts on them. To know how to do so is to know the rules of a game. More generally, “[w]e can ... think of the whole process of using words in [the builder-assistant language] as one of those games by means of which children learn their native language.” The language game associated with the primitive builder-assistant language is “the whole, consisting of language and the actions into which it is woven” (1958, §7).

Inasmuch as it involves learning how to use the language, a language game on Wittgenstein’s conception is an evolutionary game. As the players repeatedly interact, they update their strategies on the basis of what has happened in past plays. They are playing the game well when their language use facilitates successful action.

Our aim here is not to reconstruct Wittgenstein’s philosophical views generally nor his account of how one might learn an established language more specifically. We are concerned, rather, with how a language game, where words are inextricably interwoven with action, might emerge from prelinguistic interactions between potential language users. This forging of a language game might also be characterized as an evolutionary game. It is a game that shows how language and action might come to form an integrated whole in the first place. In particular, we will consider how this might happen in the context of generalized signaling games that allow for self-assembly.

David Lewis (1969) introduced the idea of a signaling game to show how linguistic conventions might be established. Brian Skyrms (2006) subsequently showed how the classical games Lewis described might be reformulated as evolutionary games that illustrate how even low-rationality learners might evolve simple signaling languages. Barrett and Skyrms (2017) have more recently shown how both simple and more generalized signaling games might self-assemble by means of ritualized interactions. Better understanding the self-assembly of games is a key motivation for the present models.Footnote 3

A self-assembling game is characterized by what the players do when they interact and how their structure of interactions evolves. The sender in a signaling game observes nature, then sends a signal. The receiver waits for the signal, observes the signal, then acts. Then the agents each update their dispositions based on what happened and do it all again. This sequence of interactions forms a network that is manifest across time. It is a structure where players take turns. The theory of self-assembling games is concerned with the self-assembly of such diachronic structures.

The generalized signaling games we consider here illustrate how a meaningful language and the diachronic structure of the players’ interactions might coevolve to facilitate successful action.Footnote 4 The first game shows how a player might learn to initiate meaningful discourse by asking a question rather than immediately acting. The second game shows how agents may evolve to ask new questions with coordinated meanings that coevolve on repeated plays. We will also consider variants of each game that illustrate features of self-assembly. Together, these games show how language and the structure of discourse itself may come to be interwoven with action and how the diachronic structure of an evolutionary game may coevolve with the play of the game.

2 The emergence of discourse

We will start with something akin to Wittgenstein’s builder-assistant game. It shows how agents might come to be involved in meaningful discourse using an evolved language instead of simply acting in the first place. It also shows how they might learn to end discourse and to act instead of talking. While this game is very simple, it allows for self-assembly. It is this that determines the structure of discourse between the two agents.

The question game begins with nature randomly determining whether the builder needs a slab or a block with unbiased probabilities. The assistant knows that the builder needs a slab or a block but does not know which. He may guess at what the builder needs and hand her a slab S or a block B at random, or he may produce a signal Q instead of immediately acting.Footnote 5

If the assistant decides to guess and hands the builder what she needs given the current state of nature, then the round is a success. In this case, the assistant is rewarded in a way that may influence his future actions, something we will discuss in more detail in a moment. If the assistant hands the builder the wrong thing given the current state of nature, then the round is a failure. In this case, neither the assistant nor the builder is rewarded, and they may even be punished, something else we will discuss in a moment. So if the assistant hands the builder a block or a slab, the players succeed or fail, and the current round of the game ends.

If the assistant decides instead to signal Q, then the builder observes the signal and replies with one of two signals \(A_0\) or \(A_1\).Footnote 6 In this case, the assistant has a second chance to guess what the builder needs by handing her a slab S or a block B, or he may again signal Q. If the assistant chooses to hand something and hands the builder what she needs given the current state of nature, then the round is a success and both players are rewarded. If the assistant hands the builder the wrong thing, then the round is a failure. In this case, neither player is rewarded and both may be punished. If the assistant chooses to signal Q again, then we will also count the round as a failure on the present version of the game.

Note that the question game may be played without any discourse at all if the assistant always just chooses to guess what the builder needs. In that case, one should expect the builder and her assistant to be successful about half the time as a result of blind luck. For the builder and her assistant to do better than chance, they must coevolve an integrated set of dispositions. Specifically, the assistant must learn to initiate discourse with Q, the builder must learn to use \(A_0\) and \(A_1\) to represent the current state of nature, and the assistant must learn to respond by handing the builder the building material that has come to be represented by the builder’s signal. If the builder and her assistant evolve to be uniformly successful, Q must come to initiate discourse much as the question “What do you need?” might, and \(A_0\) and \(A_1\) must come to specify each of the two building materials: slab and block.

In order to characterize an evolutionary game, we need to say how the players learn from their experience. To this end, we will consider two learning dynamics: simple reinforcement and reinforcement with punishment.Footnote 7 One might represent how the players’ dispositions evolve under each of these dynamics by considering a process of adding and drawing balls from urns. The urns and the types of ball each contains represent the degrees of freedom of the players’ dispositions, and the proportion of each type of ball in each urn represents their first-order dispositions to act on states of nature and signals. Some of their actions will serve to structure their discourse. The question game on simple reinforcement learning proceeds as indicated in Fig. 1 read from top to bottom. The events on each round of play are as follows.

question game (simple reinforcement):

assistant move i: Nature randomly determines whether the builder needs a slab or a block with unbiased probabilities. The assistant draws a ball from his start urn. This urn begins with one ball each of types Q, S, and B. If he draws S or B, he just hands the builder a slab or a block, respectively. If the builder gets what she needs given the state of nature, then the assistant returns the ball he drew to the urn from which he drew it and adds a duplicate ball of the same type; otherwise, he just returns the ball he drew to the urn from which it was drawn. If the assistant draws Q, he signals Q instead.

builder move i: If the assistant signals Q, the builder in turn draws a ball from an urn corresponding to the building material she needs. Specifically, if nature tells her that she needs a slab, then she draws from her slab urn; and if nature tells her that she needs a block, then she draws from her block urn. Each of these urns initially contains one ball each of types \(A_0\) and \(A_1\). If the builder draws an \(A_0\) ball, she signals \(A_0\); and if she draws an \(A_1\) ball, she signals \(A_1\).

assistant move ii: When the assistant hears the builder’s reply, he draws from one of two reply urns, \(A_0\) and \(A_1\), each initially containing one ball each of types Q, S, and B, then either signals Q again or hands a slab S or block B to the builder. If the builder gets the building material she needs, then the round is successful and both players return the balls they drew to the urns from which they drew them and add a duplicate ball. Else, each agent just returns his or her balls to the urns from which they were drawn.

Fig. 1 The question game. Play begins in the top left and zig-zags towards the bottom right. The far right column of text represents nature’s play, while the middle column represents signals and/or actions taken by players. The boxes represent urns which the respective players draw from, conditioned on nature’s play and/or the signals received

A closely analogous description characterizes the question game under reinforcement with punishment. The difference is that when a round leads to a successful action, each player returns the ball that he or she drew to the urn from which it was drawn and adds a copy of that ball; but when a round leads to failure, each player discards the ball that he or she drew, unless it was the last ball of its type in the urn, in which case he or she just returns it to the urn from which it was drawn.
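For concreteness, the following is a minimal sketch of how the question game might be simulated under both dynamics. It is only meant to be illustrative: the encoding of urns as count dictionaries and the function and parameter names are our own choices here, not part of the model itself, but the update rules mirror the reinforcement and punishment dynamics just described.

```python
import random

def run_question_game(plays=10**6, reward=1, punish=0, seed=None):
    """Simulate the question game with urn-based learning.

    (reward=1, punish=0) corresponds to simple reinforcement and
    (reward=1, punish=1) to (+1, -1) reinforcement with punishment.
    """
    rng = random.Random(seed)

    # Assistant: a start urn plus one reply urn per possible answer.
    assistant = {
        'start': {'Q': 1, 'S': 1, 'B': 1},
        'A0': {'Q': 1, 'S': 1, 'B': 1},
        'A1': {'Q': 1, 'S': 1, 'B': 1},
    }
    # Builder: one urn per state of nature (slab or block needed).
    builder = {'S': {'A0': 1, 'A1': 1}, 'B': {'A0': 1, 'A1': 1}}

    def draw(urn):
        balls, weights = zip(*urn.items())
        return rng.choices(balls, weights=weights)[0]

    def update(urn, ball, success):
        if success:
            urn[ball] += reward                      # add a duplicate ball
        else:
            urn[ball] = max(1, urn[ball] - punish)   # never discard the last ball of a type

    successes = 0
    for _ in range(plays):
        need = rng.choice(['S', 'B'])     # nature's unbiased choice
        draws = []                        # (urn, ball) pairs drawn this round

        first = draw(assistant['start'])
        draws.append((assistant['start'], first))
        if first == 'Q':
            answer = draw(builder[need])              # builder answers the question
            draws.append((builder[need], answer))
            second = draw(assistant[answer])          # assistant acts on the answer
            draws.append((assistant[answer], second))
            outcome = second              # a second Q counts as a failure
        else:
            outcome = first

        success = (outcome == need)
        successes += success
        for urn, ball in draws:
            update(urn, ball, success)

    return successes / plays
```

Calling run_question_game(10**7, 1, 0) and run_question_game(10**7, 1, 1) then corresponds to the two dynamics whose simulation results we report below.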

On simulation the builder and her assistant begin by acting randomly, but on repeated plays, the assistant typically evolves to ask what the builder needs rather than just guess, and the builder’s reply coevolves to represent what she needs. With \(10^7\) plays per run, on simple reinforcement, the players end up with dispositions that are reliable more than 0.9 of the time on 0.895 of the runs. For (+ 1, − 1) reinforcement with punishment, all of the runs were observed to yield a final reliability better than 0.9.

Figure 2 provides a more detailed sense of these results. The blue distribution represents the number of simulation runs out of 1000 where the final accuracy was less than or equal to the specified value on simple reinforcement learning. The dots indicate the actual final accuracy of each run. The orange distribution does the same for (+ 1, − 1) reinforcement with punishment.Footnote 8

Fig. 2 Final accuracies for the question game, displayed as an empirical CDF. Individual dots indicate the results of actual simulations, rank-ordered so that the corresponding value on the ordinate indicates the number of simulations out of 1000 which had less than or equal to the specified final accuracy. The blue distribution indicates the results for simple (+ 1, − 0) reinforcement, while the orange distribution indicates the results for reinforcement with punishment (+ 1, − 1)

Inasmuch as Q, \(A_0\), and \(A_1\) are initially meaningless, they are neither questions nor answers. But as the players interact with each other and the world and update their dispositions in accord with their learning dynamics, Q comes to serve as a question and \(A_0\) and \(A_1\) come to serve as answers, answers that communicate precisely what the assistant needs to know to hand the right thing to the builder given the current state of nature.

We have supposed that the builder simply gives up if the assistant asks a second question. A more subtle model would allow the conversation to continue at a cost for each question and reply. The cost might represent the time and effort the agents expend in the exchange.Footnote 9

Consider the question game under simple reinforcement learning with a base reinforcement payoff of 2.0 for success and a fixed signal cost of 0.5 for each signal sent in the round. Here the assistant’s expected return for randomly guessing is 1.0. This is less than the 1.5 expected return for asking one question if the builder and assistant have evolved a perfectly reliable signaling system. But it is more than the 0.75 expected return for asking one question when they have not yet evolved the ability to communicate.Footnote 10 So the question is whether the builder and assistant are able to evolve reliable enough signaling conventions fast enough to make discourse more attractive than guessing.
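To make the accounting explicit, these expected returns to the assistant can be recovered as follows, on the assumption that the 0.5 cost of his question is netted against the 2.0 payoff when a round succeeds and that failed rounds pay nothing:

\(E[\text{guess}] = 0.5 \times 2.0 = 1.0\),
\(E[\text{ask} \mid \text{perfectly reliable signaling}] = 1.0 \times (2.0 - 0.5) = 1.5\),
\(E[\text{ask} \mid \text{no communication}] = 0.5 \times (2.0 - 0.5) = 0.75\),
\(E[\text{ask} \mid \text{signaling reliable } 0.75] = 0.75 \times (2.0 - 0.5) = 1.125\).

The last value is the suboptimal pooling case discussed just below.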

On simulation of the costly question game the builder gets what she needs with a final accuracy better than 0.75 on 0.534 of the runs and an accuracy better than 0.9 on 0.398 of the runs with \(10^7\) plays per run. The assistant usually evolves to ask precisely one question then act on the builder’s reply. This happens even in the case of suboptimal pooling, as the expected return for asking one question using a language with a reliability of 0.75 is 1.125. On successful runs, the builder and assistant learn to signal reliably enough early on that the assistant becomes sufficiently likely to initiate discourse in future plays, giving them the chance to evolve a yet more reliable language, which in turn makes future discourse yet more likely. This feedback reinforces both the likelihood of future discourse and the accuracy of the evolved language. As a result, the agents coevolve a diachronic structure of interactions and a simple signaling language where the builder gets what she needs.

Unsurprisingly, the likelihood of discourse decreases as signaling costs increase. That said, reliable discourse is sometimes found to evolve on reinforcement learning in the present game with a signaling cost of 1.0. In this case, there is no expected advantage for the assistant to ask a question even after having evolved a perfectly reliable signaling system. Nevertheless, on simulation the builder gets what she needs with an accuracy better than 0.75 on 0.008 of the runs and with an accuracy better than 0.9 on 0.002 of the runs. If a reliable signaling system evolves quickly, the assistant may be regularly reinforced early on for initiating discourse in a round. As a result, the assistant may learn to communicate effectively with the builder and prefer doing so even when he might do slightly better in the medium run by guessing on his own.

To do better than chance in each version of the question game, the assistant must learn to initiate discourse and the builder and the assistant must together forge a language with sufficient expressive resources to tell the assistant what the builder needs. This requires them to coevolve dispositions that bind their signals and actions in such a way as to form an integrated whole that both structures their discourse and gives it content. Just as with Wittgenstein’s language game, the game that self-assembles, when successful, allows for basic communication between the builder and her assistant.

The sequential structure of the language game they ultimately produce coevolves with the meanings of the players’ signals. The options each player has on a round of play and the significance of those options depend on what the players did in earlier rounds, what happened when they did it, and how they updated their dispositions. The self-assembling game evolves a language game with a fixed diachronic structure together with the players’ strategies for playing that game.

Important for the present argument, the agents cannot even begin to evolve a language with sufficient expressive resources for the task at hand without first getting involved in discourse. Self-assembly explains how the language part of the language game comes to be played at all. In brief, the agents talk because discourse eventually has salutary consequences. If they do not begin to play the language part of the game, they have no chance of evolving the stable linguistic practice that ultimately allows them to coordinate their practical actions. When they do start talking, it does not take long to get some measure of success. And this leads to further talk.

Also concerning the structure of discourse, the agents are only successful if the assistant learns when it is appropriate to stop asking questions and to act on the information shared by the builder. Knowing how to stop talking is as important as knowing how to start. But ending the conversation is only successful if the assistant gives the builder the right thing. It is in this way that the very structure of discourse co-emerges with meaningful language. Together they allow for successful action.

3 Meaning and the structure of discourse

In the question game, the assistant’s one linguistic act evolves to serve as a prompt to get the builder to say something that might come to represent the material the builder needs. One can imagine a self-assembling dialogue game that allows for the evolution of more subtle language games.

Suppose that the builder needs one of four possible building materials on each round (red slab RS, red block RB, blue slab BS, or blue block BB) and that the assistant has two potential linguistic acts (\(Q_0\) and \(Q_1\)) that may come to represent the same or different questions. There are a number of ways to fill in the details to characterize a particular self-assembling game. We will discuss one of these in some detail, then briefly describe what happens in a natural variant.

Part of filling in the details involves saying what options and resources each player has and what might affect each player’s actions at each step in the game. To begin, we will suppose that the builder has the same four responses available to answer each of her assistant’s two potential questions (\(A_0\), \(A_1\), \(A_2\), and \(A_3\)) and that she conditions her actions on what she needs and the question that her assistant just asked and on nothing else. We will further suppose that the assistant conditions his actions on everything the builder has said so far in the round and on nothing else.Footnote 11

Fig. 3 The dialogue game proceeds from the top left to the bottom. Note that the builder uses the same urns to reply to both questions. Which she draws from depends on the question asked and what she needs

The dialogue game on simple reinforcement proceeds as indicated in Fig. 3 read from top to bottom. The events on each round of play are as follows.

dialogue game (simple reinforcement):

assistant’s move i: Nature randomly determines what the builder needs with unbiased probabilities from among the four possible materials: red slab RS, red block RB, blue slab BS, or blue block BB. Her assistant then draws a ball from an urn that begins with one ball of each of the six types RS, RB, BS, BB, \(Q_0\), and \(Q_1\). If he draws RS, RB, BS, or BB, he simply hands the corresponding material to the builder. If it is what the builder needs, the round ends with success and the assistant returns his ball to the urn and adds a duplicate ball of the same type. If it is not what the builder needs, the play ends in failure and the assistant just returns the ball he drew to the urn. If the assistant draws \(Q_0\) or \(Q_1\), he sends the corresponding signal.

builder’s move i: The builder draws from an urn corresponding to her assistant’s signal and the building material she needs. She has eight urns for this purpose labeled \(Q_0RS, Q_1RS, Q_0RB, \ldots \) each initially containing one ball each of four types \(A_0, A_1, A_2,\) and \(A_3\). The builder sends the signal corresponding to the type of ball drawn.

assistant’s move ii: The assistant observes the builder’s reply and draws a ball from one of four new urns \(A_0, A_1, A_2,\) and \(A_3\), each corresponding to one of the possible replies. Each reply urn begins with one ball of each of the six types RS, RB, BS, BB, \(Q_0\), and \(Q_1\). If he draws RS, RB, BS, or BB, he hands the corresponding material to the builder. If the assistant gives the builder what she needs, the round ends with success and both players return the balls they have drawn to the urns from which they were drawn and add a duplicate ball of the same type. If it is not what is needed, the round ends in failure and each just returns the ball(s) to the urns from which they were drawn. If the assistant instead draws \(Q_0\) or \(Q_1\), he sends the corresponding signal.

builder’s move ii: This is exactly the same as builder move (i). The builder has only one set of urns, so the dispositions that the builder uses to reply to the first question are precisely the same as the dispositions she uses to reply to the second question.

assistant’s move iii: The assistant observes the builder’s second reply and draws from one of sixteen new urns depending on both the builder’s first and second replies labeled \(A_0A_0, A_0A_1, \ldots \). Each of these reply urns begins with one ball of each of the six types RS, RB, BS, BB, \(Q_0\), and \(Q_1\). If he draws RS, RB, BS, or BB, he gives the corresponding material to the builder, and the round ends in success or failure depending on whether the builder got what she needs. The players reinforce on all draws that led to success in the round and do not reinforce on failure. If the assistant draws \(Q_0\) or \(Q_1\), he asks another question and the round ends in failure as the builder has lost patience. The players do not reinforce.

Again, an analogous description characterizes the dialogue game under \((+1,-1)\) reinforcement with punishment. Success works just as described above, but failure leads to each ball drawn in the round being discarded unless it was the last ball of its type in the urn from which it was drawn.
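The bookkeeping here might be set up as follows. This is again only an illustrative sketch, with labels of our own choosing, of the urns each player maintains; draws, reinforcement, and punishment then work just as in the question game sketch above.

```python
from itertools import product

MATERIALS = ['RS', 'RB', 'BS', 'BB']
QUESTIONS = ['Q0', 'Q1']
ANSWERS = ['A0', 'A1', 'A2', 'A3']

def fresh_urn(options):
    # Every urn begins with one ball of each available type.
    return {o: 1 for o in options}

# Assistant: a start urn, one urn per possible first answer, and one urn
# per ordered pair of answers; each allows handing a material or asking
# a question.
assistant_urns = {'start': fresh_urn(MATERIALS + QUESTIONS)}
for a in ANSWERS:
    assistant_urns[a] = fresh_urn(MATERIALS + QUESTIONS)
for a1, a2 in product(ANSWERS, repeat=2):
    assistant_urns[a1 + a2] = fresh_urn(MATERIALS + QUESTIONS)

# Builder: one urn per (question, needed material) pair; the same urns
# are used for her first and second replies alike.
builder_urns = {q + m: fresh_urn(ANSWERS)
                for q in QUESTIONS for m in MATERIALS}
```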

In order to be optimally successful, the builder and assistant must learn to communicate within an evolved structure of interactions using questions and answers that have coevolved coordinated meanings. The assistant must learn to initiate discourse, to ask the right questions given their evolving meanings, and eventually to stop asking questions. And the builder must evolve answers to the two questions the assistant may ask that together communicate what she needs.

Note that only the first question and answer are in principle needed in this initial version of the game. That said, asking both questions may still serve a purpose if the first question does not end up eliciting a response that fully specifies what the builder needs. If the agents do evolve to play beyond the first question on a round, the two questions might come to mean different things that together lead the assistant to provide the builder with the material she needs. Indeed, this often happens under simple reinforcement learning.

Fig. 4 Final accuracies for the dialogue game

On simulation the players initially signal and act randomly, but on repeated plays, they again typically evolve nearly optimal dispositions. With \(10^7\) plays per run on simple reinforcement, they end up with actions that are reliable more than 0.9 of the time on 0.890 of the runs. On \((+1,-1)\) reinforcement with punishment, all of the runs were observed to yield a nearly optimal final reliability. The blue distribution in Fig. 4 represents the number of simulation runs out of 1000 with a final accuracy less than or equal to the specified value on simple reinforcement. The orange distribution does the same for \((+1,-1)\) reinforcement with punishment. A significant proportion of the runs on simple reinforcement (0.072) exhibited a final success rate of less than 0.8. These suboptimal pooling equilibria can be seen in the blue distribution’s inflection point at 0.75 in Fig. 4.

The agents are so successful in evolving an efficient language game under \((+1,-1)\) reinforcement with punishment that there is not much to say. Here the players evolve a highly accurate language that only requires one question. In contrast, language games with different diachronic structures may evolve on simple reinforcement. The evolved diachronic structure of a game depends in part, then, on the adaptive dynamics.

On different runs of the simple reinforcement game, the players sometimes evolve linguistic practice that does not require a second question. The assistant needs two bits of information to know what material the builder needs. As Fig. 5 shows, the probability of the second question being asked at all in the language game resulting from a full run decreases as the evolved information content of the first answer increases.
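There is more than one way to quantify this. The sketch below computes the average mutual information, in bits, between nature’s state and the builder’s reply to a given question, reading the reply probabilities off the builder’s evolved urn weights (in the encoding sketched above). This is one natural reading of the quantity plotted in Fig. 5, though not necessarily the exact measure used to produce it.

```python
from math import log2

def answer_information(builder_urns, question='Q0',
                       materials=('RS', 'RB', 'BS', 'BB')):
    """Mutual information (bits) between the state of nature and the
    builder's reply to `question`, assuming unbiased states and reply
    probabilities proportional to the urn counts."""
    p_state = 1 / len(materials)
    # Conditional reply distributions, one per state of nature.
    cond = {}
    for m in materials:
        urn = builder_urns[question + m]
        total = sum(urn.values())
        cond[m] = {a: n / total for a, n in urn.items()}
    answers = {a for dist in cond.values() for a in dist}
    # Marginal distribution over replies.
    p_answer = {a: sum(p_state * cond[m].get(a, 0.0) for m in materials)
                for a in answers}
    bits = 0.0
    for m in materials:
        for a, p in cond[m].items():
            if p > 0:
                bits += p_state * p * log2(p / p_answer[a])
    return bits
```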

Note, however, that sometimes both questions get asked even when the first answer is fully informative. Indeed, the second question sometimes evolves to mean precisely the same thing as the first, as indicated by the answers it elicits and the subsequent actions it produces. There are a number of ways that this may happen. If the meaning of the first question was slower to evolve on a run than that of the second, such redundancy may have played a role in early success on that run. Since asking a second question is cost free in the present game, the evolution of this sort of redundancy is unsurprising. One would expect it to occur less frequently if there were a cost to asking a second question. In this spirit, there is a very high cost to asking a third question in the present game, and it comes to be almost never asked under either of the two dynamics.

More interesting, the two questions sometimes evolve different meanings and only allow for optimal success by dint of the systematically interrelated meanings of the questions and the replies they elicit. Two examples of this can be seen in Fig. 6. Both of these are from runs that produced language games that allow for nearly optimal success.

Fig. 5 The probability that the assistant asks a second question given the information (in bits) communicated by the first question. Once again, individual dots indicate actual simulations, with the blue dots corresponding to those simulations with final accuracies greater than 0.9, the orange dots to those with final accuracies between 0.9 and 0.75, and the green dots to those with final accuracies less than 0.75

Fig. 6 Two examples of evolved languages (left and right) in which each question elicits insufficient information on its own but the two answers together precisely specify the material needed

In the first run (left), the builder uses \(A_0\) for both RS and RB in reply to \(Q_0\), and she uses \(A_1\) for both RS and BB in reply to \(Q_1\), but her two answers together serve to determine the required building material.Footnote 12 In the second run (right), the builder sometimes uses \(A_0\) and sometimes uses \(A_3\) for RS and uses \(A_2\) for both RB and BB in reply to \(Q_0\), and she uses \(A_3\) for both RS and BB and does not use \(A_0\) at all in reply to \(Q_1\), but again her two answers together serve to determine the required building material.Footnote 13

The two questions here provide two chances for the agents to evolve a reliable system of communication on simple reinforcement. Sometimes one works and the other is not needed. And sometimes they both work and one sees a form of redundancy. This is akin to what provides success on a learning dynamics like reinforcement with invention.Footnote 14 But in the present game there is also another phenomenon at work. The agents sometimes evolve to ask different, but coordinated, questions.

Consider a costly version of the dialogue game where the players may continue to talk indefinitely with a base reinforcement payoff of 3.0 for success and a cost of 1.0 for each question and reply. On simulation the assistant almost always evolves to ask precisely one question then to act on the builder’s reply. The assistant’s expected return by guessing here is 0.75, and his expected return after asking one question is 2.0 if the assistant and builder are able to evolve a perfect language. So given what we saw in the costly question game, it is not surprising that the assistant learns to initiate discourse. When he does so and when the builder and assistant are able to evolve a working language, they are also forging a diachronic structure of interactions. Future plays of the game will involve less guessing and more extended discourse. Again, the diachronic structure of the game coevolves with what happens when the agents interact on repeated plays.

That the assistant does not evolve to ask more than one question also makes sense on reflection. The players usually (on 0.619 of the runs) evolve a signaling language that gets the builder what she needs better than 0.9 of the time with \(10^7\) plays per run. But even when they get stuck in a suboptimal pooling equilibrium in the evolution of their language, the assistant only asks one question. At a suboptimal pooling accuracy of 0.75, he still has an expected return of 1.5, which is better than the expected return of 1.0 that he would get if he asked two questions even with a perfect language. So the builder and assistant are doing well enough playing a suboptimal pooling equilibrium that the assistant does not have any reason to continue the conversation beyond one question. The distribution of payoffs in Fig. 7 illustrates this feature of the results. The upshot is that the assistant asking precisely one question is the optimal strategy if the builder and assistant are able to evolve a language with any real expressive power. And this is what the players are always found to do under the reinforcement dynamics.

Fig. 7 Final average payoffs for the costly dialogue game

Asking just one question is in principle enough for perfect success on the games we have discussed so far. But more than one question is always required for success in a slightly modified version of the dialogue game.

Consider a cost-free version of the dialogue game but where the builder only has the expressive resources for two possible answers \(A_0\) and \(A_1\) to each of the assistant’s potential questions. On simulation the agents sometimes evolve optimal linguistic practice on both learning dynamics in this game as well. Here the only way that the builder can fully specify which of the four materials she needs is if her assistant asks both questions and if the answers to the two questions come to be systematically interrelated by means of evolved cross-cutting properties. On successful runs the assistant learns to ask precisely two questions and the builder’s reply to each question comes to pick out one of two coevolved cross-cutting kinds such that the two answers together serve to specify the precise building material needed.Footnote 15
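To make the idea concrete, one hypothetical pair of evolved conventions of this sort would have the reply to \(Q_0\) come to track color and the reply to \(Q_1\) come to track shape. The particular assignment below is only an illustration of the compositional structure, not a report of a specific run.

```python
# Hypothetical evolved cross-cutting conventions: the reply to Q0 comes
# to mean the color, and the reply to Q1 the shape.
COLOR_FROM_Q0 = {'A0': 'R', 'A1': 'B'}   # red or blue
SHAPE_FROM_Q1 = {'A0': 'S', 'A1': 'B'}   # slab or block

def material_needed(reply_to_q0, reply_to_q1):
    """Combine the two answers into one of RS, RB, BS, or BB."""
    return COLOR_FROM_Q0[reply_to_q0] + SHAPE_FROM_Q1[reply_to_q1]

# For example, material_needed('A0', 'A1') == 'RB': a red block.
```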

The various dialogue games illustrate how the meanings of the assistant’s questions and the builder’s replies might coevolve with the assistant’s learning to ask questions, then learning to stop asking when he has enough information to act successfully. The players self-assemble the structure of discourse and action that is needed for success given the evolving meanings of their terms. And they tune the meanings of their terms to the evolving structure of the game that they are playing. In this way, the resulting language game comes to form a systematically interrelated whole that ties together the evolved meanings of their words and the structure of their discourse and actions.

4 Discussion

Wittgenstein used the notion of a language game to illustrate how language is interwoven with action. We have shown here how a systematically interrelated whole, where the agents’ words and the structure of their discourse and actions are thoroughly integrated to facilitate successful cooperative action, might be forged in the context of a simple learning dynamics. This explains how a simple language game like that described by Wittgenstein might come to be.

Self-assembly is essential to the agents’ success in each of the games we have considered. The agents cannot even begin to evolve a language without first getting involved in discourse. And they cannot benefit from having evolved a language that allows for reliable communication without learning when to stop talking and use what they have learned.

The self-assembly of such diachronic structure is central to the theory of self-assembling games. It explains how the dispositions that allow for sequential interactions between agents might coevolve with other aspects of the game. It shows how agents may learn to take turns even as they coevolve what they will do on their turns.

Importantly, the present games illustrate only a part of the full self-assembly process. Just as the assistant learns to ask questions, the builder may learn to reply to the assistant’s questions rather than to remain silent. To investigate how such a feature of discourse might evolve, one would give the builder the option of not replying at all or replying with one of a set of responses and see what happens under the adaptive dynamics. And so on for other structural features of the games we have considered. In short, there are yet more subtle models to investigate.

Such stories are thoroughly pragmatic. The agents learn to take part in discourse because it allows them to evolve a linguistic practice that eventually facilitates success in action. Achieving some measure of success leads to further talk. And that talk leads to further success. The feedback ritualizes their linguistic practice as of a piece with their pragmatic practice more generally.

Ritualization of successful action under the adaptive dynamics of the self-assembling game is what structures the interactions of the agents and determines the significance of their actions at every step. It is what explains how they might come to use language at all.

The self-assembly of increasingly subtle language games allows for richer forms of meaningful discourse. In each evolved game, one’s language and actions are inextricably interwoven.