Learning to signal: Analysis of a micro-level reinforcement model


We consider the following signaling game. Nature plays first from the set {1, 2}. Player 1 (the Sender) sees this and plays from the set {A, B}. Player 2 (the Receiver) sees only Player 1’s play and plays from the set {1, 2}. Both players win if Player 2’s play equals Nature’s play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors, one for each possible decision. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to its urn. Besides the optimal signaling equilibria, the game has other equilibria. Nevertheless, we show that the urn scheme achieves asymptotically optimal coordination.
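The urn dynamics described above can be sketched as a short simulation. This is an illustrative implementation, not code from the paper: the function name `simulate`, the urn initialization with one ball of each color, and the round count are assumptions made for the sketch.

```python
import random

def simulate(rounds=50_000, seed=0):
    """Simulate the urn reinforcement scheme for the 2x2 signaling game."""
    rng = random.Random(seed)
    # Sender urns: one per state of Nature; balls are labeled by signal.
    # Start with one ball of each color (an assumed initial condition).
    sender = {1: {"A": 1.0, "B": 1.0}, 2: {"A": 1.0, "B": 1.0}}
    # Receiver urns: one per observed signal; balls are labeled by act.
    receiver = {"A": {1: 1.0, 2: 1.0}, "B": {1: 1.0, 2: 1.0}}

    def draw(urn):
        # Draw a ball with probability proportional to its count.
        r = rng.uniform(0, sum(urn.values()))
        for ball, count in urn.items():
            r -= count
            if r <= 0:
                return ball
        return ball  # guard against floating-point rounding

    wins = 0
    for _ in range(rounds):
        state = rng.choice([1, 2])       # Nature plays first
        signal = draw(sender[state])     # Sender sees the state
        act = draw(receiver[signal])     # Receiver sees only the signal
        if act == state:                 # both players win on a match:
            sender[state][signal] += 1.0   # reinforce each drawn ball
            receiver[signal][act] += 1.0
            wins += 1
    return wins / rounds
```

Under the paper's result, the players' success rate should climb well above the 1/2 chance baseline as the urns concentrate on a signaling system; the cumulative win rate returned here reflects that, diluted by the early exploratory rounds.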






Author: Brian Skyrms, University of California, Irvine
