Intentional Vagueness

  • Original Article
  • Erkenntnis

Abstract

This paper analyzes communication with a language that is vague in the sense that identical messages do not always result in identical interpretations. It is shown that strategic agents frequently add to this vagueness by being intentionally vague, i.e. they deliberately choose less precise messages than they have to among the ones available to them in equilibrium. Having to communicate with a vague language can be welfare enhancing because it mitigates conflict. In equilibria that satisfy a dynamic stability condition intentional vagueness increases with the degree of conflict between sender and receiver.


Notes

  1. Much of the necessary game theory background needed to appreciate our formal analysis can be found in Myerson’s textbook (1991), which also contains a simple example of what noisy communication can achieve when incentives rule out effective communication in a noiseless environment. The relevant literature on communication games has recently been surveyed by Sobel (2010).

  2. Kartik et al. (2007) (KOS) study strategic information transmission when messages directly affect payoffs, either because the sender faces a cost of lying or receivers are credulous. In their environment, an exogenous mapping from the state space to the message space endows each message with an intrinsic meaning, and, a fortiori, orders the message space. When the state space is unbounded, they demonstrate that there are fully revealing equilibria where there is language inflation in the sense that senders systematically send messages corresponding to higher types than their own. This language inflation is reminiscent of the upward distortion of separating types in our equilibria. In our setup, due to the noise, separation does not reveal the type; thus, the closest parallel is with the credulous receiver case of KOS.

  3. In “Appendix 2” we compute the ex ante utility loss for the receiver resulting from this behavior.

  4. The two-type model suffices to make our main point that strategic players add intentional vagueness to a vague language. Furthermore, it allows us to provide a nearly complete characterization of the conditions under which informative equilibria exist for the entire range of prior type distributions. We will show later how to extend our analysis to an arbitrary finite number of types.

  5. There is no a priori reason why messages m and interpretations q should belong to the same space. Therefore, we take full advantage of the analytic convenience of working with the normal distribution rather than a truncated version or some other distribution supported on [0, 1] instead of on \({\mathbb{R}.}\) Two properties of normal distributions that we use repeatedly are that they have log-concave densities and satisfy the (strict) monotone likelihood ratio property (SMLRP). These are central to obtaining monotonicity of the receiver’s response, to characterizing the sender’s optimal choice of message in terms of a first-order condition, and to characterizing equilibria in the general case. For our comparative statics result, we also make use of the symmetry of the normal distribution. Finally, the full force of our distributional assumption is needed to characterize the conditions under which there will be informative equilibria, and of course to calculate specific equilibria. These calculations are important because they illustrate the multiplicity of equilibria that can arise even with all the regularity that is imposed by assuming quadratic preferences and normality of the distribution of interpretations conditional on messages.

  6. Some of our results, e.g. Lemmas 1–4 in the appendix and Propositions 1–3, continue to hold for the class of payoff functions \(U_\eta^{S}\left(a,t,b\right)=-\left\vert t+b-a\right\vert^{\eta}\) and \(U_\eta^{R}\left(a,t\right) =-\left\vert t-a\right\vert^{\eta},\) where η ≥ 1 parameterizes different degrees of risk aversion, that was considered by Krishna and Morgan (2004).

  7. A simple symmetry argument shows that if there is an equilibrium in which the sender chooses messages m 0 and m 1 with m 0 > m 1, then there is a corresponding equilibrium in which she chooses messages 1 − m 0 and 1 − m 1.

  8. Implicit in our setup is that, following the bulk of the communication-games literature, we rule out commitment by the sender to send particular messages as a function of her type and receiver commitment to adopt specific actions as a function of the received message. Lack of commitment also distinguishes our work from Aragonès and Neeman (2000) who model ambiguous policy platform choices by giving candidates the opportunity to select among different degrees of commitment, which is motivated by a preference for post-election flexibility.

  9. Existence of informative equilibria requires that the message space attains either a lower or an upper bound. Otherwise, for any strictly monotone action rule of the receiver, the high type would have an incentive to use ever more extreme messages. Removal of only one of these bounds has no effect on interior informative equilibria, in which the low type sends a message in the interval [0, 1]. Finally, having both bounds is necessary for existence of monotonic communicative equilibria in the common interest case. Otherwise both types would have an incentive to race to the extremes. The common interest case also demonstrates that having those bounds is a natural consequence of postulating exogenous noise, for without those bounds players could reduce noise to any desired degree by picking more extreme messages. Informally, the bounds may be thought of as a proxy for a model in which players can reduce noise at a cost, but completely eliminating noise is prohibitively costly.

  10. In fact, with common interest, every informative pure-strategy equilibrium is an equilibrium with maximal differentiation. Such an equilibrium always exists in the common-interest case, as does a pooling equilibrium. In addition there are mixed equilibria that are not outcome equivalent. For example, there is an equilibrium in which the high type randomizes uniformly over messages 0 and 1, and the low type sends message 1/2. The latter equilibria are neither monotonic, nor strict, nor in pure strategies. Furthermore, in the common-interest case it is easy to see that the minimal curb requirement eliminates all mixed equilibria, and uniquely selects maximal differentiation.

  11. When b is low enough (less than \(\frac{1}{2}\)) intentional vagueness can arise even in the benchmark model with no noise (σ = 0), in an equilibrium where the type-0 sender mixes between identifying herself and pooling with the type-1 sender. Note that in such an equilibrium, unlike in the case with noise described above, the type-0 sender suffers no loss from full identification. Further, this equilibrium is always Pareto dominated by another equilibrium.

  12. Noise forces partial pooling of high types with low types, even when both send distinct messages. As a result, by sending a distinct message the low-type sender achieves some degree of separation without the risk of being fully identified. Essentially the same effect, first noted by Myerson (1991), is at work in garbling by non-strategic mediators (Goltsman et al. 2007), mixing by strategic mediators (Ivanov 2010), or when there is uncertainty about how well informed the sender is (Austen-Smith 1994). De Jaegher (2003b) characterizes the conditions needed for noise to improve communication in sender-receiver games with two types, two messages and three receiver responses, assuming that noise takes the form of messages getting lost, and De Jaegher and van Rooij (2011) relate examples of this type to linguistics. Noise is a special case of mediated communication, whose role in sender-receiver games was first studied by Forges (1985, 1988). Blume et al. (2007) demonstrate that a sufficiently small amount of replacement noise, where in the noise event the sent message gets replaced by some other message independent of the message sent, is generally beneficial in the uniform-quadratic CS environment and, using the characterization of Goltsman et al. (2007), that the right amount of such noise can substitute for an optimal mediation scheme. In the present paper we study the role of additive noise, which gets added to the original message and is natural if we believe that sent messages affect interpretations even in the event of misunderstandings.

  13. Our results would remain unchanged if we considered dynamics in the more general class \(\dot{m}=\xi \left( z\left(b,m\right) \right) , \) where \({\xi :\mathbb{R}\rightarrow \mathbb{R}}\) is any continuously differentiable function with \(\xi^{\prime }\left(z\right) >0\) for all \({z\in \mathbb{R}}\) and \(\xi \left(0\right) =0. \)

  14. For a concise discussion of the correspondence principle, its history and related literature, see Echenique (2008).

  15. Weaver (1949) sketches a similar framework, where the speaker observes the state of the world, t, chooses her intended meaning, m, but says p. There is randomness in the encoding of m into p, referred to by Weaver as “semantic noise” (1949, p. 16). The sender may, for example, want to express vermilion, but says “red orange” instead, as could be the case if the word “vermilion” momentarily escapes her or she is part of a heterogeneous population of speakers in which there is variation in the use of color words. The listener observes p, i.e. “red orange”, and forms an interpretation q. There is additional randomness in the decoding of p into q because the boundaries of “red orange” are uncertain, there is heterogeneity in the population of listeners and the listeners’ memory of p may be imperfect by the time they act.

  16. Geraats uses the case of a speech that Alan Greenspan gave at the Economic Club of New York on June 20, 1995. The following day the New York Times had the headline “Doubts voiced by Greenspan on a rate cut,” whereas the Washington Post’s headline was “Greenspan hints Fed may cut interest rates.” Heterogeneity in the interpretation of central bank announcements is also noted by Alan Blinder: “Central bank communication \(\ldots\) must have both a transmitter and a receiver, and either could be the source of uncertainty or confusion. Moreover, on the receiving end, the same message might be interpreted differently by different listeners who may have different expectations or believe in different models.” (Blinder 2008, p. 934)

  17. Note that it is not without problems to attribute an intent to a legislative body.

  18. Posner (1987, p. 193) mentions explicitly that interest groups may cause “serious departures from optimality” in legislatures. One of the key policy issues in the debate on judicial interpretation concerns the legitimacy of the use of legislative history in the effort to determine the meaning of statutes. Farber and Frickey (1988, p. 448) recognize the possibility that such evidence may be compromised by bias, but argue that “\(\ldots\) even if legislative history were systemically biased, that would not justify ignoring it, because a decision maker can always compensate for known bias in assessing evidence.”

  19. The monotonicity condition m 1 < m 2 is satisfied in only one of these, so we do not know if global incentive compatibility is satisfied in the other two.

  20. Li (2007) also considers a model where the sender has a choice of communication channels. Unlike in our model, she assumes that there are reputational concerns for the sender, who can opt to communicate directly with the receiver or indirectly through an intermediary.

  21. A number of papers in the industrial organization literature (e.g. Lewis and Sappington 1994; Johnson and Myatt 2006; Ivanov 2008) consider an environment where a seller chooses the accuracy of information available to a buyer about the characteristics of his product. Like our paper, these papers also make the assumption that the level of accuracy chosen is publicly observable. In contrast to our paper, however, once he has chosen accuracy, the seller has no further control over the signal observed by the buyers—the buyers’ signals are hard information not cheap talk; furthermore, in these models the buyers have private information about how they value certain product characteristics.

  22. For a concise summary of vagueness in philosophy see http://plato.stanford.edu/entries/vagueness/.

  23. The mention of beliefs for which best replies are well defined, and the requirement that for a curb set X the set β(X) be nonempty, are consequences of extending the curb requirement to infinite games.

References

  • Agranov, M., & Schotter, A. (2008). Ambiguity and vagueness in the announcement (Bernanke) game: An experimental study of natural language. Working paper, Department of Economics, New York University.

  • Aragonès, E., & Neeman, Z. (2000). Strategic ambiguity in electoral competition. Journal of Theoretical Politics, 12, 183–204.

  • Austen-Smith, D. (1994). Strategic transmission of costly information. Econometrica, 62, 955–963.

  • Basu, K., & Weibull, J. W. (1991). Strategy subsets closed under rational behavior. Economics Letters, 36, 141–146.

  • Blinder, A., Ehrmann, M., Fratzscher, M., de Haan, J., & Jansen, D. (2008). Central bank communication and monetary policy: A survey of theory and evidence. Journal of Economic Literature, 46, 910–945.

  • Blume, A., Board, O. J., & Kawamura, K. (2007). Noisy talk. Theoretical Economics, 2, 395–440.

  • Boudreau, C., Lupia, A., McCubbins, M. D., & Rodriguez, D. B. (2005). The judge as a fly on the wall: Interpretive lessons from the positive political theory of legislation. UCSD working paper.

  • Crawford, V. P. & Sobel, J. (1982). Strategic information transmission. Econometrica, 50, 1431–1451.

  • De Jaegher, K. (2003a). A game-theoretic rationale for vagueness. Linguistics and Philosophy, 26, 637–659.

  • De Jaegher, K. (2003b). Error-proneness as a handicap signal. Journal of Theoretical Biology, 224, 139–152.

  • De Jaegher, K. & van Rooij, R. (2011). Game-theoretic pragmatics under conflicting and common interests. TKI discussion paper 11–25, Utrecht University.

  • Dewan, T., & Myatt, D. P. (2008). The qualities of leadership: Direction, communication, and obfuscation. American Political Science Review, 102, 351–368.

  • Echenique, F. (2008). The correspondence principle. In S. N. Durlauf & L. E. Blume (Eds.), The new Palgrave dictionary of economics (2nd ed.). Basingstoke: Palgrave MacMillan.

  • Farber, D. A., & Frickey, P. P. (1988). Legislative intent and public choice. Virginia Law Review, 74, 423–469.

  • Fine, K. (1975). Vagueness, truth and logic. Synthese, 30, 265–300.

  • Forges, F. (1985). Correlated equilibria in a class of repeated games with incomplete information. International Journal of Game Theory, 14, 129–150.

  • Forges, F. (1988). Can sunspots replace a mediator? Journal of Mathematical Economics, 17, 347–368.

  • Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge, MA: MIT Press.

  • Geraats, P. (2007). The mystique of central bank speak. International Journal of Central Banking, 3, 37–80.

  • Goltsman, M., Hörner, J., Pavlov, G., & Squintani, F. (2007). Mediation, arbitration and negotiation. Journal of Economic Theory, 144, 1397–1420.

  • Greenhouse, L. (1991). Morality play’s twist. New York Times, November 3, 1991.

  • Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics (Vol. 3). New York: Academic Press.

  • Hirsch, M. W., & Smale, S. (1974). Differential equations, dynamical systems and linear algebra. New York: Academic Press.

  • Ibragimov, I. A. (1956). On the composition of unimodal distributions. Theory of Probability and its Applications, 1, 255–260.

  • Ivanov, M. (2008). Information revelation in competitive markets. Working paper, McMaster University.

  • Ivanov, M. (2010). Communication via a strategic mediator. Journal of Economic Theory, 145, 869–884.

  • Jäger, G. (2008). Applications of game theory in linguistics. Language and Linguistics Compass, 2, 1–16.

  • Johnson, J. P., & Myatt, D. P. (2006). On the simple economics of advertising, marketing, and product design. American Economic Review, 96, 756–784.

  • Kartik, N., Ottaviani, M., & Squintani, F. (2007). Credulity, lies, and costly talk. Journal of Economic Theory, 134, 93–116.

  • Krishna, V., & Morgan, J. (2004). The art of conversation: Eliciting information from experts through multi-stage communication. Journal of Economic Theory, 117, 147–179.

  • Lewis, T. R., & Sappington, D. R.M. (1994). Supplying information to facilitate price discrimination. International Economic Review, 35, 309–327.

  • Li, W. (2007). Peddling influence through intermediaries: Propaganda. Working paper, University of California, Riverside.

  • Lipman, B. L. (2009). Why is language vague? Working paper, Boston University.

  • Myerson, R. B. (1991). Game theory: Analysis of conflict. Cambridge, MA: Harvard University Press.

  • Nowak, M. A., Krakauer, D. C., & Dress, A. (1999). An error limit for the evolution of language. Proceedings of the Royal Society B: Biological Sciences, 266, 2131–2136.

  • Parikh, P. (2000). Communication, meaning, and interpretation. Linguistics and Philosophy, 23, 185–212.

  • Parikh, R. (1994). Vagueness and utility: The semantics of common nouns. Linguistics and Philosophy, 17, 521–535.

  • Pinker, S. (2007). The stuff of thought: Language as a window into human nature. New York: Viking.

  • Pinker, S., Nowak, M. A., & Lee, J. J. (2008). The logic of indirect speech. Proceedings of the National Academy of Sciences, 105, 833–838.

  • Posner, R. A. (1987). Legal formalism, legal realism, and the interpretation of statutes and the constitution. Case Western Reserve Law Review, 37, 179–217.

  • Reiter, E., & Sripada, S. (2002). Human variation in lexical choice. Computational Linguistics, 28, 545–553.

  • Rizzo, M. J., & Arnold, F. S. (1987). An economic framework for statutory interpretation. Law and Contemporary Problems, 50, 165–180.

  • Samuelson, P. A. (1941). The stability of equilibrium: Comparative statics and dynamics. Econometrica, 9, 97–120.

  • Samuelson, P. A. (1947). Foundations of economic analysis. Cambridge, MA: Harvard University Press.

  • Schwartz, J.-L., Boë, L.-J., Vallée, N., & Abry, C. (1997). The dispersion-focalization theory of vowel systems. Journal of Phonetics, 25, 255–286.

  • Serra-Garcia, M., van Damme, E., & Potters, J. (2008). Truth or efficiency? Communication in a sequential public good game. Working paper, Tilburg University.

  • Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. Urbana, IL: University of Illinois Press.

  • Sobel, J. (2010). Giving and receiving advice. Working paper, UCSD.

  • Stein, J. C. (1989). Cheap talk and the Fed: A theory of imprecise policy announcements. American Economic Review, 79, 32–42.

  • Warglien, M., & Gärdenfors, P. (2011). Semantics, conceptual spaces and the meeting of minds. Synthese, forthcoming. doi:10.1007/s11229-011-9963-z.

  • Weaver, W. (1949). Some recent contributions to the mathematical theory of communication. In C. E. Shannon & W. Weaver (Eds.), The mathematical theory of communication. Urbana, IL: University of Illinois Press.

  • Williamson, T. (1994). Vagueness. London and New York: Routledge.

  • Zadeh, L. (1975). Fuzzy logic and approximate reasoning. Synthese, 30, 407–428.


Acknowledgments

We thank Maxim Ivanov, Ming Li, Wei Li, Marta Serra-Garcia and seminar audiences at Texas A&M University, Universität Bonn, Universität Hannover, University of Minnesota, UC San Diego, UC Riverside, UCLA, UC Davis, the Third World Congress of the Game Theory Society, and the Individual Decisions and Political Process workshop at CIRANO. We are especially grateful to Joel Sobel whose many thoughtful comments led to substantial improvements in the content and exposition of this paper.

Author information

Correspondence to Oliver Board.

Appendices

Appendix 1: Proofs

We start with some preliminaries. As in Sect. 2.4, we define an expectation function α, which gives the expected value of the sender’s type if she sends message m 0 when t = 0 and message m 1 when t = 1:

$$ \alpha \left(q,m_{0},m_{1},\theta ,\sigma \right) \equiv \frac{\theta \cdot \phi _{m_{1},\sigma^{2}}\left(q\right) }{\left( 1-\theta \right) \cdot \phi _{m_{0},\sigma^{2}}\left(q\right) +\theta \cdot \phi _{m_{1},\sigma^{2}}\left(q\right) }. $$
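Since the type space is {0, 1}, α is simply the Bayesian posterior probability that the sender is the high type, and hence the conditional expectation of t given the interpretation q (our restatement, for readability):

$$ \alpha \left(q,m_{0},m_{1},\theta ,\sigma \right) =\Pr \left(t=1\mid q\right) =E\left[ t\mid q\right] , $$

where θ denotes the prior probability of t = 1 and \(\phi _{m,\sigma^{2}}\) the density of the interpretation induced by message m. With quadratic loss, this conditional expectation is also the receiver’s best response, as used in the proof of Lemma 2 below.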

This function has 180° rotational symmetry; the following lemma describes this symmetry in the special case where m 1 = 1.

Lemma 1

Suppose m 0 ≠ 1. Then the action function α satisfies the following symmetry property:

$$ \alpha \left(q^{\ast }+x,m_{0},1,\theta ,\sigma \right) -\frac{1}{2}=\frac{1 }{2}-\alpha \left(q^{\ast }-x,m_{0},1,\theta ,\sigma \right) \quad \forall x\in {\mathbb{R}}. $$

where

$$ q^{\ast }=\sigma^{2}\frac{\ln \left(\frac{1-\theta }{\theta }\right) }{ 1-m_{0}}+\frac{1+m_{0}}{2}. $$

Proof

Notice that the condition

$$ \alpha \left(q^{\ast }+x,m_{0},1,\theta ,\sigma \right) -\frac{1}{2}=\frac{1 }{2}-\alpha \left(q^{\ast }-x,m_{0},1,\theta ,\sigma \right) \quad \forall x\in {\mathbb{R}} $$

is equivalent to

$$ \begin{aligned} &\frac{\theta \cdot \phi _{1,\sigma^{2}}\left( q^{\ast }+x\right) }{\theta \cdot \phi _{1,\sigma^{2}}\left(q^{\ast }+x\right) +\left(1-\theta \right) \cdot \phi _{m_{0},\sigma^{2}}\left(q^{\ast }+x\right) } \\ &\quad =1-\frac{\theta \cdot \phi _{1,\sigma^{2}}\left( q^{\ast }-x\right) }{\theta \cdot \phi _{1,\sigma^{2}}(q^{\ast }-x)+(1-\theta )\cdot \phi _{m_{0},\sigma^{2}}(q^{\ast }-x)}\quad \forall x\in {\mathbb{R}} \\ &\quad \Leftrightarrow \frac{1}{1+\frac{1-\theta }{\theta }\frac{\phi _{m_{0},\sigma^{2}}\left(q^{\ast }+x\right) }{\phi _{1,\sigma^{2}}\left(q^{\ast }+x\right) }}=\frac{1}{1+\frac{\theta }{1-\theta }\frac{\phi _{1,\sigma^{2}}\left(q^{\ast }-x\right) }{\phi _{m_{0},\sigma^{2}}\left(q^{\ast }-x\right) }}\quad \forall x\in {\mathbb{R}} \\ &\quad \Leftrightarrow \frac{1-\theta }{\theta }\frac{\phi _{m_{0},\sigma^{2}}\left(q^{\ast }+x\right) }{\phi _{1,\sigma^{2}}\left(q^{\ast }+x\right) }=\frac{\theta }{1-\theta }\frac{\phi _{1,\sigma^{2}}\left(q^{\ast }-x\right) }{\phi _{m_{0},\sigma^{2}}\left(q^{\ast }-x\right) } \quad \forall x\in {\mathbb{R}} \\ &\quad\Leftrightarrow \frac{1-\theta }{\theta }\frac{e^{-\frac{\left(q^{\ast }+x-m_{0}\right)^{2}}{2\sigma^{2}}}}{e^{-\frac{\left(q^{\ast }+x-1\right)^{2}}{2\sigma^{2}}}}=\frac{\theta }{1-\theta }\frac{e^{-\frac{\left(q^{\ast }-x-1\right)^{2}}{2\sigma^{2}}}}{e^{-\frac{\left(q^{\ast }-x-m_{0}\right)^{2}}{2\sigma^{2}}}}\quad \forall x\in {\mathbb{R}}\\ &\quad \Leftrightarrow \hbox{ln} \left(\frac{1-\theta }{\theta }\right) +\frac{-\left(q^{\ast }+x-m_{0}\right)^{2}+\left(q^{\ast }+x-1\right)^{2}}{2\sigma^{2}} \\ &\quad = \hbox{ln} \left(\frac{\theta }{1-\theta }\right) +\frac{ -\left(q^{\ast }-x-1\right)^{2}+\left(q^{\ast }-x-m_{0}\right)^{2}}{ 2\sigma^{2}}\quad \forall x\in {\mathbb{R}} \\& \quad \Leftrightarrow \hbox{ln} \left(\frac{1-\theta }{\theta }\right) +\frac{ 2q^{\ast }m_{0}-m_{0}^{2}-2q^{\ast }+1}{2\sigma^{2}} \\& \quad =\hbox{ln} \left(\frac{\theta }{1-\theta }\right) +\frac{ 2q^{\ast }-1-2q^{\ast }m_{0}+m_{0}^{2}}{2\sigma^{2}} \\& \quad \Leftrightarrow q^{\ast }=\sigma^{2}\frac{\ln \left( \frac{1-\theta }{ \theta }\right) }{1-m_{0}}+\frac{1+m_{0}}{2}. \\ \end{aligned} $$

\(\square\)
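As a consistency check (a routine computation of ours, not part of the original proof), q* is exactly the interpretation at which the receiver considers the two types equally likely:

$$ \theta \phi _{1,\sigma^{2}}\left(q\right) =\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) \;\Longleftrightarrow\; \left(q-m_{0}\right)^{2}-\left(q-1\right)^{2}=2\sigma^{2}\ln \left(\frac{1-\theta }{\theta }\right) \;\Longleftrightarrow\; q=q^{\ast }, $$

so that \(\alpha \left(q^{\ast },m_{0},1,\theta ,\sigma \right) =\frac{1}{2}, \) in line with property 4 listed in the proof of Proposition 5 below.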

We now present some results that describe several properties of informative equilibria. Recall that we assume without loss of generality that m 0 ≤ m 1; further, in an informative equilibrium, m 0 ≠ m 1, so m 0 < m 1. Lemma 2 states that, in any such equilibrium, the receiver’s chosen action is a strictly monotone function of q.

Lemma 2

In an informative equilibrium, the receiver’s action a is a strictly increasing function of the interpretation q.

Proof

In any equilibrium, the receiver’s action function satisfies

$$ {\bf a}\left(q\right) =\alpha \left(q,m_{0},m_{1},\theta ,\sigma \right) =\frac{\theta }{\left(1-\theta \right) \frac{\phi _{m_{0},\sigma^{2}}\left(q\right) }{\phi _{m_{1},\sigma^{2}}\left( q\right) }+\theta }. $$

For the normal distribution, the likelihood ratio \(\frac{\phi _{m_{0},\sigma^{2}}\left(q\right) }{\phi _{m_{1},\sigma^{2}}\left( q\right) }\) is strictly decreasing in q for m 0 < m 1. \(\square\)
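For completeness (a routine verification of ours, not part of the original argument), the likelihood ratio can be written out explicitly as

$$ \frac{\phi _{m_{0},\sigma^{2}}\left(q\right) }{\phi _{m_{1},\sigma^{2}}\left(q\right) }=\exp \left(\frac{\left(q-m_{1}\right)^{2}-\left(q-m_{0}\right)^{2}}{2\sigma^{2}}\right) =\exp \left(\frac{\left(m_{1}-m_{0}\right) \left(m_{0}+m_{1}-2q\right) }{2\sigma^{2}}\right), $$

whose exponent is strictly decreasing in q whenever m 0 < m 1, so that \({\bf a}\left(q\right) \) above is indeed strictly increasing.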

Since the sender, for given t, wants a higher action than the receiver, it follows that the type-1 sender will always choose the highest available message. Formally,

Lemma 3

In an informative equilibrium m 1 = 1.

Proof

By Lemma 2 the receiver’s action function \({\bf a}\left( q\right) \) is a strictly increasing function of q. Furthermore, \({\bf a}\left(q\right) <1\) for all \({q\in \mathbb{R}}\). Therefore \(-\left(1+b-{\bf a}\left(q\right) \right)^{2}, \) the type-1 sender’s payoff from the interpretation q, is also a strictly increasing function of q. This and the fact that \(\Upphi _{1,\sigma^{2}}\) strictly first-order stochastically dominates \(\Upphi_{m_{0},\sigma^{2}}\) for any m 0 < 1 imply that

$$ \int\limits_{-\infty }^{\infty }-\left(1+b-a\left(q\right) \right)^{2}\phi _{1,\sigma^{2}}\left(q\right) dq>\int\limits_{-\infty }^{\infty }-\left(1+b-a\left(q\right) \right)^{2}\phi _{m_{0},\sigma^{2}}\left(q\right) dq ,\quad \hbox{for all }m_{0}<1. $$

\(\square\)

Henceforth, then, we assume that m 1 = 1. The optimization problem for the type-0 sender is much trickier to solve, since in a candidate equilibrium with m 0 < m 1, different messages not only shift, but also change the shape of the induced distribution of actions. Some care is required to ensure that sending the specified m 0 is globally optimal. To this end we repeatedly make use of an important technical observation. As long as the action function of the receiver satisfies Eq. (2) (see the definition of equilibrium in Sect. 2.1), the expected payoff of a type-t sender is a convolution of two quasi-concave functions. Furthermore, the density of the normal distribution is log-concave. Ibragimov (1956) shows that under these conditions the convolution itself will be quasi-concave. The following lemma adapts his result to the present environment.

Lemma 4

If the receiver’s action function a is strictly increasing in the interpretation q, then, for any t, the sender’s expected payoff from sending message m

$$ V^{S}\left(m,t,{\bf a}\right) \equiv \int\limits_{-\infty }^{\infty }U^{S}\left({\bf a} \left(q\right) ,t,b\right) \phi _{m,\sigma^{2}}\left(q\right) dq $$

is a strictly quasi-concave function of m, and any \(m^{\ast }\) with \(\frac{dV^{S}}{dm}\left(m^{\ast },t\right) =0\) is the unique global maximizer for type t.

Proof

To simplify notation, we suppress reference to t,  b and a and let \(U\left(q\right) \equiv U^{S}\left({\bf a}\left( q\right) ,t,b\right) , \) so that

$$ V^{S}\left(m\right) \equiv \int\limits_{-\infty }^{\infty }U\left( q\right) {\frac{ 1}{\sigma \sqrt{2\pi }}}e^{\frac{-\left( q-m\right)^{2}}{2\sigma^{2}}}dq. $$

Note that given our monotonicity assumption on a and by virtue of the fact that for any t there is a unique a t that solves \(\max_{a}U^{S}\left(a,t,b\right) , \) U is either (i) strictly increasing, (ii) strictly decreasing, or (iii) there exists a value q 0 such that U is strictly increasing for q < q 0 and strictly decreasing for q > q 0. In cases (i) and (ii), the result follows because the normal distribution satisfies the strict-monotone-likelihood-ratio property and therefore strict first-order stochastic dominance. Otherwise, U has a unique maximizer q 0. For this case, consider

$$ \begin{aligned} \frac{dV^{S}}{dm} &=\int\limits_{-\infty }^{\infty }U\left(q\right) {\frac{d}{dm}} \left({\frac{1}{\sigma \sqrt{2\pi }}}e^{\frac{-\left(q-m\right)^{2}}{ 2\sigma^{2}}}\right) dq \\ &=-\int\limits_{-\infty }^{\infty }U\left(q\right) {\frac{d}{dq}}\left( {\frac{1}{ \sigma \sqrt{2\pi }}}e^{\frac{-\left( q-m\right)^{2}}{2\sigma^{2}}}\right) dq \\ &=\left[ U\left(q\right) {\frac{1}{\sigma \sqrt{2\pi }}}e^{\frac{-\left(q-m\right)^{2}}{2\sigma^{2}}}\right] _{-\infty }^{\infty }+\int\limits_{-\infty }^{\infty }{\frac{1}{\sigma \sqrt{2\pi }}}e^{\frac{-\left(q-m\right)^{2}}{ 2\sigma^{2}}}\frac{dU}{dq}dq \\ &=\frac{1}{\sigma \sqrt{2\pi }}\int\limits_{-\infty }^{\infty }e^{\frac{-\left(q-m\right)^{2}}{2\sigma^{2}}}\frac{dU}{dq}dq, \\ \end{aligned} $$

using the fact that \(\left\vert U\right\vert \) is bounded. Define λ ≡ q − q 0. Note that we have \(\frac{dU}{dq}\left( q_{0}+\lambda \right) <0\) and \(\frac{dU}{dq}\left(q_{0}-\lambda \right) >0\) for all λ > 0. Now suppose that \(\frac{dV^{S}}{dm}\left(m^{\ast }\right) =0\) for some \(m^{\ast }\). \(\frac{dV^{S}}{dm}\left(m^{\ast }\right) \) can be re-written as

$$ \frac{dV^{S}}{dm}\left(m^{\ast }\right) ={\frac{1}{\sigma \sqrt{2\pi }}} \left\{ \int\limits_{0}^{\infty }e^{\frac{-\left(m^{\ast }-\left(q_{0}+\lambda \right) \right)^{2}}{2\sigma^{2}}}\frac{dU}{dq}\left(q_{0}+\lambda \right) d\lambda +\int\limits_{0}^{\infty }e^{\frac{-\left(m^{\ast }-\left( q_{0}-\lambda \right) \right)^{2}}{2\sigma^{2}}}\frac{dU}{dq}\left( q_{0}-\lambda \right) d\lambda \right\}. $$

Also, we have

$$ \begin{aligned} \frac{dV^{S}}{dm} \left(m^{\ast }+\delta \right) &={\frac{1}{\sigma \sqrt{ 2\pi }}}e^{\frac{-\left(2\delta m^{\ast }-2\delta q_{0}+\delta^{2}\right) }{2\sigma^{2}}} \\ &\quad\times\left(\int\limits_{0}^{\infty }e^{\frac{-\left(m^{\ast }-\left( q_{0}+\lambda \right) \right)^{2}}{2\sigma^{2}}}e^{\frac{2\delta \lambda }{2\sigma^{2}}} \frac{dU}{dq}\left(q_{0}+\lambda \right) d\lambda +\int\limits_{0}^{\infty }e^{ \frac{-\left(m^{\ast }-\left( q_{0}-\lambda \right) \right)^{2}}{2\sigma^{2}}}e^{\frac{-2\delta \lambda }{2\sigma^{2}}}\frac{dU}{dq}\left(q_{0}-\lambda \right) d\lambda \right). \end{aligned} $$

Note that for δ > 0 (δ < 0) we are inflating (deflating) the negative terms and deflating (inflating) the positive terms in the integrand. Therefore \(\frac{dV^{S}}{dm}\left(m^{\ast }+\delta \right) <0\) for δ > 0 and \(\frac{dV^{S}}{dm}\left(m^{\ast }+\delta \right) >0\) for δ < 0; i.e. V S is strictly quasi-concave and \(m^{\ast }\) is a global maximum. \(\square\)

We can now start to prove our main results. The proofs make use of the function V defined in Sect. 2.4; V gives the expected utility of the type-0 sender with bias b if she sends message m′ when the receiver expects her to send message m 0 and the type-1 sender to send message 1:

$$ V\left(b,m_{0},m^{\prime }\right) \equiv \int\limits_{-\infty }^{\infty }-(b-\alpha \left(q,m_{0},1,\theta ,\sigma \right) )^{2}\cdot \phi _{m^{\prime },\sigma^{2} }\left(q\right) dq. $$
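Throughout the proofs, \(V_{3}\) denotes the partial derivative of V with respect to its third argument, the message actually sent. Since \(\frac{\partial \phi _{m^{\prime },\sigma^{2}}\left(q\right) }{\partial m^{\prime }}=\phi _{m^{\prime },\sigma^{2}}\left(q\right) \frac{q-m^{\prime }}{\sigma^{2}}, \) differentiating under the integral sign gives (our restatement of the expression used repeatedly below, for example in the proofs of Propositions 4 and 6)

$$ V_{3}\left(b,m_{0},m^{\prime }\right) =\int\limits_{-\infty }^{\infty }-\left(b-\alpha \left(q,m_{0},1,\theta ,\sigma \right) \right)^{2}\phi _{m^{\prime },\sigma^{2}}\left(q\right) \frac{q-m^{\prime }}{\sigma^{2}}\,dq. $$

Expanding the square and using \(\int \phi _{m^{\prime },\sigma^{2}}\left(q\right) \frac{q-m^{\prime }}{\sigma^{2}}dq=0\) yields the equivalent form, with the \(b^{2}\) term dropped, that appears in the proof of Proposition 4.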

Proof of Proposition 1

Let \(b=\frac{1}{2}\) and suppose we have an informative equilibrium where the sender chooses \({\bf m}=\left( m_{0},1\right) \) with m 0 < 1. From Lemma 1, we know that \(\alpha \left(q,m_{0},1,\theta ,\sigma \right) \) has 180° rotational symmetry about the point \(\left(q^{\ast },\frac{1}{2}\right)\), where \(q^{\ast }=\sigma^{2}\frac{\ln \left(\frac{1-\theta }{\theta }\right) }{1-m_{0}}+\frac{1+m_{0}}{2}. \) It follows that \(V\left( \frac{1}{2},m_{0},q^{\ast }-k\right) =V\left(\frac{1}{2},m_{0},q^{\ast }+k\right)\). Given Lemma 4, \(q^{\ast }\) is the unique maximizer of \(V\left( \frac{1}{2},m_{0},m\right)\). If \(\theta \leq \frac{1}{2}, \) then \(q^{\ast }>m_{0}, \) so the type-0 sender will deviate and send m = min { \(q^{\ast}\), 1} instead of m 0; in this case, then, an informative equilibrium does not exist. Suppose instead that \(\theta >\frac{1}{2}\). Solving m 0 = \(q^{\ast}\), we obtain

$$ m_{0}^{\ast }=1-\sigma \sqrt{2\log \left(\frac{\theta }{1-\theta }\right) }. $$

Notice that this expression must be less than 1. If \(m_{0}^{\ast }\) lies between 0 and 1, we have a unique informative equilibrium with the sender choosing \({\bf m}=\left(m_{0}^{\ast },1\right) ; \) if \(m_{0}^{\ast }\leq 0, \) we have a unique informative equilibrium with the sender choosing \({\bf m}=\left( 0,1\right)\). \(\square\)
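A purely illustrative calculation (parameter values our own, not taken from the paper): in the case \(b=\frac{1}{2}\) covered by Proposition 1, with \(\theta =\frac{3}{4}\) and σ = 0.3,

$$ m_{0}^{\ast }=1-0.3\sqrt{2\ln 3}\approx 1-0.3\times 1.48\approx 0.56, $$

so the unique informative equilibrium is interior, with the sender choosing approximately \({\bf m}=\left(0.56,1\right) ; \) with the larger noise level σ = 0.7 the same formula gives \(m_{0}^{\ast }\approx -0.04<0, \) and the unique informative equilibrium is \({\bf m}=\left(0,1\right)\).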

Proof of Proposition 2

In an informative equilibrium, the sender chooses \({\bf m}=\left(m_{0},1\right) \) for some \(m_{0}\in \left[ 0,1\right) , \) and the receiver’s action function is given by \({\bf a}\left(q\right) =\alpha \left(q,m_{0},1,\theta ,\sigma \right)\). Since \(\alpha \left(q,m_{0},1,\theta ,\sigma \right) \) is strictly increasing in q and bounded below by 0, for b = 0, \(-\left( b-\alpha \left(q,m_{0},1,\theta ,\sigma \right) \right)^{2}\) is a strictly decreasing function of q. This, and the fact that \(\Upphi _{m^{\prime },\sigma^{2}}\) first-order stochastically dominates \(\Upphi _{m,\sigma^{2}}\) for any \(m^{\prime }>m\) implies that \(V_{3}\left(0,m,m\right) <0\) for all \(m\in \left[ 0,1\right)\). This means that whatever message the receiver expects the type-0 sender to send, she wants to send a lower message, and hence the unique informative equilibrium (with b = 0) is with \({\bf m}=\left( 0,1\right)\). Given continuity of \(V_{3}\left(b,m,m\right) \) in b, then, there is a non-empty interval \(\left[ 0,\underline {b}\right] \) for which there is a unique informative equilibrium which exhibits maximum differentiation.\(\square\)

Proof of Proposition 3

Fix some \(m\in \left[ 0,1\right)\). From the proof of Proposition 2 we know that \(V_{3}\left(0,m,m\right) <0. \) By similar reasoning and the fact that \(\alpha \left(q,m,1,\theta ,\sigma \right) \) is bounded above by 1, we have \(V_{3}\left( 1,m,m\right) >0. \) Continuity and the intermediate value theorem then imply that there exists a \(b\left(m\right) \in \left( 0,1\right) \) such that

$$ V_{3}\left(b\left(m\right) ,m,m\right) =0. $$

From Lemma 4, then, it follows that \({\bf m}=\left(m,1\right) \) is an equilibrium strategy for the sender when \(b=b\left(m\right)\). \(\square\)

Proof of Proposition 4

Differentiating V (the expected utility of the type-0 sender) with respect to the message she actually sends, and evaluating at the message m 0 that the receiver expects her to send, we obtain

$$ V_{3}\left(b,m_{0},m_{0}\right) =\int\limits_{-\infty }^{\infty }\left( 2b-\alpha \right) \alpha \phi _{m_{0},\sigma^{2}}\left(q\right) \frac{q-m_{0}}{ \sigma^{2}}dq. $$

The derivative of this expression with respect to m 0 evaluated at m 0 = 1 is equal to

$$ \left. \frac{dV_{3}\left(b,m_{0},m_{0}\right) }{dm_{0}}\right\vert _{m_{0}=1}=\frac{2\left(b-\theta \right) \left(-1+\theta \right) \theta \sqrt{1/\sigma^{2}}}{\sigma } $$

This derivative is positive exactly when b < θ. Using the fact that \(V_{3}\left(b,1,1\right) =0, \) this implies that when b < θ , there exists m 0 < 1 for which \(V_{3}\left( b,m_{0},m_{0}\right) <0. \) Since \(V_{3}\left(b,m_{0},m_{0}\right) \) is continuous in m 0, there are two possibilities. Either \(V_{3}\left(b,m_{0},m_{0}\right) <0 \forall m_{0}\in \left( 0,1\right) , \) in which case there is an informative equilibrium with maximal differentiation, where the sender’s strategy is \({\bf m}=\left(0,1\right)\). Or there exists an \(m_{0}\in \left( 0,1\right) \) for which \(V_{3}\left(b,m_{0},m_{0}\right) =0, \) in which case Lemma 4 implies that we have an informative equilibrium with intentional vagueness, where the sender’s strategy is \({\bf m}=\left(m_{0},1\right)\). \(\square\)

Proof of Proposition 5

Suppose that \(\theta \geq \frac{1}{2}\) and b = θ (we extend the result to the case where b > θ later), and the type-0 sender sends message m 0 < 1. We claim that she obtains at least as high an expected utility from sending message 1; combining this result with Lemma 4, we derive a contradiction. To compare \(V\left(\theta,m_{0},m_{0}\right) \) with \(V\left(\theta,m_{0},1\right) , \) we show that the type-0 sender (weakly) prefers a \(\frac{1}{2} - \frac{1}{2}\) gamble between \(\alpha \left(1-k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(1+k,m_{0},1,\theta ,\sigma \right)\) to a \(\frac{1}{2} - \frac{1}{2}\) gamble between \(\alpha \left(m_0-k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(m_0+k,m_{0},1,\theta ,\sigma \right)\) (property *), for all k ≥ 0. It follows that \(V\left(\theta,m_{0},1\right) \geq V\left(\theta,m_{0},m_{0}\right)\).

Start by considering values of \(k \in \left[0,\frac{1-m_{0}}{2}\right]\). For these values, \(m_0+k \leq \frac{1+m_0}{2} \leq 1-k. \) It is easy to show that α satisfies the following properties:

  1. 1.

    \(\alpha \left(\frac{1+m_0}{2},m_{0},1,\theta ,\sigma \right)=\theta; \)

  2. 2.

    \(\alpha \left(q,m_{0},1,\theta ,\sigma \right)\) is strictly increasing in q;

  3. 3.

    \(\alpha \left(q,m_{0},1,\theta ,\sigma \right)\) has a unique point of inflection at

    $$ q^*=\sigma^{2}\frac{\ln \left(\frac{1-\theta }{\theta }\right) }{1-m_{0}}+\frac{1+m_{0}}{2}< \frac{1+m_0}{2} $$

    and is bounded above and below; therefore, α is strictly convex in q for q < \(q^{\ast}\) and strictly concave in q for q > \(q^{\ast}\); and

  4. 4.

    \(\alpha(q^*,m_0,1,\theta,\sigma)=\frac{1}{2}\).

It follows that

$$ \begin{aligned} \theta -\alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right) &\geq \alpha \left(1-k,m_{0},1,\theta ,\sigma \right) -\theta \geq 0 \quad \hbox{and} \\ \theta -\alpha \left(m_{0}-k,m_{0},1,\theta ,\sigma \right) &\geq \alpha \left(1+k,m_{0},1,\theta ,\sigma \right) -\theta \geq 0, \end{aligned} $$

for all \(k\in \left[0,\frac{1-m_{0}}{2}\right], \) and thus property * is satisfied.

Now consider any k with \(k>\frac{1-m_{0}}{2}. \) As before, we compare a \(\frac{1}{2} - \frac{1}{2}\) gamble between interpretations q = m 0 + k and q = m 0 − k with a \(\frac{1}{2} - \frac{1}{2}\) gamble between interpretations q = 1 + k and q = 1 − k. The first gamble induces actions \(\alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(m_{0}-k,m_{0},1,\theta ,\sigma \right)\); let:

$$ \begin{aligned} a&=\alpha(m_0+k,m_0,1,\theta,\sigma)-\theta \\ b&=\theta - \alpha(m_0-k,m_0,1,\theta,\sigma). \end{aligned} $$

Similarly, the second gamble induces actions \(\alpha \left(1+k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(1-k,m_{0},1,\theta ,\sigma \right)\); let:

$$ \begin{aligned} c&=\alpha(1+k,m_0,1,\theta,\sigma)-\theta \\ d&=\theta - \alpha(1-k,m_0,1,\theta,\sigma). \end{aligned} $$

For values of \(k>\frac{1-m_{0}}{2}\) in this range, we have c ≥ a ≥ 0 and d ≥ a ≥ 0. We now show that a + b ≥ c + d; substituting in the expressions for abc, and d above, this inequality becomes:

$$ \begin{aligned} & \alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right) -\alpha \left(m_{0}-k,m_{0},1,\theta ,\sigma \right) \\ &\quad\geq \alpha \left(1+k,m_{0},1,\theta ,\sigma \right) -\alpha \left(1-k,m_{0},1,\theta ,\sigma \right). \end{aligned} $$

Simplifying the action function, we obtain

$$ \alpha \left(q,m_{0},1,\theta ,\sigma \right) =\frac{1}{1+e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2q\right) }{2\sigma^{2}}}\left(\frac{1}{ \theta }-1\right) }. $$

Hence

$$ \begin{aligned} &\alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right) -\alpha \left( m_{0}-k,m_{0},1,\theta ,\sigma \right) \\ &=\frac{1}{1+e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2\left( m_{0}+k\right) \right) }{2\sigma^{2}}}\left(\frac{1}{\theta }-1\right) }- \frac{1}{1+e^{\frac{\left(1-m_{0}\right) \left( 1+m_{0}-2\left(m_{0}-k\right) \right) }{2\sigma^{2}}}\left( \frac{1}{\theta }-1\right) } \\ &=\frac{\theta \left(1-\theta \right) \left(e^{\frac{\left( 1-m_{0}\right) \left(1+m_{0}-2\left(m_{0}-k\right) \right) }{2\sigma^{2}} }-e^{\frac{\left(1-m_{0}\right) \left( 1+m_{0}-2\left(m_{0}+k\right) \right) }{2\sigma^{2}}}\right) }{\left(\theta +e^{\frac{\left(1-m_{0}\right) \left( 1+m_{0}-2\left(m_{0}+k\right) \right) }{2\sigma^{2}} }\left( 1-\theta \right) \right) \left(\theta +e^{\frac{\left( 1-m_{0}\right) \left(1+m_{0}-2\left(m_{0}-k\right) \right) }{2\sigma^{2}} }\left(1-\theta \right) \right) } \\ &=\frac{\theta \left(1-\theta \right) \left( 1-\frac{e^{\frac{\left(1-m_{0}\right) \left(1-m_{0}-2k\right) }{2\sigma^{2}}}}{e^{\frac{\left(1-m_{0}\right) \left( 1-m_{0}+2k\right) }{2\sigma^{2}}}}\right) }{\left(\theta +e^{\frac{\left(1-m_{0}\right) \left(1-m_{0}-2k\right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\frac{\theta }{e^{\frac{\left(1-m_{0}\right) \left(1-m_{0}+2k\right) }{2\sigma^{2}}}}+\left(1-\theta \right) \right) } \\ &=\frac{\theta \left(1-\theta \right) \left(1-e^{\frac{-2k\left( 1-m_{0}\right) }{\sigma^{2}}}\right) }{\left(\theta +e^{\frac{\left(1-m_{0}\right) \left(1-m_{0}-2k\right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\theta e^{\frac{-\left(1-m_{0}\right) \left(1-m_{0}+2k\right) }{2\sigma^{2}}}+\left(1-\theta \right) \right) } \\ \end{aligned} $$
(3)

and

$$ \begin{aligned} &\alpha \left(1+k,m_{0},1,\theta ,\sigma \right) -\alpha \left( 1-k,m_{0},1,\theta ,\sigma \right)\\ &=\frac{1}{1+e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2\left( 1+k\right) \right) }{2\sigma^{2}}}\left(\frac{1}{\theta }-1\right) }-\frac{ 1}{1+e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2\left( 1-k\right) \right) }{2\sigma^{2}}}\left(\frac{1}{\theta }-1\right)} \\ &=\frac{\theta \left(1-\theta \right) \left(e^{\frac{\left( 1-m_{0}\right) \left(1+m_{0}-2\left(1-k\right) \right) }{2\sigma^{2}}}-e^{ \frac{\left(1-m_{0}\right) \left( 1+m_{0}-2\left(1+k\right) \right) }{ 2\sigma^{2}}}\right) }{\left( \theta +e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2\left( 1+k\right) \right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\theta +e^{\frac{\left(1-m_{0}\right) \left(1+m_{0}-2\left( 1-k\right) \right) }{2\sigma^{2}}}\left(1-\theta \right) \right) } \\ &=\frac{\theta \left(1-\theta \right) \left( 1-\frac{e^{\frac{\left(1-m_{0}\right) \left(m_{0}-1-2k\right) }{2\sigma^{2}}}}{e^{\frac{\left(1-m_{0}\right) \left( m_{0}-1+2k\right) }{2\sigma^{2}}}}\right) }{\left(\theta +e^{\frac{\left(1-m_{0}\right) \left(m_{0}-1-2k\right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\frac{\theta }{e^{\frac{\left(1-m_{0}\right) \left(m_{0}-1+2k\right) }{2\sigma^{2}}}}+\left(1-\theta \right) \right) } \\ &=\frac{\theta \left(1-\theta \right) \left(1-e^{\frac{-2k\left( 1-m_{0}\right) }{\sigma^{2}}}\right) }{\left(\theta +e^{\frac{\left(1-m_{0}\right) \left(m_{0}-1-2k\right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\theta e^{\frac{-\left(1-m_{0}\right) \left(m_{0}-1+2k\right) }{2\sigma^{2}}}+\left(1-\theta \right) \right) }\\ \end{aligned} $$
(4)

Notice that the numerators of (3) and (4) are the same, and positive; it follows that

$$ \alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right) -\alpha \left( m_{0}-k,m_{0},1,\theta ,\sigma \right) \geq \alpha \left( 1+k,m_{0},1,\theta ,\sigma \right) -\alpha \left(1-k,m_{0},1,\theta ,\sigma \right) $$

if and only if

$$ \begin{aligned} &\left(\theta +e^{\frac{\left(1-m_{0}\right) \left( 1-m_{0}-2k\right) }{ 2\sigma^{2}}}\left(1-\theta \right) \right) \left(\theta e^{\frac{-\left(1-m_{0}\right) \left( 1-m_{0}+2k\right) }{2\sigma^{2}}}+\left(1-\theta \right) \right)\\ \leq &\left(\theta +e^{\frac{\left(1-m_{0}\right) \left( m_{0}-1-2k\right) }{2\sigma^{2}}}\left(1-\theta \right) \right) \left(\theta e^{\frac{-\left(1-m_{0}\right) \left( m_{0}-1+2k\right) }{2\sigma^{2}}}+\left(1-\theta \right) \right) \end{aligned} $$

if and only if

$$ \begin{aligned} &\theta^{2}e^{\frac{-\left(1-m_{0}\right) \left(1-m_{0}+2k\right) }{ 2\sigma^{2}}}+e^{\frac{\left(1-m_{0}\right) \left( 1-m_{0}-2k\right) }{ 2\sigma^{2}}}\left(1-2\theta +\theta^{2}\right) \\ \leq &\theta^{2}e^{\frac{\left(1-m_{0}\right) \left( 1-m_{0}-2k\right) }{ 2\sigma^{2}}}+e^{\frac{-\left(1-m_{0}\right) \left(1-m_{0}+2k\right) }{ 2\sigma^{2}}}\left(1-2\theta +\theta^{2}\right) \end{aligned} $$

if and only if

$$ \begin{aligned} &\left(e^{\frac{\left(1-m_{0}\right) \left(1-m_{0}-2k\right) }{2\sigma^{2}}}-e^{\frac{\left(1-m_{0}\right) \left( m_{0}-1-2k\right) }{2\sigma^{2} }}\right)\left(1-2\theta \right) \leq 0 \end{aligned} $$

if and only if

$$ \begin{aligned} &\left(1-2\theta \right) \leq 0 \end{aligned} $$

if and only if

$$ \begin{aligned} &\theta \geq \frac{1}{2}\quad \checkmark \end{aligned} $$

To recap, we have c ≥ a ≥ 0, d ≥ a ≥ 0, and a + b ≥ c + d. From the third inequality, and since \(c+d-a\geq d\geq 0\) by the first, we obtain

$$ \begin{aligned} & a^2+b^2 \geq a^2 + (c+d-a)^2 \\ & \quad \Rightarrow a^2 + b^2 \geq c^2 +d^2 + 2(d-a)(c-a). \end{aligned} $$

Since the first two inequalities imply \(\left(d-a\right) \left(c-a\right) \geq 0, \) this gives us

$$ a^2 + b^2 \geq c^2 + d^2 $$

It follows immediately that the \(\frac{1}{2} - \frac{1}{2}\) gamble between \(\alpha \left(1+k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(1-k,m_{0},1,\theta ,\sigma \right)\) is weakly preferred to the \(\frac{1}{2} - \frac{1}{2}\) gamble between \(\alpha \left(m_{0}+k,m_{0},1,\theta ,\sigma \right)\) and \(\alpha \left(m_{0}-k,m_{0},1,\theta ,\sigma \right)\). Thus, when b = θ, the type-0 sender obtains at least as high expected utility from message 1 as from message m 0.

To complete the proof, consider the case where b > θ. Here, the Proposition follows from the result just obtained together with the single-crossing condition

$$ \frac{\partial^{2}U^{s}(a,t,b)}{\partial t\partial a}>0 $$

and the fact that the normal distribution satisfies the strict monotone likelihood ratio property (see the proof of Proposition 9 for details). \(\square\)

Lemma 5

\(z_1(b,m)=V_{31}(b,m,m) > 0 \forall m \in [0,1).\)

Proof

Notice that \(\phi _{m,\sigma^{2}}\left(m+x\right) \frac{x}{\sigma^{2}}=-\phi _{m,\sigma^{2}}\left(m-x\right) \frac{\left(-x\right) }{\sigma^{2}}. \) Furthermore, for all \(m \in [0,1), \) \(\frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m,\sigma^{2}}\left(q\right) }\) is a strictly increasing function of q and therefore gives greater weight to \(\phi _{m,\sigma^{2}}\left(m+x\right) \frac{x}{\sigma^{2}}\) than to \(\phi _{m,\sigma^{2}}\left(m-x\right) \frac{\left(-x\right) }{\sigma^{2}}\) for all x > 0. Therefore

$$ V_{31}\left(b,m,m\right) =\int\limits_{-\infty }^{\infty }2\frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m,\sigma^{2}}\left(q\right) } \phi _{m,\sigma^{2}}\left(q\right) \frac{q-m}{\sigma^{2}}\,dq>0. $$

\(\square\)

Proof of Proposition 6

For m 0 < 1, define

$$ q^{\ast }\left(m_{0}\right) \equiv \sigma^{2}\frac{\log \left( \frac{ 1-\theta }{\theta }\right) }{1-m_{0}}+\frac{1+m_{0}}{2} $$

(so \(\left(q^{\ast }\left(m_{0}\right) ,\frac{1}{2}\right) \) is the point of symmetry of the expectation function \(\alpha \left(q,m_{0},1,\theta ,\sigma \right) \)—see Lemma 1 above). If \(\theta <\frac{1}{2}, \) then \(q^{\ast }(m_{0})>m_{0}, \) in which case we claim that the existence of an informative equilibrium requires that \(b<\frac{1}{2}.\) To see why, consider

$$ \begin{aligned} V_{3}\left(b,m_{0},m_{0}\right) &=\int\limits_{-\infty }^{\infty }-\left( b-\frac{ \theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\right)^{2}\phi _{m_{0},\sigma^{2}}\left(q\right) \frac{q-m_{0}}{\sigma^{2}}dq \\ &=\int\limits_{-\infty }^{\infty }\left(-b^{2}+2b\frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left( q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left( q\right) }\right. \\ &\quad \left. -\left(\frac{\theta \phi _{1,\sigma^{2}}\left( q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left( 1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\right)^{2}\right) \phi _{m_{0},\sigma^{2}}\left(q\right) \frac{q-m_{0}}{\sigma^{2}}dq \\ &=\int\limits_{-\infty }^{\infty }\left(2b-\frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left( q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left( q\right) }\right) \\ &\quad \quad \frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\phi _{m_{0},\sigma^{2}}\left(q\right) \frac{q-m_{0}}{\sigma^{2}}dq. \end{aligned} $$

Setting \(b=\frac{1}{2},\) we obtain

$$ \begin{aligned} V_{3}\left(\frac{1}{2},m_{0},m_{0}\right) &=\int\limits_{-\infty }^{\infty }\left(1-\frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\right) \\ &\quad \quad \frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\phi _{m_{0},\sigma^{2}}\left(q\right) \frac{q-m_{0}}{\sigma^{2}}dq. \end{aligned} $$

The quantity \(\left(1-\frac{\theta \phi _{1,\sigma^{2}}\left( q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left( 1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\right) \frac{\theta \phi _{1,\sigma^{2}}\left(q\right) }{\theta \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \phi _{m_{0},\sigma^{2}}\left(q\right) }\) is a function of q that is symmetric about \(q^{\ast }\left(m_{0}\right) , \) strictly increasing to the left of \(q^{\ast }\left(m_{0}\right) \) and strictly decreasing to the right of \(q^{\ast }\left(m_{0}\right)\). So as long as \(q^{\ast }\left(m_{0}\right) >m_{0},\) it assigns greater weight to \(\phi _{m_{0},\sigma^{2}}\left(m_{0}+x\right) \frac{x}{\sigma^{2}}\) than to \(\phi _{m_{0},\sigma^{2}}\left( m_{0}-x\right) \frac{\left(-x\right) }{\sigma^{2}}\) for all x > 0. Hence, \(V_{3}\left(\frac{1}{2},m_{0},m_{0}\right) >0.\)

Now consider raising b above \(\frac{1}{2}.\) From Lemma 5, \(V_{31}(b,m,m) > 0 \forall m \in [0,1)\) and in particular for m = m 0. Hence \(V_{3}\left(b,m_{0},m_{0}\right) >0\) for all \(b>\frac{1}{2}, \) which is not consistent with equilibrium. \(\square\)

Proof of Proposition 8

Recall that \(z\left(b,1\right) =0\) and

$$ \frac{dz\left(b,1\right) }{dm}=\frac{2\left(b-\theta \right) \left(-1+\theta \right) \theta \sqrt{1/\sigma^{2}}}{\sigma }. $$

Therefore, if b > θ , by continuity of z there exists an \(\underline {m}\in \left(0,1\right) \) such that for all \(m^{\prime }\in \left(\underline {m},1\right] \) we have \(z\left(b,m^{\prime }\right) >0. \) Hence, for b > θ pooling at message \(m^{\ast }=1\) is asymptotically stable. If b < θ , then \(\frac{dz\left( b,1\right) }{dm}>0, \) i.e. m = 1 is a hyperbolic source rather than a sink of the vagueness dynamic, and therefore unstable.

It remains to show that there is a (Lyapunov) stable equilibrium when b ≤ θ.

Consider the case b < θ first. Then there are two subcases: If \(z\left(b,m\right) \leq 0\) for all \(m\in \left[ 0,1\right) , \) then m = 0 is stable. Otherwise, there exists \(m^{\prime }\in \left[ 0,1\right) \) with \(z\left(b,m^{\prime }\right) >0. \) At the same time, given the case we are considering, there is \(m^{\prime \prime }\in \left(m^{\prime },1\right) \) with \(z\left(b,m^{\prime \prime }\right) <0. \) Define \(\overline{m}\equiv \inf \left\{ m\mid z\left( b,m\right) <0 , m>m^{\prime }\right\} \) and \(\underline {m}\equiv \sup \left\{ m\mid z\left(b,m\right) >0 , m<\overline{m}\right\} .\) Note that \(\underline {m}\leq \overline{m}. \) If \(\underline {m}<\overline{m}, \) then \(z\left(b,m\right) =0\) for all m in the open interval \(\left(\underline {m},\overline{m}\right) , \) and therefore any such m is stable. If \(\underline {m}=\overline{m}, \) then any open set \(\left( \overline{m},\overline{m}+\epsilon \right) \) contains a subinterval on which \(z\left(b,m\right) <0\) and any open set \(\left( \overline{m}-\epsilon ,\overline{m}\right) \) contains a subinterval on which \(z\left(b,m\right) >0,\) and therefore \(\overline{m}\) is stable.

Finally, consider b = θ . If \(z\left(b,m\right) \geq 0\) for all \(m\in \left[ 0,1\right] , \) then \(m^{\ast }=1\) is stable. If \(z\left(b,m^{\prime \prime }\right) <0\) for some \(m^{\prime \prime}\in \left[ 0,1\right) , \) then either \(z\left(b,m\right) \leq 0\) for all \(m\in \left[ 0,m^{\prime \prime }\right) \) or there exists \(m^{\prime }<m^{\prime \prime } \) with \(z\left(b,m^{\prime}\right) >0. \) In that case, the argument given for the case b < θ applies. \(\square\)

Lemma 6

In any pure-strategy equilibrium the receiver’s action rule is continuously differentiable. If in addition the sender’s strategy is monotone and informative, then the receiver’s action rule is strictly increasing.

Proof

If the sender uses a pure strategy, then with any equilibrium message m we can associate the set of types \(\Uptheta \left( m\right) \) who use that message. Let \(M^{\ast }\) denote the set of equilibrium messages. Then the receiver’s posterior belief about the sender’s type given interpretation q is

$$ \mu \left(t\mid q\right) =\frac{\phi _{m_{t},\sigma^{2}}\left( q\right) \nu \left(t\right) }{\sum_{m\in M^{\ast }}\phi _{m,\sigma^{2}}\left(q\right) \nu \left(\Uptheta \left(m\right) \right) }. $$

In equilibrium, the receiver’s action rule is given by

$$ \begin{aligned} {\bf a}\left(q\right) &=\arg \max_{a}\sum_{t\in T}-\left( a-t\right)^{2}\mu \left(t\mid q\right) \\ &=\frac{\sum_{m\in M^{\ast }}\phi _{m,\sigma^{2}}\left(q\right) \nu \left(\Uptheta \left(m\right) \right) E\left[ t\mid t\in \Uptheta \left(m\right) \right] }{\sum_{m\in M^{\ast }}\phi _{m,\sigma^{2}}\left(q\right) \nu \left(\Uptheta \left(m\right) \right) }. \end{aligned} $$

(Note that, as in the two-type case, the receiver’s best response is to choose an action equal to the expectation of t.) Continuous differentiability of the receiver’s action rule follows from continuous differentiability of \(\phi _{m,\sigma^{2}}\) for any m and the fact that \(\phi _{m,\sigma^{2}}\) is everywhere positive.
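As an illustration (our specialization, not part of the original proof), in the two-type case with \(T=\left\{ 0,1\right\} , \) prior probability θ on the high type, and equilibrium messages m 0 and m 1 = 1, this action rule reduces to the expectation function α used in the first part of this appendix:

$$ {\bf a}\left(q\right) =\frac{\theta \cdot \phi _{1,\sigma^{2}}\left(q\right) \cdot 1+\left(1-\theta \right) \cdot \phi _{m_{0},\sigma^{2}}\left(q\right) \cdot 0}{\theta \cdot \phi _{1,\sigma^{2}}\left(q\right) +\left(1-\theta \right) \cdot \phi _{m_{0},\sigma^{2}}\left(q\right) }=\alpha \left(q,m_{0},1,\theta ,\sigma \right). $$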

If the sender’s strategy is monotone and informative, more than one message is sent. Let there be k > 1 such messages. It will be convenient to reindex messages and to use m i to denote the ith equilibrium message and \(\Uptheta _{i}\) to denote the set of types who send that message. Then we can rewrite the receiver’s action rule as

$$ {\bf a}\left(q\right) =\frac{\sum_{i=1}^{k}\phi _{m_{i},\sigma^{2}}\left(q\right) \nu \left(\Uptheta _{i}\right) E\left[ t\mid t\in \Uptheta _{i}\right] }{\sum_{i=1}^{k}\phi _{m_{i},\sigma^{2}}\left(q\right) \nu \left(\Uptheta _{i}\right) }, $$

where \(E\left[ t\mid t\in \Uptheta _{i+1}\right] >E\left[ t\mid t\in \Uptheta _{i}\right] \) for all \(i=1,\ldots ,k-1\) (which can be satisfied because of monotonicity) and \(\nu \left(\Uptheta _{i}\right) >0\) for all \(i=1,\ldots ,k\). To prove that a is a strictly increasing function of q, we proceed by induction. Define \(\xi \left(q\mid m_{i}\right) \equiv \frac{\phi _{m_{i},\sigma^{2}}\left(q\right) \nu \left(\Uptheta _{i}\right) }{\sum_{j=1}^{k}\phi _{m_{j},\sigma^{2}}\left(q\right) \nu \left( \Uptheta _{j}\right) }, \) so that

$$ a\left(q\right) =\sum_{i=1}^{k}\xi \left(q\mid m_{i}\right) E\left[ t\mid t\in \Uptheta _{i}\right]. $$

Notice that \(\sum_{i=1}^{k-1}\frac{\xi \left(q\mid m_{i}\right) }{\xi \left(q\mid m_{k}\right) }+1=\frac{1}{\xi \left(q\mid m_{k}\right) }. \) SMLRP implies that each of the fractions on the left-hand side decreases as q increases. Hence \(\xi \left(q\mid m_{k}\right) \) is (strictly) increasing in q. This establishes the claim for k = 2. We will now show that if it holds for k, then it holds for k + 1. For \(i=1,\ldots ,k\) define \(\tilde{\xi}\left(q\mid m_{i}\right) \equiv \frac{\xi \left(q\mid m_{i}\right) }{\sum_{j=1}^{k}\xi \left(q\mid m_{j}\right) }. \) Then

$$ \begin{aligned} &\sum_{i=1}^{k+1}\xi \left(q\mid m_{i}\right) E\left[ t\mid t\in \Uptheta _{i}\right] \\ &\quad =\left(1-\xi \left(q\mid m_{k+1}\right) \right) \left\{ \sum_{i=1}^{k}\tilde{\xi}\left(q\mid m_{i}\right) E\left[ t\mid t\in \Uptheta _{i}\right] \right\} +\xi \left(q\mid m_{k+1}\right) E\left[ t\mid t\in \Uptheta _{k+1}\right]. \end{aligned} $$

The result follows because the expression in curly brackets is strictly smaller than \(E\left[ t\mid t\in \Uptheta _{k+1}\right] \) and, by the induction hypothesis, strictly increasing in q, and because \(\xi \left(q\mid m_{k+1}\right) \) is strictly increasing in q. \(\square\)
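As a purely illustrative check (not part of the proof), the following Python sketch evaluates the posterior-mean action rule above for three hypothetical message groups under normal noise and confirms on a grid of interpretations that it is strictly increasing; the messages, prior masses, conditional means and noise level are our own assumptions.

```python
import math

def phi(q, m, sigma):
    """Normal density of the interpretation q given message m."""
    return math.exp(-((q - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical monotone, informative sender strategy with k = 3 equilibrium messages:
# message m_i is sent by a type group Theta_i with prior mass nu_i and conditional mean e_i.
messages = [0.2, 0.5, 0.9]     # m_1 < m_2 < m_3
masses   = [0.5, 0.3, 0.2]     # nu(Theta_i) > 0
means    = [0.1, 0.5, 0.85]    # E[t | t in Theta_i], strictly increasing
sigma = 0.3

def action(q):
    """Posterior-mean action a(q) = sum_i xi(q | m_i) E[t | t in Theta_i]."""
    weights = [phi(q, m, sigma) * nu for m, nu in zip(messages, masses)]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, means)) / total

grid = [i / 100 - 1.0 for i in range(301)]                   # interpretations from -1 to 2
values = [action(q) for q in grid]
print(all(v2 > v1 for v1, v2 in zip(values, values[1:])))    # True: a(q) strictly increasing
```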

Proof of Proposition 9

Given the receiver’s equilibrium action rule a, define type t’s payoff from sending message m as

$$ V^{S}\left(m,t,{\bf a}\right) \equiv \int\limits_{-\infty }^{\infty }U^{s}\left({\bf a}\left(q\right) ,t,b\right) \phi _{m,\sigma^{2}}\left(q\right) dq. $$

Then

$$ \frac{\partial V^{S}\left(m,t,{\bf a}\right) }{\partial t}=\int\limits_{-\infty }^{\infty }\frac{\partial U^{s}({\bf a}(q),t,b)}{\partial t}\phi _{m,\sigma^{2}}(q)dq. $$

The sender’s payoff function satisfies the single–crossing condition

$$ \frac{\partial^{2}U^{s}(a,t,b)}{\partial t\partial a}>0. $$

This, together with the fact that \({\bf a}\) is a strictly increasing function of q (Lemma 6), implies that \(\frac{\partial U^{s}({\bf a}(q),t,b)}{\partial t}\) is strictly increasing in q. Since \(\phi_{m,\sigma^{2}}\) satisfies the strict monotone likelihood ratio property, \(\Upphi _{m^{\prime },\sigma^{2}}\) first-order stochastically dominates \(\Upphi _{m,\sigma^{2}}\) for any \(m^{\prime }>m.\) Therefore, writing \(\tilde{V}\left({\bf a},t,m\right) \equiv V^{S}\left(m,t,{\bf a}\right),\) we have

$$ \frac{\partial^{2}\tilde{V}({\bf a},t,m)}{\partial m\partial t}>0. $$

Suppose that

$$ \frac{\partial \tilde{V}({\bf a},s,m_{s})}{\partial m}\geq 0 $$

for a type s < 1. Then

$$ \frac{\partial \tilde{V}({\bf a},\tau ,m_{s})}{\partial m}>0 $$

for any type \(\tau > s.\) Using Lemma 4, this implies that either \(m_{\tau} > m_{s}\) or \(m_{\tau^{\prime}} = 1\) for all \(\tau^{\prime} \geq s.\) Similarly, when

$$ \frac{\partial \tilde{V}({\bf a},t,m_{t})}{\partial m}\leq 0 $$

for a type t > 0, we get that for any type \(\tau < t\) either \(m_{\tau} < m_{t}\) or \(m_{\tau^{\prime}} = 0\) for all \(\tau^{\prime} \leq t.\) \(\square\)
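To illustrate the single-crossing step of this argument, the sketch below approximates the sender’s expected payoff \(V^{S}(m,t,{\bf a})\) for quadratic sender preferences and an arbitrary strictly increasing action rule, and checks by finite differences that the cross-partial in m and t is positive. The logistic action rule, the bias b and the noise level are hypothetical choices of ours, not objects from the paper.

```python
import math

def phi(q, m, sigma):
    """Normal density of interpretation q given message m."""
    return math.exp(-((q - m) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

b, sigma = 0.1, 0.3                      # hypothetical bias and noise level

def a(q):
    """An arbitrary strictly increasing action rule (stand-in for the equilibrium rule)."""
    return 1.0 / (1.0 + math.exp(-4.0 * (q - 0.5)))

def V(m, t, n=4000):
    """Midpoint-rule approximation of the sender's expected payoff V^S(m, t, a)."""
    lo, hi = m - 6 * sigma, m + 6 * sigma
    dq = (hi - lo) / n
    total = 0.0
    for i in range(n):
        q = lo + (i + 0.5) * dq
        total += -(a(q) - t - b) ** 2 * phi(q, m, sigma) * dq
    return total

# Finite-difference approximation of d^2 V / (dm dt) at a sample point: should be positive.
h, m0, t0 = 1e-3, 0.4, 0.3
cross = (V(m0 + h, t0 + h) - V(m0 + h, t0 - h)
         - V(m0 - h, t0 + h) + V(m0 - h, t0 - h)) / (4 * h * h)
print(cross > 0, cross)
```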

Proof of Proposition 10

The receiver’s utility from any action when the sender’s type is t coincides with the utility of a sender of type t − b, i.e. of a type less than t. Given the strict monotonicity of the receiver’s action rule from Lemma 6, single crossing and SMLRP, this type would want to send a message less than \(m_{t}.\) \(\square\)

Proof of Proposition 12

With \(b> \frac{1}{2},\) beliefs that are concentrated on the lowest type are the least favorable ones for every type. Therefore, if there is a monotone informative equilibrium of the noisy-channel game with sender strategy m, receiver strategy \({\bf a}\) and belief system μ, there is an equilibrium of the channel-choice game in which the sender uses strategy m; the receiver responds with \({\bf a}(q)\) to any interpretation q received through the noisy channel and with action 0 to any interpretation received through the clear channel; and the receiver holds belief \(\mu(\cdot \mid q)\) after any interpretation q received through the noisy channel and believes that the sender is the lowest type after every interpretation received through the clear channel. \(\square\)

Proof of Proposition 13

Recall that the receiver uses a pure strategy in any Perfect Bayesian equilibrium. Therefore in any PBE each clear message m induces exactly one action \(a_m \in [0,1].\) In the common interest game there is a type t for whom \(a_m\) is the ideal action. This type strictly prefers sending message m to any noisy equilibrium message \(\tilde m.\) By continuity, there is an open neighborhood \({{\mathcal{O}}}\) of t such that all types in \({{\mathcal{O}}}\) strictly prefer sending the clear message m to sending any noisy equilibrium message. \(\square\)

Appendix 2: Utility Loss from Intentional Vagueness in the CS Model

We observed in the introduction that there is intentional vagueness even in the CS model: Given the receiver’s interpretation of messages in an informative equilibrium, he would prefer that some subset of sender types deviate from their equilibrium strategy and send messages that are associated with lower types. For example, consider the two-step equilibrium of the uniform-quadratic version of the CS model when the sender’s bias is \(b=\frac{1}{8}\): sender types \(t \in [0,\frac{1}{4})\) send one message, say \(m_1\), while sender types \(t \in [\frac{1}{4},1]\) send a different message, say \(m_2\); the receiver chooses action \(a=\frac{1}{8}\) if he observes \(m_1\), and action \(a=\frac{5}{8}\) if he observes \(m_2\). Given this equilibrium response, however, the receiver would be better off if types between \(\frac{1}{4}\) and \(\frac{3}{8}\) deviated from their equilibrium strategy and sent message \(m_1\) instead of \(m_2\), since \(m_1\) induces an action closer to his ideal than does \(m_2\) as long as \(t<(\frac{1}{8}+\frac{5}{8})/2=\frac{3}{8}.\)
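The arithmetic in this example is easy to verify directly; the minimal sketch below (ours, for illustration only) recomputes the two equilibrium actions and the cutoff \(\frac{3}{8}\) below which the receiver prefers the action induced by \(m_1\).

```python
# A minimal check of the numbers in the example above (b = 1/8, two-step CS equilibrium).
b = 1 / 8
t1 = 0.25                                # boundary type of the two-step equilibrium
a1 = t1 / 2                              # action after m1: E[t | t in [0, 1/4)] = 1/8
a2 = (t1 + 1) / 2                        # action after m2: E[t | t in [1/4, 1]] = 5/8
cutoff = (a1 + a2) / 2                   # receiver prefers a1 to a2 iff t < 3/8
print(t1, a1, a2, cutoff)                # 0.25 0.125 0.625 0.375

# Types between 1/4 and 3/8 send m2 in equilibrium, yet the receiver would rather they send m1:
for t in (0.30, 0.36, 0.40):
    print(t, (t - a1) ** 2 < (t - a2) ** 2)   # True, True, False
```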

Consider an n-step equilibrium of the uniform-quadratic version of the CS model, with sender’s bias b. The boundary types are given by

$$ t_i=\frac{i}{n}+2i(i-n)b, \quad i=0,\ldots,n. $$

Following the reasoning above, it is easy to see that the receiver (assuming his equilibrium interpretation of messages remains unchanged) would prefer all and only sender types between \(t_i\) and \(t_i + b\) (\(i=1,\ldots,n-1\)) to send the message corresponding to the preceding partition step. Compared with this scenario, his ex ante utility loss in the actual equilibrium is

$$ \sum^{n-1}_{i=1}\int\limits^{t_i+b}_{t_i} \left[ \left(\theta - \frac{t_i + t_{i-1}}{2} \right)^2 - \left(\theta - \frac{t_{i+1} + t_i}{2} \right)^2 \right] d\theta = -\frac{b^2(n-1)}{n}. $$
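The closed form can be confirmed numerically. The sketch below (ours, with an arbitrarily chosen n and b for which an n-step equilibrium exists) integrates the utility difference interval by interval and compares it with \(-b^2(n-1)/n\).

```python
def boundaries(n, b):
    """Boundary types t_i = i/n + 2 i (i - n) b of the n-step CS equilibrium."""
    return [i / n + 2 * i * (i - n) * b for i in range(n + 1)]

def utility_difference(n, b, steps=20000):
    """Midpoint-rule approximation of the sum of integrals displayed above."""
    t = boundaries(n, b)
    total = 0.0
    for i in range(1, n):
        a_prev = (t[i - 1] + t[i]) / 2       # action induced by the preceding message
        a_curr = (t[i] + t[i + 1]) / 2       # action induced in the actual equilibrium
        d = b / steps
        for k in range(steps):
            x = t[i] + (k + 0.5) * d
            total += ((x - a_prev) ** 2 - (x - a_curr) ** 2) * d
    return total

n, b = 3, 1 / 24                             # a 3-step equilibrium exists since 2*3*2*b < 1
print(utility_difference(n, b))              # approximately -0.0011574
print(-b ** 2 * (n - 1) / n)                 # -0.0011574...
```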

In the most informative equilibrium, the number of steps is given by

$$ n^*=\left\lceil -\frac{1}{2}+\frac{1}{2}\sqrt{1+\frac{2}{b}} \right\rceil $$

(where \(\lceil x \rceil\) denotes the smallest integer greater than or equal to x). Figure 9 plots the receiver’s utility loss in this equilibrium.

Fig. 9 Receiver’s ex ante utility loss from intentional vagueness in the CS model
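The values behind Fig. 9 are straightforward to recompute. The sketch below (ours, for illustration) evaluates \(n^*\) and the resulting ex ante loss \(-b^2(n^*-1)/n^*\) for a few sample biases.

```python
import math

def n_star(b):
    """Number of steps in the most informative CS equilibrium for bias b (ceiling formula above)."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2 / b))

def ex_ante_loss(b):
    n = n_star(b)
    return -b ** 2 * (n - 1) / n

for b in (0.01, 0.05, 0.10, 0.20, 0.25):
    print(f"b = {b:.2f}   n* = {n_star(b)}   loss = {ex_ante_loss(b):.5f}")
```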

Appendix 3: Sets of Strategies that are Closed Under Rational Behavior (Curb)

Let \(X_i\), i = S, R, be a set of pure strategies of player i and \(X = X_S \times X_R\) a set of pure-strategy profiles for sender and receiver. For player i = S, R define \(\beta_i(X_j)\), i ≠ j, as the set of pure-strategy best replies to those beliefs over \(X_j\) for which a best reply is well defined. Define \(\beta(X) := \beta_S(X_R) \times \beta_R(X_S)\). Then we call X a curb set if \(X \supseteq \beta(X) \neq \emptyset.\) It is a minimal curb set if it does not contain a proper subset that is a curb set. Note that, as usual, the set of all strategy profiles is a curb set.
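To illustrate the definition, the toy sketch below (ours; the 2×2 payoffs are hypothetical and unrelated to the signaling game) checks the curb property for the small strategy sets used here by searching over a grid of beliefs. It shows that a strict equilibrium passes the test as a singleton, while a profile at which the opponent has an alternative best reply does not.

```python
import numpy as np

# Toy 2x2 game (hypothetical payoffs, not from the paper). Rows are "sender" strategies
# s0, s1; columns are "receiver" strategies r0, r1.
U_S = np.array([[3.0, 0.0],
                [2.0, 2.0]])      # U_S[s, r]: sender's payoff
U_R = np.array([[3.0, 0.0],
                [2.0, 2.0]])      # U_R[s, r]: receiver's payoff

def pure_best_replies(payoffs, opp_set):
    """Pure strategies that best-reply to some belief over opp_set.

    payoffs[own, opp] is viewed from the replying player; beliefs are
    approximated by a grid over the simplex on opp_set (fine enough here).
    """
    opp = sorted(opp_set)
    beliefs = ([np.array([p, 1 - p]) for p in np.linspace(0, 1, 201)]
               if len(opp) == 2 else [np.array([1.0])])
    replies = set()
    for w in beliefs:
        expected = payoffs[:, opp] @ w
        replies |= set(np.flatnonzero(np.isclose(expected, expected.max())))
    return replies

def is_curb(X_S, X_R):
    """X = X_S x X_R is curb iff beta_S(X_R) is contained in X_S and beta_R(X_S) in X_R."""
    return (pure_best_replies(U_S, X_R) <= set(X_S)
            and pure_best_replies(U_R.T, X_S) <= set(X_R))

print(is_curb({0}, {0}))   # True:  (s0, r0) is a strict equilibrium, hence a minimal curb set
print(is_curb({1}, {1}))   # False: r0 is also a best reply to s1, so beta(X) escapes X
```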

Lemma 7

Assume that there is a strictly monotone equilibrium \((\sigma^{e}, \rho^{e})\). Then the singleton set \(\{(\sigma^{e}, \rho^{e})\}\) is a minimal curb set and no pooling equilibrium belongs to a minimal curb set.

Proof

By Lemma 6, \(\rho^{e}\) is strictly increasing. Therefore, by Lemma 4, \(\sigma^{e}\) is the unique best reply, and hence \((\sigma^{e}, \rho^{e})\) is strict. A strict equilibrium trivially forms a minimal curb set as a singleton.

If a pooling equilibrium did belong to a minimal curb set X, then, since every sender strategy is a best reply to the receiver’s constant action rule, X would have to include every sender strategy and every corresponding best reply of the receiver. Hence X would have to contain the profile \((\sigma^{e}, \rho^{e})\), and therefore the curb set \(\{(\sigma^{e}, \rho^{e})\}\) as a proper subset. Therefore X could not be minimal. \(\square\)
