
Measures of Assortativity

Thematic Issue Article: Strategic Interaction

Abstract

This paper discusses alternative measures of assortative matching and relates them to Sewall Wright’s F-statistic. It also explores applications of measures of assortativity to evolutionary dynamics. We generalize Wright’s statistic to allow the possibility that some types match more assortatively than others, and explore the possibility of identifying parameters of this more general model from the observed distribution of matches by the partners’ types.


Notes

  1. Of course, this model is far from completely general. In this model, those who do not join an assortative pool consisting only of their own type select their matches at random from a single pool that includes all individuals who did not join assortative pools (see the simulation sketch following these notes). It does not allow the possibility, for example, that some members of types i and j do not join assortative pools but instead join a random pool that includes members of types i and j and no members of type k.

  2. To make a stationary model of this process, we need individuals to have lives of finite length. Some individuals of each type reach the end of their lives without finding a match. Given their lower matching probability, individuals of the less common type will be more likely than those of the more common type to die without finding a match.
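The following is a minimal simulation sketch, in Python, of the pooling process described in Note 1. The probabilities G_i (each type-i individual entering the single common pool with probability G_i and otherwise matching within an own-type assortative pool), the numerical values, and the handling of leftover unpaired individuals are illustrative assumptions made for this sketch; it does not reproduce the paper's Eqs. 10–11.

```python
import random
from collections import Counter

def simulate_matches(p, G, n_pairs=200_000, seed=0):
    """Empirically estimate the distribution of matches by partner types.

    Each individual is of type i with probability p[i].  With probability
    G[i] it enters a single common random pool shared by all types;
    otherwise it joins an assortative pool consisting only of its own type.
    Each pool is then paired off at random within itself; a single leftover
    individual in a pool is simply discarded.
    """
    rng = random.Random(seed)
    types = list(range(len(p)))
    assortative = {i: 0 for i in types}   # own-type pools: only counts are needed
    common = []
    for _ in range(2 * n_pairs):
        t = rng.choices(types, weights=p)[0]
        if rng.random() < G[t]:
            common.append(t)
        else:
            assortative[t] += 1
    matches = Counter({(t, t): n // 2 for t, n in assortative.items()})
    rng.shuffle(common)
    for a, b in zip(common[::2], common[1::2]):
        matches[tuple(sorted((a, b)))] += 1
    total = sum(matches.values())
    return {pair: count / total for pair, count in matches.items()}

# Example with three types; type 2 is assumed to match more assortatively.
print(simulate_matches(p=[0.5, 0.3, 0.2], G=[0.9, 0.9, 0.3]))
```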

References

  • Alger I (2008) Public goods games, altruism, and evolution. J Public Econ Theory 12:789–813


  • Alger I, Weibull JW (2010) Kinship, incentives and evolution. Am Econ Rev 100:1725–1758


  • Alger I, Weibull JW (2012) Homo moralis: preference evolution under incomplete information and assortative matching. http://ideas.repec.org/p/tse/wpaper/25607.html

  • Bergstrom T (2003) The algebra of assortative mating and the evolution of cooperation. Int Game Theory Rev 5(3):1–18


  • Cavalli-Sforza L, Feldman M (1981) Cultural transmission and evolution: a quantitative approach. Princeton University Press, Princeton


  • Hamilton W (1964) The genetical evolution of social behavior, Parts I and II. J Theor Biol 7:1–52


  • Hartl DL, Clark AG (1989) Principles of population genetics. Sinauer Associates, Sunderland


  • Wright S (1921) Systems of mating. Genetics 6:111–178


  • Wright S (1922) Coefficients of inbreeding and relationship. Am Nat 56(645):330–338


  • Wright S (1965) The interpretation of population structure by F-statistics with special regard to systems of mating. Evolution 19:395–420



Acknowledgment

This research was supported in part by NSF 0851357.

Author information

Correspondence to Theodore C. Bergstrom.

Appendix

Proof of Theorem 1

Proof

To motivate Wright’s F-statistic as a correlation, let us construct two random variables \(I_A\) and \(I_B\) as follows. Randomly select one matched pair and then randomly choose one individual from that pair. Let \(I_A\) be the random variable that takes the value 1 or 0 depending on whether this individual is of type 1 or type 2, and let \(I_B\) be the random variable that takes the value 1 or 0 depending on whether the remaining member of the selected pair is of type 1 or type 2. Wright’s F is the correlation coefficient between the random variables \(I_A\) and \(I_B\). This correlation coefficient is, by definition,

$$ \rho = \frac{E(I_{A}I_{B})-E(I_{A})E(I_{B})}{\sigma(I_{A})\sigma(I_{B})} $$
(32)

Now \(E(I_{A})=E(I_{B})=p\), and \(\sigma(I_{A})=\sigma(I_{B})=\sqrt{p(1-p)}.\) Also \(E(I_{A}I_{B})=\pi_{11}=p\pi(1\vert 1).\) Therefore Eq. 32 can be written as

$$ \rho=\frac{p\pi(1\vert 1)-p^{2}}{p(1-p)}=\frac{\pi(1\vert 1)-p}{1-p} $$
(33)

This establishes Eq. 1.

Since \(\pi(1\vert 1)=1-\pi(2\vert 1), \) Eq. 33 can be written as

$$ \rho=\frac{1-\pi(2\vert 1)-p}{1-p}=1-\frac{\pi(2\vert 1)}{1-p} $$
(34)

Then, since \(\pi_{12}=p\pi(2\vert 1),\) it follows that

$$ \rho=1-\frac{\pi_{12}}{p(1-p)} $$
(35)

This establishes Eq. 2.

Since \(\pi_{12}=p\pi(2\vert 1)=(1-p)\pi(1\vert 2), \) it must be that

$$ \pi(1\vert 2)=\frac{p}{1-p}\pi(2\vert 1)=\frac{p}{1-p}\left(1-\pi (1\vert 1)\right). $$
(36)

From Eq. 1 it follows that \(\pi(1\vert 1)=(1-p)F(p) +p.\) Therefore Eq. 36 simplifies to

$$ \pi(1\vert 2)=p\left(1-F(p)\right). $$
(37)

Therefore we have

$$ \begin{aligned} \pi(1\vert 1)-\pi(1\vert 2)&=(1-p)F(p)+p-p\left(1-F(p)\right)\\ &=F(p) \end{aligned} $$
(38)

This establishes Eq. 3.\(\square\)
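As a quick numerical check of Eqs. 1–3, the short Python sketch below builds a hypothetical symmetric two-type match distribution (the values of π_11, π_12, and π_22 are arbitrary choices, not data from the paper), computes the correlation of the indicators I_A and I_B directly, and compares it with the three equivalent expressions for F. Following the proof above, π_12 here denotes p·π(2|1), the probability that a randomly drawn (individual, partner) pair consists of a type 1 matched with a type 2.

```python
# Hypothetical two-type match probabilities; pi12 = p * pi(2|1) as in the
# proof above, so the four ordered (I_A, I_B) outcomes sum to one:
pi11, pi12, pi22 = 0.30, 0.10, 0.50
assert abs(pi11 + 2 * pi12 + pi22 - 1.0) < 1e-12

p = pi11 + pi12                       # frequency of type 1, P(I_A = 1)
pi_1_given_1 = pi11 / p               # pi(1|1)
pi_1_given_2 = pi12 / (1 - p)         # pi(1|2)

# Wright's F as the correlation of I_A and I_B (Eq. 32):
# E(I_A I_B) = pi11, E(I_A) = E(I_B) = p, sigma(I_A) = sigma(I_B) = sqrt(p(1-p)).
rho = (pi11 - p * p) / (p * (1 - p))

print(rho)                                    # the correlation itself
print((pi_1_given_1 - p) / (1 - p))           # Eq. 1 (= Eq. 33)
print(1 - pi12 / (p * (1 - p)))               # Eq. 2 (= Eq. 35)
print(pi_1_given_1 - pi_1_given_2)            # Eq. 3 (= Eq. 38): all four agree
```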

Proof of Theorem 3

Proof

The mapping from the \(G_i\)’s and \(p_i\)’s to the probabilities \(\pi_{ij}\) is immediate from Eqs. 10 and 11.

To find the inverse mapping from the \(\pi_{ij}\)’s to the \(G_i\)’s and \(p_i\)’s, we proceed as follows. From Eq. 10, it follows that

$$ \frac{2\left(p_{1}p_{2}G_{1}G_{2}\right)\left(p_{1}p_{3}G_{1}G_{3}\right)} {p_{2}p_{3} G_{2}G_{3}}= \frac{\pi_{12}\pi_{13}}{\pi_{23}} \left(\sum_{k=1}^{3}p_{k}G_{k}\right) $$
(39)

Simplifying and rearranging Eq. 39, we have

$$ p_{1}G_{1}=\frac{1}{\sqrt{2}}\frac{\sqrt{\pi_{12}\pi_{13}}}{ \sqrt{\pi_{23}}}\sqrt{\sum_{k}p_{k}G_{k}}= \frac{1}{\sqrt{2}}\frac{\sqrt{\pi_{12}\pi_{23}\pi_{13}}}{\pi_{23}} \sqrt{\sum_{k}p_{k}G_{k}}. $$
(40)

Symmetric reasoning shows that also

$$ p_{2}G_{2} = \frac{1}{\sqrt{2}}\frac{\sqrt{\pi_{12}\pi_{23}\pi_{13}}} {\pi_{13}} \sqrt{\sum_{k}p_{k}G_{k}} $$
(41)
$$ p_{3}G_{3} = \frac{1}{\sqrt{2}}\frac{\sqrt{\pi_{12} \pi_{23}\pi_{13}}}{\pi_{12}} \sqrt{\sum_{k}p_{k}G_{k}} $$
(42)

Summing the terms in Eqs. 40–42, we have

$$ \sum_{k}p_{k}G_{k}=\frac{1}{\sqrt{2}}\sqrt{\pi_{12}\pi_{23}\pi_{13}} \sqrt{\sum_{k}p_{k}G_{k}} \left(\frac{1}{\pi_{23}}+\frac{1}{\pi_{13}}+\frac{1}{\pi_{12}}\right) $$
(43)

which in turn implies

$$ \sqrt{\sum_{k}p_{k}G_{k}}=\frac{1}{\sqrt{2}}\sqrt{\pi_{12}\pi_{23}\pi_{13}} \left(\frac{1}{\pi_{23}}+\frac{1}{\pi_{13}}+\frac{1}{\pi_{12}}\right) $$
(44)

From Eqs. 40–42 it then follows that

$$ p_{1}G_{1}=\frac{1}{2}\pi_{12}\pi_{13}\left(\frac{1}{\pi_{23}}+\frac{1}{\pi_{13}}+\frac{1}{\pi_{12}}\right) $$
(45)
$$ p_{2}G_{2}=\frac{1}{2}\pi_{12}\pi_{23}\left(\frac{1}{\pi_{23}}+\frac{1}{\pi_{13}}+\frac{1}{\pi_{12}}\right) $$
(46)
$$ p_{3}G_{3}=\frac{1}{2}\pi_{13}\pi_{23}\left(\frac{1}{\pi_{23}}+\frac{1}{\pi_{13}}+\frac{1}{\pi_{12}}\right) $$
(47)

Since

$$p_{i}=\pi_{ii}+\frac{1}{2}\sum_{j\neq i}\pi_{ij} $$
(48)

the \(p_i\)’s are uniquely determined by the \(\pi_{ij}\)’s. Given that the \(p_i\)’s are uniquely determined, it follows from Eqs. 40–42 that the \(G_i\)’s are also uniquely determined by the \(\pi_{ij}\)’s. This proves Theorem 3.\(\square\)
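To illustrate Theorem 3, here is a minimal Python round-trip check of the inverse mapping. Since Eqs. 10 and 11 are not reproduced in this appendix, the sketch generates the off-diagonal matching frequencies in the form implied by the algebra of Eq. 39, \(\pi_{ij}=2p_iG_ip_jG_j/\sum_k p_kG_k\) for i ≠ j, fills in the diagonal only so that Eq. 48 holds, and then recovers the p_i and G_i from Eqs. 45–48; the numerical values of p and G are arbitrary test choices.

```python
from math import isclose

def forward(p, G):
    """Match frequencies: pi_ij = 2 p_i G_i p_j G_j / sum_k p_k G_k for i != j
    (the form implied by Eq. 39); the diagonal is set so that Eq. 48 holds."""
    S = sum(pk * gk for pk, gk in zip(p, G))
    pi = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            if i != j:
                pi[i][j] = 2 * p[i] * G[i] * p[j] * G[j] / S
    for i in range(3):
        pi[i][i] = p[i] - 0.5 * sum(pi[i][j] for j in range(3) if j != i)
    return pi

def invert(pi):
    """Recover the p_i and G_i from the pi_ij, following Eqs. 45-48."""
    r = 1 / pi[1][2] + 1 / pi[0][2] + 1 / pi[0][1]
    pG = [0.5 * pi[0][1] * pi[0][2] * r,     # Eq. 45: p_1 G_1
          0.5 * pi[0][1] * pi[1][2] * r,     # Eq. 46: p_2 G_2
          0.5 * pi[0][2] * pi[1][2] * r]     # Eq. 47: p_3 G_3
    p = [pi[i][i] + 0.5 * sum(pi[i][j] for j in range(3) if j != i)
         for i in range(3)]                  # Eq. 48
    return p, [pg / pk for pg, pk in zip(pG, p)]

p, G = [0.5, 0.3, 0.2], [0.9, 0.6, 0.4]      # arbitrary test values
p_hat, G_hat = invert(forward(p, G))
assert all(isclose(a, b) for a, b in zip(p, p_hat))
assert all(isclose(a, b) for a, b in zip(G, G_hat))
```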

Proof of Theorem 6

Proof

The function \(F(\cdot)\) is continuous and strictly decreasing for p in the interval [1/2, 1], with \(F(1/2)=\frac{s-m}{s+m}\) and F(1) = 0. Therefore \(F^{-1}(\cdot)\) is a continuous, decreasing function from the interval \([0,\frac{s-m}{s+m}]\) onto [1/2, 1]. Our assumptions imply that the function \(\rho(\cdot)\) is a continuous, increasing function from (0, x*] onto the interval \((\rho_{0},\frac{s-m}{s+m}].\) Therefore there is a well-defined function \(p(x)=F^{-1}\left(\rho(x)\right)\) mapping the non-empty interval (0, x*] onto [1/2, 1). Since \(\rho(\cdot)\) is an increasing function and \(F^{-1}(\cdot)\) is a decreasing function, it must be that \(p(\cdot)\) is a decreasing function of x.

Let \(x\in (0,x^{*}]\) and suppose that the fraction p(x) of the population seeking matches is of the type that contributes x and the fraction 1 − p(x) is of the type that contributes 0. Then the expected payoff of a type x is \(\pi(x\vert x)b(x)-c(x)\) and the expected payoff of a type 0 is \(\pi(x\vert 0)b(x).\) By Eq. 3, the difference between the expected payoffs of the two types is \(\left[\pi(x\vert x)-\pi(x\vert 0)\right]b(x)-c(x)=F\left(p(x)\right)b(x)-c(x).\) From the definition of p(x), it follows that \(F\left(p(x)\right)=\rho(x)\) and hence \(b(x)F\left(p(x)\right)-c(x)=0.\) Therefore, when the fraction p(x) is of type x and the fraction 1 − p(x) is of type 0, the expected fitnesses of the two types are equal. A mutant individual of another type, who contributes a non-zero amount x′ ≠ x, will almost always meet either a type x or a type 0. The probability that a type x′ matches with a type x is the same as the probability that a type 0 matches with a type x. Therefore the fitness of a type x′ is \(\pi(x\vert 0)b(x)-c(x')<\pi(x\vert 0)b(x),\) which is the common expected payoff of the two incumbent types x and 0.\(\square\)
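The Python sketch below illustrates the equal-fitness construction in the proof of Theorem 6. The specific functional forms for F(p), b(x), and c(x) (and hence for ρ(x) = c(x)/b(x)) are hypothetical stand-ins chosen only to satisfy the stated monotonicity and boundary conditions; they are not the model's actual functions. The conditional matching probabilities are computed from Eqs. 1 and 37, treating the contributors (type x) as "type 1."

```python
# Hypothetical functional forms, chosen only to satisfy the assumptions of
# Theorem 6: F is continuous and decreasing on [1/2, 1] with
# F(1/2) = (s-m)/(s+m) and F(1) = 0, and rho(x) = c(x)/b(x) is increasing.
s, m = 3.0, 1.0
F = lambda p: (s - m) / (s + m) * 2.0 * (1.0 - p)
F_inv = lambda r: 1.0 - r * (s + m) / (2.0 * (s - m))
b = lambda x: 2.0                    # benefit conferred by a contribution of x (stand-in)
c = lambda x: x                      # cost of contributing x (stand-in)
rho = lambda x: c(x) / b(x)          # cost-benefit ratio, as used in the proof

x = 0.6                              # an incumbent contribution level in (0, x*]
p = F_inv(rho(x))                    # p(x) = F^{-1}(rho(x))

# Conditional matching probabilities from Eq. 1 and Eq. 37, with type x as "type 1":
pi_x_given_x = (1 - p) * F(p) + p    # pi(x|x)
pi_x_given_0 = p * (1 - F(p))        # pi(x|0)

payoff_x = pi_x_given_x * b(x) - c(x)        # expected payoff of a contributor
payoff_0 = pi_x_given_0 * b(x)               # expected payoff of a non-contributor
print(payoff_x, payoff_0)                    # equal: both incumbent types earn 0.98 here

x_mutant = 0.4                               # a rare mutant contributing x' != x, x' > 0
payoff_mutant = pi_x_given_0 * b(x) - c(x_mutant)   # it meets type x as often as a type 0 does
print(payoff_mutant < payoff_0)              # True: the mutant earns strictly less
```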


Cite this article

Bergstrom, T.C. Measures of Assortativity. Biol Theory 8, 133–141 (2013). https://doi.org/10.1007/s13752-013-0105-3
