
Conformity in scientific networks


Abstract

Scientists are generally subject to social pressures, including pressures to conform with others in their communities, that affect the achievement of their epistemic goals. Here we analyze a network epistemology model in which agents, all else being equal, prefer to take actions that conform with those of their neighbors. This preference for conformity interacts with the agents’ beliefs about which of two (or more) possible actions yields the better result. We find a range of possible outcomes, including stable polarization in belief and action. The model results are sensitive to network structure. In general, though, conformity has a negative effect on a community’s ability to reach accurate consensus about the world.



Notes

  1. For this history, see Carter (2017) and Semmelweis (1983).

  2. This history is drawn from Grundy (1999).

  3. In addition, those researching online social networks have found that conformity seems to shape members’ choices. For example, seeing that a friend has “liked” something on Facebook doubles the chances that a user will “like” it themselves (Egebark and Ekström 2011).

  4. Of course, we cannot know that this was the case, and in the discussion we will consider alternative explanations for the behavior of physicians in these cases.

  5. For more work in philosophy of science using this sort of model, see Zollman (2010a), Mayo-Wilson et al. (2011), Kummerfeld and Zollman (2015), Holman and Bruner (2015, 2017), Weatherall et al. (2017), and O’Connor and Weatherall (2017, 2019). Recently a number of authors have discussed with greater care how these models can and cannot be applied to scientific communities (Rosenstock et al. 2017; Frey and Šešelja 2018; Borg et al. 2017).

  6. The density of a beta distribution on \(x \in [0,1]\) is \(f(x, \alpha , \beta ) = \frac{\Gamma (\alpha + \beta )}{\Gamma (\alpha )\Gamma (\beta )}x^{\alpha -1}(1-x)^{\beta -1}\), where \(\Gamma (y) = (y-1)!\) for positive integers \(y\). An intuitive understanding of the parameters \(\alpha \) and \(\beta \) is as representing the number of successes and failures garnered in a sample from a Bernoulli random variable. That is, if an agent pulls arm A five times and gets 3 successes, \(\alpha = 3\) and \(\beta = 2\). The distribution then specifies probabilities over values of the probability of success for A given this set of results. A code sketch following these notes illustrates this updating.

  7. While this choice is arbitrary, we follow Zollman (2010a). These values correspond to an agent who has gathered relatively little evidence, and so will tend to have beliefs that can be updated relatively quickly.

  8. Notice that on the intuitive understanding of a beta distribution this is the number of previous successes divided by the total number of tests of the arm.

  9. As will become clear, we restrict k to be non-negative. This means our actors cannot have an anti-conformist preference. Observe that as we have set things up, negative payoffs for failing to conform are possible. If one wished, one could define the same decision problem with only positive payoffs by performing an appropriate affine transformation. A sketch following these notes illustrates one such payoff scheme.

  10. Economists often investigate conformity using models where agents play a beauty-contest game. In such games, agents choose an action (say, setting a price) with the dual goal of matching some state of the world and coming as close as possible to the average choice. There is some similarity to what we do here, as at least some of these models involve networks of agents who gather data from the world and then make decisions based on desires for both accuracy and conformity (Hellwig and Veldkamp 2009; Myatt and Wallace 2011; Colombo et al. 2014). These models find that the desire for conformity can lead agents to seek out information that is already widely known. Our model deviates from these in that we focus on a two-choice problem. In addition, our agents share data and evidence, as is appropriate for a model of a scientific community.

  11. They are able to do this, in part, because each agent ‘dies’ after stating an opinion and is randomly replaced by a new agent. This creates a ‘public posterior’ that all agents share.

  12. In addition, as we will describe later, cycle networks often have perfectly symmetric social influences, which increase the chances that agents make statements based on their beliefs.

  13. Note, however, that stability is never guaranteed in this model, because of its inherent stochasticity. It is always possible that a run of misleading data will shift agents’ beliefs.

  14. The term ‘polarization’ has been used in different ways across the social sciences. We are referring to the general phenomenon in which two subgroups hold stably different beliefs, or even diverge in belief, over the course of deliberation. O’Connor and Weatherall (2017) show how this can happen in models like those presented here, i.e., models capturing aspects of a scientific community, but where actors do not conform; rather, they distrust evidence from those who hold beliefs different from their own. Bramson et al. (2017) survey the literature on modeling polarization.

  15. We use simulations here because analytic results would be intractable for this model. Network models, in general, are difficult to analyze because individuals are in highly asymmetric scenarios.

  16. We studied values of k ranging from 0 to 20, but focus on values \(\le 1\) since increasing k further did not strongly influence results.

  17. We focused on simulations where \(p_B = .51, .55, .6, .7\).

  18. We considered values of n = 1, 5, 10, 20, 50, and 100. This parameter influences how likely it is that a particular set of data will support the better action, B. When n is small, there are many spurious results, and when n is large there are fewer. Previous authors have found that this parameter can strongly influence the ability of agents in this sort of model to reach true beliefs (Rosenstock et al. 2017), though it will not be particularly important to the results we state here.

  19. Generally networks reached stable outcomes in \(\ll 1000\) rounds, but we ran them for longer to confirm stability. An exception was that in sparse networks, such as the cycle and some small world networks, for low \(p_B\) convergence took a long time. We avoid sharing results where simulations may not have reached stable outcomes.

  20. We ran 10,016 simulations for each combination of parameters of random graphs we considered, to increase our confidence that we were getting a reasonable sample of both possible networks and possible outcomes for each network.

  21. This algorithm is closely related to the Erdös–Rényi random graph model (Erdös and Rényi 1959), by which one samples, with uniform probability, from the collection of all graphs with N nodes and M edges. It is usually called G(n, p), but we did not wish to confuse readers by using the same variables for different parameters.

  22. Since the probability distribution over all networks generated by this algorithm is uniform, restricting to only connected networks still samples from all (connected) networks with equal probability. A sketch following these notes illustrates this sampling procedure.

  23. We never consider this possibility; \(k = 20\) is the closest we get.

  24. Observe that by this analysis, one would expect that the probability of arriving at an outcome where all agents perform action B is bounded from below by .5 across network structures. This is consistent with our results for all networks we consider in which polarization is not possible. The possibility of polarization, however, leads to worse outcomes still, because in such cases different subgroups can evolve to different outcomes essentially independently.

  25. Results are for G(N, q) random networks with linking probability \(q = .5\). For this figure \(n = 5\) and population size 10. Note that there is ‘polarization’ in the models where \(k=0\). This occurs when actors settle on multiple arms where \(p=.5\) and evidence therefore does not discriminate between them.

  26. For hard tasks, this effect reversed, presumably because uncertain people were depending on their peers to help them reach the right answer.

  27. This algorithm begins with a regular ring with N nodes of degree K, where K is some positive even number less than N and greater than \(\ln (N)\). To be clear: a cycle as we have discussed it above is a regular ring of degree 2: each agent is connected to two neighbors. In a ring of degree 4, meanwhile, each agent would be connected to their two neighbors as in a cycle, but also to their neighbors’ neighbors. And so on. Then, the algorithm randomly rewires some connections to create an irregular network. For each node \(n_i\) in the network, with \(i=1,\ldots ,N\), and each edge from \(n_i\) to \(n_j\), for \(i < j\), with probability b delete that edge and create a new edge between \(n_i\) and any other node \(n_k\), where \(n_k\) is drawn with uniform probability from the nodes, aside from \(n_i\) itself, to which \(n_i\) is not already connected. A sketch following these notes implements this procedure.

  28. We also considered smaller networks, but found that for small N, one could not vary K enough to find dependencies while remaining in the “small world” regime of small K and low b. We considered only \(p_B=.55\) and \(n=5\), because these simulations took a long time to run.

  29. We confirmed that these results were stable by running simulations for a sampling of parameter values for 100,000 rounds and confirming that the proportions of outcomes did not change from the standard 10,000 round cases.

  30. Random refers to G(N, q) networks with \(q = .5\), i.e., a uniform sampling over all connected networks of size 10.

  31. Of course, these shocks could lead a clique to move from a good action to a worse action. This is relatively less likely, though, since the actors’ good beliefs in these models push them away from bad actions.

  32. Readers may be interested in a related paper by Imbert et al. (2019), who look at the dynamics of deliberating groups to consider how to decrease misrepresentation of one’s beliefs due to conformist tendencies.

  33. In particular, we find that in the cycle network, for fixed values of \(p_B\), n, and N, increasing the value of k can decrease the number of rounds needed to reach a state in which the whole network performs the better action by 25% or more, while having a minimal impact on the fraction of runs in which the true action is achieved.

  34. Furthermore, these pathways suggest nearly completely different interventions. See O’Connor and Weatherall (2019) for a discussion of this point.

  35. Previous authors have delved into the usefulness of network epistemology models, with particular attention to the robustness of results; see Frey and Šešelja (2018) and Rosenstock et al. (2017). As they point out, sometimes small changes in these models lead to significantly different results, raising questions about real-world applicability.
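
The following sketches are illustrative only: minimal Python renderings of constructions described in the notes above, with names and details of our own choosing, not the authors’ simulation code. First, a sketch of the belief updating described in notes 6 and 8, assuming the “intuitive understanding” on which \(\alpha \) and \(\beta \) simply count observed successes and failures:

    class Agent:
        def __init__(self, alpha, beta):
            self.alpha = alpha  # successes observed for this arm
            self.beta = beta    # failures observed for this arm

        def expected_success(self):
            # Mean of the beta distribution: previous successes divided
            # by the total number of tests of the arm (note 8).
            return self.alpha / (self.alpha + self.beta)

        def update(self, successes, failures):
            # Conjugate update: add the new counts to the parameters.
            self.alpha += successes
            self.beta += failures

    # The example from note 6: five pulls of arm A with three successes.
    agent = Agent(alpha=3, beta=2)
    print(agent.expected_success())  # 0.6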
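
Note 9 does not reproduce the exact payoff function, so the next sketch assumes, purely for illustration, a linear conformity term: the hypothetical total_payoff adds k times the net number of conforming neighbors to an action’s expected epistemic payoff, which is why payoffs can be negative when most neighbors act differently.

    def total_payoff(expected_payoff, k, n_conforming, n_nonconforming):
        # k >= 0: agents may value conformity, never anti-conformity (note 9).
        assert k >= 0
        # Hypothetical social term; negative when most neighbors differ.
        social = k * (n_conforming - n_nonconforming)
        return expected_payoff + social

    # 0.6 + 0.5 * (1 - 3) = -0.4; adding a large enough constant (an
    # affine transformation) would make all payoffs positive without
    # changing any agent's choices.
    print(total_payoff(expected_payoff=0.6, k=0.5, n_conforming=1, n_nonconforming=3))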
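
A sketch of the network sampling in notes 21 and 22, assuming rejection sampling: draw each possible edge independently with probability q and discard disconnected graphs, which preserves the uniform distribution over connected networks.

    import random
    from collections import deque

    def random_graph(N, q):
        # Include each of the N(N-1)/2 possible edges with probability q.
        return {(i, j) for i in range(N) for j in range(i + 1, N)
                if random.random() < q}

    def is_connected(N, edges):
        # Breadth-first search from node 0.
        adj = {i: set() for i in range(N)}
        for i, j in edges:
            adj[i].add(j)
            adj[j].add(i)
        seen, queue = {0}, deque([0])
        while queue:
            for nb in adj[queue.popleft()] - seen:
                seen.add(nb)
                queue.append(nb)
        return len(seen) == N

    def connected_random_graph(N=10, q=0.5):
        # Resample until the graph is connected (note 22).
        while True:
            edges = random_graph(N, q)
            if is_connected(N, edges):
                return edges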
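
Finally, a sketch of the small-world construction described in note 27, following Watts and Strogatz (1998):

    import math
    import random

    def small_world(N, K, b):
        # K must be even, with ln(N) < K < N (note 27).
        assert K % 2 == 0 and math.log(N) < K < N
        # Regular ring: link each node to K/2 neighbors on each side.
        adj = {i: set() for i in range(N)}
        for i in range(N):
            for step in range(1, K // 2 + 1):
                j = (i + step) % N
                adj[i].add(j)
                adj[j].add(i)
        # Rewire each edge (i, j), i < j, with probability b, to a node k
        # chosen uniformly from the non-neighbors of i (excluding i).
        for i in range(N):
            for j in sorted(n for n in adj[i] if n > i):
                if random.random() < b:
                    candidates = [k for k in range(N)
                                  if k != i and k not in adj[i]]
                    if not candidates:
                        continue  # i is already linked to every node
                    k = random.choice(candidates)
                    adj[i].remove(j); adj[j].remove(i)
                    adj[i].add(k); adj[k].add(i)
        return adj

    # Example: a 20-node ring of degree 4, rewired with probability 0.1.
    net = small_world(N=20, K=4, b=0.1)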

References

  • Asch, S. E., & Guetzkow, H. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership, and men (pp. 222–236). Pittsburgh: Carnegie Press.

  • Bala, V., & Goyal, S. (1998). Learning from neighbors. Review of Economic Studies, 65(3), 595–621.

  • Banerjee, A. V. (1992). A simple model of herd behavior. The Quarterly Journal of Economics, 107(3), 797–817.

  • Baron, R. S., Vandello, J. A., & Brunsman, B. (1996). The forgotten variable in conformity research: Impact of task importance on social influence. Journal of Personality and Social Psychology, 71(5), 915.

  • Berger, S., Feldhaus, C., & Ockenfels, A. (2018). A shared identity promotes herding in an information cascade game. Journal of the Economic Science Association, 4(1), 63–72.

  • Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992–1026.

  • Bikhchandani, S., Hirshleifer, D., & Welch, I. (1998). Learning from the behavior of others: Conformity, fads, and informational cascades. The Journal of Economic Perspectives, 12(3), 151–170.

  • Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies using Asch’s (1952b, 1956) line judgment task. Psychological Bulletin, 119(1), 111.

  • Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2017). Examining network effects in an argumentative agent-based model of scientific inquiry. In International workshop on logic, rationality and interaction (pp. 391–406). Berlin: Springer.

  • Bramson, A., Grim, P., Singer, D. J., Berger, W. J., Sack, G., Fisher, S., et al. (2017). Understanding polarization: Meanings, measures, and model evaluation. Philosophy of Science, 84(1), 115–159.

  • Carter, K. C. (2017). Childbed fever: A scientific biography of Ignaz Semmelweis. London: Routledge.

  • Colombo, L., Femminis, G., & Pavan, A. (2014). Information acquisition and welfare. The Review of Economic Studies, 81(4), 1438–1483.

  • Condorcet, M. D. (1785). Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix.

  • Egebark, J., & Ekström, M. (2011). Like what you like or like what others like? Conformity and peer effects on Facebook (working paper).

  • Erdös, P., & Rényi, A. (1959). On random graphs I. Publicationes Mathematicae Debrecen, 6, 290–297.

  • Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1), 298–320.

  • Frey, D., & Šešelja, D. (2018). What is the epistemic function of highly idealized agent-based models of scientific inquiry? Philosophy of the Social Sciences, 48(4), 407–433.

  • Frey, D., & Šešelja, D. (2019). Robustness and idealizations in agent-based models of scientific interaction. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy039.

  • Gilbert, E. N. (1959). Random graphs. The Annals of Mathematical Statistics, 30(4), 1141–1144.

  • Grundy, I. (1999). Lady Mary Wortley Montagu. Oxford: Clarendon Press.

  • Hellwig, C., & Veldkamp, L. (2009). Knowing what others know: Coordination motives in information acquisition. The Review of Economic Studies, 76(1), 223–251.

  • Holman, B., & Bruner, J. (2017). Experimentation by industrial selection. Philosophy of Science, 84(5), 1008–1019.

  • Holman, B., & Bruner, J. P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968.

  • Imbert, C., Boyer-Kassem, T., Chevrier, V., & Bourjot, C. (2019). Improving deliberations by reducing misrepresentation effects. Episteme, 3, 1–17.

  • Kummerfeld, E., & Zollman, K. J. (2015). Conservatism and the scientific state of nature. The British Journal for the Philosophy of Science, 67(4), 1057–1076.

  • Mayo-Wilson, C., Zollman, K. J., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.

  • Mohseni, A., & Williams, C. R. (2017). Truth and conformity on networks (working paper).

  • Myatt, D. P., & Wallace, C. (2011). Endogenous information acquisition in coordination games. The Review of Economic Studies, 79(1), 340–374.

  • Newman, M. E. (2001). The structure of scientific collaboration networks. Proceedings of the National Academy of Sciences, 98(2), 404–409.

  • O’Connor, C., & Weatherall, J. O. (2017). Scientific polarization. arXiv:1712.04561 [cs.SI].

  • O’Connor, C., & Weatherall, J. O. (2019). The misinformation age: How false beliefs spread. New Haven: Yale University Press.

  • Onnela, J.-P., Saramäki, J., Hyvönen, J., Szabó, G., Lazer, D., Kaski, K., et al. (2007). Structure and tie strengths in mobile communication networks. Proceedings of the National Academy of Sciences, 104(18), 7332–7336.

  • Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. New York: Penguin.

  • Rosenstock, S., Bruner, J., & O’Connor, C. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.

  • Semmelweis, I. F. (1983). The etiology, concept, and prophylaxis of childbed fever. No. 2. Madison: University of Wisconsin Press.

  • Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442.

  • Weatherall, J. O., O’Connor, C., & Bruner, J. (2017). How to beat science and influence people. arXiv:1801.01239 [cs.SI].

  • Zollman, K. J. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.

  • Zollman, K. J. (2010a). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.

  • Zollman, K. J. S. (2010b). Social structure and the effects of conformity. Synthese, 172(3), 317–340.


Acknowledgements

This paper is partially based upon work supported by the National Science Foundation under Grant No. 1535139. We are grateful to Jeff Barrett, Aydin Mohseni, and Mike Schneider for helpful conversations related to this manuscript.

Author information


Correspondence to James Owen Weatherall.

Additional information


This paper was previously distributed and cited with the title “Do as I Say, Not as I Do, or, Conformity in Scientific Networks”.


Cite this article

Weatherall, J.O., O’Connor, C. Conformity in scientific networks. Synthese 198, 7257–7278 (2021). https://doi.org/10.1007/s11229-019-02520-2

