
Machine learning, misinformation, and citizen science

  • Paper in the Philosophy of the Social Sciences and Humanities
  • Published in: European Journal for Philosophy of Science

Abstract

Current methods of operationalizing concepts of misinformation in machine learning are often problematic given idiosyncrasies in their success conditions compared to other models employed in the natural and social sciences. The intrinsic value-ladenness of misinformation and the dynamic relationship between citizens’ and social scientists’ concepts of misinformation jointly suggest that both the construct legitimacy and the construct validity of these models need to be assessed via more democratic criteria than has previously been recognized.


Notes

  1. See Murphy (2022) for details on these methods.

  2. This is not to be confused with how the workflow ought to be constructed; as I will argue later, there are many issues with the procedure as it is typically practiced.

  3. See Franklin (2016, 229-240) for a discussion of how varying thresholds for statistical significance have even decided the very ontology of sub-atomic particles.

  4. See Brockwell and Davis (2016, 13) for a precise mathematical definition. In essence, a stationary time series has stable statistical properties in its first and second moments under any time-lag shift of the series.
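The weak (second-order) stationarity condition referenced in this note can be stated compactly; the following is the standard textbook formulation (cf. Brockwell and Davis), not a verbatim quotation of their definition:

```latex
% Weak (second-order) stationarity of a time series {X_t}:
% constant mean, and autocovariance depending only on the lag h, not on t.
\mathbb{E}[X_t] = \mu \quad \text{for all } t,
\qquad
\operatorname{Cov}(X_t,\, X_{t+h}) = \gamma(h) \quad \text{for all } t \text{ and each lag } h.
```

That is, the first moment is constant in time and the second moments are invariant under any shift of the series, which is the sense of "stable statistical properties" intended above.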

  5. See Hecker et al. (2018) for an overview of recent literature on global citizen science initiatives and their philosophies.

References

  • Abouzeid, A., Granmo, O. C., Webersik, C., & Goodwin, M. (2021). Learning automata-based misinformation mitigation via hawkes processes. Information Systems Frontiers, 23, 1169–1188.

  • Alenezi, M. N., & Alqenaei, Z. M. (2021). Machine learning in detecting COVID-19 misinformation on twitter. Future Internet, 13(244), 1–20.

  • Alexandrova, A. (2017). A philosophy for the science of well-being. Oxford University Press.

  • Arendt, H. [1953] (1976). The origins of totalitarianism. Houghton Mifflin Harcourt Publishing Company.

  • Berger, M. (2021). Singapore invokes ‘fake news’ law in push against anti-vaccine website. Washington Post. https://www.washingtonpost.com/world/2021/10/25/singapore-fake-news-law-anti-vaxxer-coronavirus/. Accessed 4 Sept 2022.

  • Brennan, J. (2016). Against democracy. Princeton University Press.

  • Brockwell, P. J., & Davis, R. A. (2016). Introduction to time series and forecasting. 3rd Edition. Springer.

  • Caled, D., & Silva, M. J. (2022). Digital media and misinformation: An outlook on multidisciplinary strategies against manipulation. Journal of Computational Social Science, 5, 123–159.

  • Castillo, C., Mendoza, M., & Poblete, B. (2012). Predicting information credibility in time-sensitive social media. Internet Research, 23(5), 560–588.

  • Chatham, M. L. (2008). The death of George Washington: An end to the controversy? The American Surgeon, 74(8), 770–774.

  • Cheng, L., Guo, R., Shu, K., & Liu, H. (2021). Causal understanding of fake news dissemination on social media. KDD ’21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 148-157.

  • Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

  • Coan, T. G., Boussalis, C., Cook, J., & Nanko, M. O. (2021). Computer-assisted classification of contrarian claims about climate change. Nature Scientific Reports, 11(22320), 1–12.

  • de Ridder, J. (2022). Online illusions of understanding. Social Epistemology. https://doi.org/10.1080/02691728.2022.2151331

  • Dretske, F. (1983). Précis of knowledge and the flow of information. Behavioral and Brain Sciences, 6(1), 55–90.

  • Du, J., Preston, S., Sun, H., Shegog, R., Cunningham, R., Boom, J., Savas, L., Amith, M., & Tao, C. (2021). Using machine learning-based approaches for the detection and classification of human papillomavirus vaccine misinformation: Infodemiology study of reddit discussions. Journal of Medical Internet Research, 23(98), 1–12.

  • Elgin, C. (2017) True enough. MIT Press.

  • Fallis, D., & Mathiesen, K. (2019). Fake news is counterfeit news. Inquiry, 1–20.

  • Fallis, D. (2015). What is disinformation? Library Trends, 63(3), 401–426.

  • Feest, U. (2020). Construct validity in psychological tests - the case of implicit social cognition. European Journal for Philosophy of Science, 10(4), 1–24.

  • Floridi, L. (2011). Philosophy of Information. Oxford University Press.

  • Franklin, A. (2016). What makes a good experiment? The University of Pittsburgh Press.

  • Gillies, D. A. (1971). A falsifying rule for probability statements. British Journal for Philosophy of Science, 22, 231–261.

  • Goldenberg, M. (2021). Vaccine hesitancy. University of Pittsburgh Press.

  • Government of Canada. (2022a). Online disinformation. Government of Canada. https://www.canada.ca/en/canadian-heritage/services/online-disinformation.html. Accessed 31 Aug 2022.

  • Government of Canada. (2022b). Canada’s efforts to counter disinformation - Russian invasion of Ukraine. Government of Canada. https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/response_conflict-reponse_conflits/crisis-crises/ukraine-disinfo-desinfo.aspx?lang=eng

  • Gruppi, M., Horne, B. D., & Adali, S. (2021). Workshop proceedings of the 15th international AAAI conference on web and social media. Association for the Advancement of Artificial Intelligence, 1-10.

  • Guerrero, A. (2014). Against elections: The lottocratic alternative. Philosophy & Public Affairs, 42(2), 135–178.

  • Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(5686), 1–8.

  • Habgood-Coote, J. (2019). Stop talking about fake news! Inquiry, 62(9–10), 1033–1065.

  • Hecker, S., Haklay, M., Bowser, A., Makuch, Z., Vogel, J., & Bonn, A. (2018). Citizen science: Innovation in open science, society and policy. UCL Press.

  • Horne, B. D. (2020). Robust news veracity detection. Rensselaer Polytechnic Institute. Dissertation.

  • Horne, B. D., Gruppi, M., & Adali, S. (2020). Do all good actors look the same? Exploring news veracity detection across the U.S. and the U.K. Association for the Advancement of Artificial Intelligence, 1-4.

  • Hou, R., Pérez-Rosas, V., Loeb, S., & Mihalcea, R. (2019). Towards automatic detection of misinformation in online medical videos. Proceedings of the 2019 International Conference on Multimodal Interaction, 235–243.

  • Islam, M. R., Liu, S., Wang, X., & Xu, G. (2020). Deep learning for misinformation detection on online social networks: A survey and new perspectives. Social Network Analysis and Mining, 10(82), 1–20.

  • Jin, Z., Cao, J., Zhang, Y., Zhou, J., & Tian, Q. (2017). Novel visual and statistical image features for microblogs news verification. IEEE Transactions on Multimedia, 19(3), 598–608.

  • John, O. P., & Soto, C. J. (2007). The importance of being valid: Reliability and the process of construct validation. In Richard W. Robins, R. Chris Fraley, & Robert F. Krueger (Ed.), Handbook of research methods in personality psychology, (pp. 461–94). Guilford.

  • Khan, J. Y., Khondaker, M. T. I., Afroz, S., Uddin, G., & Iqbal, A. (2021). A benchmark study of machine learning models for online fake news detection. Machine Learning with Applications, 4(100032), 1–12.

  • Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48(1), 19–49.

  • Longino, H. (2022). What’s social about social epistemology? Journal of Philosophy, 119(4), 169–195.

  • Mahase, E. (2021). Covid-19: US suspends Johnson and Johnson vaccine rollout over blood clots. British Medical Journal, 373(970), 1.

  • Mishra, S., Shukla, P., & Agarwal, R. (2022). Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wireless Communications and Mobile Computing,1–18. https://doi.org/10.1155/2022/1575365

  • Murphy, K. P. (2022). Probabilistic machine learning: An introduction. The MIT Press.

  • Nevo, D., & Horne, B. D. (2022). How topic novelty impacts the effectiveness of news veracity interventions. Communications of the ACM, 65(2), 68–75.

  • Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161.

  • Oreskes, N. (2004). The scientific consensus on climate change. Science, 306(5702), 1686.

  • Osman, M., Adams, Z., & Meder, B. (2022). People’s understanding of the concept of misinformation. Journal of Risk Research, 25(10), 1239–1258.

  • Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402.

  • Reporters Without Borders. (2022). The ranking. Reporters Without Borders, https://rsf.org/en/ranking. Accessed 4 Sept 2022.

  • Republic of Singapore. (2021). Protection from online falsehoods and manipulation act 2019. The Statutes of the Republic of Singapore.

  • Robinson, L. D., Cawthray, J. L., West, S. E., Bonn, A., & Ansine, J. (2018). Ten principles of citizen science. In S. Hecker, M. Haklay, A. Bowser, Z. Makuch, J. Vogel, & A. Bonn (Eds.), Citizen science: Innovation in open science, society and policy (pp. 1–3). UCL Press.

  • Shao, C., Hui, P. M., Wang, L., Jiang, X., Flammini, A., Menczer, F., & Ciampaglia, G. L. (2018). Anatomy of an online misinformation network. PLoS ONE, 13(4), 1–23.

  • Shu, K., & Liu, H. (2019). Detecting fake news on social media. Morgan & Claypool.

  • Søe, S. O. (2018). Algorithmic detection of misinformation and disinformation: Gricean perspectives. Journal of Documentation, 74(2), 309–332.

  • Sorokin, A., & Forsyth, D. (2008). Utility data annotation with amazon mechanical turk. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 1-8. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4562953. Accessed 11 Oct 2022.

  • Stone, C. (2019). A defense and definition of construct validity in psychology. Philosophy of Science, 86, 1250–1261.

  • Swire-Thompson, B., & Lazer D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41, 433–451.

  • Tromble, R. (2021). Where have all the data gone? A critical reflection on academic digital research in the post-API age. Social Media + Society, 7(1), 1–8.

  • United Nations Human Rights Council. (2018). Report of the detailed findings of the independent international fact-finding mission on Myanmar. United Nations Human Rights Council. https://www.ohchr.org/sites/default/files/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_CRP.2.pdf, Accessed 3 Oct 2022.

  • van Fraassen, B. (1980). The scientific image. Oxford University Press.

  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359, 1146–1151.

  • Yee, A. K. (2023). Information deprivation and democratic engagement. Philosophy of Science, 90(5).

  • Zubiaga, A., Liakata, M., Procter, R., Wong Sak Hoi, G., & Tolmie, P. (2016). Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3), 1–29.

Acknowledgements

I thank the following for constructive feedback on ideas in this paper: Brian Baigrie, Franz Huber, Michael Miller, Regina Rini, Denis Walsh, the Pittsburgh HPS fringe theory group, the York University moral psychology lab, four anonymous reviewers, the Hong Kong Catastrophic Risk Centre for funding, and the Philosophy of Contemporary and Future Science research group at Lingnan University, Department of Philosophy. All errors and infelicities are mine alone.

Author information

Correspondence to Adrian K. Yee.


About this article

Cite this article

Yee, A.K. Machine learning, misinformation, and citizen science. Euro Jnl Phil Sci 13, 56 (2023). https://doi.org/10.1007/s13194-023-00558-1
