Abstract
Search engines are important contemporary sources of information and contribute to shaping our beliefs about the world. Each time they are consulted, various algorithms filter and order content to show us relevant results for the inputted search query. Because these search engines are frequently and widely consulted, it is necessary to have a clear understanding of the distinctively epistemic role that these algorithms play in the background of our online experiences. To aid in such understanding, this paper argues that search engine algorithms are providers of “bent testimony”—that, within certain contexts of interactions, users act as if these algorithms provide them with testimony—and acquire or alter beliefs on that basis. Specifically, we treat search engine algorithms as if they were asserting as true the content ordered at the top of a search results page—which has interesting parallels with how we might treat an ordinary testifier. As such, existing discussions in the philosophy of testimony can help us better understand and, in turn, improve our interactions with search engines. By explicating the mechanisms by which we come to accept this “bent testimony,” our paper discusses methods to help us control our epistemic reliance on search engine algorithms and clarifies the normative expectations one ought to place on the search engines that deploy these algorithms.
Notes
The terms “recommendation systems” or “content-filtering algorithms” are sometimes used in the computer science literature, as well as in popular discourse, to refer to the algorithms that search engines use. To stay clear of any potential terminological inconsistencies that might arise with the use of these more precise terms, we use the more generic term “search engine algorithms” throughout.
For a discussion on the ubiquity and use of such algorithms, see Chaney et al. (2018).
Since Google is currently by far the most popular search engine (cf. StatCounter, n.d.), throughout this paper, we interchangeably use the phrases “Google’s algorithms” to refer to search engine algorithms and “googling” to refer to online searching.
Although Gunn and Lynch’s account focuses primarily on our epistemic reliance on those who produce the content that we engage with online, one might also consider how we are reliant on fellow Google users—whose data and search habits improve the quality of the recommendations Google provides to us—as well as Google’s engineers who designed these algorithms. On a broader reading of their claim that googling resembles testimony because it is “dependent on the beliefs and actions of others,” one might reasonably extend their account to include these other groups and thereby build a more comprehensive view of how googling resembles testimony. Gunn and Lynch, however, do not explicitly mention these other groups, and our own suggested improvement for their view (i.e., that we should better consider our epistemic reliance on search engine algorithms themselves) still holds even on this extended version of their account. As a result, we did not discuss this extension in our main text, but instead raise it as a footnote for interested readers.
Gunn and Lynch do acknowledge that googling is a “preference-dependent” mode of inquiry—Google tells us who to consult based on its assessment of what we might like and what links we will click (p. 43). Preference dependence is, of course, one important way in which google differs from “xoogle.” However, they do not specify how exactly this feature of preference dependence is relevant to their account of how googling resembles testimony. As such, a reading that the only relevant consideration for them is our dependence on the actions and beliefs of others is, while perhaps strict, not uncharacteristic.
More recently, Google has implemented a feature called “Quick Answers” that highlights at the top of the first page of search results a single answer to certain types of simple search queries—like “Who is the Prime Minister of Singapore?”, or “What’s two plus six?”—where single answers are possible. But even so, below this highlighted “quick answer,” you would still find the set of ordered links as you would in an ordinary Google search.
This is often, but not necessarily, the case. The articles at the top of a search result have high ‘similarity scores’, are hosted on websites of comparable repute, and are closely related to the search input we type in. If, for instance, two sources deemed reputable by the algorithm (say, The New York Times and The Washington Post) make dramatically different assertions about a topic, these might appear together at the top of a search results page. It is hard to say how often this happens.
For details on the design of Google’s ‘PageRank’ algorithm, see Page (2006).
Some might prefer to avoid using a well-explored term like “testimony”—with or without a modifier—in favour of terms like “evidence” or “influence.” But, for this paper, it does not matter so much what we call it. We would only have to engage in these terminological discussions if we were arguing that search engine algorithms were, in fact, providing testimony (and, perhaps, not just “evidence”). Hopefully even those who prefer a more restricted use of “testimony” would agree that, in some cases, we might act as if someone (or something) was giving us testimony and acquire/alter beliefs on this basis, even if they, in fact, were not testifying. This concession is all we need for the arguments in the paper to proceed.
Despite opening up the possibility of algorithmic testimony for this essay, Lackey seems to be among those scholars who believe only humans can testify (2008, p. 189). See Freiman and Miller (2020) for a more comprehensive engagement with her concerns about non-human testimony.
Freiman and Miller (2020) are among those who believe that there is a meaningful distinction to be drawn between the testimony of humans and algorithms. Their use of the modifier “quasi” while discussing algorithmic testimony is mainly to emphasize this distinction. Despite this, they believe that we might similarly acquire or alter beliefs on the basis of both ordinary testimony as well as “quasi” testimony.
Freiman and Miller suggest that when a machine’s output, by design, resembles human testimony (e.g., an automated announcement in a natural language on a loud speaker), the machine’s designers “count on its users to correctly decipher the meaning of the output and correctly assess its validity because they recognize the testimony-like epistemic norms under which the output is produced” (p. 13). In turn, when users expect this machine output to conform to epistemic norms that would be in place for similar interactions (e.g., an announcement on a loud speaker made by a human), we are treating this machine as a “quasi-testifier.” See Freiman and Miller (2020, pp. 11–14) for a more thorough discussion of quasi-testimony.
References
Adler, J. (2017). Epistemological problems of testimony. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2017). Metaphysics Research Lab, Stanford University. Retrieved 15 Jan 2021, from https://plato.stanford.edu/archives/win2017/entries/testimony-episprob/
Bermúdez, J. L. (2007). Thinking without words. Oxford University Press.
Boutin, P. (2011, May 20). Your Results May Vary. Wall Street Journal. Retrieved 15 Jan 2021, from https://online.wsj.com/article/SB10001424052748703421204576327414266287254.html
Chaney, A. J. B., Stewart, B. M., & Engelhardt, B. E. (2018). How Algorithmic confounding in recommendation systems increases homogeneity and decreases utility. ArXiv:1710.11214 [Cs, Stat]. https://doi.org/10.1145/3240323.3240370
Coady, C. A. J. (1992). Testimony: A philosophical study. Clarendon Press.
De Cremer, D., McGuire, J., Hesselbarth, Y., & Mai, M. (2019). Social intelligence at work: Can AI help you to trust your new colleague. Harvard Business Review. 4 June. Retrieved 10 Jan 2021, from https://hbr.org/2019/06/can-algorithms-help-us-decide-who-to-trust
Diaz, A. (2008). Through the Google goggles: Sociopolitical bias in search engine design. In Web search (pp. 11–34). Springer.
Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112(33), E4512–E4521. https://doi.org/10.1073/pnas.1419828112
Freiman, O., & Miller, B. (2020, May 7). Can artificial entities assert? The Oxford Handbook of Assertion. https://doi.org/10.1093/oxfordhb/9780190675233.013.36
Google Privacy & Terms. (n.d.). Frequently asked questions. Retrieved 24 January 2022, from https://policies.google.com/faq?hl=en-US
Google Search. (n.d.-a). Ranking results. Retrieved 24 January 2022, from https://www.google.com/intl/en_in/search/howsearchworks/how-search-works/ranking-results/#context
Google Search. (n.d.-b). Rigorous testing. Retrieved 24 January 2022, from https://www.google.com/intl/en_in/search/howsearchworks/how-search-works/rigorous-testing/
Google Search. (n.d.-c). Search quality rating guidelines. Retrieved 24 January 2022, from https://static.googleusercontent.com/media/guidelines.raterhub.com/en//searchqualityevaluatorguidelines.pdf
Google Search. (n.d.-d). Features. Retrieved 24 January 2022, from https://www.google.com/intl/en_in/search/howsearchworks/features/
Grimmelmann, J. (2011). Some skepticism about search neutrality. In B. Szoka & A. Marcus (Eds.), The next digital decade: Essays on the future of the internet (pp. 435–459). TechFreedom. Retrieved 15 Jan 2021, from https://papers.ssrn.com/abstract=1742444
Gunn, H. K., & Lynch, M. P. (2018, September 3). Googling. The Routledge Handbook of Applied Epistemology; Routledge. https://doi.org/10.4324/9781315679099-4
Hillis, K., Petit, M., & Jarrett, K. (2012). Google and the Culture of Search. Routledge. https://doi.org/10.4324/9780203846261
Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the politics of search engines matters. The Information Society, 16(3), 169–185.
Jansen, B. J., Spink, A., & Saracevic, T. (2000). Real life, real users, and real needs: A study and analysis of user queries on the web. Information Processing & Management, 36(2), 207–227. https://doi.org/10.1016/S0306-4573(99)00056-4
Jansen, B. J., & Spink, A. (2006). How are we searching the World Wide Web? A comparison of nine search engine transaction logs. Information Processing & Management, 42(1), 248–263.
Jansen, B. J., Zhang, M., & Schultz, C. D. (2009). Brand and its effect on user perception of search engine performance. Journal of the American Society for Information Science and Technology, 60(8), 1572–1595. https://doi.org/10.1002/asi.21081
Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., & Gay, G. (2007). Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information Systems (TOIS), 25(2), 7-es.
Lackey, J. (2008). Learning from words: Testimony as a source of knowledge. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199219162.001.0001
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Latour, B. (2012). We have never been modern. Harvard University Press.
McKinsey (2011). The impact of internet technologies: Search. Retrieved 24 January 2022, from https://www.mckinsey.com/~/media/mckinsey/dotcom/client_service/high%20tech/pdfs/impact_of_internet_technologies_search_final2.aspx
Miller, B., & Record, I. (2013). Justified belief in a digital age: On the epistemic implications of secret Internet technologies. Episteme, 10(2), 117–134. https://doi.org/10.1017/epi.2013.11
Nentwich, M., & König, R. (2012). Cyberscience 2.0: Research in the age of digital social networks. Campus Verlag.
Nickel, P. (2013). Artificial speech and its authors. Minds and Machines, 23(4), 489–502.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press. https://doi.org/10.2307/j.ctt1pwt9w5
Oulasvirta, A., Hukkinen, J. P., & Schwartz, B. (2009). When more is less: The paradox of choice in search engine use. Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 516–523. https://doi.org/10.1145/1571941.1572030
Page, L. (2006). Method for node ranking in a linked database (United States Patent No. US7058628B1). https://patents.google.com/patent/US7058628B1/en
Pagin, P. (2016). Assertion. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2016/entries/assertion/
Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google We Trust: Users’ Decisions on Rank, Position, and Relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x
Pasquale, F. (2015). The black box society. Harvard University Press.
Rescorla, M. (2009). Chrysippus’ dog as a case study in non-linguistic cognition. In R. Luiz (Ed.), The Philosophy of Animal Minds (pp. 52–71). Cambridge University Press.
Rini, R. (2017). Fake News and Partisan Epistemology. Kennedy Institute of Ethics Journal, 27(S2), 43–64. https://doi.org/10.1353/ken.2017.0025
Roesler, P. (2021, October 16). How Google’s Latest Search Redesign May Impact Your Business. Inc.Com. Retrieved 22 Jan 2022, from https://www.inc.com/peter-roesler/how-googles-latest-search-redesign-may-impact-your-business.html
Rothman, L. (2018). 20 Years of Google has changed the way we think. Here’s how, according to a historian of information. Time. Retrieved 24 Jan 2022, from https://time.com/5383389/google-history-search-information/
Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge University Press. https://doi.org/10.1017/CBO9781139173438
SimilarWeb. (2020). Google.com Traffic Statistics. Retrieved 15 November 2020, from http://similarweb.com/website/google.com/
StatCounter. (n.d.). Search engine market share worldwide. StatCounter Global Stats. Retrieved 8 November 2020, from https://gs.statcounter.com/search-engine-market-share
Tollefsen, D. P. (2009). Wikipedia and the epistemology of testimony. Episteme, 6(1), 8–24. https://doi.org/10.3366/E1742360008000518
Tuomela, R. (1992). Group beliefs. Synthese, 91(3), 285–318.
Vaidhyanathan, S. (2012). The googlization of everything: (And why we should worry). University of California Press.
Van Deursen, A. J., & Van Dijk, J. A. (2014). Digital skills: Unlocking the information society. Springer.
Verma, S., Gao, R., & Shah, C. (2020). Facets of fairness in search and recommendation. In L. Boratto, S. Faralli, M. Marras, & G. Stilo (Eds.), Bias and Social Aspects in Search and Recommendation (pp. 1–11). Springer International Publishing. https://doi.org/10.1007/978-3-030-52485-2_1
Weisberg, J. (2011, June 10). Eli Pariser’s The Filter Bubble: Is web personalization turning us into solipsistic twits? Slate Magazine. Retrieved 15 Jan 2021, from https://slate.com/news-and-politics/2011/06/eli-pariser-s-the-filter-bubble-is-web-personalization-turning-us-into-solipsistic-twits.html
Westerwick, A. (2013). Effects of sponsorship, web site design, and Google ranking on the credibility of online information. Journal of Computer-Mediated Communication, 18(2), 194–211. https://doi.org/10.1111/jcc4.12006
Wheeler, B. (2020). Reliabilism and the testimony of robots. Techné: Research in Philosophy and Technology, 24(3), 332–356.
This article is part of the Topical Collection on Information in Interactions between Humans and Machines
Narayanan, D., De Cremer, D. “Google Told Me So!” On the Bent Testimony of Search Engine Algorithms. Philos. Technol. 35, 22 (2022). https://doi.org/10.1007/s13347-022-00521-7