Abstract
In Homo Deus, Yuval Noah Harari argues that the technological advances of the twenty-first century will usher in a significant shift in how humans make important life decisions. Instead of turning to the Bible or the Quran, to the heart, or to our therapists, parents, and mentors, people will turn to Big Data recommendation algorithms to make these choices for them. Much as we rely on Spotify to recommend music to us, we will soon rely on algorithms to decide our careers, spouses, and commitments. Harari further predicts that the state will then take away individuals’ rights to make their own choices about their lives. If Google knows where your children would flourish best in school, why should the state allow a fallible human parent to decide? Liberalism—which, as Harari uses the term, refers to a state of society in which human freedom to choose is respected and championed—will collapse. In this paper, I argue that Harari’s conception of the future implications of recommendation algorithms is deeply flawed, for two reasons. First, users will not rely on algorithms to make decisions for them, because they have no reason to trust algorithms, which are developed by companies with their own incentives, such as profit. Second, for most of our life decisions, no such algorithm can be developed, because the factors relevant to each decision are unique to our situation. I present an alternative depiction of the future: instead of relying on algorithms to make decisions for us, humans will use algorithms to enhance our decision-making, helping us consider the most relevant choices first and notice information we might otherwise overlook. Finally, I argue that even if computers could make many of our decisions for us, liberalism as a political system would emerge unscathed.
Notes
The term “liberalism” has many distinct meanings in different contexts, but I will not be exploring these various meanings in this paper. Instead, I will use the term as Harari uses it, to refer to social systems in which humans are left free to make their own decisions about their lives.
Readers interested in learning more about how Spotify’s recommendation system works might begin with this presentation by one of its creators: https://www.slideshare.net/MrChrisJohnson/algorithmic-music-recommendations-at-spotify/15-The_Netflix_Problem_Vs_The. For more on how recommendation algorithms present recommendations to users, see also Zhao et al. (2017).
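The core idea behind such systems can be illustrated with a toy item-based collaborative filter: items are recommended to a user when they are similar (in listening patterns across all users) to items the user already likes. The data and names below are invented for illustration; production systems such as Spotify’s use far richer models (e.g., matrix factorization over implicit feedback), but the ranking logic is analogous.

```python
# Toy item-based collaborative filtering sketch (illustrative data only).
from math import sqrt

# rows: users, columns: items (1 = listened to / liked, 0 = not)
ratings = {
    "ann":  {"songA": 1, "songB": 1, "songC": 0, "songD": 0},
    "ben":  {"songA": 1, "songB": 0, "songC": 1, "songD": 0},
    "cara": {"songA": 0, "songB": 1, "songC": 0, "songD": 1},
    "dan":  {"songA": 1, "songB": 1, "songC": 1, "songD": 0},
}
items = ["songA", "songB", "songC", "songD"]

def cosine(a, b):
    """Cosine similarity between two items' rating columns across all users."""
    dot = sum(ratings[u][a] * ratings[u][b] for u in ratings)
    na = sqrt(sum(ratings[u][a] ** 2 for u in ratings))
    nb = sqrt(sum(ratings[u][b] ** 2 for u in ratings))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user, n=2):
    """Rank each unseen item by its total similarity to items the user liked."""
    liked = [i for i in items if ratings[user][i]]
    unseen = [i for i in items if not ratings[user][i]]
    scores = {i: sum(cosine(i, j) for j in liked) for i in unseen}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("ann"))  # → ['songC', 'songD']
```

Here "songC" outranks "songD" for ann because ann’s co-listeners of songA and songB (ben, dan) also play songC, which is the kind of pattern-matching inference the paper has in view.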
See, for example, Krizhevsky et al. (2012).
Issues of overfitting would be particularly pernicious here. See O’Neil (2016).
The observation that many of our life decisions are unique and particular rather than general and universal was a cornerstone of Nietzsche’s philosophy. See, for example, Beyond Good and Evil, §198.
Nietzsche is the unsung hero of Harari’s book. And insofar as Harari’s argument is thoroughly Nietzschean, his prediction that big data algorithms will spell the end of liberalism falls prey to the same fallacy that Nietzsche failed to overcome. Nietzschean elements in Harari’s book are omnipresent. Like Nietzsche’s imagined prophet, Zarathustra, Harari proclaims throughout the book that “God is dead” and that we have yet to grapple with the true consequences. He professes his love for Nietzsche’s genealogical method in the first chapter (without calling it such). Further, Nietzsche dreamed of philosophers of the future who would create new values for humanity, and Harari complains that “Since 1789, despite numerous wars…humans have not managed to come up with any new value.” Instead, the philosophy that Harari advocates, “Dataism,” is “the first movement since 1789 that created a really novel value: freedom of information” (382). Nietzsche plays so prominent a role in Harari’s thinking that one begins to seriously suspect that Harari sees himself (perhaps fittingly) as the first of the “philosophers of the future” that Nietzsche so desperately hoped for.
See Hunt (2007) for related arguments.
References
Adams RM (2006) A theory of virtue: excellence in being for the good. Oxford University Press, Oxford
Annas J (2005) Comments on John Doris’s lack of character. Philos Phenomenol Res 71(3):636–642
Appiah A (2008) Experiments in ethics. Harvard University Press, Cambridge
Arpaly N (2005) Comments on lack of character by John Doris. Philos Phenomenol Res 71(3):643–647
Doris JM (1998) Persons, situations, and virtue ethics. Noûs 32:504–530
Doris JM (2002) Lack of character: personality and moral behavior. Cambridge University Press, New York
Doris JM (2005) Replies: evidence and sensibility. Philos Phenomenol Res 71(3):656–677
Fogg BJ (2002) Persuasive technology: using computers to change what we think and do. Ubiquity. 1:31–61, 211–255
Fogg BJ (2009) A behavior model for persuasive design. In: Proceedings of the 4th international conference on persuasive technology, ACM
Haidt J (2016) Moral psychology: an exchange. The New York Review of Books. http://www.nybooks.com/articles/2016/04/07/moral-psychology-an-exchange/. Accessed 18 May 2017
Harman G (1999) Moral philosophy meets social psychology: virtue ethics and the fundamental attribution error. Proc Aristot Soc 99:315–331
Harman G (2000) The nonexistence of character traits. Proc Aristot Soc 100:223–226
Henke N et al (2016) The age of analytics: competing in a data-driven world. McKinsey Global Institute. http://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-of-analytics-competing-in-a-data-driven-world. Accessed 18 May 2017
Hunt L (2007) Inventing human rights: a history. WW Norton & Company, New York
Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York
Keren G, Schul Y (2009) Two is not always better than one: a critical evaluation of two-system theories. Perspect Psychol Sci 4:533–550
Kitcher P (2012) Preludes to pragmatism: toward a reconstruction of philosophy. Oxford University Press, Oxford
Kramer ADI, Guillory JE, Hancock JT (2014) Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci 111(24):8788–8790
Krizhevsky A, Sutskever I, Hinton G (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems, Lake Tahoe, USA, vol 2, pp 1097–1105
Kruglanski AW, Gigerenzer G (2011) Intuitive and deliberative judgements are based on common principles. Psychol Rev 118:97–109
Nee DE, Berman MG, Moore KS, Jonides J (2008) Neuroscientific evidence about the distinction between short- and long-term memory. Curr Dir Psychol Sci 17:102–106
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown Publishing Group, New York
Ricci F, Rokach L, Shapira B (2011) Introduction to recommender systems handbook. Springer, US
Ross L, Nisbett RE (2011) The person and the situation: perspectives of social psychology. Pinter & Martin Publishers, London
Rose D, Livengood J, Sytsma J, Machery E (2012) Deep trouble for the deep self. Philos Psychol 25(5):629–646
Rubinstein A (2008) Comments on neuroeconomics. Manuscript in preparation, Tel Aviv University
Schurger A, Sitt JD, Dehaene S (2012) An accumulator model for spontaneous neural activity prior to self-initiated movement. Proc Natl Acad Sci 109(42):E2904–E2913
Shaw T (2016) The psychologists take power. Review of The Righteous Mind, by Jonathan Haidt. The New York Review of Books. http://www.nybooks.com/articles/2016/02/25/the-psychologists-take-power/. Accessed 18 May 2017
Sunstein CR (1999) Free markets and social justice. Oxford University Press
Zhao Q, Adomavicius G, Maxwell Harper F, Willemsen M, Konstan J (2017) Toward better interactions in recommender systems: cycling and serpentining approaches for top-N item lists. In: Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing, pp 1444–1453
Acknowledgements
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 16-44869. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
Cite this article
First, D. Will big data algorithms dismantle the foundations of liberalism? AI & Soc 33, 545–556 (2018). https://doi.org/10.1007/s00146-017-0733-4