Human achievement and artificial intelligence

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond what any human can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement from professional Go, blaming the advances of Go-playing programs like AlphaGo for sapping his will to play the game at a high level. In this paper, I attempt to make sense of Sedol’s lament. I consider a number of ways that the existence of superhuman-performing AI technologies could undermine the value of human achievements. I argue there is very little in the nature of the technology itself that warrants such despair. (Compare: does the existence of a fighter jet undermine the value of being the fastest human sprinter?) But I also argue there are several more localized domains where these technologies threaten to displace human beings from being able to achieve valuable things at all. This is a particular worry for those in unequal societies, I argue, given the difficulty of many achievements and the corresponding amount of resources needed to achieve great things.


Notes

  1. See Lee et al. (2016) for a contemporaneous report.

  2. More recent work has attempted to replicate superhuman performance at multiple games without hand-coding expertise of any kind (even the rules of the game) into the algorithm (Silver et al., 2018).

  3. Yonhap News Agency (2019). As the article mentions, there are also political reasons why Sedol might have announced his retirement from Korean Go. But whatever the true motivation for Sedol’s retirement, the sentiments he expressed latch onto a real concern about the future of human achievement in an era of superhuman AI performance. It is this concern that motivates the remainder of this paper, not Sedol’s actual motivations.

  4. The broader question of how to “align” the values of AI technologies with human interests is a rapidly-expanding field of research (Gabriel, 2020; Peterson, 2019), but there has been little published reflection connecting these concerns to the value of achievement in particular (though see Danaher & Nyholm, 2020, discussed at length below).

  5. Some examples of this recent work, many of which are discussed below, include Bradford (2015), Hirji (2019), von Kriegstein (2017), Hurka (2020), and Wang (2021).

  6. Though Bradford’s account is controversial, the basic metaphysics and axiology of achievement (which she presents very clearly) are all we need to get on the table at this moment. If there are aspects of the view we need to modify in light of our reflection on AI performance, we can do so below.

  7. In focusing on Bradford’s account, I am setting aside a dense assortment of theoretical questions concerning achievement. For one thing, Bradford (and authors who respond to her) take achievements to generate value irrespective of whether they contribute to the welfare of the achieving agent. One might, instead, understand an achievement as primarily being good for an agent’s welfare (Portmore, 2008; Scanlon, 1998). There are complicated questions as to how welfare-based and intrinsic-value-based accounts of achievement might interact. While these debates are fascinating, the agent-neutral form of achievement seems to be what is most at stake with worries like Sedol’s, so we shall focus on it here.

  8. For Bradford, for an achievement to be competently caused just is for the agent to have a significant number of justified true beliefs about that achievement (Bradford, 2015, pp. 65–67). I ignore this condition for several reasons. First, it is only difficulty that ultimately contributes to the value of achievement for Bradford (see Hirji, 2019, and ignoring complications about the value of organic unities (see also Hurka, 2020) that would take us very far afield). The value of difficulty will be our exclusive focus in the “The value of difficulty” section. Additionally, I do not find the account of competent causation in terms of justified true belief compelling, preferring instead an account that centers the agent’s capacities and dispositions (as in Sosa, 2007).

  9. Ignoring that, for a creature with a different cognitive makeup, the latter might be quite difficult.

  10. One reason I think this: the underlying axiology of perfectionist value is flexible enough that many antecedent commitments can fit within it. For example, consequentialist leanings are compatible with versions of perfectionism (Hurka, 1993). One can also imagine how to adjust Hurka’s consequentialist theory to take into account the agent-centered prerogatives of nonconsequentialist theories. The important point for technology ethics is one that Rawls (1999, p. 325) makes: perfectionist goods are a kind of good that should be built into any moral theory (to be weighed against other goods).

  11. The program is able to play millions of games in the time it would take a human being to play tens or hundreds (Silver et al., 2018). If anything, playing Go is the easiest thing in the world for AlphaGo.

  12. This is not to take a stand on the thorny question of whether a sufficiently complicated AI technology could have cognition or agency in the right way. Contrary to classic arguments from Searle (1980), I do not see any in-principle reasons why this could not be a possibility, and there are some interesting extant accounts for how this might happen (e.g. List, 2021). Nonetheless, almost everyone agrees that machine learning algorithms as they currently exist lack most of the capacities necessary for agency, and thus for competent causation (though see Danaher, 2020).

  13. My thanks to Josh Shepherd for pushing me on this point.

  14. My thanks to Jake Quilty-Dunn for discussions of this line of reflection.

  15. There are some reasons to push back here, since keeping a “well-oiled machine” running might itself be a genuine achievement. The empirical facts concerning the spread and ubiquity of “bullshit jobs” (Graeber, 2013), however, make this a rather theoretical response.

  16. Similar arguments have also been given outside of the perfectionist account of achievement, most obviously in Experience Machine arguments (Nozick, 1974).

  17. Some standard citations include Persson and Savulescu (2008) and Levy (2007, ch. 2 & 3).

  18. I raise some particular issues for these ideas below, but they are rather applied in scope. A more systematic critique of the supposed undermining of achievement by enhancement can be found in Forsberg and Skelton (2020).

  19. There is plenty of philosophical work on the nature and function of human skill (e.g. Shepherd, 2019; Stichter, 2007), but comparatively less on the notion of talent, though they are intimately connected. I am here relying on the excellent and novel account of talent in Robb (2020).

  20. Machine learning can in turn be used to evaluate different variants of the rules of chess, creating a feedback loop that pushes players towards new variants that will keep and attract interest within the broader space of “chess-like games” (Tomašev et al., 2020).

  21. My thanks to an anonymous reviewer for pushing me to make this formulation more precise.

  22. This is the classic argument of Suits (1978), though how to precisify the idea is not always clear (Wildman & Archer, 2019; Yorke, 2018).

  23. For more on this possibility, and its impact on science, see Buckner (2020).

  24. This is the standard objection to political forms of perfectionism; see Nagel (1995), Brink (2007), and Wall (2009) for book-length treatments of these topics.

  25. As an anonymous reviewer points out, though the specific empirical facts cited here are widely discussed and (mostly) accepted, it is possible to contest them. Even so, I think the project sketched in this section is interesting regardless, if for no other reason than as a conditional claim. If the social and political facts are as this section claims, then a version of displacement represents a real threat to the value of widespread human achievement in the era of superhuman AI. How various institutional and social realities intersect with the normative theory of achievement in the era of AI is a broad research project on which I have much more to say, but can only gesture at here due to space limitations.

  26. As theorists of the “leaky pipeline” in academia have long noted; see Cheryan et al. (2017) for a recent example in STEM fields in particular.

  27. There have been attempts to mitigate the results of algorithmic bias in particular, especially through “algorithmic auditing” (see the framework in Raji et al., 2020). But it is unclear whether these internal fixes, originating at and implemented within companies whose interests are clearly aligned with inequality-driving forces, will be sufficient to alleviate the problem.

  28. For instance, if one thinks the value of achievement is partially or wholly grounded in the enjoyment that we get out of the process of achieving, then the fact that we despair when contemplating the rise of superhuman AI might itself be enough to undermine the value of our achievements. This could be true even if all the arguments presented in this paper are on the right track. While I think this view represents an implausibly subjective view of the value of achievement, more work is needed to tease out the threads of these kinds of downstream issues. I am thankful to an anonymous reviewer for suggesting this line of future work.

References

  • Adler, P., Falk, C., Friedler, S. A., Rybeck, G., Scheidegger, C., Smith, B., & Venkatasubramanian, S. (2016). Auditing black-box models for indirect influence. In 2016 IEEE 16th international conference on data mining (ICDM) (pp. 1–10). IEEE.

  • Bradford, G. (2015). Achievement. Oxford University Press.

  • Brennan, T., Dieterich, W., & Ehret, B. (2009). Evaluating the predictive validity of the COMPAS risk and needs assessment system. Criminal Justice and Behavior, 36(1), 21–40.

  • Brink, D. O. (2007). Perfectionism and the common good: Themes in the philosophy of T. H. Green. Clarendon Press.

  • Brookwell, I. (2020). The hottest new video game is… chess? Fast Company.

  • Buckner, C. (2019). Rational inference: The lowest bounds. Philosophy and Phenomenological Research, 98(3), 697–724.

  • Buckner, C. (2020). Understanding adversarial examples requires a theory of artefacts for deep learning. Nature Machine Intelligence, 2(12), 731–736.

  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77–91). PMLR.

  • Cheryan, S., Ziegler, S. A., Montoya, A. K., & Jiang, L. (2017). Why are some STEM fields more gender balanced than others? Psychological Bulletin, 143(1), 1.

  • Danaher, J. (2020). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics, 26(4), 2023–2049.

  • Danaher, J., & Nyholm, S. (2020). Automation, work and the achievement gap. AI and Ethics, 1–11.

  • Forsberg, L., & Skelton, A. (2020). Achievement and enhancement. Canadian Journal of Philosophy, 50(3), 322–338.

  • Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437.

  • Graeber, D. (2013). On the phenomenon of bullshit jobs: A work rant. Strike Magazine, 3, 1–5.

  • Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.

  • Halina, M. (2021). Insightful artificial intelligence. Mind and Language, 36(2), 315–329.

  • Hirji, S. (2019). Not always worth the effort: Difficulty and the value of achievement. Pacific Philosophical Quarterly, 100(2), 525–548.

  • Hsu, F. H. (2002). Behind Deep Blue: Building the computer that defeated the world chess champion. Princeton University Press.

  • Hurka, T. (1993). Perfectionism. Oxford University Press.

  • Hurka, T. (2006). Games and the good. Proceedings of the Aristotelian Society, 106(1), 217–235.

  • Hurka, T. (2020). The parallel goods of knowledge and achievement. Erkenntnis, 85(3), 589–608.

  • Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., & Madry, A. (2019). Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175.

  • Johnson, G. M. (2020). Algorithmic bias: On the implicit biases of social technology. Synthese, 1–21.

  • Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Tunyasuvunakool, K., & Hassabis, D. (2020). High accuracy protein structure prediction using deep learning. Fourteenth Critical Assessment of Techniques for Protein Structure Prediction, 22(24), 2.

  • Lee, C. S., Wang, M. H., Yen, S. J., Wei, T. H., Wu, I. C., Chou, P. C., & Yan, T. H. (2016). Human vs. computer Go: Review and prospect. IEEE Computational Intelligence Magazine, 11(3), 67–72.

  • Levene, M., & Bar-Ilan, J. (2007). Comparing typical opening move choices made by humans and chess engines. The Computer Journal, 50(5), 567–573.

  • Levy, N. (2007). Neuroethics: Challenges for the 21st century. Cambridge University Press.

  • List, C. (2021). Group agency and artificial intelligence. Philosophy and Technology, 1–30.

  • Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33, 659–684.

  • Nagel, T. (1995). Equality and partiality. Oxford University Press.

  • Nguyen, C. T. (2019). Games and the art of agency. Philosophical Review, 128(4), 423–462.

  • Noble, S. U. (2018). Algorithms of oppression. New York University Press.

  • Nozick, R. (1974). Anarchy, state, and utopia. Basic Books.

  • O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

  • Persson, I., & Savulescu, J. (2008). The perils of cognitive enhancement and the urgent imperative to enhance the moral character of humanity. Journal of Applied Philosophy, 25(3), 162–177.

  • Peterson, M. (2019). The value alignment problem: A geometric approach. Ethics and Information Technology, 21(1), 19–28.

  • Portmore, D. W. (2008). Welfare, achievement, and self-sacrifice. Journal of Ethics and Social Philosophy, 2(2), 1.

  • Pranam, A. (2019). Why the retirement of Lee Se-Dol, former 'Go' champion, is a sign of things to come. Forbes.

  • Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 33–44).

  • Rawls, J. (1999). A theory of justice (Revised edition). Harvard University Press.

  • Robb, C. M. (2020). Talent dispositionalism. Synthese, 1–18.

  • Scanlon, T. (1998). What we owe to each other. Belknap Press.

  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.

  • Shepherd, J. (2019). Skilled action and the double life of intention. Philosophy and Phenomenological Research, 98(2), 286–305.

  • Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419), 1140–1144.

  • Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., & Hassabis, D. (2016). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.

  • Sosa, E. (2007). A virtue epistemology: Apt belief and reflective knowledge (Vol. 1). Oxford University Press.

  • Stichter, M. (2007). Ethical expertise: The skill model of virtue. Ethical Theory and Moral Practice, 10(2), 183–194.

  • Suits, B. (1978). The grasshopper. University of Toronto Press.

  • Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.

  • Teachman, J. D. (1987). Family background, educational resources, and educational attainment. American Sociological Review, 52, 548–557.

  • Tomašev, N., Paquet, U., Hassabis, D., & Kramnik, V. (2020). Assessing game balance with AlphaZero: Exploring alternative rule sets in chess. arXiv preprint arXiv:2009.04374.

  • von Kriegstein, H. (2017). Effort and achievement. Utilitas, 29(1), 27–51.

  • Wall, S. (2009). Liberalism, perfectionism and restraint. Cambridge University Press.

  • Wang, J. (2021). Cognitive enhancement and the value of cognitive achievement. Journal of Applied Philosophy, 38(1), 121–135.

  • Wildman, N., & Archer, A. (2019). Playing with art in Suits' utopia. Sport, Ethics and Philosophy, 13(3–4), 456–470.

  • Yonhap News Agency. (2019). Go master Lee says he quits, unable to win over AI Go players. Yonhap News.

  • Yorke, C. C. (2018). Bernard Suits on capacities: Games, perfectionism, and utopia. Journal of the Philosophy of Sport, 45(2), 177–188.


Acknowledgements

My deepest gratitude to Joe Moore, Thomas Lambert, Jake Quilty-Dunn, Josh Shepherd, Anncy Thresher, Jon Vandenburgh, Michael Ball-Blakely, Ting-An Lin, Diana Acosta-Navas, Henrik Kugelberg, Valerie Soon, Anne Newman, Rob Reich, Tom Kelly, Colin Allen, Tony Chemero, and Zvi Biener for comments and conversation that improved this paper immensely. Thanks are also due to audiences at Stanford University, the University of Cincinnati, the University of Pittsburgh, Florida Atlantic University, and the 2021 iteration of the Society for Philosophy and Psychology annual meeting.

Funding

The author was generously supported by a grant from the Templeton World Charity Foundation (#0467: Practical Wisdom and Intelligent Machines) for the duration of this project.

Author information

Correspondence to Brett Karlan.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article

Karlan, B. Human achievement and artificial intelligence. Ethics Inf Technol 25, 40 (2023). https://doi.org/10.1007/s10676-023-09713-x

