
AI Literacy: A Primary Good

  • Conference paper
Artificial Intelligence Research (SACAIR 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1976)


Abstract

In this paper, I argue that AI literacy should be added to the list of primary goods developed by the political philosopher John Rawls. Primary goods are the resources all citizens need in order to exercise their two moral powers, namely their sense of justice and their sense of the good; without these goods, citizens cannot fully develop those powers. I claim that a lack of AI literacy impairs citizens’ ability to exercise both their sense of justice and their sense of the good. If citizens cannot understand how AI technology works – including its social and political implications and, broadly speaking, its limits and possibilities – their ability to participate in a free, equal and fair society, and to carry out their conception of the good, is undermined. This paper is thus a call for AI literacy to be regarded as a basic good in a liberal constitutional democracy, so that citizens are able to exercise their freedom and equality.


Notes

  1. In this paper I have focused on why AI literacy, as opposed to digital literacy, should be regarded as a primary good. Digital literacy has a wider scope than AI literacy; the latter is generally subsumed as a feature of the former [30]. I focus on AI literacy specifically, rather than digital literacy in general, in order to address the harms of AI technology to the moral powers of persons, given the timely nature of these harms in liberal democracies worldwide.

  2. For research providing technical solutions to the implementation of AI literacy see [21, 23, 28, 29].

  3. For further research on the intersection of John Rawls’s political philosophy and research in AI see [3, 18, 19, 50].

  4. By ‘actively’ I mean that citizens intentionally choose to use AI technology to achieve a specific outcome.

  5. By ‘background conditions’ I mean that AI technology impacts citizens in a subtle manner, insofar as this technology influences persons’ economic, political, social and cultural experiences in society.

  6. By ‘responsible’ I am not suggesting that AI technology has the agency to be responsible; rather, I am implying that citizens may attribute blame to the technology rather than to the persons using it.

  7. The three frameworks I discuss in this section are not a comprehensive representative sample of all the frameworks in the field or of how they are being implemented. For further frameworks see [6, 11, 28, 29].

  8. There are 15 ‘design considerations’ that experts in AI need to take into account when developing the technology to facilitate the development of these competencies. Due to space constraints, I have not discussed these considerations; see [26] for discussion.

  9. This moral power is associated with the faculty of reasonableness; see [35].

  10. This moral power is associated with the faculty of rationality; see [35].

  11. A rational plan of life refers to a person’s chosen life ends or goals. This plan is informed by one’s conception of the good, personal desires, affiliations, and loyalties, which in turn inform the moral duties and obligations one has towards persons in one’s private life. Rational life plans are subjective, since one’s goals are determined by the social, economic, moral, and political environment in which an individual exists [37].

  12. A conception of the good is an umbrella term for the moral values a person or community considers valuable to hold. These values can range from secular belief systems to philosophical, metaphysical or religious doctrines [38].

  13. In addition to primary goods, the main content of Rawls’s theory of justice is the two principles he proposes. The first, the ‘liberty principle’, safeguards the equal basic rights and liberties of citizens. The second principle has a dual function: on the one hand, it safeguards fair equality of opportunity among citizens; on the other, it justifies social inequalities if and only if these inequalities benefit the worst-off members of society. This second aspect of the second principle is referred to as the difference principle [36].

  14. By ‘less’ I mean that their agency is reduced in comparison with the agency they have when they actively choose to use certain AI technology, such as a smartwatch. It is reduced because these individuals do not have the ability to avoid engaging with algorithmic recommendations on social media; rather, they only have the agency to choose how they will engage, and this choice is constrained by factors external to them.

  15. Given the potential impact of AI technology on people’s lives and the types of harms that could stem from it, there is a vast field of research on ethical guidelines for AI technology; see [8, 13, 17, 20].

  16. Daniel Dennett coined the term ‘intuition pump’ to describe thought experiments that enable a reader to buy into the moral intuitions the relevant thinker wants the reader to grasp [10]. I refer to the original position as an intuition pump since it is a device of representation that models the intuitions of fair agreement [35,36,37]. Rawls wants to justify the relevance of his principles of justice given the constraints of reasoning.

  17. Fricker defines testimonial injustice as follows: “…either the prejudice results in the speaker’s receiving more credibility than she otherwise would have—a credibility excess—or it results in her receiving less credibility than she otherwise would have—a credibility deficiency” [15]. Both credibility excess and credibility deficiency pose an epistemic danger. In the discussion of an ‘inferior epistemic knower’ above, the injustice is present because of the latter, a deficiency of credibility.

References

  1. AI4K12. https://ai4k12.org/. Accessed 20 Sept 2022

  2. Bender, E.M., Shah, C.: All-knowing machines are a fantasy IAI TV - changing how the world thinks (2022). https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334. Accessed 10 Mar 2023

  3. Binns, R.: Algorithmic accountability and public reason. Philos. Technol. 31(4), 543–556 (2017). https://doi.org/10.1007/s13347-017-0263-5


  4. Bogen, M.: All the ways hiring algorithms can introduce bias. Harvard Business Review (2019). https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias. Accessed 10 Mar 2023

  5. British Science Association, One in Three Believe that the Rise of Artificial Intelligence is a Threat to Humanity (2016). https://www.britishscienceassociation.org/news/rise-of-artificial-intelligence-is-a-threat-to-humanity. Accessed 20 Mar 2023

  6. Burgsteiner, H., Kandlhofer, M., Steinbauer, G.: iRobot: teaching the basics of artificial intelligence in high schools. Proc. AAAI Conf. Artif. Intell. 30(1), 4126–4127 (2016)


  7. Burrell, J.: How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. 3(1), 1–12 (2016). https://doi.org/10.1177/2053951715622512


  8. Cowls, J., King, T., Taddeo, M., Floridi, L.: Designing AI for social good: seven essential factors (2019). https://doi.org/10.2139/ssrn.3388669

  9. Crawford, K.: The Trouble with Bias. NIPS Keynote (2017). https://www.youtube.com/watch?v=fMym_BKWQzk. Accessed 20 Sept 2021

  10. Dennett, D.C.: Intuition Pumps and Other Tools for Thinking. W.W. Norton & Company, New York (2014)


  11. Druga, S., Vu, S.T., Likhith, E., Qiu, T.: Inclusive AI literacy for kids around the world. In: Proceedings of FabLearn ACM, pp. 104–111 (2019)


  12. Fast, E., Horvitz, E.: Long-term trends in the public perception of artificial intelligence. In: Thirty-First AAAI Conference on Artificial Intelligence, pp. 963–969 (2017)


  13. Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. Harvard Data Sci. Rev. 1(1), 1–15 (2019). https://doi.org/10.1162/99608f92.8cd550d1


  14. Forrester, K.: In the Shadow of Justice: Postwar Liberalism and the Remaking of Political Philosophy. Princeton University Press, New Jersey (2019)


  15. Fricker, M.: Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press, New York (2007)


  16. Friedman, B., Brok, E., Roth, K.S., et al.: Minimizing bias in computer systems. ACM SIGCHI Bull. 28(1), 48–51 (1996). https://doi.org/10.1145/249170.249184


  17. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 30(1), 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

  18. Gabriel, I.: Toward a theory of justice for artificial intelligence. Daedalus 151(2), 218–231 (2022)


  19. Hoffmann, A.L.: Rawls, information technology, and the sociotechnical bases of self-respect. In: Vallor, S. (ed.) The Oxford Handbook of Philosophy of Technology, pp. 230–249. Oxford University Press (2022)


  20. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2


  21. Julie, H., Alyson, H., Anne-Sophie, C.: Designing digital literacy activities: an interdisciplinary and collaborative approach. In: 2020 IEEE Frontiers in Education Conference (FIE), pp. 1–5 (2020)


  22. Jungherr, A.: Artificial intelligence and democracy: a conceptual framework. Soc. Med. + Soc. 9(3) (2023). https://doi.org/10.1177/20563051231186353

  23. Kaspersen, M.H., Bilstrup, K.E.K., Petersen, M.G.: The machine learning machine: a tangible user interface for teaching machine learning. In: Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 1–12 (2021)


  24. Köchling, A., Wehner, M.C.: Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Bus. Res. 13, 795–848 (2020). https://doi.org/10.1007/s40685-020-00134-w


  25. Kong, S.-C., Cheung, W.M.-Y., Zhang, G.: Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Comput. Educ.: Artif. Intell., 1–12 (2021). https://doi.org/10.1016/j.caeai.2021.100026

  26. Long, D., Magerko, B.: What is AI literacy? Competencies and design considerations. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–16 (2020). https://doi.org/10.1145/3313831.3376727

  27. Menczer, F.: Here’s Exactly How Social Media Algorithms Can Manipulate You. Big Think (2021). https://bigthink.com/the-present/social-media-algorithms-manipulate-you/. Accessed 20 May 2023

  28. Ng, T.K.: New interpretation of extracurricular activities via social networking sites: a case study of artificial intelligence learning at a secondary school in Hong Kong. J. Educ. Train. Stud. 9(1), 49–60 (2021)


  29. Ng, T.K., Chu, K.W.: Motivating students to learn AI through social networking sites: a case study in Hong Kong. Online Learning 25(1), 195–208 (2021)


  30. Ng, D., et al.: Conceptualizing AI literacy: an exploratory review. Comput. Educ.: Artif. Intell. 2, 100041 (2021). https://doi.org/10.1016/j.caeai.2021.100041


  31. Nguyen, C.T.: Echo chambers and epistemic bubbles. Episteme 17(2), 141–161 (2020). https://doi.org/10.1017/epi.2018.32


  32. Naik, N., et al.: Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front. Surg. 9(862322), 1–6 (2022). https://doi.org/10.3389/fsurg.2022.862322


  33. OECD: Artificial Intelligence (2022). https://www.oecd.org/digital/artificial-intelligence/. Accessed 20 Aug 2023

  34. O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 1st edn. Crown, New York (2016)


  35. Rawls, J.: Political Liberalism, revised edn. Columbia University Press, New York (2005)


  36. Rawls, J.: Justice as Fairness: A Restatement. Harvard University Press, Cambridge, MA (2001)


  37. Rawls, J.: A Theory of Justice, revised edn. Harvard University Press, Cambridge, MA (1999)


  38. Rawls, J.: Social Unity and Primary Goods. In: Freeman, S. (ed.) Collected Papers, pp. 359–387. Harvard University Press, Cambridge, MA (1999)


  39. Ruiter, A.: The distinct wrong of deepfakes. Philos. Technol. 34, 1311–1332 (2021). https://doi.org/10.1007/s13347-021-00459-2


  40. Sandel, M.J.: Review of Political Liberalism, by John Rawls. Harvard Law Rev. 107(7), 1765–1794 (1994). https://doi.org/10.2307/1341828


  41. Smith, L., Fay, N.: What social media facilitates, social media should regulate: duties in the new public sphere. Polit. Q. 92(4), 613–620 (2021). https://doi.org/10.1111/1467-923X.13011


  42. Stahl, B.C.: Ethical issues of AI. In: Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. SpringerBriefs in Research and Innovation Governance, pp. 35–53. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-69978-9_4

  43. Stone, P., Brooks, R., Brynjolfsson, E., et al.: Artificial intelligence and life in 2030: one hundred year study on artificial intelligence. Report of the 2015 Study Panel, Technical report, pp. 1–52 (2016)


  44. The Council of the European Union: Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021). https://data.consilium.europa.eu/doc/document/ST-8115-2021-INIT/en/pdf. Accessed 20 Aug 2022

  45. The White House. Blueprint for an AI Bill of Rights. The White House (2022). https://www.whitehouse.gov/ostp/ai-bill-of-rights/. Accessed 20 Nov 2022

  46. The Techno Pulse: AI in Popular Culture: Shaping Perceptions & Inspiring Innovations (2023). https://thetechnopulse.com/ai-popular-culture-impact/. Accessed 20 Aug 2023

  47. Touretzky, D., Gardner-McCune, C., Martin, F., Seehorn, D.: Envisioning AI for K-12: what should every child know about AI? In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 9795–9799 (2019)


  48. UNESCO.: Recommendation on the Ethics of Artificial Intelligence (2021). https://www.unesco.org/en/legal-affairs/recommendation-ethics-artificial-intelligence. Accessed 22 Nov 2022

  49. Vlasceanu, M., Amodio, D.: Propagation of societal gender inequality by internet search algorithms. Proc. Natl. Acad. Sci. 119, 1–8 (2022). https://doi.org/10.1073/pnas.2204529119


  50. Weidinger, L., et al.: Using the veil of ignorance to align AI systems with principles of justice. Proc. Natl. Acad. Sci. 120(18), 1–9 (2023). https://doi.org/10.1073/pnas.2213709120



Author information


Corresponding author

Correspondence to Paige Benton.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Benton, P. (2023). AI Literacy: A Primary Good. In: Pillay, A., Jembere, E., Gerber, A.J. (eds) Artificial Intelligence Research. SACAIR 2023. Communications in Computer and Information Science, vol 1976. Springer, Cham. https://doi.org/10.1007/978-3-031-49002-6_3


  • DOI: https://doi.org/10.1007/978-3-031-49002-6_3


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-49001-9

  • Online ISBN: 978-3-031-49002-6

  • eBook Packages: Computer Science (R0)
