
Artificial wisdom: a philosophical framework


Abstract

Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as by artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom: the highest human excellence, or the highest (seventh) stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible, both in principle and in practice, by responding to two philosophical challenges for building artificial wisdom systems. The result is a conceptual framework that guides future research on creating artificial wisdom.


Notes

  1. Davis (2019: 51) claims that “there is no … prospect for artificial wisdom”. But he does not delve deeply into (artificial) wisdom in his essay, whose focus is (artificial) morality. Davis’s argument against artificial morality (call it “A1”) can be stated as follows: first, the first-person perspective is necessary for moral judgments; second, AI is incapable of attaining the first-person perspective; thus, AI is incapable of making moral judgments. His argument against artificial wisdom (call it “A2”) can be stated as follows: first, having wisdom involves making moral judgments; second, AI is incapable of making moral judgments; thus, AI is incapable of having wisdom. The second premise of A2, which is the conclusion of A1, is extensively discussed by Davis in his essay, but the first premise of A2, i.e., that having wisdom involves making moral judgments, is not elaborated but rather assumed. The problem is that the term “involve” is elusive: Does it mean a necessary condition? Or a constituent part? These questions matter because the primary concern of wisdom is prudential value or well-being, rather than moral value, although this does not imply that the two kinds of value are irrelevant to each other (see Tiberius 2013 on the relation between well-being and morality). Since the relation between wisdom and morality is not as direct as Davis thought, the first premise of A2 requires justification or elaboration. In this paper, my treatment of wisdom covers the necessary conditions for wisdom. (A minimal formalization of A1 and A2 is sketched after these notes.)

  2. Some doubt that superintelligence will ever occur; others think that there can be an ethical superintelligence. Cf. Petersen (2017).

  3. With regard to the anti-wickedness condition of wisdom, see Whitcomb (2011).

  4. According to Allen and Wallach, there are three main approaches to the design of AMAs: top-down, bottom-up, and hybrid. A top-down approach “takes a specified ethical theory [such as utilitarian ethics, deontological ethics, or Asimov’s three laws of robotics] and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing that theory” (Wallach and Allen 2008: 79–80). In contrast, bottom-up approaches “do not impose a specific moral theory, but seek to provide environments in which appropriate behavior is selected or rewarded. These approaches to the development of moral sensibility entail piecemeal learning through experience” (Allen et al. 2005: 151). For Allen and Wallach, neither a pure top-down approach nor a pure bottom-up approach fully captures morality. They prefer a hybrid approach, which focuses on virtues: “Virtues constitute a hybrid between top-down and bottom-up approaches, in that the virtues themselves can be explicitly described…, but their acquisition…seems essentially to be a bottom-up process. Placing this approach in a computational framework, neural network models provided by connectionism seem especially well suited for training (ro)bots to distinguish right from wrong” (Allen and Wallach 2012: 59–60). The three approaches, in their most general form, can be used to design and develop AW. In particular, the pros and cons of each approach to AMAs can help AW researchers and programmers make a better decision about which approach to take when designing and developing AW. (Personally, I prefer a hybrid approach to AW; a toy sketch of the hybrid idea appears after these notes.) However, knowing how to implement AMAs (or AW) is one thing; knowing what AMAs (or AW) actually are is another. The question embedded in the former can be called the implementation question, and the question embedded in the latter the nature question. We can see that Allen and Wallach, after addressing the implementation question, still have to face the nature question: “In MM [Moral Machines], we took what we consider to be an unusually comprehensive approach to moral decision making by including the role of top-down theories, bottom-up development, learning, and the suprarational capacities that support emotions and social skills. And yet the most common criticisms we have heard begin with, ‘Full moral agency requires ______.’ The blank space is filled in with a broad array of capacities, virtues, and features of a moral society that the speaker believes we either failed to mention, or whose centrality in moral decision making we failed to underscore adequately” (Allen and Wallach 2012: 62). Likewise, AW researchers and programmers, after addressing the implementation question, might face the nature question. Criticism might begin with the expression “Full practical wisdom requires ______”, and the blank space can be filled in with “deliberation about final ends”. Although both questions are important, the present paper addresses only the nature question. I thank an anonymous reviewer for pressing me on this point.

  5. Take marriage as an example. Imagine that an agent (human or AI) is asked the following question: “A 15-year-old girl wants to get married right away. What should one/she consider and do?” (Baltes et al. 2002: 333). A response like the one below would be scored low on wisdom: “A 15-year-old girl wants to get married? No, no way, marrying at age 15 would be utterly wrong. One has to tell the girl that marriage is not possible. It would be irresponsible to support such an idea. No, this is just a crazy idea” (Baltes et al. 2002: 333). A response like the one below would be scored high on wisdom: “Well, on the surface, this seems like an easy problem. On average, marriage for a 15-year-old girl is not a good thing. But there are situations where the average case does not fit. Perhaps in this instance, special life circumstances are involved, such that the girl has a terminal illness. Or the girl has just lost her parents. And also, this girl may be living in another culture or historical period. Perhaps she was raised with a value system different from ours. In addition, one has to think about adequate ways of talking with the girl and to consider her emotional state” (Baltes et al. 2002: 333). Note that what is shown above is an example of the performance of wisdom (human or artificial), rather than of the mechanism of wisdom.

  6. Beyond the three motivations that I have stated in Sect. 2, there are additional motivations, such as those mentioned by Casacuberta: “By developing artificial wisdom, the discipline of artificial intelligence could also help in the pursuit of these two [missions]: on one side, a more naturalistic research on understanding what wisdom is by means of simulations, and on the other side, a more pedagogical type of research, designing tools that could help people to become wiser. If we are lucky and AW is successful, we could see in the future how these two missions interact and how digital tools help us to revise what is wisdom” (Casacuberta 2013: 206).

  7. I thank an anonymous reviewer for reminding me to make this point.

  8. The strong/weak AI distinction in AI research is different from the strong/weak AI distinction in the philosophy of mind. Cf. Russell and Norvig 2010: Ch. 26.

  9. For some philosophers, the notions “well-being” and “happiness” must be distinguished from each other. See Goldman (2018).

  10. Here I assume that AI researchers are concerned more with technological issues. As Boden says, “Many AI researchers don’t care about how minds work: they seek technological efficiency, not scientific understanding” (Boden 2016: 7). This does not mean that AI researchers do not seek scientific understanding, but if such understanding has nothing to do with technology, they might not care about it.
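
As a supplement to note 1, here is a minimal formalization of Davis’s two arguments, sketched in Lean. The proposition names are my own hypothetical labels, not Davis’s notation, and the sketch assumes the “necessary condition” reading of “involve” that the note flags as needing justification.

```lean
-- A sketch of Davis's arguments A1 and A2 from note 1.
-- Hypothetical atomic propositions (my labels, not Davis's):
--   fp   : the agent attains the first-person perspective
--   mj   : the agent can make moral judgments
--   wise : the agent has wisdom
variable (fp mj wise : Prop)

-- A1: if moral judgment requires the first-person perspective (p1)
-- and AI cannot attain that perspective (p2), then AI cannot make
-- moral judgments.
theorem A1 (p1 : mj → fp) (p2 : ¬fp) : ¬mj :=
  fun h => p2 (p1 h)

-- A2: if wisdom requires moral judgment (p3, the "necessary condition"
-- reading of "involve") and AI cannot make moral judgments (c1, the
-- conclusion of A1), then AI cannot have wisdom.
theorem A2 (p3 : wise → mj) (c1 : ¬mj) : ¬wise :=
  fun h => c1 (p3 h)
```

Both arguments are formally valid; what the formalization makes explicit is that A2 goes through only if its first premise, p3, holds, which is precisely where the note locates the unmet burden of justification.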
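As a supplement to note 4, here is a toy sketch in Python of the hybrid idea that Allen and Wallach describe: virtues named top-down, acquired bottom-up by a simple connectionist learner. Everything in it, the virtue labels, the feature encoding, and the training data, is a hypothetical illustration of mine, not an implementation from any cited source.

```python
# Toy hybrid sketch (see note 4): virtues are specified top-down as
# explicit labels; recognizing virtue-conforming behavior is acquired
# bottom-up by training a one-layer perceptron on labeled episodes.

VIRTUES = ["honesty", "fairness"]  # top-down: explicitly described virtues


def train_perceptron(episodes, epochs=50, lr=0.1):
    """Bottom-up acquisition: episodes are (features, label) pairs,
    with label 1 for behavior conforming to the virtue, 0 otherwise."""
    n = len(episodes[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in episodes:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            error = label - (1 if activation > 0 else 0)
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias


# Hypothetical "honesty" episodes: feature 1 = the statement was true,
# feature 2 = the statement was self-serving.
episodes = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 0), ([0.0, 0.0], 0)]
weights, bias = train_perceptron(episodes)
```

Whether such a trained discriminator amounts to possessing a virtue is exactly the nature question that the note distinguishes from the implementation question.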

References

  • Alexandrova A (2017) A philosophy for the science of well-being. Oxford University Press, Oxford
  • Allen C, Wallach W (2012) Moral machines: contradiction in terms or abdication of human responsibility? In: Lin P, Abney K, Bekey G (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, MA, pp 55–68
  • Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up and hybrid approaches. Ethics Inf Technol 7:149–155
  • Anderson M, Anderson S (eds) (2011) Machine ethics. Cambridge University Press, Cambridge
  • Baltes P, Glück J, Kunzmann U (2002) Wisdom: its structure and function in regulating successful life span development. In: Snyder C, Lopez S (eds) Handbook of positive psychology. Oxford University Press, Oxford, pp 327–347
  • Boden M (2016) AI: its nature and future. Oxford University Press, Oxford
  • Bostrom N (2006) How long before superintelligence? Linguist Philos Investig 5(1):11–30
  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
  • Casacuberta D (2013) The quest for artificial wisdom. AI Soc 28:199–207
  • Crisp R (2006) Hedonism reconsidered. Philos Phenomenol Res 73:619–645
  • Davis J (2019) Artificial wisdom? A potential limit on AI in law (and elsewhere). Oklahoma Law Rev 72(1):51–89
  • Dreyfus H (2001) On the internet. Routledge, New York
  • Frankena W (1973) Ethics, 2nd edn. Prentice Hall, Englewood Cliffs
  • Gamez D (2008) Progress in machine consciousness. Conscious Cogn 17:887–910
  • Goertzel B (2008) Artificial wisdom. Institute for Ethics and Emerging Technologies (IEET). https://ieet.org/index.php/IEET2/more/goertzel20080420. Accessed November 2019
  • Goldman A (2018) Life’s value. Oxford University Press, Oxford
  • Gordon J (2019) Building moral robots: ethical pitfalls and challenges. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00084-5
  • Grimm S (2015) Wisdom. Australas J Philos 93(1):139–154
  • Hacker-Wright J (2015) Skill, practical wisdom, and ethical naturalism. Ethic Theory Moral Pract 18(5):983–993
  • Heathwood C (2015) Monism and pluralism about value. In: Hirose I, Olson J (eds) The Oxford handbook of value theory. Oxford University Press, Oxford, pp 136–157
  • Keller S (2009) Welfare as success. Noûs 43(4):656–683
  • Kim TW, Mejia S (2019) From artificial intelligence to artificial wisdom: what Socrates teaches us. Computer 52:70–74
  • Leben D (2019) Ethics for robots: how to design a moral algorithm. Routledge, New York
  • Marsh S, Dibben M, Dwyer N (2016) The wisdom of being wise: a brief introduction to computational wisdom. In: Habib S, Vassileva J, Mauw S, Mühlhäuser M (eds) Trust management X (IFIPTM 2016). IFIP advances in information and communication technology, vol 473, pp 137–145
  • Mason E (2018) Value pluralism. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Spring 2018 edn. https://plato.stanford.edu/archives/spr2018/entries/value-pluralism/
  • Millgram E (1997) Practical induction. Harvard University Press, Cambridge, MA
  • Millgram E (2005) Ethics done right: practical reasoning as a foundation for moral theory. Cambridge University Press, Cambridge
  • Millgram E (2008) Specificationism. In: Adler JE, Rips LJ (eds) Reasoning: studies of human inference and its foundations. Cambridge University Press, Cambridge, pp 731–747
  • Moor J (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21:18–21
  • Moore A (2000) Objective human goods. In: Crisp R, Hooker B (eds) Well-being and morality. Oxford University Press, Oxford, pp 75–89
  • Pennachin C, Goertzel B (2007) Contemporary approaches to artificial general intelligence. In: Goertzel B, Pennachin C (eds) Artificial general intelligence. Springer, Berlin, pp 1–30
  • Petersen S (2017) Superintelligence as superethical. In: Lin P, Abney K, Jenkins R (eds) Robot ethics 2.0: from autonomous cars to artificial intelligence. Oxford University Press, Oxford, pp 322–337
  • Reggia J (2013) The rise of machine consciousness: studying consciousness with computational models. Neural Netw 44:112–131
  • Russell D (2009) Practical intelligence and the virtues. Oxford University Press, Oxford
  • Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson Education, Essex
  • Stichter M (2016) Practical skills and practical wisdom in virtue. Australas J Philos 94(3):435–448
  • Tiberius V (2013) Why be moral? Can the psychological literature on well-being shed any light? Res Philosophica 90(3):347–364
  • Tsai C (2019) Phronesis and techne: the skill model of wisdom defended. Australas J Philos. https://doi.org/10.1080/00048402.2019.1618352
  • Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford
  • Whitcomb D (2011) Wisdom. In: Bernecker S, Pritchard D (eds) The Routledge companion to epistemology. Routledge, New York, pp 95–105

Acknowledgements

I am grateful to two anonymous reviewers for their valuable comments and suggestions. The material of this paper was presented at the Ministry of Science and Technology (Taiwan), National Tsing Hua University, National Central University, and Tunghai University. I thank the audiences, in particular Ser-min Shei, Ruey-yuan Wu, Terence Hua Tai, Li-jung Wang, and Wei-ching Wang, for helpful questions and discussions. This work was supported by the Ministry of Science and Technology, Taiwan (Grant Nos. MOST 103-2410-H-001-108-MY5, 107-2418-H-001-003-MY3, and 108-2420-H-001-002-MY3).

Author information

Corresponding author

Correspondence to Cheng-hung Tsai.



About this article

Cite this article

Tsai, Ch. Artificial wisdom: a philosophical framework. AI & Soc 35, 937–944 (2020). https://doi.org/10.1007/s00146-020-00949-5

