Moral appearances: emotions, robots, and human morality

Abstract

Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

Notes

  1. For instance, the Laws seem to limit the range of possible human-robot relations to the master–slave model.

  2. For contemporary examples of such rules and arguments see Peter Singer’s work.

  3. I do not agree with this interpretation of Kant. The categorical imperative is not a rule but at best a meta-rule that asks us to reason from the moral point of view when we make rules (when we, as autonomous beings, give the rule to ourselves). But as I argued in my book […], this leaves open a lot of space for types of moral reasoning that require the exercise of imaginative and emotional capacities.

  4. Note that there are tensions between the theoretical traditions mentioned here, for instance between a Humean and a virtue ethics approach (see for instance Foot’s criticism of Hume, Foot 2002), but Nussbaum has managed to reconcile them in an attractive way.

  5. Influenced by the Stoics, Nussbaum writes that emotions are not just ‘unthinking forces that have no connection with our thoughts, evaluations, or plans’ like ‘the invading currents of some ocean’ (Nussbaum 2001, pp. 26–27) but, by contrast, more like ‘forms of judgment’ that ‘ascribe to certain things and persons outside a person’s own control great importance for the person’s own flourishing.’ This renders emotions acknowledgments of vulnerability and lack of self-sufficiency (Nussbaum 2001, p. 22). Note also that this view is not Stoic but neo-Stoic, since Nussbaum rejects the Stoics’ normative view of the role emotions should have (the Stoics evaluated emotions negatively) and revises their account of cognition.

  6. Note that emotional moral reasoning does not exclude taking into account rules, laws and conventions.

  7. Given the role of emotions in making moral discriminations, we would not even want ‘psychopathic’ military robots.

  8. The authors argue that trying to build robots according to the rule-based model (that is, turning the rules into algorithms and building them into robots) cannot succeed, since such ‘commandment’ models face the problem of conflicting rules. Overriding principles based on our moral intuitions do not solve this problem, since these intuitions might not even be universally shared within one culture (Wallach and Allen 2008, p. 84). Moreover, applying deontological and consequentialist theories requires one to gather an enormous amount of information in order to describe the situation and to predict outcomes, which may be hard for computers, and indeed for humans (p. 86). They give further reasons why morality is hard to compute, which is particularly problematic for Bentham-type utilitarian approaches to ethics (pp. 86–91). They also explicitly discuss problems with Asimov’s laws (pp. 91–95) and, more generally, problems with abstract deontological rules, which run into difficulties similar to those of consequentialist theories, since this approach also requires us to predict consequences (pp. 95–97). These problems do not only get roboticists into trouble; they cast doubt on the ambitions of much normative moral theory: they show that (top–down) theory is valuable but has significant limitations.

  9. Today there are already robots that have some capacity to learn in and from social interaction, for instance the robot Kismet developed by Cynthia Breazeal at MIT. In a sense, she has ‘raised’ the robot. However, these developments do not approach human moral and emotional learning.

  10. The Turing test was proposed by Alan Turing as a test of whether an entity is human or not (Turing 1950).

  11. These conditions were already proposed by Aristotle and are endorsed by many contemporary writers on freedom and responsibility.

  12. More generally, there is a kind of virtual intentionality (understood in a phenomenological sense): it appears as if the other is conscious and as if that consciousness is directed to objects.

  13. Perhaps this helps to explain why Japanese designers are more advanced at making humanoid robots: they tend to understand themselves as imitators of nature rather than as creators (‘playing God’), the latter appearing to be a more Western idea.

  14. In animal ethics this demand for consistency is known as ‘the argument from marginal cases’.

  15. Note that these moral categories constituted (and arguably still constitute) a kind of moral life that is fundamentally asymmetrical. An alternative, symmetrical moral framework would accommodate perceptions and treatment of robots as companions or co-workers. One might also apply other ‘human’ categories to them. However, I will not further discuss this issue here.

References

  • Asimov, I. (1942). Runaround. Astounding Science Fiction, 94–103.

  • Damasio, A. (1994). Descartes’ error: emotion, reason, and the human brain. New York: G.P. Putnam’s Sons.

  • De Sousa, R. (1987). The rationality of emotion. Cambridge, MA: MIT Press.

  • Foot, P. (2002). Hume on moral judgment. In Virtues and vices. Oxford/New York: Oxford University Press.

  • Goldie, P. (2000). The emotions: a philosophical exploration. Oxford: Oxford University Press.

  • Greene, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105–2108.

  • James, W. (1884). What is an emotion? Mind, 9, 188–205.

  • Kennett, J. (2002). Autism, empathy and moral agency. The Philosophical Quarterly, 52(208), 340–357.

  • Merleau-Ponty, M. (1945). Phénoménologie de la Perception. Paris: Gallimard.

  • Nussbaum, M. C. (1990). Love’s knowledge. Oxford: Oxford University Press.

  • Nussbaum, M. C. (1994). The therapy of desire: theory and practice in Hellenistic ethics. Princeton: Princeton University Press.

  • Nussbaum, M. C. (1995). Poetic justice: literary imagination and public life. Boston: Beacon Press.

  • Nussbaum, M. C. (2001). Upheavals of thought: the intelligence of emotions. Cambridge: Cambridge University Press.

  • Prinz, J. (2004). Gut reactions: a perceptual theory of emotion. Oxford: Oxford University Press.

  • Solomon, R. (1980). Emotions and choice. In A. Rorty (Ed.), Explaining emotions (pp. 81–251). Los Angeles: University of California Press.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Wallach, W., & Allen, C. (2008). Moral machines: teaching robots right from wrong. Oxford: Oxford University Press.


Acknowledgments

I wish to thank the reviewers for their pertinent questions and useful suggestions, which helped improve the paper’s organization and fine-tune its arguments. I also thank Julie Bytheway for her advice on grammar and style and Nicole Vincent for copyediting the final version of the manuscript.

Author information

Correspondence to Mark Coeckelbergh.

Cite this article

Coeckelbergh, M. Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12, 235–241 (2010). https://doi.org/10.1007/s10676-010-9221-y
