Abstract
Can we build 'moral robots'? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such 'psychopathic' robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.