Abstract
Debates about the implications of empirical research in the natural and social sciences for normative disciplines have recently gained new attention. With the widening scope of neuroscientific investigations into human mental activity, decision-making and agency, neuroethicists and neuroscientists have extensively claimed that results from neuroscientific research should be taken as normatively or even prescriptively relevant. In this chapter, I investigate what these claims could possibly amount to. I distinguish and discuss three readings of the thesis that neuroscientific evidence has normative implications: an action-theoretic, an epistemological, and a metaphysical reading. I conclude that the action-theoretic reading has the most direct normative consequences, even though it is limited to the question of whether pre-established moral norms can be realized by individual agents. In contrast, on the other two readings, neuroscience can be said to have normative implications only in a very indirect way and only on the condition of making contested metaethical assumptions. All in all, the room for inferring concrete normative judgments from neuroscientific evidence is relatively limited.
Notes
- 1.
There are several possible views one could adhere to. Besides cognitivism, which will be discussed below, non-cognitivist theories such as quasi-realism might be possible. An error theory of moral judgment, according to which humans are only capable of making false moral judgments, could also be an option. Error theory assumes that there are no moral facts, even though our moral judgments have the property of being either true or false (Mackie 1985; Joyce 2007).
- 2.
This is not a thesis about the scientific or ontological status of the mentioned entities. The analogy is only meant to point to the fact that moral facts seem to differ in relevant ways from typical physical entities and that it is at least controversial whether an explanation can be given of them that involves nothing more than the laws of physics.
- 3.
- 4.
The paradigmatic case of this neuroscientific threat to moral responsibility originates from the work of Benjamin Libet and his colleagues. Libet et al. (1983) investigated the timing of brain processes involved in a simple arm movement and compared them to the timing of a consciously experienced will in relation to self-initiated voluntary acts. They found that the conscious intention to move the arm came 200 milliseconds before the motor act, but 350–400 milliseconds after the readiness potential, a buildup of electrical activity in the brain that precedes actual movement. From this, Libet and others concluded that the conscious decision to move the arm cannot be the true cause of the arm’s movement, because it comes too late in the neuropsychological sequence, and that we are systematically mistaken if we attribute to ourselves the status of being the originator of our arm’s movement. From there, it is only a small step to the further conclusion that we would not be morally responsible in the retrospective sense should the arm’s movement cause morally problematic consequences. Additionally, there are other scientific threats to moral responsibility posed by work in social psychology. For instance, it has been shown that the conscious reasons we tend to provide to explain our actions diverge from the actual causes of our actions and that, hence, our actions are often much less transparent to ourselves than we might assume (Bargh and Chartrand 1999; Wilson 2004; Uhlmann and Cohen 2005; Nosek et al. 2007). These findings reveal how the influence of external stimuli and events in our immediate environment can shape our behavior, even if we lack awareness of such influence. They also show how often our decisions and the resulting behaviors are driven by unconscious processes and cognitive biases (Kahneman 2011).
- 5.
Objections by several neuroscientists to the experimental setup notwithstanding, this has been the standard interpretation for many years. For an alternative interpretation, which conceives of Libet’s readiness potential as resulting from spontaneous subthreshold fluctuations in neural activity and which has no implications for moral responsibility, see Schurger et al. (2012).
- 6.
Here I am only concerned with the possibility of making a case against indeterminism by reference to neuroscientific and psychological investigations. That is not to deny that there are other, more convincing arguments against this view (cf. Frankfurt 1969, 1971; Nagel 1988; Zimmerman 1997 for a compatibilist or Pereboom 1995, 2014 for a hard determinist response).
- 7.
Dualism is a position nowadays held by hardly any philosopher. However, experimental philosophers have gathered evidence that it is nonetheless held by many philosophical laypersons (Nadelhoffer 2014).
- 8.
Consider arguments about how one’s personal history shapes one’s current beliefs and desires (cf. Strawson 1994).
- 9.
Consider arguments from moral luck, according to which lucky circumstances beyond our control can intervene in the execution of our intentions, or bring about our having or lacking certain desires that lead to morally problematic actions (cf. Levy 2011; Zimmerman 1997).
- 10.
Berker (2009) argues that the most charitable reading of Greene’s argument would be that he left out a normative premise. His argument could then be reconstructed as follows:
- Descriptive premise: Deontological moral judgments are driven by emotions.
- Normative premise: Moral judgments driven by emotions are wrong.
- Conclusion: Therefore, deontological moral judgments are wrong.
Then, however, there is no normative role left for neuroscientific evidence to play, because this kind of evidence is relevant only to the descriptive premise. But see Kumar and Campbell (2012) for a critique of this reconstruction.
- 11.
Similar observations have led some authors to argue that there may not be any epistemically reliable causal processes underlying moral reasoning (cf. Joyce 2007). Whether this would discredit morality as a whole is a question beyond the scope of this article.
- 12.
I take the claim that certain moral judgments or kinds of moral reasoning are “unreliable” or “irrational” to imply that the judgments labelled as such are at least extremely likely to be false. In labelling a judgment as unreliable, one claims a high probability that the judgment is wrong. Similarly, to say that a cognitive process is irrational is to claim that it does not track the truth and that its outputs should be considered false.
- 13.
Note that I take cognitivism, as defined above, to be compatible with any theory of truth. This means that cognitivism implies some kind of moral realism only in the sense that it must demand the existence of some independent entity that functions as a truth-maker for moral judgments. This, of course, does not imply that cognitivists must commit to a correspondence theory of truth or to forms of moral realism that conceive of truth-makers as natural facts (for further considerations on this point, cf. Fisher 2010). The restriction of the following discussion to a special form of moral realism (viz. metaethical naturalism) is therefore only due to my intention to make the case for the relevance of neuroscientific evidence (which I take to be evidence about how the world “actually” is) in the realm of moral judgments as strong as possible, and to account for some claims about this relevance that have actually been raised in neuroethical and metaethical debates.
- 14.
I will restrict the following discussion to cognitivist theories that assume the objectivity of moral facts, even though the connection of cognitivism with fictionalism (cf. Joyce 2007) and constructivism (cf. Street 2006) about moral facts has recently been revived, at least with regard to general evolutionary debunking arguments.
- 15.
It has been argued in many other places that the term “naturalistic fallacy” as used by Moore is misleading. For Moore applies the term not only to the identification of “good” with natural properties such as “pleasurable” or “desirable” but also to its identification with metaphysical properties such as “willed by God”.
- 16.
As Moore was an intuitionist and non-naturalist, and also considered the identification of “good” with non-natural properties as wrong, this view does not imply that moral properties factually exist in a reality beyond natural facts, but rather that they do not exist as facts at all.
- 17.
Besides these naturalisms, there are, of course, further non-metaphysical versions of moral realism that conceive of objective moral properties in a different way. For example, subjectivists conceptualize moral facts as psychological facts, such that a moral judgment like “Breaking promises is morally wrong” is true if the judging person is in a corresponding psychological state. Still others conceive of moral properties as a special kind of fact, which can only be described by reference to socially acquired moral concepts (cf. McDowell 1994).
- 18.
The term “Cornell Realism” is used to label a set of theories that was developed by different philosophers who taught or studied at Cornell University. The most notable theorists of this philosophical school are Richard Boyd, Nicholas Sturgeon, and David Brink.
- 19.
This seeming circularity shows that Cornell Realists also try to transfer their contextualist understanding of justification and their commitment to confirmation holism (cf. Quine 1951) from the sciences to the moral realm.
- 20.
A “natural kind” is a group of particulars that exists independently of human efforts to order objects in the world. Natural kinds are classes of objects not made up by human convention, and science is usually assumed to be able to reveal them. Chemical elements or elementary particles such as electrons or quarks are usually considered to be natural kinds. Some philosophers also assume that some mental states, such as beliefs or desires, can constitute natural kinds (cf. Ellis 2001).
- 21.
This said, I would like to emphasize that this is not a general critique of Neo-Aristotelianism in ethics. The only claim is that this theory does not fit well with a scientific world view, according to which there are no final ends in nature.
References
Alicke, M. 2000. Culpable Control and the Psychology of Blame. Psychological Bulletin 126 (4): 556–574.
Anscombe, G.E.M. 1958. Modern Moral Philosophy. Philosophy 33 (124): 1–19.
Bargh, J.A., and T.L. Chartrand. 1999. The Unbearable Automaticity of Being. American Psychologist 54 (7): 462–479.
Beebe, J., and W. Buckwalter. 2010. The Epistemic Side-Effect Effect. Mind and Language 25 (4): 474–498.
Berker, S. 2009. The Normative Insignificance of Neuroscience. Philosophy & Public Affairs 37 (4): 293–329.
Blackburn, S. 2000. Ruling Passions. A Theory of Practical Reasoning. Reprinted. Oxford: Clarendon Press.
Boyd, R. 1988. How to Be a Moral Realist. In Essays on Moral Realism, ed. G. Sayre-MacCord, 307–356. Ithaca: Cornell University Press.
Cacioppo, J.T. 2016. Social Neuroscience. Cambridge, MA: The MIT Press.
Cameron, C.D., et al. 2018. Damage to the Ventromedial Prefrontal Cortex is Associated with Impairments in Both Spontaneous and Deliberative Moral Judgments. Neuropsychologia 111: 261–268.
Casebeer, W.D. 2003. Moral Cognition and Its Neural Constituents. Nature Reviews Neuroscience 4 (10): 840–847.
Casebeer, W.D., and P.S. Churchland. 2003. The Neural Mechanisms of Moral Cognition. A Multi-Aspect Approach to Moral Judgment and Decision-Making. Biology and Philosophy 18 (1): 169–194.
Ciaramelli, E., et al. 2007. Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex. Social Cognitive and Affective Neuroscience 2 (2): 84–92.
Clark, C.J., B.M. Winegard, and R.F. Baumeister. 2019. Forget the Folk: Moral Responsibility Preservation Motives and Other Conditions for Compatibilism. Frontiers in Psychology 10: 215.
Decety, J., and J.M. Cowell. 2018. Interpersonal Harm Aversion as a Necessary Foundation for Morality. A Developmental Neuroscience Perspective. Development and Psychopathology 30 (01): 153–164.
Ellis, B. 2001. Scientific Essentialism. Cambridge: Cambridge University Press.
Fisher, A. 2010. Cognitivism Without Realism. In The Routledge Companion to Ethics, ed. J. Skorupski, 346–355. Abingdon/New York: Routledge.
Foot, P. 2003. Natural Goodness. 1st ed. Oxford: Clarendon Press.
Frankfurt, H.G. 1969. Alternate Possibilities and Moral Responsibility. The Journal of Philosophy 66 (23): 829–839.
———. 1971. Freedom of the Will and the Concept of a Person. The Journal of Philosophy 68 (1): 5–20.
Fraser, B.J. 2014. Evolutionary Debunking Arguments and the Reliability of Moral Cognition. Philosophical Studies 168 (2): 457–473.
Gazzaniga, M.S. 2006. The Ethical Brain. The Science of Our Moral Dilemmas. 1st ed. New York: Harper Perennial.
Greene, J.D. 2003. From Neural ‘Is’ to Moral ‘Ought’: What are the Moral Implications of Neuroscientific Moral Psychology? Nature Reviews Neuroscience 4 (10): 846–850.
Greene, J.D., et al. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science (New York, N.Y.) 293 (5537): 2105–2108.
———. 2004. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44 (2): 389–400.
Haggard, P., and M. Eimer. 1999. On the Relation Between Brain Potentials and the Awareness of Voluntary Movements. Experimental Brain Research 126 (1): 128–133.
Hanson, L. 2016. The Real Problem with Evolutionary Debunking Arguments. The Philosophical Quarterly 90: pqw075.
Harris, S. 2014. The Moral Landscape. How Science can Determine Human Values. New York: Free Press.
Hursthouse, R. 1999. On Virtue Ethics. Oxford: Oxford University Press.
Joyce, R. 2007. The Evolution of Morality. Cambridge, MA: MIT Press.
Kahane, G. 2011. Evolutionary Debunking Arguments. Noûs 45 (1): 103–125.
Kahane, G., et al. 2012. The Neural Basis of Intuitive and Counterintuitive Moral Judgment. Social Cognitive and Affective Neuroscience 7 (4): 393–402.
Kahneman, D. 2011. Thinking, Fast and Slow. London: Lane.
Kitcher, P. 2014. The Ethical Project. Cambridge, MA: Harvard University Press.
Knobe, J. 2003. Intentional Action and Side Effects in Ordinary Language. Analysis 63 (3): 190–194.
———. 2010. Action Trees and Moral Judgment. Topics in Cognitive Science 2 (3): 555–578.
Koenigs, M., and D. Tranel. 2007. Irrational Economic Decision-Making after Ventromedial Prefrontal Damage. Evidence from the Ultimatum Game. Journal of Neuroscience 27 (4): 951–956.
Königs, P. 2018. Two Types of Debunking Arguments. Philosophical Psychology 31 (3): 383–402.
Kumar, V. 2015. Moral Judgment as a Natural Kind. Philosophical Studies 172 (11): 2887–2910.
———. 2016. The Empirical Identity of Moral Judgement. The Philosophical Quarterly 66 (265): 783–804.
Kumar, V., and R. Campbell. 2012. On the Normative Significance of Experimental Moral Psychology. Philosophical Psychology 25 (3): 311–330.
Levy, N. 2010. Evolutionary Ethics. Farnham/Burlington: Ashgate.
———. 2011. Hard Luck: How Luck Undermines Free Will and Moral Responsibility. Oxford: Oxford University Press.
Liao, S.M. 2017. Neuroscience and Ethics. Experimental Psychology 64 (2): 82–92.
Libet, B., et al. 1983. Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential). The Unconscious Initiation of a Freely Voluntary Act. Brain 106 (Pt 3): 623–642.
Luethi, M.S., et al. 2016. Motivational Incentives Lead to a Strong Increase in Lateral Prefrontal Activity After Self-Control Exertion. Social Cognitive and Affective Neuroscience 11 (10): 1618–1626.
Mackie, J.L. 1985. Ethics. Inventing Right and Wrong. Harmondsworth: Penguin Books.
Matusall, S., M. Christen, and I. Kaufmann. 2011. The Emergence of Social Neuroscience as an Academic Discipline. In The Oxford Handbook of Social Neuroscience, ed. J. Decety and J.T. Cacioppo, 9–27. New York: Oxford University Press.
McDowell, J.H. 1994. Mind and World. Cambridge, MA: Harvard University Press.
Mendez, M.F., E. Anderson, and J.S. Shapira. 2005. An Investigation of Moral Judgment in Frontotemporal Dementia. Cognitive and Behavioral Neurology 18 (4): 193–197.
Moll, J., R. de Oliveira-Souza, and R. Zahn. 2008. The Neural Basis of Moral Cognition: Sentiments, Concepts, and Values. Annals of the New York Academy of Sciences 1124 (1): 161–180.
Moore, G.E. 1903. Principia Ethica. Cambridge: Cambridge University Press.
Murray, D., and E. Nahmias. 2014. Explaining Away Incompatibilist Intuitions. Philosophy and Phenomenological Research 88 (2): 434–467.
Nadelhoffer, T. 2011. The Threat of Shrinking Agency and Free Will Disillusionism. In Conscious Will and Responsibility. A Tribute to Benjamin Libet, Series in Neuroscience, Law, and Philosophy, ed. L. Nadel and W. Sinnott-Armstrong, 173–188. Oxford: Oxford University Press.
———. 2014. Dualism, Libertarianism, and Scientific Skepticism About Free Will. In Moral Psychology, Volume 4. Free Will and Moral Responsibility, ed. W. Sinnott-Armstrong, 209–216. Cambridge, MA: MIT Press.
Nagel, T. 1988. Mortal Questions. Cambridge: Cambridge University Press.
Nahmias, E., S. Morris, T. Nadelhoffer, and J. Turner. 2005. Surveying Freedom: Folk Intuitions About Free Will and Moral Responsibility. Philosophical Psychology 18 (5): 561–584.
Nichols, S., and J. Knobe. 2007. Moral Responsibility and Determinism: The Cognitive Science of Folk Intuitions. Nous 41 (4): 663–685.
Nosek, B.A., et al. 2007. Pervasiveness and Correlates of Implicit Attitudes and Stereotypes. European Review of Social Psychology 18 (1): 36–88.
Nussbaum, M.C., and A.K. Sen, eds. 2002. The Quality of Life. A Study Prepared for the World Institute for Development Economics Research (WIDER) of the United Nations University. Oxford: Clarendon Press.
O’Connor, C., G. Rees, and H. Joffe. 2012. Neuroscience in the Public Sphere. Neuron 74 (2): 220–226.
Paschke, L.M., et al. 2015. Motivation by Potential Gains and Losses Affects Control Processes via Different Mechanisms in the Attentional Network. NeuroImage 111: 549–561.
Pereboom, D. 1995. Determinism al Dente. Noûs 29 (1): 21–45.
———. 2014. Free Will, Agency, and Meaning in Life. Oxford: Oxford University Press.
Pettit, D., and J. Knobe. 2009. The Pervasive Impact of Moral Judgment. Mind and Language 24 (5): 586–604.
Prinz, J.J. 2011. Against Empathy. The Southern Journal of Philosophy 49: 214–233.
Quine, W.V.O. 1951. Two Dogmas of Empiricism. The Philosophical Review 60 (1): 20–43.
Racine, E., V. Nguyen, V. Saigle, and V. Dubljević. 2017. Media Portrayal of a Landmark Neuroscience Experiment on Free Will. Science and Engineering Ethics 23: 989–1017.
Ruse, M., and R.J. Richards. 2010. Biology and the Foundations of Ethics. Cambridge: Cambridge University Press.
Schurger, A., J.D. Sitt, and S. Dehaene. 2012. An Accumulator Model for Spontaneous Neural Activity Prior to Self-initiated Movement. Proceedings of the National Academy of Sciences of the United States of America 109 (42): E2904–E2913.
Singer, P. 2005. Ethics and Intuitions. The Journal of Ethics 9 (3–4): 331–352.
Spencer, H. 1897. The Principles of Ethics, Vol. 1. New York: D. Appleton Co.
Strawson, G. 1994. The Impossibility of Moral Responsibility. Philosophical Studies 75 (1–2): 5–24.
Street, S. 2006. A Darwinian Dilemma for Realist Theories of Value. Philosophical Studies 127 (1): 109–166.
Sturgeon, N.L. 1988. Moral Explanation. In Essays on Moral Realism, ed. G. Sayre-MacCord, 229–255. Ithaca: Cornell University Press.
———. 2013. Naturalism in Ethics. In Concise Routledge Encyclopaedia of Philosophy, ed. E. Craig, 615–617. Hoboken: Taylor and Francis.
Uhlmann, E.L., and G.L. Cohen. 2005. Constructed Criteria: Redefining Merit to Justify Discrimination. Psychological Science 16 (6): 474–480.
Valdesolo, P., and D. DeSteno. 2006. Manipulations of Emotional Context Shape Moral Judgment. Psychological Science 17 (6): 476–477.
Wilson, E.O. 1975. Sociobiology. The New Synthesis. Cambridge, MA: Harvard University Press.
Wilson, T.D. 2004. Strangers to Ourselves. Discovering the Adaptive Unconscious. Cambridge, MA/London: Belknap Press.
Zimmerman, M.J. 1997. Moral Responsibility and Ignorance. Ethics 107 (3): 410–426.
Zimmerman, E., and E. Racine. 2012. Ethical Issues in the Translation of Social Neuroscience. A Policy Analysis of Current Guidelines for Public Dialogue in Human Research. Accountability in Research 19 (1): 27–46.
© 2020 Springer Nature Switzerland AG
Cite this chapter
Leefmann, J. (2020). The Neuroscience of Human Morality: Three Levels of Normative Implications. In: Holtzman, G.S., Hildt, E. (eds) Does Neuroscience Have Normative Implications?. The International Library of Ethics, Law and Technology, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-030-56134-5_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-56133-8
Online ISBN: 978-3-030-56134-5
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)