Joshua Greene has argued that the empirical findings of cognitive science have implications for ethics. In particular, he has argued (1) that people’s deontological judgments in response to trolley problems are strongly influenced by at least one morally irrelevant factor, personal force, and are therefore at least somewhat unreliable, and (2) that we ought to trust our consequentialist judgments more than our deontological judgments when making decisions about unfamiliar moral problems. While many cognitive scientists have rejected Greene’s dual-process theory of moral judgment on empirical grounds, philosophers have mostly taken issue with his normative assertions. For the most part, these two discussions have occurred separately. The current analysis aims to remedy this situation by philosophically analyzing the implications of moral-dilemma research using the CNI model of moral decision-making: a formalized mathematical model that decomposes moral-dilemma judgments into three distinct aspects. In particular, we show how research guided by the CNI model reveals significant conceptual, empirical, and theoretical problems with Greene’s dual-process theory, thereby calling the foundations of his normative conclusions into question.
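[For context, the CNI model referenced above is a multinomial processing tree model. The following is a minimal illustrative sketch, based on the published formulation by Gawronski and colleagues rather than on text from the paper itself. The model estimates three parameters from responses to four dilemma types: sensitivity to consequences (C), sensitivity to moral norms (N), and a general preference for inaction over action (I). For a dilemma in which a proscriptive norm prohibits action but the benefits of action outweigh its costs, the model predicts choosing action with probability

\[
p(\text{action}) \;=\; C \;+\; (1 - C)(1 - N)(1 - I),
\]

that is, action occurs either when the response is driven by consequences (probability \(C\)), or when it is driven neither by consequences nor by norms and the respondent has no general preference for inaction (probability \((1 - C)(1 - N)(1 - I)\)).]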
The causal premise of the evolutionary debunking argument contends that human moral beliefs are explained by the process of natural selection. While it is universally acknowledged that this premise is fundamental to the debunker’s case, the vast majority of philosophers focus instead on the epistemic premise that natural selection does not track moral truth, and on the resulting skeptical conclusion. Recently, however, some have begun to concentrate on the causal premise. So far, the upshot of this small but growing literature has been that the causal premise is likely false, given seemingly persuasive evidence that our moral beliefs are in fact not the result of natural selection. In this paper, I argue that this view is mistaken. Specifically, I advocate the Innate Biases Model, which contends that there is compelling evidence not only for an evolved cognitive capacity for acquiring norms but also for an evolutionarily instilled set of cognitive biases that make it more or less likely that we adopt certain moral beliefs.
In his article “Beyond Point-and-Shoot Morality,” Joshua Greene argues that the empirical findings of cognitive neuroscience have implications for ethics. Specifically, he contends that we ought to trust our manual, conscious reasoning system more than our automatic, emotional system when confronting unfamiliar problems; and because cognitive neuroscience has shown that consequentialist judgments are generated by the manual system and deontological judgments by the automatic system, we ought to trust the former more than the latter when facing unfamiliar moral problems. In the present article, I analyze one of the premises of Greene’s argument. In particular, I ask what exactly an unfamiliar problem is and whether moral problems can be classified as unfamiliar. After exploring several possible interpretations of familiarity and unfamiliarity, I conclude that these concepts are too problematic to be philosophically compelling and should therefore be abandoned.
Advances in artificial intelligence and (social) robotics raise pressing questions about how these technologies may help shape the society of the future. The main aim of this chapter is to consider the social and conceptual disruptions that might be associated with social robots, and with humanoid social robots in particular. The chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains what is meant by a social robot and by a humanoid robot. A key notion in this context is anthropomorphism: the human tendency to attribute human qualities not only to our fellow human beings but also to parts of nature and to technologies. This tendency to anthropomorphize technologies, responding to and interacting with them as if they had human qualities, is one of the reasons why social robots (in particular, social robots designed to look and behave like human beings) can be socially disruptive. As the chapter explains, while some ethics researchers believe that anthropomorphization is a mistake that can lead to various forms of deception, others, including both ethics researchers and social roboticists, believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores this disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients, that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend either on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and it highlights some recent research on Ubuntu ethics and social robots.
Over the last few decades, virtue has become increasingly important in philosophy, psychology, cognitive science, and education. However, because each of these disciplines approaches virtue from a decidedly different perspective, it has proven difficult to arrive at an understanding of virtue that satisfies the standards of all four. In their book, Jennifer Wright, Michael Warren, and Nancy Snow attempt to put forward such an understanding.
Human group size seemingly has no limit, with many individuals living alongside thousands—even millions—of others. Non-human primate groups, on the other hand, cannot be sustained past a certain, relatively small size. I propose that Pascal Boyer’s model of ownership psychology may offer an explanation for such a significant divergence.