Critical thinking about moral decisions considers the consequences of options for the achievement of people's goals. Attempts to think critically lead to error and bias, so intuitive rules are needed to guard against these errors and to save time. Intuitive rules, however, lead to errors and biases of their own. I propose that students be taught to approximate critical thinking itself and that they learn rules of thumb to guard against its pitfalls. In particular, students need to learn certain powerful devices of consequentialist reasoning, such as consideration of precedent setting and of the possibility of error in thinking itself. They also need to learn about the common biases of thinking, especially the bias to favour what one already believes, or what is nearby in time and space.
In four experiments, we asked subjects for judgements about scenarios that pit utilitarian outcomes against deontological moral rules, for example, saving more lives vs. a rule against active killing. We measured trait emotions of anger, disgust, sympathy and empathy, and asked about the same emotions after each scenario. We found that utilitarian responding to the scenarios, and higher scores on a utilitarianism scale, were correlated negatively with disgust, positively with anger, positively with specific sympathy and state sympathy, and less so with general sympathy or empathy. In a fifth experiment, we asked about anger and sympathy for specific outcomes, and we found that these are consistently predictive of utilitarian responding.
Considerable evidence supports the sequential two-system model of moral judgement, as proposed by Greene and others. We tested whether judgement speed and/or personal/impersonal moral dilemmas can predict the kind of moral judgements subjects make for each dilemma, and whether personal dilemmas create difficulty in moral judgements. Our results showed that neither personal/impersonal conditions nor spontaneous/thoughtful-reflection conditions were reliable predictors of utilitarian or deontological moral judgements. Yet we found support for an alternative view, in which, when the two types of responses are in conflict, the resolution of this conflict depends on both the subject and the dilemma. While thinking about this conflict, subjects sometimes change their minds in both directions, as suggested by the data from a mouse-tracking task.
College-student subjects made notes about the morality of early abortion, as if they were preparing for a class discussion. Analysis of the quality of their arguments suggests that a distinction can be made between arguments based on well-supported warrants and those based on warrants that are easily criticised. The subjects also evaluated notes made by other, hypothetical, students preparing for the same discussion. Most subjects evaluated the set of arguments as better when the arguments were all on one side than when both sides were presented, even when the hypothetical student was on the opposite side of the issue from the evaluator. Subjects who favoured one-sidedness also tended to make one-sided arguments themselves. The results suggest that "myside bias" is partly caused by beliefs about what makes thinking good.
In folk psychology and some academic psychology, utilitarian thinking is associated with coldness and deontological thinking is associated with emotion. I suggest, mostly through personal examples, that these associations are far from perfect. Utilitarians experience emotions, which sometimes derive from, and sometimes cause or reinforce, their moral judgments.
A two-systems model of moral judgment proposed by Joshua Greene holds that deontological moral judgments (those based on simple rules concerning action) are often primary and intuitive, and these intuitive judgments must be overridden by reflection in order to yield utilitarian (consequence-based) responses. For example, one dilemma asks whether it is right to push a man onto a track in order to stop a trolley that is heading for five others. Those who favor pushing, the utilitarian response, usually take longer to respond than those who oppose pushing. Greene's model assumes an asymmetry between the processes leading to different responses. We consider an alternative model based on the assumption of symmetric conflict between two response tendencies. By this model, moral dilemmas differ in the "difficulty" of giving a utilitarian response and subjects differ in the "ability" (tendency) to give such responses. (We could just as easily define ability in terms of deontological responses, as the model treats the responses symmetrically.) We thus make an analogy between moral dilemmas and tests of cognitive ability, and we apply the Rasch model, developed for the latter, to estimate the ability-difficulty difference for each dilemma for each subject. We apply this approach to five data sets collected for other purposes by three of the co-authors. Response time (RT), including yes and no responses, is longest when difficulty and ability match, because the subject is indifferent between the two responses, which also have the same RT at this point. When we consider yes/no responses, RT is longest when the model predicts that the response is improbable. Subjects with low ability take longer on the "easier" dilemmas, and vice versa. Subjects with low ability take longer on the "easier" dilemmas, and vice versa, in the sense that responses far from a subject's own ability level are both improbable and slow.
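As a rough illustration of the Rasch framing described in this abstract (not the authors' actual analysis or data), the sketch below treats each subject's "ability" and each dilemma's "difficulty" as parameters on a common logit scale and models the probability of a utilitarian response as a logistic function of their difference; all names and numerical values are hypothetical.

```python
import numpy as np

def p_utilitarian(ability, difficulty):
    """Rasch model: probability of a utilitarian ('yes') response
    as a logistic function of ability minus difficulty."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# Hypothetical subject abilities and dilemma difficulties (logit scale).
abilities = np.array([-1.0, 0.0, 1.5])      # three hypothetical subjects
difficulties = np.array([-0.5, 0.5, 2.0])   # three hypothetical dilemmas

for theta in abilities:
    for b in difficulties:
        p = p_utilitarian(theta, b)
        # The model predicts the longest response times near p = 0.5,
        # i.e., when ability and difficulty match and the subject is
        # indifferent between the two responses.
        print(f"ability={theta:+.1f} difficulty={b:+.1f} P(utilitarian)={p:.2f}")
```

On this reading, a response becomes slower the closer its predicted probability is to 0.5 for that subject-dilemma pair, which is one way to express the paper's claim that RT is longest when the model predicts the given response to be improbable.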
When people tend toward a political decision, such as voting for the Republican Party, they are often attracted to this decision by one issue, such as the party’s stance on abortion, but then they come to see other issues, such as the party’s stance on taxes, as supporting their decision, even if they would not have thought so in the absence of the decision. I demonstrate this phenomenon with opinion poll data and with an experiment done on the World Wide Web using hypothetical candidates. For the hypothetical candidates, judgments about whether a candidate’s position on issue A favors the candidate or the opponent are correlated with judgments about other positions taken by the candidate (as determined from other hypothetical candidates). This effect is greater in those subjects who rarely make conflicting judgments, in which one issue favors a candidate and another favors the opponent. In a few cases, judgments even reverse, so that a position that is counted as a minus for other candidates becomes a plus for a favored candidate. Reversals in the direction of a candidate’s position are more likely when the candidate is otherwise favored. The experiment provides a new kind of demonstration of “belief overkill,” the tendency to bring all arguments into line with a favored conclusion.
In this article, I shall suggest an approach to the justification of normative moral principles which leads, I think, to utilitarianism. The approach is based on asking what moral norms we would each endorse if we had no prior moral commitments. I argue that we would endorse norms that lead to the satisfaction of all our nonmoral values or goals. The same approach leads to a view of utility as consisting of those goals that we would want satisfied. In the second half of the article, I examine the implication of this view for several issues about the nature of utility, such as the use of past and future goals. The argument for utilitarianism is not completed here. The rest of it requires a defense of expected-utility theory, of interpersonal comparison, and of equal consideration.
Stanovich & West (S&W) have two goals, one concerned with the evaluation of normative models, the other with development of prescriptive models. Individual differences have no bearing on normative models, which are justified by analysis, not consensus. Individual differences do, however, suggest where it is possible to try to improve human judgments and decisions through education rather than computers.
The heuristics-and-biases approach requires a clear separation of normative and descriptive models. Normative models cannot be justified by intuition or by consensus. The lack of consensus on normative theory is a problem for prescriptive approaches. One solution to the prescriptive problem is to argue contingently: if you are concerned about consequences, here is a way to make them better.
Cognitive biases that affect decision making may affect the decisions of citizens that influence public policy. To the extent that decisions follow principles other than maximizing utility for all, it is less likely that utility will be maximized, and the citizens will ultimately suffer the results. Here I outline some basic arguments concerning decisions by citizens, using voting as an example. I describe two types of values that may lead to sub-optimal consequences when these values influence political behavior: moralistic values (which people are willing to impose on others regardless of the consequences) and protected values (PVs, values protected from trade-offs). I present evidence against the idea that voting is expressive, i.e., that voters aim to express their moral views rather than to have an effect on outcomes. I show experimentally that PVs are often moralistic. Finally, I present some data showing that citizens think of their duty in a parochial way, neglecting out-groups. I conclude that moral judgments are important determinants of citizen behavior, that these judgments are subject to biases and based on moralistic values, and that, therefore, outcomes are probably less good than they could be.
The methods of experiments in the social sciences should depend on their purposes. To support this claim, I attempt to state some general principles relating method to purpose for three of the issues addressed. (I do not understand what is not a script, so I will omit that issue.) I illustrate my outline with examples from psychological research on judgment and decision making (JDM).
Commitment to a pattern of altruism or self-control may indeed be learnable and sometimes rational. Commitment may also result from illusions. In one illusion, people think that their present behavior causes their future behavior, or causes the behavior of others, when really only correlation is present. Another happy illusion is that morality and self-interest coincide, so that altruism appears self-interested.