Previous research has discovered a curious phenomenon: groups cooperate less than individuals in a deterministic prisoner’s dilemma game, but cooperate more than individuals when uncertainty is introduced into the game. We conducted two studies to examine three possible processes that might drive groups to be more cooperative than individuals in reducing risks: group risk concern, group cooperation expectation, and pressure to conform to social norms. We found that ex-post guilt aversion and ex-post blame avoidance cause group members to be more risk concerned than individuals under uncertainty. These concerns drive groups to choose the cooperation (and risk-reduction) strategy more frequently than individuals. Groups also have higher cooperation expectations for their corresponding groups than individuals have for their corresponding individuals. We found no evidence of pressure to conform to social norms driving groups to be more cooperative than individuals.
A two-systems model of moral judgment proposed by Joshua Greene holds that deontological moral judgments (those based on simple rules concerning action) are often primary and intuitive, and these intuitive judgments must be overridden by reflection in order to yield utilitarian (consequence-based) responses. For example, one dilemma asks whether it is right to push a man onto a track in order to stop a trolley that is heading for five others. Those who favor pushing, the utilitarian response, usually take longer to respond than those who oppose pushing. Greene's model assumes an asymmetry between the processes leading to different responses. We consider an alternative model based on the assumption of symmetric conflict between two response tendencies. By this model, moral dilemmas differ in the "difficulty" of giving a utilitarian response and subjects differ in the "ability" (tendency) to give such responses. (We could just as easily define ability in terms of deontological responses, as the model treats the responses symmetrically.) We thus make an analogy between moral dilemmas and tests of cognitive ability, and we apply the Rasch model, developed for the latter, to estimate the ability-difficulty difference for each dilemma for each subject. We apply this approach to five data sets collected for other purposes by three of the co-authors. Response time (RT), including yes and no responses, is longest when difficulty and ability match, because the subject is indifferent between the two responses, which also have the same RT at this point. When we consider yes/no responses, RT is longest when the model predicts that the response is improbable. Subjects with low ability take longer on the "easier" dilemmas, and vice versa.
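The Rasch model invoked here has a standard logistic form. A minimal sketch, assuming the conventional parameterization (ability and difficulty on the same logit scale; the application to utilitarian responses is the abstract's, the function name is mine):

```python
import math

def rasch_p(ability, difficulty):
    """Rasch model: probability of a 'yes' (here, utilitarian) response
    as a logistic function of the ability-difficulty difference."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability matches difficulty, the subject is indifferent (p = 0.5),
# which is where the model predicts the longest response times.
p_indifferent = rasch_p(0.0, 0.0)  # 0.5
```

The symmetric treatment of the two responses falls out directly: the deontological-response probability is simply 1 − p, so defining "ability" in terms of either response yields the same model.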
In folk psychology and some academic psychology, utilitarian thinking is associated with coldness and deontological thinking is associated with emotion. I suggest, mostly through personal examples, that these associations are far from perfect. Utilitarians experience emotions, which sometimes derive from, and sometimes cause or reinforce, their moral judgments.
Cognitive biases that affect decision making may affect the decisions of citizens that influence public policy. To the extent that decisions follow principles other than maximizing utility for all, it is less likely that utility will be maximized, and the citizens will ultimately suffer the results. Here I outline some basic arguments concerning decisions by citizens, using voting as an example. I describe two types of values that may lead to sub-optimal consequences when these values influence political behavior: moralistic values (which people are willing to impose on others regardless of the consequences) and protected values (PVs, values protected from trade-offs). I present evidence against the idea that voting is expressive, i.e., that voters aim to express their moral views rather than to have an effect on outcomes. I show experimentally that PVs are often moralistic. Finally, I present some data showing that citizens think of their duty in a parochial way, neglecting out-groups. I conclude that moral judgments are important determinants of citizen behavior, that these judgments are subject to biases and based on moralistic values, and that, therefore, outcomes are probably less good than they could be.
When people tend toward a political decision, such as voting for the Republican Party, they are often attracted to this decision by one issue, such as the party’s stance on abortion, but then they come to see other issues, such as the party’s stance on taxes, as supporting their decision, even if they would not have thought so in the absence of the decision. I demonstrate this phenomenon with opinion poll data and with an experiment done on the World Wide Web using hypothetical candidates. For the hypothetical candidates, judgments about whether a candidate’s position on issue A favors the candidate or the opponent are correlated with judgments about other positions taken by the candidate (as determined from other hypothetical candidates). This effect is greater in those subjects who rarely make conflicting judgments, in which one issue favors a candidate and another favors the opponent. In a few cases, judgments even reverse, so that a position that is counted as a minus for other candidates becomes a plus for a favored candidate. Reversals in the direction of a candidate’s position are more likely when the candidate is otherwise favored. The experiment provides a new kind of demonstration of “belief overkill,” the tendency to bring all arguments into line with a favored conclusion.
The heuristics-and-biases approach requires a clear separation of normative and descriptive models. Normative models cannot be justified by intuition or by consensus. The lack of consensus on normative theory is a problem for prescriptive approaches. One solution to the prescriptive problem is to argue contingently: if you are concerned about consequences, here is a way to make them better.
Commitment to a pattern of altruism or self-control may indeed be learnable and sometimes rational. Commitment may also result from illusions. In one illusion, people think that their present behavior causes their future behavior, or causes the behavior of others, when really only correlation is present. Another happy illusion is that morality and self-interest coincide, so that altruism appears self-interested.
The methods of experiments in the social sciences should depend on their purposes. To support this claim, I attempt to state some general principles relating method to purpose for three of the issues addressed. (I do not understand what is not a script, so I will omit that issue.) I illustrate my outline with examples from psychological research on judgment and decision making (JDM).
Stanovich & West (S&W) have two goals, one concerned with the evaluation of normative models, the other with development of prescriptive models. Individual differences have no bearing on normative models, which are justified by analysis, not consensus. Individual differences do, however, suggest where it is possible to try to improve human judgments and decisions through education rather than computers.
College-student subjects made notes about the morality of early abortion, as if they were preparing for a class discussion. Analysis of the quality of their arguments suggests that a distinction can be made between arguments based on well-supported warrants and those based on warrants that are easily criticised. The subjects also evaluated notes made by other, hypothetical, students preparing for the same discussion. Most subjects evaluated the set of arguments as better when the arguments were all on one side than when both sides were presented, even when the hypothetical student was on the opposite side of the issue from the evaluator. Subjects who favoured one-sidedness also tended to make one-sided arguments themselves. The results suggest that “myside bias” is partly caused by beliefs about what makes thinking good.
Critical thinking about moral decisions considers the consequences of options for the achievement of people's goals. Attempts to think critically lead to error and bias, so intuitive rules are needed to guard against these errors and to save time. Intuitive rules, however, lead to errors and biases of their own. I propose that students be taught to approximate critical thinking itself and that they learn rules of thumb to guard against its pitfalls. In particular, students need to learn certain powerful devices of consequentialist reasoning, such as consideration of precedent setting and of the possibility of error in thinking itself. They also need to learn about the common biases of thinking, especially the bias to favour what one already believes, or what is nearby in time and space.
A second-order probability Q(P) may be understood as the probability that the true probability of something has the value P. “True” may be interpreted as the value that would be assigned if certain information were available, including information from reflection, calculation, other people, or ordinary evidence. A rule for combining evidence from two independent sources may be derived, if each source i provides a function Q_i(P). Belief functions of the sort proposed by Shafer (1976) also provide a formula for combining independent evidence, Dempster's rule, and a way of representing ignorance of the sort that makes us unsure about the value of P. Dempster's rule is shown to be at best a special case of the rule derived in connection with second-order probabilities. Belief functions thus represent a restriction of a full Bayesian analysis.
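Dempster's rule itself can be sketched concretely. The illustration below (the function name and the two-hypothesis frame are my own, not from the paper) combines two independent mass functions and renormalizes away the mass falling on conflicting (empty) intersections:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions (dicts mapping
    frozenset -> mass) by Dempster's rule of combination."""
    combined = {}
    conflict = 0.0  # mass K assigned to empty intersections
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Renormalize by the non-conflicting mass (1 - K)
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two independent sources over a two-element frame {h, not-h}:
H = frozenset({"h"})
FRAME = frozenset({"h", "not-h"})  # mass on the whole frame represents ignorance
m1 = {H: 0.6, FRAME: 0.4}
m2 = {H: 0.5, FRAME: 0.5}
m = dempster_combine(m1, m2)  # m[H] = 0.8, m[FRAME] = 0.2
```

Note how the mass left on the whole frame expresses exactly the kind of ignorance about P that the abstract describes; the paper's point is that this representation is a restricted special case of the full second-order Bayesian treatment.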