This article outlines a theory of naive probability. According to the theory, individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an extensional way: They construct mental models of what is true in the various possibilities. Each model represents an equiprobable alternative unless individuals have beliefs to the contrary, in which case some models will have higher probabilities than others. The probability of an event depends on the proportion of models in which it occurs. The theory predicts several phenomena of reasoning about absolute probabilities, including typical biases. It correctly predicts certain cognitive illusions in inferences about relative probabilities. It accommodates reasoning based on numerical premises, and it explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem. Finally, it dispels some common misconceptions of probabilistic reasoning.
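The theory's core rule is extensional: list the possibilities as mental models, weight them equally unless beliefs dictate otherwise, and take the probability of an event to be the (weighted) proportion of models in which it holds. A minimal sketch of that rule in Python follows; this is an illustrative reconstruction under our own assumptions, not code from the article, and the function name and weighting scheme are hypothetical:

```python
from fractions import Fraction

def model_probability(models, event, weights=None):
    """Estimate P(event) as the weighted proportion of mental models
    in which the event holds. Each model is a dict of propositions."""
    if weights is None:
        # Equiprobability principle: each model counts equally.
        weights = [1] * len(models)
    total = sum(weights)
    hits = sum(w for m, w in zip(models, weights) if event(m))
    return Fraction(hits, total)

# Three fully explicit models of "A or B (or both)":
models = [
    {"A": True,  "B": False},
    {"A": False, "B": True},
    {"A": True,  "B": True},
]
# Under equiprobability, A holds in 2 of the 3 models.
print(model_probability(models, lambda m: m["A"]))  # -> 2/3
```

Passing explicit integer weights instead of the default stands in for the case where beliefs make some models more probable than others.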
Do moral appraisals shape judgments of intentionality? A traditional view is that individuals first evaluate whether an action has been carried out intentionally. Then they use this evaluation as input for their moral judgments. Recent studies, however, have shown that individuals’ moral appraisals can also influence their intentionality attributions. They attribute intentionality to the negative side effect of a given action, but not to the positive side effect of the same action. In three experiments, we show that this asymmetry is a robust effect that critically depends on the agent’s beliefs. The asymmetry is reduced when agents are described as not knowing that their action can bring about side effects, and is eliminated when they are deemed to hold a false belief about the consequences of their actions. These results suggest that both evaluative and epistemic considerations are used in intentionality attribution.
This paper replies to Politzer’s (2007) criticisms of the mental model theory of conditionals. It argues that the theory provides a correct account of negation of conditionals, that it does not provide a truth-functional account of their meaning, though it predicts that certain interpretations of conditionals yield acceptable versions of the ‘paradoxes’ of material implication, and that it postulates three main strategies for estimating the probabilities of conditionals.
Many fields of study have shown that group discussion generally improves reasoning performance for a wide range of tasks. This article shows that most of the population, including specialists, does not expect group discussion to be as beneficial as it is. Six studies asked participants to solve a standard reasoning problem—the Wason selection task—and to estimate the performance of individuals working alone and in groups. We tested samples of U.S., Indian, and Japanese participants, European managers, and psychologists of reasoning. Every sample underestimated the improvement yielded by group discussion. They did so even after the correct answer had been explained to them, or after they had had to solve the problem in groups. These mistaken intuitions could prevent individuals from making the best of institutions that rely on group discussion, from collaborative learning and work teams to deliberative assemblies.
People can reason about the preferences of other agents, and predict their behavior based on these preferences. Surprisingly, the psychology of reasoning has long neglected this fact, and focused instead on disinterested inferences, of which preferences are neither an input nor an output. This exclusive focus is untenable, though, as there is mounting evidence that reasoners take into account the preferences of others, at the expense of logic when logic and preferences point to different conclusions. This article summarizes the most recent account of how reasoners predict the behavior and attitude of other agents based on conditional rules describing actions and their consequences, and reports new experimental data about which assumptions reasoners retract when their predictions based on preferences turn out to be false.
This commentary questions Elqayam & Evans' (E&E's) claims that thinking tasks are doomed to have multiple normative readings and that only applied research allows normative evaluations. In fact, some tasks have just one undisputed normative reading, and not only pathological gamblers but also normal individuals sometimes need normative guidance. To conclude, normative evaluations are inevitable in the investigation of human thinking.
Four studies show that observers and readers imagine different alternatives to reality. When participants read a story about a protagonist who chose the more difficult of two tasks and failed, their counterfactual thoughts focused on the easier, unchosen task. But when they observed the performance of an individual who chose and failed the more difficult task, participants' counterfactual thoughts focused on alternative ways to solve the chosen task, as did the thoughts of individuals who acted out the event. We conclude that these role effects may occur because participants' attention is engaged when they experience or observe an event more than when they read about it.
We have found that moral considerations interact with belief ascription in determining intentionality judgment. We attribute this finding to a differential availability of plausible counterfactual alternatives that undo the negative side-effect of an action. We conclude that Knobe's thesis does not account for processes by which counterfactuals are generated and how these processes affect moral evaluations.
According to Kanazawa (Psychol Rev 111:512–523, 2004), general intelligence, which he treats as synonymous with abstract thinking, evolved specifically to allow our ancestors to deal with evolutionarily novel problems while conferring no advantage in solving evolutionarily familiar ones. We present a study whose results contradict Kanazawa's hypothesis by demonstrating that performance on an evolutionarily novel problem (an abstract reasoning task) predicts performance on an evolutionarily familiar problem (a social reasoning task).
An individual obtains an unfair benefit and faces the dilemma of either hiding it (to avoid being excluded from future interactions) or disclosing it (to avoid being discovered as a deceiver). In line with the target article, we expect that this dilemma will be solved by a fixed individual strategy rather than a case-by-case rational calculation.
I discuss an aspect of individual differences which has not been considered adequately in the target article, despite its potential role in the rationality debate. Besides having different intellectual abilities, different individuals may produce different erroneous responses to the same problem. In deductive reasoning, different response patterns contradict deterministic views of deductive inferences. In decision-making, variations in nonoptimal choice may explain successful collective actions.