Humans massively depend on communication with others, but this leaves them open to the risk of being accidentally or intentionally misinformed. To ensure that, despite this risk, communication remains advantageous, humans have, we claim, a suite of cognitive mechanisms for epistemic vigilance. Here we outline this claim and consider some of the ways in which epistemic vigilance works in mental and social life by surveying issues, research and theories in different domains of philosophy, linguistics, cognitive psychology and the social sciences.
By systematically biasing our beliefs, self-deception can endanger our ability to successfully convey our messages. It can also lead lies to degenerate into more severe damage to relationships. Accordingly, I suggest that the biases reviewed in the target article do not aim at self-deception but instead are the by-products of several other mechanisms: our natural tendency to self-enhance, the confirmation bias inherent in reasoning, and the lack of access to our unconscious minds.
Much evidence has accumulated in favor of such a dual view of reasoning. There is, however, some vagueness in the way the two systems are characterized. Instead of a principled distinction, we are presented with a bundle of contrasting features - slow/fast, automatic/controlled, explicit/implicit, associationist/rule-based, modular/central - that, depending on the specific dual process theory, are attributed more or less exclusively to one of the two systems. As Evans states in a recent review, “it would then be helpful to have some clear basis for this distinction”; he also suggests that “we might be better off talking about type 1 and type 2 processes” rather than systems. We share the intuitions that drove the development of dual system theories. Our goal here is to propose, in the same spirit, a principled distinction between two types of inferences: ‘intuitive inference’ and ‘reflective inference’. We ground this distinction in a massively modular view of the human mind where metarepresentational modules play an important role in explaining the peculiarities of human psychological evolution. We defend the hypothesis that the main function of reflective inference is to produce and evaluate arguments occurring in interpersonal communication. This function, we claim, helps explain important aspects of reasoning. We review some of the existing evidence and argue that it supports this approach.
Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this “selective laziness,” we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.
Having defended the usefulness of our definition of reasoning, we stress that reasoning is not only for convincing but also for evaluating arguments, and that as such it has an epistemic function. We defend the evidence supporting the theory against several challenges: people are good informal arguers, they reason better in groups, and they have a confirmation bias. Finally, we consider possible extensions, first in terms of process-level theories of reasoning, and second in terms of the effects of reasoning outside the lab.
The role of reasoning in our moral lives has been increasingly called into question by moral psychology. Not only do intuitions guide many of our moral judgments and decisions, with reasoning only finding post-hoc rationalizations, but reasoning can sometimes play a negative role, by finding excuses for our moral violations. These observations fit well with the argumentative theory of reasoning (Mercier H, Sperber D, Behav Brain Sci, in press-b), which claims that reasoning evolved to find and evaluate arguments in dialogic contexts. This theory explains the strong confirmation bias that reasoning displays when it produces arguments, which in turn explains its tendency to rationalize our decisions. But this theory also predicts that people should be able to evaluate arguments felicitously and that, as a result, people should reason better in groups, when they are confronted with other people’s arguments. Groups are indeed able to converge on better moral judgments. It is argued that reasoning and argumentation play an important role in our everyday moral lives, and a defense of the value of reasoning for moral change is offered.
Expert reasoning is responsible for some of the most stunning human achievements, but also for some of the most disastrous decisions ever made. The argumentative theory of reasoning has proven very effective at explaining the pattern of reasoning’s successes and failures. In the present article, it is expanded to account for expert reasoning. The argumentative theory predicts that reasoning should display a strong confirmation bias. If argument quality is not sufficiently high in a domain, the confirmation bias will make experts tap into their vast knowledge to defend whatever opinion they hold, with polarization and overconfidence as expected results. By contrast, experts should benefit even more from the power of group discussion to make the best of the confirmation bias—provided they genuinely disagree; otherwise polarization is again likely to ensue. When experts interact with laymen, other mechanisms can take the lead, in particular trust calibration and consistency checking. These mechanisms can yield poor outcomes if experts do not have a sustained interaction with laymen, or if the laymen have strong opinions when they witness a debate between experts. Seeing reasoning as a mechanism of epistemic vigilance aimed at finding and evaluating arguments helps make better sense of expert reasoning performance, be it in individual ratiocination, in debates with other experts, or in interactions with laymen.
Often, when several norms are present and may be in conflict, individuals will display a self-serving bias, privileging the norm that best serves their interests. Xiao and Bicchieri (J Econ Psychol 31(3):456–470, 2010) tested the effects of inequality on reciprocating behavior in trust games and showed that—when inequality increases—reciprocity loses its appeal. They hypothesized that self-serving biases in choosing to privilege a particular social norm occur when the choice of that norm is publicly justifiable as reasonable, even if not optimal for one of the parties. In line with the literature on motivated reasoning, this justification should find some degree of support among third parties. The results of our experimental survey of third parties support the hypothesis that biases are not always unilateral selfish assessments. Instead, they occur when the choice to favor a particular norm is supported by a shared sense that it is a reasonable and justifiable choice.
Theoreticians of deliberative democracy have sometimes found it hard to relate to the seemingly contradictory experimental results produced by psychologists and political scientists. We suggest that this problem may be alleviated by inserting a layer of psychological theory between the empirical results and the normative political theory. In particular, we present the argumentative theory of reasoning, which makes the observed pattern of findings more coherent. According to this theory, individual reasoning mechanisms work best when used to produce and evaluate arguments during a public deliberation. It predicts that when diverse opinions are discussed, group reasoning will outperform individual reasoning. It also predicts that individuals have a strong confirmation bias. When people reason either alone or with like-minded peers, this confirmation bias leads them to reinforce their initial attitudes, explaining individual and group polarization. We suggest that the failures of reasoning are most likely to be remedied at the collective rather than at the individual level.
Reasoning, defined as the production and evaluation of reasons, is a central process in science. The dominant view of reasoning, both in the psychology of reasoning and in the psychology of science, is of a mechanism with an asocial function: bettering the beliefs of the lone reasoner. Many observations, however, are difficult to reconcile with this view of reasoning; in particular, reasoning systematically searches for reasons that support the reasoner’s initial beliefs, and it only evaluates these reasons cursorily. By contrast, reasoners are well able to evaluate others’ reasons: accepting strong arguments and rejecting weak ones. The argumentative theory of reasoning accounts for these traits of reasoning by postulating that the evolved function of reasoning is to argue: to find arguments to convince others and to change one’s mind when confronted with good arguments. Scientific reasoning, however, is often described as being at odds with such an argumentative mechanism: scientists are supposed to reason objectively on their own, and to be pigheaded when their theories are challenged, even by good arguments. In this article, we review evidence showing that scientists, when reasoning, are subject to the same biases as lay people, while being able to change their minds when confronted with good arguments. We conclude that the argumentative theory of reasoning explains well key features of scientists’ reasoning and that differences in the way scientists and laypeople reason result from the institutional framework of science.
Because reasoning allows us to justify our beliefs and to evaluate these justifications, it is central to folk epistemology. Following Sperber, and contrary to classical views, it will be argued that reasoning evolved not to complement individual cognition but as an argumentative device. This hypothesis is more consistent with the prevalence of the confirmation and disconfirmation biases. It will be suggested that these biases render the individual use of reasoning hazardous, but that when reasoning is used in its natural, argumentative, context, they can represent a smart way to divide labor without losing epistemic value.
We elaborate on the approach to syllogistic reasoning based on “case identification” (Stenning & Oberlander, 1995; Stenning & Yule, 1997). It is shown that this can be viewed as the formalisation of a method of proof that dates back to Aristotle, namely proof by exposition (ecthesis), and that there are traces of this method in the strategies described by a number of psychologists, from Störring (1908) to the present day. We hypothesised that by rendering individual cases explicit in the premises, the chance that reasoners would engage in a proof by exposition would be enhanced, and thus performance improved. To do so, we used syllogisms with singular premises (e.g., “this X is Y”). This resulted in a uniform increase in performance as compared to performance on the associated standard syllogisms. These results cannot be explained by the main theories of syllogistic reasoning in their current state.
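To make the method concrete, here is a schematic illustration of proof by exposition, not the authors' actual materials: the syllogistic figure and the letters A, B, C are chosen purely for exposition. To derive “Some A are C” from “All B are A” and “Some B are C”, ecthesis singles out a witness case:

\[
\text{All } B \text{ are } A,\quad \text{Some } B \text{ are } C
\;\Longrightarrow\; \text{take a particular } b \text{ that is both } B \text{ and } C
\;\Longrightarrow\; b \text{ is } A \text{ and } b \text{ is } C
\;\Longrightarrow\; \text{Some } A \text{ are } C.
\]

A singular premise of the form “this B is C” presents such a witness explicitly from the outset, which is presumably why it should make a proof by exposition more likely.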
Many fields of study have shown that group discussion generally improves reasoning performance for a wide range of tasks. This article shows that most of the population, including specialists, does not expect group discussion to be as beneficial as it is. Six studies asked participants to solve a standard reasoning problem—the Wason selection task—and to estimate the performance of individuals working alone and in groups. We tested samples of U.S., Indian, and Japanese participants, European managers, and psychologists of reasoning. Every sample underestimated the improvement yielded by group discussion. They did so even after the correct answer had been explained to them, or after they had themselves solved the problem in groups. These mistaken intuitions could prevent individuals from making the best of institutions that rely on group discussion, from collaborative learning and work teams to deliberative assemblies.
As the title “Doing without Concepts” suggests, Edouard Machery argues that psychologists should stop using the notion of concept because: (1) the only interesting generalizations about concepts can be drawn at the level of types of concepts (prototypes, exemplars, and theories) and not at the level of concepts in general; and (2) competences such as categorization or induction can rely on these different types of concepts (there is no one-to-one correspondence between type of concept and competence). I try to make the point that these two elements are not wholly compatible. If several types of concepts are used to perform a given competence (point (2)), then they have to be well regulated (e.g., which type is activated when, which type wins in case of conflict). These regulatory mechanisms can then be the basis for interesting generalizations (against point (1)). On the other hand, it is possible that point (1) applies to competences as well: that there are no interesting generalizations to be drawn about categorization in general. In that case, different types of categorization are likely to be underlain by different types of concepts (against point (2)). Even though the arguments laid out in the book are forceful and well supported by empirical evidence, a more positive thesis might have been both more successful rhetorically and more interesting scientifically.
How do people find arguments while engaged in a discussion? Following an analogy with visual search, a mechanism that performs this task is described. It is a metarepresentational device that examines representations in a mostly serial manner until it finds a good enough argument supporting one’s position. It is argued that the mechanism described in dual process theories as ‘system 2’, or analytic reasoning, fulfills these requirements. This provides support for the hypothesis that reasoning serves an argumentative function.
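As a rough illustration of the satisficing, mostly serial search the abstract describes, the following sketch examines candidate representations one at a time and stops at the first argument that supports the position well enough. It is not the author's model; the names (find_argument, representations, supports, strength, threshold) are hypothetical stand-ins.

    # Minimal sketch of a satisficing, mostly serial argument search.
    # `representations`, `supports`, `strength`, and `threshold` are hypothetical.
    def find_argument(representations, position, supports, strength, threshold=0.7):
        for r in representations:  # examine candidate representations serially
            if supports(r, position) and strength(r, position) >= threshold:
                return r           # good enough argument found: stop searching
        return None                # search exhausted without a satisfactory argument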
Coherence plays an important role in psychology. In this article, I suggest that coherence takes two main forms in the human cognitive system. The first belongs to ‘system 1’. It relies on the degree of coherence between different representations to regulate them, without coherence itself being represented. By contrast, other mechanisms, belonging to ‘system 2’, allow humans to represent the degree of coherence between different representations and to draw inferences from it. It is suggested that the mechanisms of explicit coherence evaluation have social functions. They are used as means of epistemic vigilance—to evaluate what other people tell us. They can also be turned inwards to examine the coherence of our own beliefs. Their function is then to minimize the chances that we are perceived as being incoherent. Evidence from different domains of psychology is briefly reviewed in support of these hypotheses.
Many experiments suggest that participants are more critical of arguments that challenge their views or that come from untrustworthy sources. However, other results suggest that this might not be true of demonstrative arguments. A series of four experiments tested whether people are influenced by two factors when they evaluate demonstrative arguments: how confident they are in the answer being challenged by the argument, and how much they trust the source of the argument. Participants were not affected by their confidence in the answer challenged by the argument. By contrast, they were sometimes affected by their trust in the argument’s source. Analyses of reaction times and transfer problems suggest that source trustworthiness did not directly affect argument evaluation, but affected instead the number of times the participants considered the arguments. Our results thus suggest that people can evaluate demonstrative arguments objectively. In conclusion, we defend the hypothesis that people might also be able to evaluate non-demonstrative arguments objectively. These results support the predictions of the argumentative theory of reasoning.
In studying how lay people evaluate arguments, psychologists have typically focused on logical form and content. This emphasis has masked an important yet underappreciated aspect of everyday argument evaluation: social cues to argument strength. Here we focus on the ways in which observers evaluate arguments by the reaction they evoke in an audience. This type of evaluation is likely to occur either when people are not privy to the content of the arguments or when they are not expert enough to appropriately evaluate it. Four experiments explore cues that participants might take into account in evaluating arguments from the reaction of the audience. They demonstrate that participants can use audience motivation, expertise, and size as cues to argument quality. By contrast, we find no evidence that participants take audience diversity into account.
In many circumstances we tend to assume that other people believe or desire what we ourselves believe or desire. This has been labeled the 'egocentric bias.' This is not to say that we systematically fail to understand other people and forget that they can have a different perspective. If that were the case, it would be highly difficult, if not impossible, to communicate, cooperate, or compete with them. In those situations, we need to take the other person's perspective and to inhibit our own. But can the other's perspective furtively intrude even when no reason seems to require it, or even when it is detrimental to us? We shall review a series of findings on what has been called the altercentric bias (Samson et al., 2010; Apperly, 2011): other people's beliefs can unduly influence us even when they are wrong. At first sight, the altercentric bias questions first-person priority. In particular, it may appear incompatible with simulation-based accounts of third-person mindreading. We shall argue, on the contrary, that the simulationist framework enables confusions between self and other that go both ways: taking one's own beliefs for the other's beliefs (egocentric bias) and, vice versa, taking the other's beliefs for one's own (altercentric bias). We shall then see how the risk of such confusion may be disadvantageous from an evolutionary perspective, thus questioning the evolutionary plausibility of the simulation theory.
Although there might seem to be a natural continuity and interplay between the cognitive sciences and the social sciences, the integration of the two has, on the whole, been fraught with difficulties. In some areas the transition was relatively smooth. For instance, political psychology is now a well-recognized branch both of psychology and of political science. In economics, things have been more difficult, with the entrenched assumption of a perfectly rational homo economicus, but behavioral economics is now well recognized, and one of the founders of the field, Daniel Kahneman, went on to win a Nobel Prize. Social and cognitive sciences have proven more difficult to bridge in anthropology and sociology. Most of the efforts have been pursued—and resisted—in anthropology (although, for sociology, see Clément and Kaufmann 2011). At first, scholars attempted to import the methods of evolutionary biology straight into the study of culture (Dawkins 1976; Lumsden and Wilson 1981). This prom ...
The Pigeonhole Principle states that if n items are sorted into m categories and if n > m, then at least one category must contain more than one item. For instance, if 22 pigeons are put into 17 pigeonholes, at least one pigeonhole must contain more than one pigeon. This principle seems intuitive, yet when told about a city with 220,000 inhabitants none of whom has more than 170,000 hairs on their head, many people think that it is merely likely that two inhabitants have the exact same number of hairs. This failure to apply the Pigeonhole Principle might be due to the large numbers used, or to the cardinal rather than nominal presentation of these numbers. We show that performance improved both when the numbers were presented nominally and, to a lesser extent, when they were small. We discuss potential interpretations of these results in terms of intuition and reasoning.
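For readers who want the underlying counting argument spelled out, the principle can be stated formally; the formalization below is ours, not part of the original abstract, and uses only the numbers given there.

\[
\text{If } f : A \to B \text{ and } |A| > |B|, \text{ then there exist } a_1 \neq a_2 \in A \text{ such that } f(a_1) = f(a_2).
\]

In the hair example, A is the set of 220,000 inhabitants, B the set of 170,001 possible hair counts (0 through 170,000), and f assigns to each inhabitant their number of hairs; since 220,000 > 170,001, at least two inhabitants must have exactly the same number of hairs, with certainty rather than mere likelihood.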