We examine in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to structurally match arguments that are widely accepted. This suggests that it is not the form of the arguments as such that is problematic, but rather something about the content of the examples with which these fallacies are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
Argumentation is pervasive in everyday life. Understanding what makes a strong argument is therefore of both theoretical and practical interest. One factor that seems intuitively important to the strength of an argument is the reliability of the source providing it. Whilst traditional approaches to argument evaluation are silent on this issue, the Bayesian approach to argumentation (Hahn & Oaksford, 2007) is able to capture important aspects of source reliability. In particular, the Bayesian approach predicts that argument content and source reliability should interact to determine argument strength. In this paper, we outline the approach and then demonstrate the importance of source reliability in two empirical studies. These experiments show the multiplicative relationship between the content and the source of the argument predicted by the Bayesian framework.
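To make the predicted interaction concrete, here is a minimal sketch (our illustration, not the model reported in the paper; all parameter values are assumed) of how source reliability can modulate the evidential force of argument content in a Bayesian update:

```python
# Minimal sketch of the content-by-reliability interaction (illustrative
# parameterisation, not the paper's model). A source of reliability r
# asserts evidence e for hypothesis H; with probability 1 - r the source
# is assumed to report at random.
def posterior(prior, p_e_given_h, p_e_given_not_h, r):
    like_h = r * p_e_given_h + (1 - r) * 0.5
    like_not_h = r * p_e_given_not_h + (1 - r) * 0.5
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Diagnostic content (0.9 vs. 0.1) from a reliable source moves belief a lot;
# the same content from an unreliable source barely moves it:
print(posterior(0.5, 0.9, 0.1, 0.9))  # ~0.86
print(posterior(0.5, 0.9, 0.1, 0.2))  # ~0.58
```

The same diagnostic content thus produces very different belief change depending on r, which is the multiplicative, interactive relationship at issue.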
Norms (that is, specifications of what we ought to do) play a critical role in the study of informal argumentation, as they do in studies of judgment, decision-making and reasoning more generally. Specifically, they guide a recurring theme: are people rational? Though rules and standards have been central to the study of reasoning, and behavior more generally, there has been little discussion within psychology about why (or indeed if) they should be considered normative, despite the considerable philosophical literature that bears on this topic. In the current paper, we ask what makes something a norm, with consideration both of norms in general and a specific example: norms for informal argumentation. We conclude that it is both possible and desirable to invoke norms for rational argument, and that a Bayesian approach provides solid normative principles with which to do so.
The appeal to expert opinion is an argument form that uses the verdict of an expert to support a position or hypothesis. A previous scheme-based treatment of the argument form is formalized within a Bayesian network that is able to capture the critical aspects of the argument form, including the central considerations of the expert's expertise and trustworthiness. We propose this as an appropriate normative framework for the argument form, enabling the development and testing of quantitative predictions as to how people evaluate this argument, suggesting that such an approach might be beneficial to argumentation research generally. We subsequently present two experiments as an example of the potential for future research in this vein, demonstrating that participants' quantitative ratings of the convincingness of a proposition that has been supported with an appeal to expert opinion were broadly consistent with the predictions of the Bayesian model.
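For readers unfamiliar with the formalism, the following toy calculation (our own; the network structure and all probabilities are assumptions for illustration, not the paper's parameterisation) shows the basic mechanics of summing out a reliability node when updating on an expert's assertion:

```python
# Toy appeal-to-expert-opinion network (illustrative numbers only).
# H = hypothesis is true, R = expert is reliable (expertise and
# trustworthiness collapsed into a single node), E = expert asserts H.
p_h = 0.5   # prior on the hypothesis
p_r = 0.8   # prior that the expert is reliable

# P(E | H, R): a reliable expert mostly tracks the truth;
# an unreliable one asserts H at chance.
p_e = {(True, True): 0.95, (True, False): 0.5,
       (False, True): 0.05, (False, False): 0.5}

def joint(h, r):
    return (p_h if h else 1 - p_h) * (p_r if r else 1 - p_r) * p_e[(h, r)]

num = joint(True, True) + joint(True, False)           # P(H, E)
den = num + joint(False, True) + joint(False, False)   # P(E)
print(num / den)  # ~0.86: the assertion raises belief in H from 0.5
```

Splitting R into separate expertise and trustworthiness nodes, as in the scheme-based treatment the abstract describes, changes the bookkeeping but not this basic computation.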
In this paper, it is argued that the most fruitful approach to developing normative models of argument quality is one that combines the argumentation scheme approach with Bayesian argumentation. Three sample argumentation schemes from the literature are discussed: the argument from sign, the argument from expert opinion, and the appeal to popular opinion. Limitations of the scheme-based treatment of these argument forms are identified and it is shown how a Bayesian perspective may help to overcome these. At the same time, the contributions of the standard scheme-based approach are highlighted, and it is argued that only a combination of the insights of different traditions will yield a complete normative theory of argument quality.
Possible measures to mitigate climate change require global collective actions whose impacts will be felt by many, if not all. Implementing such actions requires successful communication of the reasons for them, and hence the underlying climate science, to a degree that far exceeds typical scientific issues which do not require large-scale societal response. Empirical studies have identified factors, such as the perceived level of consensus in scientific opinion and the perceived reliability of scientists, that can limit people's trust in science communicators and their subsequent acceptance of climate change claims. Little consideration has been given, however, to recent formal results within philosophy concerning the relationship between truth, the reliability of evidence sources, the coherence of multiple pieces of evidence/testimonies, and the impact of independence between sources of evidence. This study draws on these results to evaluate exactly what has been established in the empirical literature about the factors that bias the public's reception of scientific communications about climate change.
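The flavour of those formal results can be conveyed with a small sketch (entirely our illustration; the reliability model and all numbers are assumptions): two independent, partially reliable sources corroborating the same claim raise belief in it more than either alone, provided their errors are independent.

```python
# Corroboration by independent, partially reliable sources (toy model).
# A source of reliability r reports truthfully with probability r and
# otherwise asserts the claim at random.
def update(prior, r):
    like_h = r + (1 - r) * 0.5        # P(asserts H | H)
    like_not_h = (1 - r) * 0.5        # P(asserts H | not-H)
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

p = 0.3                          # sceptical prior in the claim
p_one = update(p, r=0.6)         # after one report
p_two = update(p_one, r=0.6)     # after a second, independent report
print(p_one, p_two)              # ~0.63, then ~0.87
```

If the two reports were not independent (say, both sources copied the same outlet), the second update would not be licensed, which is the sense in which independence between sources matters.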
The notion of “the burden of proof” plays an important role in real-world argumentation contexts, in particular in law. It has also been given a central role in normative accounts of argumentation, and has been used to explain a range of classic argumentation fallacies. We argue that in law the goal is to make practical decisions, whereas in critical discussion the goal is frequently simply to increase or decrease degree of belief in a proposition. In the latter case, it is not necessarily important whether that degree of belief exceeds a particular threshold (e.g., ‘reasonable doubt’). We explore the consequences of this distinction for the role that the “burden of proof” has played in argumentation and in theories of fallacy.
In this article, we argue for the general importance of normative theories of argument strength. We also provide some evidence based on our recent work on the fallacies as to why Bayesian probability might, in fact, be able to supply such an account. In the remainder of the article we discuss the general characteristics that make a specifically Bayesian approach desirable, and critically evaluate putative flaws of Bayesian probability that have been raised in the argumentation literature.
Although argumentation plays an essential role in our lives, there is no integrated area of research on the psychology of argumentation. Instead, research on argumentation is conducted in a number of separate research communities that are spread across disciplines and have only limited interaction. With a view to bridging these different strands, we first distinguish between three meanings of the word 'argument': argument as a reason, argument as a structured sequence of reasons and claims, and argument as a social exchange. All three meanings are integral to a complete understanding of human reasoning and cognition. Cognitive psychological research on argumentation has focused mostly on the first and second of these meanings, so we present perspectives on argumentation from outside of cognitive psychology, which focus on the second and third. Specifically, we give an overview of the methods, goals, and disciplinary backgrounds of research on the production, the analysis, and the evaluation of arguments. Finally, in introducing the experimental studies included in this special issue, which were conducted by researchers from a range of theoretical backgrounds, we underline the breadth of argumentation research as well as stress opportunities for mutual awareness and integration.
The statistics of small samples are often quite different from those of large samples, and this needs to be taken into account in assessing the rationality of human behavior. Specifically, in evaluating human responses to environmental statistics, it is the effective environment that matters; that is, the environment actually experienced by the agent needs to be considered, not simply long-run frequencies. Significant deviations from long-run statistics may arise through experiential limitations of the agent that stem from resource constraints and/or information-processing bounds. The article draws together recent work from a number of areas in judgment and decision making, ranging from randomness perception (Hahn & Warren) and information sampling (Hertwig & Pleskac; Kareev et al.) to the consequences of choice for exploration or exploitation (e.g., Denrell), to demonstrate how proper consideration of these deviations leads to a reevaluation of behaviors that are otherwise deemed irrational.
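One concrete instance of the effective-environment point (our illustration of the waiting-time asymmetry that the randomness-perception work draws on): in short windows of fair coin flips, HHHT is more likely than HHHH to occur at least once, even though both strings have probability 1/16 at any fixed position.

```python
# Small-sample illustration: occurrence probabilities of two equally
# "likely" patterns differ in finite windows, because HHHH overlaps
# with itself and HHHT does not.
import random

def p_contains(pattern, window=10, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seq = ''.join(rng.choice('HT') for _ in range(window))
        hits += pattern in seq
    return hits / trials

print(p_contains('HHHH'))  # noticeably lower ...
print(p_contains('HHHT'))  # ... than this
```

At any fixed four-flip position both strings are equally probable; it is the finite, experienced window that makes them differ, which is the sense in which the effective environment diverges from long-run statistics.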
There has been much interest in group judgment and the so-called 'wisdom of crowds'. In many real-world contexts, members of groups not only share a dependence on external sources of information, but they also communicate with one another, thus introducing correlations among their responses that can diminish collective accuracy. This has long been known, but it has not, to date, been examined to what extent different kinds of communication networks may give rise to systematically different effects on accuracy. We argue that equations that relate group accuracy, individual accuracy, and group diversity are useful theoretical tools for understanding group performance in the context of research on group structure. In particular, these equations may serve to identify the kinds of group structure that improve individual accuracy without thereby excessively diminishing diversity, so that the net effect is an improvement even at the level of collective accuracy. Two experiments are reported in which two structures are investigated from this perspective. It is demonstrated that the more constrained network outperforms the network with a free flow of information.
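One well-known identity from this family of equations (the 'diversity prediction' decomposition; we are not claiming it is the exact formulation used in the paper) can be checked numerically: the squared error of the group average equals the average individual squared error minus the variance of the individual estimates.

```python
# Crowd error = average individual error - diversity (numerical check).
import statistics as st

truth = 10.0
estimates = [8.0, 9.5, 12.0, 11.0]

crowd = st.fmean(estimates)
collective_error = (crowd - truth) ** 2
individual_error = st.fmean([(x - truth) ** 2 for x in estimates])
diversity = st.fmean([(x - crowd) ** 2 for x in estimates])

print(collective_error, individual_error - diversity)  # identical: 0.015625
```

Communication that homogenises responses lowers the diversity term, so it can worsen collective accuracy even while improving each individual's accuracy, which is precisely the trade-off described above.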
In this chapter, we outline the range of argument forms involving causation that can be found in everyday discourse. We also survey empirical work concerned with the generation and evaluation of such arguments. This survey makes clear that there is presently no unified body of research concerned with causal argument. We highlight the benefits of a unified treatment both for those interested in causal cognition and those interested in argumentation, and identify the key challenges that must be met for a full understanding of causal argumentation.
In this manuscript, we study individual variation in the interpretation of conditionals by establishing individual profiles of the participants based on their behavioral responses and reflective attitudes. To investigate the participants’ reflective attitudes, we introduce a new experimental paradigm called the Scorekeeping Task, and a Bayesian mixture model tailored to analyze the data. The goal is thereby to identify the participants who follow the Suppositional Theory of conditionals and those who follow Inferentialism, and to investigate their performance on the uncertain and-to-if inference task.
One of the most striking features of this body of work is the detail with which behavior on logical reasoning tasks can now be predicted and explained. This detail is surprising, given the state of the field 10 to 15 years ago, and it has been brought about by a theoretical program that largely ignores consideration of cognitive processes, that is, any kind of internal behavior that generates overt responding. It seems that an increase in explanatory power can be achieved by restricting a psychological theory.
Normative theories provide essential tools for understanding behaviour, not just in reasoning, judgement, and decision-making, but in many other areas of cognition as well; and their utility extends to the development of process theories. Furthermore, the way these tools are used has nothing to do with the is-ought fallacy. There therefore seems to be no basis for the claim that research would be better off without them.
Slippery slope arguments (SSAs) have often been viewed as inherently weak arguments, to be classified together with traditional fallacies of reasoning and argumentation such as circular arguments and arguments from ignorance. Over the last two decades, several philosophers have taken a kinder view, often providing historical examples of the kind of gradual change on which slippery slope arguments rely. Against this background, Enoch (2001, Oxford Journal of Legal Studies 21(4), 629–647) presented a novel argument against SSA use that itself invokes a slippery slope. Specifically, he argued that the very reasons that can make SSAs strong arguments mean that we should be poor at abiding by the distinction between good and bad SSAs, making SSAs inherently undesirable. We argue that Enoch’s meta-level SSA fails on both conceptual and empirical grounds.
In a recent article in Argumentation, O’Keefe (Argumentation 21:151–163, 2007) observed that the well-known ‘framing effects’ in the social psychological literature on persuasion are akin to traditional fallacies of argumentation and reasoning and could be exploited for persuasive success in a way that conflicts with principles of responsible advocacy. Positively framed messages (“if you take aspirin, your heart will be more healthy”) differ in persuasive effect from negative frames (“if you do not take aspirin, your heart will be less healthy”), despite containing ‘equivalent’ content. This poses a potential problem, because people might be unduly (and unsuspectingly) influenced by mere presentational differences. By drawing on recent cognitive psychological work on framing effects in choice and decision-making paradigms, however, we show that establishing whether two arguments are substantively equivalent (and hence, whether there is any normative requirement for them to be equally persuasive) is a difficult task. Even arguments that are logically equivalent may not be information equivalent. The normative implications of this for both speakers and listeners are discussed.
Our beliefs and opinions are shaped by others, making our social networks crucial in determining what we believe to be true. Sometimes this is for the good because our peers help us form a more accurate opinion. Sometimes it is for the worse because we are led astray. In this context, we address via agent-based computer simulations the extent to which patterns of connectivity within our social networks affect the likelihood that initially undecided agents in a network converge on a true opinion following group deliberation. The model incorporates a fine-grained and realistic representation of belief and trust, and it allows agents to consult outside information sources. We study a wide range of network structures and provide a detailed statistical analysis concerning the exact contribution of various network metrics to collective competence. Our results highlight and explain the collective risks involved in an overly networked or partitioned society. Specifically, we find that 96% of the variation in collective competence across networks can be attributed to differences in amount of connectivity and clustering, which are negatively correlated with collective competence. A study of bandwagon or “group think” effects indicates that both connectivity and clustering increase the probability that the network, wholly or partly, locks into a false opinion. Our work is interestingly related to Gerhard Schurz’s work on meta-induction and can be seen as broadly addressing a practical limitation of his approach.
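To convey the flavour of such simulations, here is a deliberately crude sketch (ours; it omits the fine-grained belief and trust representation of the actual model, and the update rule and all parameters are assumptions). It illustrates only the kind of quantity, collective competence, that the network metrics are related to; which network wins depends entirely on the parameters.

```python
# Toy deliberation model: agents start undecided about a true proposition,
# receive noisy outside evidence each round, and mix their credence with
# the average credence of their network neighbours.
import random
import networkx as nx

def collective_competence(graph, p_correct=0.6, rounds=50, seed=1):
    rng = random.Random(seed)
    cred = {n: 0.5 for n in graph}          # all agents start undecided
    for _ in range(rounds):
        for n in graph:
            signal = 1.0 if rng.random() < p_correct else 0.0  # noisy evidence
            neigh = list(graph.neighbors(n))
            social = sum(cred[m] for m in neigh) / len(neigh) if neigh else cred[n]
            cred[n] = 0.5 * (0.7 * cred[n] + 0.3 * signal) + 0.5 * social
    # share of agents ending on the true side of 0.5
    return sum(c > 0.5 for c in cred.values()) / len(cred)

dense = nx.complete_graph(30)   # high connectivity and clustering
ring = nx.cycle_graph(30)       # sparse, low clustering
print(collective_competence(dense), collective_competence(ring))
```

With many such runs, one can regress the resulting competence scores on network metrics such as nx.average_clustering(graph) and nx.density(graph), which is the style of analysis the abstract reports.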
In this paper, it is argued that Ferguson’s (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson’s proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).
Critical (necessary or sufficient) features in categorisation have a long history, but the empirical evidence makes their existence questionable. Nevertheless, there are some cases that suggest critical feature effects. The purpose of the present work is to offer some insight into why classification decisions might misleadingly appear as if they involve critical features. Utilising Tversky's (1977) contrast model of similarity, we suggest that when an object has a sparser representation, changing any of its features is more likely to lead to a change in identity than it would in objects that have richer representations. Experiment 1 provides a basic test of this suggestion with artificial stimuli, whereby objects with a rich or a sparse representation were transformed by changing one of their features. As expected, we observed more identity judgements in the former case. Experiment 2 further confirms our hypothesis, with realistic stimuli, by assuming that superordinate categories have sparser representations than subordinate ones. These results offer some insight into the way feature changes may or may not lead to identity changes in classification decisions.
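A minimal contrast-model sketch (our own parameter choices and feature sets, purely for illustration) shows why a one-feature change costs a sparse representation proportionally more similarity than a rich one:

```python
# Tversky (1977) contrast model: sim(a, b) = theta*|common features|
#   - alpha*|features only in a| - beta*|features only in b|
def contrast_sim(a, b, theta=1.0, alpha=0.5, beta=0.5):
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

def change_one(obj):
    """Swap one (arbitrary) feature for a novel one."""
    kept = sorted(obj)[1:]
    return set(kept) | {'novel'}

sparse = {'f1', 'f2', 'f3'}
rich = {'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9'}

# Similarity of each object to itself vs. to its one-feature variant:
print(contrast_sim(sparse, sparse), contrast_sim(sparse, change_one(sparse)))  # 3.0 vs 1.0
print(contrast_sim(rich, rich), contrast_sim(rich, change_one(rich)))          # 9.0 vs 7.0
```

The drop from 3.0 to 1.0 is proportionally far larger than the drop from 9.0 to 7.0, so the sparse object is more likely to cross an identity threshold, mimicking a critical feature.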
The Schyns et al. target article demonstrates that different classifications entail different representations, implying “flexible space learning.” We argue that flexibility is required even at the within-category level.
The key weakness of the proposed distinction between rules and similarity is that it effectively converts what was previously seen as a consequence of rule- or similarity-based processing into a definition of rules and similarity themselves: evidence is elevated into a conceptual distinction. This conflicts with fundamental intuitions about these processes and erodes the relevance of the debate across cognitive science.
Van Gelder's specification of the dynamical hypothesis does not improve on previous notions. All three key attributes of dynamical systems apply to Turing machines and are hence too general. However, when a more restricted definition of a dynamical system is adopted, it becomes clear that the dynamical hypothesis is too underspecified to constitute an interesting cognitive claim.
The term “moral heuristic” as used by Sunstein seeks to bring together various traditions. However, there are significant differences between uses of the term “heuristic” in cognitive and in social psychological research, and these differences are accompanied by very distinct evidential criteria. We suggest that the term “moral heuristic” should refer to processes, which means that further evidence is required.