Although argumentation plays an essential role in our lives, there is no integrated area of research on the psychology of argumentation. Instead, research on argumentation is conducted in a number of separate research communities that are spread across disciplines and have only limited interaction. With a view to bridging these different strands, we first distinguish between three meanings of the word “argument”: argument as a reason, argument as a structured sequence of reasons and claims, and argument as a social exchange. All three meanings are integral to a complete understanding of human reasoning and cognition. Cognitive psychological research on argumentation has focused mostly on the first and second of these meanings, so we present perspectives on argumentation from outside of cognitive psychology, which focus on the second and third. Specifically, we give an overview of the methods, goals, and disciplinary backgrounds of research on the production, the analysis, and the evaluation of arguments. Finally, in introducing the experimental studies included in this special issue, which were conducted by researchers from a range of theoretical backgrounds, we underline the breadth of argumentation research as well as stress opportunities for mutual awareness and integration.
Normative theories provide essential tools for understanding behaviour, not just for reasoning, judgement, and decision-making, but for many other areas of cognition as well; and their utility extends to the development of process theories. Furthermore, the way these tools are used has nothing to do with the is-ought fallacy. There therefore seems to be no basis for the claim that research would be better off without them.
In a recent article in Argumentation, O’Keefe (Argumentation 21:151–163, 2007) observed that the well-known ‘framing effects’ in the social psychological literature on persuasion are akin to traditional fallacies of argumentation and reasoning and could be exploited for persuasive success in a way that conflicts with principles of responsible advocacy. Positively framed messages (“if you take aspirin, your heart will be more healthy”) differ in persuasive effect from negative frames (“if you do not take aspirin, your heart will be less healthy”), despite containing ‘equivalent’ content. This poses a potential problem, because people might be unduly (and unsuspectingly) influenced by mere presentational differences. By drawing on recent cognitive psychological work on framing effects in choice and decision making paradigms, however, we show that establishing whether two arguments are substantively equivalent—and hence, whether there is any normative requirement for them to be equally persuasive—is a difficult task. Even arguments that are logically equivalent may not be information equivalent. The normative implications of this for both speakers and listeners are discussed.
Critical (necessary or sufficient) features in categorisation have a long history, but the empirical evidence makes their existence questionable. Nevertheless, there are some cases that suggest critical feature effects. The purpose of the present work is to offer some insight into why classification decisions might misleadingly appear as if they involve critical features. Utilising Tversky's (1977) contrast model of similarity, we suggest that when an object has a sparser representation, changing any of its features is more likely to lead to a change in identity than it would in objects that have richer representations. Experiment 1 provides a basic test of this suggestion with artificial stimuli, whereby objects with a rich or a sparse representation were transformed by changing one of their features. As expected, we observed more identity judgements in the former case. Experiment 2 further confirms our hypothesis, with realistic stimuli, by assuming that superordinate categories have sparser representations than subordinate ones. These results offer some insight into the way feature changes may or may not lead to identity changes in classification decisions.
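The contrast-model reasoning in this abstract can be made concrete with a small numerical sketch. This is an illustration only, not the paper's materials: the feature sets, the weights, and the use of a simple feature count for the salience function f are all assumptions introduced here.

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's (1977) contrast model with f = feature count:
    sim(A, B) = theta*|A & B| - alpha*|A - B| - beta*|B - A|."""
    a, b = set(a), set(b)
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

sparse = {"f1", "f2", "f3"}                 # sparse representation: 3 features
rich = {f"f{i}" for i in range(1, 11)}      # rich representation: 10 features

# Change one feature in each object: replace "f1" with a new feature "g".
sparse_changed = (sparse - {"f1"}) | {"g"}
rich_changed = (rich - {"f1"}) | {"g"}

# Similarity to the original, as a proportion of self-similarity.
prop_sparse = contrast_similarity(sparse, sparse_changed) / contrast_similarity(sparse, sparse)
prop_rich = contrast_similarity(rich, rich_changed) / contrast_similarity(rich, rich)

print(prop_sparse)  # 1/3: one changed feature costs the sparse object much of its similarity
print(prop_rich)    # 0.8: the same change barely dents the rich object
```

Because the proportional similarity loss is far larger for the sparse object, a fixed identity threshold will more often judge the sparse object to have become a different thing, mimicking a "critical feature" effect without any single feature being necessary.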
One of the most striking features of current research on reasoning is the detail with which behavior on logical reasoning tasks can now be predicted and explained. This detail is surprising, given the state of the field 10 to 15 years ago, and it has been brought about by a theoretical program that largely ignores consideration of cognitive processes, that is, any kind of internal behavior that generates overt responding. It seems that an increase in explanatory power can be achieved by restricting a psychological theory.
Argumentation is pervasive in everyday life. Understanding what makes a strong argument is therefore of both theoretical and practical interest. One factor that seems intuitively important to the strength of an argument is the reliability of the source providing it. Whilst traditional approaches to argument evaluation are silent on this issue, the Bayesian approach to argumentation (Hahn & Oaksford, 2007) is able to capture important aspects of source reliability. In particular, the Bayesian approach predicts that argument content and source reliability should interact to determine argument strength. In this paper, we outline the approach and then demonstrate the importance of source reliability in two empirical studies. These experiments show the multiplicative relationship between the content and the source of the argument predicted by the Bayesian framework.
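The predicted content-by-source interaction can be sketched with a toy Bayesian calculation. This is not the paper's model: the mixture treatment of reliability (an unreliable source is assumed to report at chance) and all the numbers are illustrative assumptions.

```python
def posterior(prior, p_e_h, p_e_not_h, reliability):
    """Posterior for claim H after a source asserts evidence E.
    Reliability r mixes the source's true report probabilities with
    chance (0.5) reporting: P(report | H) = r*P(E|H) + (1-r)*0.5."""
    like_h = reliability * p_e_h + (1 - reliability) * 0.5
    like_not_h = reliability * p_e_not_h + (1 - reliability) * 0.5
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

# (P(E|H), P(E|not-H)) pairs standing in for strong and weak argument content.
strong, weak = (0.9, 0.1), (0.6, 0.4)

for r in (1.0, 0.5, 0.0):
    print(r, posterior(0.5, *strong, r), posterior(0.5, *weak, r))
```

In this sketch the content effect (strong minus weak) shrinks as reliability falls and vanishes entirely for a fully unreliable source: content and source multiply rather than add, which is the qualitative pattern the abstract describes.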
In this article, we argue for the general importance of normative theories of argument strength. We also provide some evidence based on our recent work on the fallacies as to why Bayesian probability might, in fact, be able to supply such an account. In the remainder of the article we discuss the general characteristics that make a specifically Bayesian approach desirable, and critically evaluate putative flaws of Bayesian probability that have been raised in the argumentation literature.
Slippery slope arguments (SSAs) have often been viewed as inherently weak arguments, to be classified together with traditional fallacies of reasoning and argumentation such as circular arguments and arguments from ignorance. Over the last two decades several philosophers have taken a kinder view, often providing historical examples of the kind of gradual change on which slippery slope arguments rely. Against this background, Enoch (2001, Oxford Journal of Legal Studies 21(4), 629–647) presented a novel argument against SSA use that itself invokes a slippery slope. Specifically, he argued that the very reasons that can make SSAs strong arguments mean that we should be poor at abiding by the distinction between good and bad SSAs, making SSAs inherently undesirable. We argue that Enoch’s meta-level SSA fails on both conceptual and empirical grounds.
The notion of “the burden of proof” plays an important role in real-world argumentation contexts, in particular in law. It has also been given a central role in normative accounts of argumentation, and has been used to explain a range of classic argumentation fallacies. We argue that in law the goal is to make practical decisions whereas in critical discussion the goal is frequently simply to increase or decrease degree of belief in a proposition. In the latter case, it is not necessarily important whether that degree of belief exceeds a particular threshold (e.g., ‘reasonable doubt’). We explore the consequences of this distinction for the role that the “burden of proof” has played in argumentation and in theories of fallacy.
We examine in detail three classic reasoning fallacies, that is, supposedly “incorrect” forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to structurally match arguments that are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
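One way to see how content rather than form determines legitimacy is a toy Bayesian treatment of the argument from ignorance (“no evidence was found that the drug is unsafe, so it is safe”). The two-hypothesis setup and all the numbers below are illustrative assumptions, not the paper's analysis.

```python
def p_safe_given_no_evidence(prior_safe, p_neg_if_safe, p_neg_if_unsafe):
    """P(safe | negative test) by Bayes' rule."""
    num = p_neg_if_safe * prior_safe
    return num / (num + p_neg_if_unsafe * (1 - prior_safe))

# A sensitive test rarely misses real harm, so absence of evidence is diagnostic.
strong = p_safe_given_no_evidence(0.5, 0.95, 0.10)
# An insensitive test misses harm often, so the very same argument form is weak.
weak = p_safe_given_no_evidence(0.5, 0.95, 0.80)

print(strong)  # ~0.90: a reasonable argument from ignorance
print(weak)    # ~0.54: barely moves belief at all
```

The argument form is identical in both cases; only the probability of having found evidence, had the drug in fact been unsafe, differs. That is precisely the kind of content-based distinction on which a Bayesian reanalysis turns.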
In this paper, it is argued that Ferguson’s (2003, Argumentation 17, 335–346) recent proposal to reconcile monotonic logic with defeasibility has three counterintuitive consequences. First, the conclusions that can be derived from his new rule of inference are vacuous, a point that was already made against default logics when there are conflicting defaults. Second, his proposal requires a procedural “hack” to break the symmetry between the disjuncts of the tautological conclusions to which his proposal leads. Third, Ferguson’s proposal amounts to arguing that all everyday inferences are sound by definition. It is concluded that the informal logic response to defeasibility, that an account of the context in which inferences are sound or unsound is required, still stands. It is also observed that another possible response is given by Bayesian probability theory (Oaksford and Chater, in press, Bayesian Rationality: The Probabilistic Approach to Human Reasoning, Oxford University Press, Oxford, UK; Hahn and Oaksford, in press, Synthese).
The key weakness of the proposed distinction between rules and similarity is that it effectively converts what was previously seen as a consequence of rule- or similarity-based processing into a definition of rules and similarity themselves: evidence is elevated into a conceptual distinction. This conflicts with fundamental intuitions about processes and erodes the relevance of the debate across cognitive science.
The term “moral heuristic” as used by Sunstein seeks to bring together various traditions. However, there are significant differences between uses of the term “heuristic” in cognitive and social psychological research, and these differences are accompanied by very distinct evidential criteria. We suggest that the term “moral heuristic” should refer to processes, which means that further evidence is required.
Van Gelder's specification of the dynamical hypothesis does not improve on previous notions. All three key attributes of dynamical systems apply to Turing machines and are hence too general. However, when a more restricted definition of a dynamical system is adopted, it becomes clear that the dynamical hypothesis is too underspecified to constitute an interesting cognitive claim.
The Schyns et al. target article demonstrates that different classifications entail different representations, implying “flexible space learning.” We argue that flexibility is required even at the within-category level.
We argue that the notion of distal similarity on which Edelman's reconstruction of the process of perception and the nature of representation rests is ill defined. As a consequence, the mapping between world and description that is supposedly at stake is, in fact, a mapping between two different descriptions or “representations.”