Fallacies and Argument Appraisal presents an introduction to the nature, identification, and causes of fallacious reasoning, along with key questions for evaluation. Drawing from the latest work on fallacies as well as some of the standard ideas that have remained relevant since Aristotle, Christopher Tindale investigates central cases of major fallacies in order to understand what has gone wrong and how this has occurred. Dispensing with the approach that simply assigns labels and brief descriptions of fallacies, Tindale provides fuller treatments that recognize the dialectical and rhetorical contexts in which fallacies arise. This volume analyzes major fallacies through accessible, everyday examples. Critical questions are developed for each fallacy to help students identify it and provide a considered evaluation.
From antiquity to the end of the twentieth century, philosophical discussions of understanding remained undeveloped, guided by a 'received view' that takes understanding to be nothing more than knowledge of an explanation. More recently, however, this received view has been criticized, and bold new philosophical proposals about understanding have emerged in its place. In this book, Kareem Khalifa argues that the received view should be revised but not abandoned. In doing so, he clarifies and answers the most central questions in this burgeoning field of philosophical research: What kinds of cognitive abilities are involved in understanding? What is the relationship between the understanding that explanations provide and the understanding that experts have of broader subject matters? Can there be understanding without explanation? How can one understand something on the basis of falsehoods? Is understanding a species of knowledge? What is the value of understanding?
Explanation is asymmetric: if A explains B, then B does not explain A. Traditionally, the asymmetry of explanation was thought to favor causal accounts of explanation over their rivals, such as those that take explanations to be inferences. In this paper, we develop a new inferential approach to explanation that outperforms causal approaches in accounting for the asymmetry of explanation.
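Stated compactly, the asymmetry claim is a universal conditional. The rendering below is our gloss, not the authors' notation; the predicate name Explains is introduced purely for illustration.

\[
\forall A\, \forall B\, \big( \mathrm{Explains}(A,B) \rightarrow \neg\, \mathrm{Explains}(B,A) \big)
\]

In order-theoretic terms, the explains-relation is asymmetric, which also entails that nothing explains itself (irreflexivity).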
Sarah Moss argues that in addition to full beliefs, credences can constitute knowledge. She introduces the notion of probabilistic content and shows how it plays a central role not only in epistemology, but in the philosophy of mind and language. Just as you can believe and assert propositions, you can believe and assert probabilistic contents.
Recent literature on non-causal explanation raises the question of whether explanatory monism, the thesis that all explanations submit to the same analysis, is true. The leading monist proposal holds that all explanations support change-relating counterfactuals. We provide several objections to this monist position. Outline: 1. Introduction; 2. Change-Relating Monism's Three Problems; 3. Dependency and Monism: Unhappy Together; 4. Another Challenge: Counterfactual Incidentalism; 4.1 High-grade necessity; 4.2 Unity in diversity; 5. Conclusion.
Recently, several authors have argued that scientific understanding should be a new topic of philosophical research. In this article, I argue that the three most developed accounts of understanding--Grimm's, de Regt's, and de Regt and Dieks's--can be replaced by earlier accounts of scientific explanation without loss. Indeed, in some cases, such replacements have clear benefits.
Recently, it has been debated whether understanding is a species of explanatory knowledge. Those who deny this claim frequently argue that understanding, unlike knowledge, can be lucky. In this paper I argue that current arguments do not support this alleged compatibility between understanding and epistemic luck. First, I argue that understanding requires reliable explanatory evaluation, yet the putative examples of lucky understanding underspecify the extent to which subjects possess this ability. In the course of defending this claim, I also provide a new account of the kind of ‘grasping’ taken to be central to understanding. Second, I show that putative examples of lucky understanding unwittingly deploy a kind of luck that is compatible with knowledge. Finally, appealing to a number of works on explanation and its attendant epistemology, I argue that alleged instances of lucky understanding that overcome these two obstacles will invariably violate certain norms of explanatory inquiry – our paradigmatic understanding-oriented practice. By contrast, knowledge of the same information is immune to these criticisms. Consequently, if understanding is environmentally lucky, it is always inferior to the understanding that a corresponding case of knowledge would provide.
Jonathan Kvanvig has argued that “objectual” understanding, i.e. the understanding we have of a large body of information, cannot be reduced to explanatory concepts. In this paper, I show that Kvanvig fails to establish this point, and then propose a framework for reducing objectual understanding to explanatory understanding.
Peter Lipton has argued that understanding can exist in the absence of explanation. We argue that this does not diminish explanation's importance to understanding. Specifically, we show that all of Lipton's examples are consistent with the idea that explanation is the ideal of understanding, i.e. other modes of understanding ought to be assessed by how well they replicate the understanding provided by a good and correct explanation. We defend this idea by showing that for all of Lipton's examples of non-explanatory understanding of why p, there exists a correct and reasonably good explanation that would provide greater understanding of p.
In this incisive study Sarah Broadie gives an argued account of the main topics of Aristotle's ethics: eudaimonia, virtue, voluntary agency, practical reason, akrasia, pleasure, and the ethical status of theoria. She explores the sense of "eudaimonia," probes Aristotle's division of the soul and its virtues, and traces the ambiguities in "voluntary." Fresh light is shed on his comparison of practical wisdom with other kinds of knowledge, and a realistic account is developed of Aristotelian deliberation. The concept of pleasure as value-judgment is expounded, and the problem of akrasia is argued to be less of a problem for Aristotle than for his modern interpreters. Showing that the theoretic ideal of Nicomachean Ethics X is in step with the earlier emphasis on practice, as well as with the doctrine of the Eudemian Ethics, this work makes a major contribution towards the understanding of Aristotle's ethics.
The paper investigates the `logical space of reasons' as a social space in which rational agents operate and persons in an important sense come to be. Building from an investigation of argumentative agents in Aristotle's Rhetoric, I discuss both interior and exterior criteria for personhood and propose that the latter show how argumentation, as a principal activity of the space of reasons, results in the particular kinds of persons we recognize there as rational agents. The overall analysis is indebted to Robert Brandom's centralizing of the practice of giving and receiving reasons and the suggestive ways this can be applied to the realm of argumentation.
The Protestant theologian Karl Girgensohn first came to public attention in 1903 with his early work on the nature of religion, which articulates a strongly religious-philosophical standpoint. Its core idea is a cognitive theory of the religious in which the idea of God is central. Taking Girgensohn's biography into account, this contribution examines that early study on the nature of religion and outlines the author's transition from a philosophical to an experimental-introspective approach to the study of religiosity, which then became the foundation for the Dorpat school of the psychology of religion. Drawing on Girgensohn's early work, implications for contemporary empirical theology are proposed in closing.
Epistemologists have recently debated whether understanding is a species of knowledge. However, because they have offered little in the way of a detailed analysis of understanding, they lack the resources to resolve this issue. In this paper, I propose that S understands why p if and only if S has the non-Gettierised true belief that p, and for some proposition q, S has the non-Gettierised true belief that q is the best available explanation of p, S can correctly explain p with q, and S can identify the features that make q the best explanation of p. On this analysis, understanding is reducible to knowing that p and that q is the best available explanation of p.
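The proposed analysis can be compressed into a schematic biconditional. The notation below is our gloss, not the paper's: write B*_S(φ) for 'S has the non-Gettierised true belief that φ', with the remaining predicate names introduced purely for illustration.

\[
U_S(p) \leftrightarrow B^{*}_{S}(p) \wedge \exists q \, \big[ B^{*}_{S}(\mathrm{BestExpl}(q,p)) \wedge \mathrm{CanExplain}_S(p,q) \wedge \mathrm{CanIdentify}_S(q,p) \big]
\]

Here BestExpl(q, p) stands for 'q is the best available explanation of p', CanExplain_S(p, q) for 'S can correctly explain p with q', and CanIdentify_S(q, p) for 'S can identify the features that make q the best explanation of p'.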
Generic generalizations such as ‘mosquitoes carry the West Nile virus’ or ‘sharks attack bathers’ are often accepted by speakers despite the fact that very few members of the kinds in question have the predicated property. Previous work suggests that such low-prevalence generalizations may be accepted when the properties in question are dangerous, harmful, or appalling. This paper argues that the study of such generic generalizations sheds light on a particular class of prejudiced social beliefs, and points to new ways in which those beliefs might be undermined and combatted.
In this paper, we develop and refine the idea that understanding is a species of explanatory knowledge. Specifically, we defend the idea that S understands why p if and only if S knows that p, and, for some q, S’s true belief that q correctly explains p is produced/maintained by reliable explanatory evaluation. We then show how this model explains the reception of James Bjorken’s explanation of scaling by the broader physics community in the late 1960s and early 1970s. The historical episode is interesting because Bjorken’s explanation initially did not provide understanding to other physicists, but was subsequently deemed intelligible when Feynman provided a physical interpretation that led to experimental tests that vindicated Bjorken’s model. Finally, we argue that other philosophical models of scientific understanding are best construed as limiting cases of our more general model.
Several authors suggest that understanding and epistemic coherence are tightly connected. Using an account of understanding that makes no appeal to coherence, I explain away the intuitions that motivate this position. I then show that the leading coherentist epistemologies only place plausible constraints on understanding insofar as they replicate my own account’s requirements. I conclude that understanding is only superficially coherent.
The underconsideration argument against inference to the best explanation and scientific realism holds that scientists are not warranted in inferring that the best theory is true, because scientists only ever conceive of a small handful of theories at one time, and as a result, they may not have considered a true theory. However, antirealists have not developed a detailed alternative account of why explanatory inference nevertheless appears so central to scientific practice. In this paper, I provide new defences against some recent objections to the underconsideration argument, while also developing an account of explanatory inference that both survives these criticisms and does not entail realism.
Since Mill's seminal work On Liberty, philosophers and political theorists have accepted that we should respect the decisions of individual agents when those decisions affect no one other than themselves. Indeed, to respect autonomy is often understood to be the chief way to bear witness to the intrinsic value of persons. In this book, Sarah Conly rejects the idea of autonomy as inviolable. Drawing on sources from behavioural economics and social psychology, she argues that we are so often irrational in making our decisions that our autonomous choices often undercut the achievement of our own goals. Thus in many cases it would advance our goals more effectively if government were to prevent us from acting in accordance with our decisions. Her argument challenges widely held views of moral agency, democratic values and the public/private distinction, and will interest readers in ethics, political philosophy, political theory and philosophy of law.
We have been teaching gender issues and feminist theory for many years, and we know that there is certainly a diversity of views among women, and men, about what counts as feminist or as good for women. Some may see a competent woman running for V.P. as inevitably a step forward for women's equality. But consider this.
One of the main themes that has emerged from behavioral decision research during the past three decades is the view that people's preferences are often constructed in the process of elicitation. This idea is derived from studies demonstrating that normatively equivalent methods of elicitation (e.g., choice and pricing) give rise to systematically different responses. These preference reversals violate the principle of procedure invariance that is fundamental to all theories of rational choice. If different elicitation procedures produce different orderings of options, how can preferences be defined and in what sense do they exist? This book shows not only the historical roots of preference construction but also the blossoming of the concept within psychology, law, marketing, philosophy, environmental policy, and economics. Decision making is now understood to be a highly contingent form of information processing, sensitive to task complexity, time pressure, response mode, framing, reference points, and other contextual factors.
In this essay, we extend earlier inferentialist-expressivist treatments of traditional logical, semantic, modal, and representational vocabulary (Brandom 1994, 2008, 2015; Peregrin 2014) to explanatory vocabulary. From this perspective, Inference to the Best Explanation (IBE) appears to be an obvious starting point. In its simplest formulation, IBE has the form: A best explains why B, B; so A. It thereby captures one of the central inferential features of explanation. An inferentialist-expressivist treatment of “best explains” would treat it as a logical operator. Analogous to the inferentialist-expressivist treatment of other logical operators, this essay aims to provide introduction and elimination rules for “best explains.” Indeed, by exhibiting a form of detachment, IBE superficially looks like an elimination rule. The sequent calculus LEA+, described in Section 5 below, makes good on this intuition. By showing how “A best explains why B” is related to the underlying, scientific inference “A, so B,” we can purchase the inference ticket of IBE for no more than the cost of science’s material inferences.
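The detachment form quoted in the abstract can be set out as a schematic, natural-deduction-style rule. This display is only our gloss: the operator symbol and the rule label are introduced here for illustration, and the paper's actual rules are those of its sequent calculus LEA+.

\[
\frac{A \mathrel{\triangleright} B \qquad B}{A} \;\; (\text{IBE-detachment}), \quad \text{where } A \mathrel{\triangleright} B \text{ abbreviates “} A \text{ best explains why } B \text{”}
\]

Read as an elimination-style rule for the operator, detachment licenses the step from an accepted best-explains claim together with its explanandum B to the explanans A, which is exactly the inferential role the abstract says IBE captures.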
In a volume devoted to philosophy, religion and the spiritual life, I would like to focus the later part of my essay on a comparison of two Christian spiritual writings of the fourteenth century, the anonymous Cloud of Unknowing in the West, and the Triads of Gregory Palamas in the Byzantine East. Their examples, for reasons which I shall explain, seem to me rich with implications for some of our current philosophical and theological aporias on the nature of the self. Let me explain my thesis in skeletal form at the outset, for it is a complex one, and has several facets.
`Ducks lay eggs' is a true sentence, and `ducks are female' is a false one. Similarly, `mosquitoes carry the West Nile virus' is obviously true, whereas `mosquitoes don't carry the West Nile virus' is patently false. This is so despite the egg-laying ducks' being a subset of the female ones and despite the number of mosquitoes that don't carry the virus being ninety-nine times the number that do. Puzzling facts such as these have made generic sentences defy adequate semantic treatment. However complex the truth conditions of generics appear to be, though, young children grasp generics more quickly and readily than seemingly simpler quantifiers such as `all' and `some'. I present an account of generics that not only illuminates the strange truth conditions of generics, but also explains how young children find them so comparatively easy to acquire. I then argue that generics give voice to our most cognitively primitive generalizations and that this hypothesis accounts for a variety of facts ranging from acquisition patterns to cross-linguistic data concerning the phonological articulation of operators. I go on to develop an account of the nature of these cognitively fundamental generalizations and argue that this account explains the strange truth-conditional behavior of generics.
In his “EMU and Inference” (European Journal for Philosophy of Science, 4:55–74, 2014), Mark Newman provides several interesting challenges to my explanatory model of understanding (EMU; 2012: 15–37). I offer three replies to Newman’s paper. First, Newman incorrectly attributes to EMU an overly restrictive view about the role of abilities in understanding. Second, his main argument against EMU rests on this incorrect attribution, and would still face difficulties even if this attribution were correct. Third, contrary to his stated ambitions, his own inferential model of understanding does not have any distinctive advantages over EMU. These three points defend EMU against Newman’s objections.
Some omissions seem to be causes. For example, suppose Barry promises to water Alice’s plant, doesn’t water it, and the plant then dries up and dies. Barry’s not watering the plant – his omitting to water the plant – caused its death. But there is reason to believe that if omissions are ever causes, then there is far more causation by omission than we ordinarily think. In other words, there is reason to think the following thesis true.
The Archival and Constructive views of memory offer contrasting characterizations of remembering and its relation to memory errors. I evaluate the descriptive adequacy of each by offering a close analysis of one of the most prominent experimental techniques by which memory errors are elicited—the Deese-Roediger-McDermott paradigm. Explaining the DRM effect requires appreciating it as a distinct form of memory error, which I refer to as misremembering. Misremembering is a memory error that relies on successful retention of the targeted event. It differs both from successful remembering and from confabulation errors, where the representation produced is wholly inaccurate. As I show, neither the Archival nor the Constructive View can account for the DRM effect because they are insensitive to misremembering’s unique explanatory demands. Fortunately, the explanatory limitations of the Archival and Constructive Views are complementary. This suggests a way...
Plausible (eikotic) reasoning, known from ancient Greek (late Academic) skeptical philosophy, is shown to be a clear notion that can be analyzed by argumentation methods, and that is important for argumentation studies. It is shown how there is a continuous thread running from the Sophists to the skeptical philosopher Carneades, through remarks of Locke and Bentham on the subject, to recent research in artificial intelligence. Eleven characteristics of plausible reasoning are specified by analyzing key examples of it recognized as important in ancient Greek skeptical philosophy using an artificial intelligence model called the Carneades Argumentation System (CAS). By applying CAS to ancient examples it is shown how plausible reasoning is especially useful for gaining a better understanding of evidential reasoning in law, and argued that it can also be applied to everyday argumentation. Our analysis of the snake and rope example of Carneades is also used to point out some ways CAS needs to be extended if it is to more fully model the views of this ancient philosopher on argumentation.
The purpose of this paper is to analyze the structure and the defeasibility conditions of argument from analogy, addressing the issues of determining the nature of the comparison underlying the analogy and the types of inferences justifying the conclusion. In the dialectical tradition, different forms of similarity were distinguished and related to the possible inferences that can be drawn from them. The kinds of similarity can be divided into four categories, depending on whether they represent fundamental semantic features of the terms of the comparison or non-semantic ones, indicating possible characteristics of the referents. Such distinct types of similarity characterize different kinds of analogical arguments, all based on a similar general structure, in which a common genus is abstracted. Depending on the nature of the abstracted common feature, different rules of inference will apply, guaranteeing the attribution of the analogical predicate to the genus and to the primary subject. This analysis of similarity and its relationship with the rules of inference allows a deeper investigation of the defeasibility conditions.
Confabulation is a symptom central to many psychiatric diagnoses and can be severely debilitating to those who exhibit it. Theorists, scientists, and clinicians have an understandable interest in the nature of confabulation—pursuing ways to define, identify, treat, and perhaps even prevent this memory disorder. Appeals to confabulation as a clinical symptom rely on an account of memory’s function against which cases like the above can be contrasted. Accounting for confabulation is thus an important desideratum for any candidate theory of memory. Many contemporary memory theorists now endorse Constructivism, on which memory is understood as a capacity for constructing plausible representations of past events. Constructivism’s aim is to account for and normalize the prevalence of memory errors in everyday life. Errors are plausible constructions that, on a particular occasion, have led to error. They are not, however, evidence of malfunction in the memory system. While Constructivism offers an uplifting repackaging of the memory errors to which we are all susceptible, it has troubling implications for appeals to confabulation in psychiatric diagnosis. By accommodating memory errors within our understanding of memory’s function, Constructivism runs the risk of being unable to explain how confabulation errors are evidence of malfunction. After reviewing the literature on confabulation and Constructivism, respectively, I identify the tension between them and explore how different versions of Constructivism may respond. The paper concludes with a proposal for distinguishing between kinds of false memory—specifically, between misremembering and confabulation—that may provide a route to their reconciliation.
We offer a new account of the role of values in theory choice that captures a temporal dimension to the values themselves. We argue that non-epistemic values sometimes serve as “inquiry tickets,” justifying scientists’ pursuit of certain questions in the short run, while the answers to those questions mitigate transient underdetermination in the long run. Our account of inquiry tickets shows that the role of non-epistemic values need not be restricted to belief or acceptance in order to be relevant to hypothesis choice: the relevance of non-epistemic values to a particular cognitive attitude with respect to h can vary over time.
In this essay, I provide normative guidelines for developing a philosophically interesting and plausible version of social constructivism as a philosophy of science, wherein science aims for social-epistemic values rather than for truth or empirical adequacy. This view is more plausible than the more radical constructivist claim that scientific facts are constructed. It is also more interesting than the modest constructivist claim that representations of such facts emerge in social contexts, as it provides a genuine rival to the scientific axiologies of scientific realists and constructive empiricists. I further contrast my view with positions holding that the aims of science are context dependent, that the unit of normative analysis is the scientific community, and that the aims of science are non-epistemic social values.
Explanatory contrastivists hold that we often explain phenomena of the form p rather than q. In this paper, I present a new, social‐epistemological model of contrastive explanation—accountabilism. Specifically, my view is inspired by social‐scientific research that treats explanations fundamentally as accounts; that is, communicative actions that restore one's social status when one is charged with questionable behaviour. After developing this model, I show how accountabilism provides a more comprehensive model of contrastive explanation than the causal models of contrastive explanation that are currently in vogue.
According to the Causal Theory of Memory (CTM), remembering a particular past event requires a causal connection between that event and its subsequent representation in memory, specifically, a connection sustained by a memory trace. The CTM is the default view of memory in contemporary philosophy, but debates persist over what the involved memory traces must be like. Martin and Deutscher argued that the CTM required memory traces to be structural analogues of past events. Bernecker and Michaelian, contemporary CTM proponents, reject structural analogues in favor of memory traces as distributed patterns of event features. These proposals are understood as distinct accounts of how memory traces represent past events. But there are two distinct questions one could ask about a trace’s representational features. One might ask how memory traces, qua mental representations, have their semantic properties. Or one might ask what makes memory traces, qua mental representations of memories, distinct from other mental representations. Proponents of the CTM, both past and present, have failed to keep these two questions distinct. The result is a serious but unnoticed problem for the CTM in its current form. Distributed memory traces are incompatible with the CTM. Such traces do not provide a way to track the causal history of individual memories, as the CTM requires. If memory traces are distributed patterns of event features, as Bernecker and Michaelian each claim, then the CTM cannot be right.