This monograph examines truth in fiction by applying the techniques of a naturalized logic of human cognitive practices. The author structures his project around two focal questions. What would it take to write a book about truth in literary discourse with reasonable promise of getting it right? What would it take to write a book about truth in fiction as true to the facts of lived literary experience as objectivity allows? It is argued that the most semantically distinctive feature of the sentences of fiction is that they are unambiguously true and false together. It is true that Sherlock Holmes lived at 221B Baker Street and also concurrently false that he did. A second distinctive feature of fiction is that the reader at large knows of this inconsistency and isn't in the least cognitively molested by it. Why, it is asked, would this be so? What would explain it? Two answers are developed. According to the no-contradiction thesis, the semantically tangled sentences of fiction are indeed logically inconsistent but not logically contradictory. According to the no-bother thesis, if the inconsistencies of fiction were contradictory, a properly contrived logic for the rational management of inconsistency would explain why readers at large are not thrown off cognitive stride by their embrace of those contradictions. As developed here, the account of fiction suggests the presence of an underlying three- or four-valued dialethic logic. The author shows this to be a mistaken impression. There are only two truth-values in his logic of fiction. The naturalized logic of Truth in Fiction jettisons some of the standard assumptions and analytical tools of contemporary philosophy, chiefly because the neurotypical linguistic and cognitive behaviour of humanity at large is at variance with them.
Using the resources of a causal response epistemology in tandem with the naturalized logic, the theory produced here is data-driven, empirically sensitive, and open to a circumspect collaboration with the empirical sciences of language and cognition.
John Woods' The Logic of Fiction, now thirty-five years old, is a ground-breaking event in the establishment of the semantics of fiction as a stand-alone research programme in the philosophies of language and logic. There is now a large literature about these matters, but Woods' book retains a striking freshness, and still serves as a convincing template of the treatment options for the field's key problems. The book now appears in a second edition with a new Foreword by Nicholas Griffin and an extended bibliography covering the period 1969-2009. As Griffin notes in his Foreword, it is "surprising on looking back to discover how little was written on the semantics of fiction before John Woods' The Logic of Fiction was published in 1974. The surprise is the greater because Woods' book appeared after almost a quarter century of fierce philosophical debate about reference. Fictional discourse, one would have thought, would be an important testing ground for philosophical theories of referential expressions and one, moreover, in which the standard theories would likely be tested to destruction." "One of the great merits of Woods' book is that it takes seriously the wide-ranging demands that fiction imposes on logic and semantics, and does not try to force fiction into some pre-conceived logical mould. But thanks to Woods' pioneering efforts, we are much closer to one now than we were when he set out to write his book. His book was not the last word on the logic of fiction; it was much more important: it was nearly the first." NICHOLAS GRIFFIN is Canada Research Chair in Philosophy at McMaster University. Recent publications include Russell vs Meinong: The Legacy of "On Denoting," edited with Dale Jacquette. JOHN WOODS is Director of the Abductive Systems Group at the University of British Columbia and Charles S. Peirce Visiting Professor of Logic in the Group on Logic and Computational Science, King's College London.
He has two forthcoming books on fiction - an edited volume, Fictions and Models: New Essays, and a research monograph, Sherlock's Member: New Perspectives on the Semantics of Fiction, both to appear in 2010.
In a world plagued by disagreement and conflict one might expect that the exact sciences of logic and mathematics would provide a safe harbor. In fact these disciplines are rife with internal divisions between different, often incompatible, systems. Do these disagreements admit of resolution? Can such resolution be achieved without disturbing the assumption that the theorems of logic and mathematics state objective truths about the real world? In this original and historically rich book John Woods explores apparently intractable disagreements in logic and the foundations of mathematics and sets out conflict resolution strategies that evade or disarm these stalemates. An important sub-theme of the book is the extent to which pluralism in logic and the philosophy of mathematics undermines realist assumptions. This book makes an important contribution to such areas of philosophy as logic, philosophy of language and argumentation theory. It will also be of interest to mathematicians and computer scientists.
Enthymemes are traditionally defined as arguments in which some elements are left unstated. It is an empirical fact that enthymemes are both enormously frequent and appropriately understood in everyday argumentation. Why is it so? We outline an answer that dispenses with the so-called "principle of charity", which is the standard notion underlying most works on enthymemes. In contrast, we suggest that a different force drives enthymematic argumentation—namely, parsimony, i.e. the tendency to optimize resource consumption, in light of the agent's goals. On this view, the frequent use of enthymemes does not indicate sub-optimal performance of arguers, requiring appeals to charity for their redemption. On the contrary, it is seen as a highly adaptive argumentation strategy, given the need of everyday reasoners to optimize their cognitive resources. Considerations of parsimony also affect enthymeme reconstruction, i.e. the process by which the interpreter makes sense of the speaker's enthymemes. Far from being driven by any pro-social cooperative instinct, interpretative efforts are aimed at extracting valuable information at reasonable costs from available sources. Thus, there is a tension between parsimony and charity, insofar as the former is a non-social constraint for self-regulation of one's behaviour, whereas the latter implies a pro-social attitude. We will argue that some versions of charity are untenable for enthymeme interpretation, while others are compatible with the view defended here, but still require parsimony to expose the ultimate reasons upon which a presumption of fair treatment in enthymeme reconstruction is founded.
One of our purposes here is to expose something of the elementary logical structure of abductive reasoning, and to do so in a way that helps orient theorists to the various tasks that a logic of abduction should concern itself with. We are mindful of criticisms that have been levelled against the very idea of a logic of abduction; so we think it prudent to proceed with a certain diffidence. That our own account of abduction is itself abductive is a methodological expression of this diffidence. A second objective is to test our conception of abduction's logical structure against some of the more promising going accounts of abductive reasoning. We offer our various suggestions in a benignly advisory way. The primary target of our advice is ourselves, meant as guides to work we have yet to complete or, in some instances, start. It is possible that our colleagues in the abduction research communities will find our counsel to be of some interest. But we repeat that our first concern is to try to get ourselves straight about what a logic of abduction should encompass.
In all three of its manifestations—abusive, circumstantial and tu quoque—the role of the ad hominem is to raise a doubt about the opposite party's case-making bona fides. Provided that it is both presumptive and provisional, drawing such a conclusion is not a logical mistake, hence not a fallacy on the traditional conception of it. More remarkable is the role of the ad hominem retort in seeking the reassurance of one's opponent when, on the face of it, reassurance is precisely what he would seem to be ill-placed to give. Brief concluding remarks are given over to an examination of rival approaches to the ad hominem, especially those in which it is conceived of as a dialectical error.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources, and to note abduction's ignorance-preserving character. My principal purpose here is to tie abduction's scarce-resource adjustment capacity to its ignorance preservation.
Consider the proposition, "Informal logic is a subdiscipline of philosophy". The best chance of showing this to be true is showing that informal logic is part of logic, which in turn is a part of philosophy. Part 1 is given over to the task of sorting out these connections. If successful, informal logic can indeed be seen as part of philosophy; but there is no question of an exclusive relationship. Part 2 is a critical appraisal of the suggestion that informal logic is applied epistemology. Part 3 examines the claim that informal logic has failed to penetrate into mainstream philosophy, and suggestions for amelioration are considered.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to "jump" in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are done by various mechanisms which are strongly dependent on context and knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3 where we introduce five resource principles. We show that these principles lead to some well known nonmonotonic systems such as Nute's defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles.
Traditionally, an enthymeme is an incomplete argument, made so by the absence of one or more of its constituent statements. An enthymeme resolution strategy is a set of procedures for finding those missing elements, thus reconstructing the enthymeme and restoring its meaning. It is widely held that a condition on the adequacy of such procedures is that statements restored to an enthymeme produce an argument that is good in some given respect in relation to which the enthymeme itself is bad. In previous work, we emphasized the role of parsimony in enthymeme resolution strategies and concomitantly downplayed the role of charity. In the present paper, we take the analysis of enthymemes a step further. We will propose that if the pragmatic features that attend the phenomenon of enthymematic communication are duly heeded, the very idea of reconstructing enthymemes loses much of its rationale, and their interpretation comes to be conceived in a new light.
This volume serves as a detailed introduction for those new to the field as well as a rich source of new insights and potential research agendas for those already engaged with the philosophy of economics.
Much of cognitive science seeks to provide principled descriptions of various kinds and aspects of rational behaviour, especially in beings like us or AI simulacra of beings like us. For the most part, these investigators presuppose an unarticulated common sense appreciation of the rationality that such behaviour consists in. On those occasions when they undertake to bring the relevant norms to the surface and to give an account of that to which they owe their legitimacy, these investigators tend to favour one or other of three approaches to the normativity question. They are the analyticity or truth-in-a-model approach; the pluralism approach; and the reflective equilibrium approach. All three of these approaches to the normativity question are seriously flawed, never mind that the first two have some substantial provenance among logicians and the third has enjoyed a flourishing philosophical career. Against these views, we propose a strong version of what might be called normatively immanent descriptivism. We attempt to elucidate its virtues and to deal with what appears to be its most central vulnerability, embodied in the plain fact that actual human behaviour is sometimes irrational.
This is an examination of similarities and differences between two recent models of abductive reasoning. The one is developed in Atocha Aliseda's Abductive Reasoning: Logical Investigations into the Processes of Discovery and Evaluation (2006). The other is advanced by Dov Gabbay and the present author in their The Reach of Abduction: Insight and Trial (2005). A principal difference between the two approaches is that in the Gabbay-Woods model, but not in the Aliseda model, abductive inference is ignorance-preserving. A further difference is that Aliseda reconstructs the abduction relation in a semantic tableaux environment, whereas the Woods-Gabbay model, while less systematic, is more general. Of particular note is the connection between abduction and legal reasoning.
If one looks to the current textbook lore for reliable taxonomic and analytical information about the petitio principii, one is met with conceptual disarray and much too much nonsense. The present writers have recently attempted to furnish the beginnings of a theoretical reconstruction of this fallacy which is at once faithful to its formidable complexity yet useful as a guide for its detection and avoidance. The fact is that the petitio has had a lengthy and interesting history, and in this paper we shall want to explore certain features of its development, such as it may have been. The principal origins of the concept of circular argument are to be found in Aristotle. The Aristotelian doctrine recurs with variations in the sophismata literature of the middle ages and in logic texts and manuals right up to the present day.
It is strange that the informal fallacies should strike us as such obvious breaches of thinking and advocacy, yet should have met with such little success in finding a respectable home within mature logical theory. It might seem that respectable and mature logical theory is most mature and most respectable in the theory of propositions, and that its maturity and respectability in the other logical domains rapidly diminish in inverse proportion to the susceptibility of those domains to be reduced to the logic of propositions. But we are not anxious to promote so severe a view of theoretical accomplishment, and we shall suppose that, at the very least, the informal fallacies have a degree of systematicity that will at once advance our understanding of them nicely beyond the level of intuitive impressions, and also place into retirement the hopelessly inadequate accounts that litter too many otherwise admirable textbooks.
In 1902 there arrived in Jena a letter from Russell laying out a proof that shattered Frege's confidence in logicism, which is widely taken to be the doctrine according to which every truth of arithmetic is re-expressible without relevant loss as a provable truth about a purely logical object. Frege was persuaded that Russell had exposed a pathology in logicism, which faced him with the task of examining its symptoms, diagnosing its cause, assessing its seriousness, arriving at a treatment option, and making an estimate of future prospects. The symptom was the contradiction that had crept into naïve set theory in the form of the set that provably is and is not its own member. The diagnosis was that it is caused by Basic Law V of the Grundgesetze. Triage answers the question, "How bad is it?" The answer was that the contradiction irreparably destroys the logicist project. The treatment option was nil. The disease was untreatable. In due course, the prognosis turned out to be that a scaled-down Fregean logic could have an honourable life as a theory of inference for various domains of mathematical discourse, but not for domains containing the logical objects required for logicism. Since there aren't such objects, there aren't such domains. On the face of it, Frege's logicist collapse is astonishing. Why wouldn't he have repaired the fault in Law V and gone back to the business of bringing logicism to an assured realization? In the course of our reflections, we will have nice occasion to consider the good it might have done Frege to have booked some time with Aristotle had he been able to. By the time we're finished, we'll have cause to think that in the end Russell might well have begged the question against Frege.
Historically, the fallacies have been neglected as objects of systematic study. Yet, since Hamblin's famous criticism of the state of fallacy theory, a substantial literature has been produced. A large portion of this literature is the work of Douglas Walton and John Woods. This paper will deal directly with the criticism of that work which has been advanced by van Eemeren and Grootendorst, particularly the complaints found in their writings of 1992, concerning the disunification of the fallacies and the exemplaristic approach of Woods and Walton's theories. It proposes a unification of the theories of Woods and Walton with that of van Eemeren and Grootendorst, and suggests that such a unification could be advantageous to both theories, and highly interesting for fallacy theory in general.
This is an examination of the dialectical structure of deep disagreements about matters not open to empirical check. A dramatic case in point is the Law of Non-Contradiction (LNC). Dialetheists are notoriously of the view that, in some few cases, LNC has a true negation. The traditional position on LNC is that it is non-negotiable. The standard reason for thinking it non-negotiable is that, being a first principle, there is nothing to negotiate. One of my purposes is to show that the first-principle defence of LNC is inadequate. A second purpose is to argue that it flows from this inadequacy that LNC stands or falls on economic considerations, much in the spirit of Quine's pragmatism about logic generally. This is a tactical victory for dialetheists. It gives them room to make the case against LNC on cost-benefit grounds. As things presently stand, no such case can be considered decisive. But, given that costs and benefits shift with changing circumstances, it is possible that a winning case for the dialetheist may present itself in the future. Notwithstanding the rivalry between consistentists and dialetheists, they share a common opponent. This is trivialism, the doctrine that everything whatever is true. It is an ironic alliance, in as much as the dialetheist's case against the consistentist can be adapted to a defence of trivialism. How damaging this turns out to be depends on the adequacy of the reasons for the dialetheist's rejection of trivialism. My further purpose is to show that the damage is slighter than dialetheists commonly believe.