Enthymemes are traditionally defined as arguments in which some elements are left unstated. It is an empirical fact that enthymemes are both enormously frequent and appropriately understood in everyday argumentation. Why is it so? We outline an answer that dispenses with the so-called "principle of charity", which is the standard notion underlying most work on enthymemes. In contrast, we suggest that a different force drives enthymematic argumentation—namely, parsimony, i.e. the tendency to optimize resource consumption in light of the agent's goals. On this view, the frequent use of enthymemes does not indicate sub-optimal performance by arguers, requiring appeals to charity for their redemption. On the contrary, it is seen as a highly adaptive argumentation strategy, given the need of everyday reasoners to optimize their cognitive resources. Considerations of parsimony also affect enthymeme reconstruction, i.e. the process by which the interpreter makes sense of the speaker's enthymemes. Far from being driven by any pro-social cooperative instinct, interpretative efforts are aimed at extracting valuable information at reasonable cost from available sources. Thus, there is a tension between parsimony and charity, insofar as the former is a non-social constraint for the self-regulation of one's behaviour, whereas the latter implies a pro-social attitude. We will argue that some versions of charity are untenable for enthymeme interpretation, while others are compatible with the view defended here, but still require parsimony to expose the ultimate reasons upon which a presumption of fair treatment in enthymeme reconstruction is founded.
In a world plagued by disagreement and conflict, one might expect that the exact sciences of logic and mathematics would provide a safe harbor. In fact these disciplines are rife with internal divisions between different, often incompatible, systems. Do these disagreements admit of resolution? Can such resolution be achieved without disturbing the assumption that the theorems of logic and mathematics state objective truths about the real world? In this original and historically rich book John Woods explores apparently intractable disagreements in logic and the foundations of mathematics and sets out conflict resolution strategies that evade or disarm these stalemates. An important sub-theme of the book is the extent to which pluralism in logic and the philosophy of mathematics undermines realist assumptions. This book makes an important contribution to such areas of philosophy as logic, philosophy of language and argumentation theory. It will also be of interest to mathematicians and computer scientists.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to “jump” in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are made by various mechanisms which are strongly dependent on context and knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3, where we introduce five resource principles. We show that these principles lead to some well known nonmonotonic systems such as Nute’s defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles.
This volume serves as a detailed introduction for those new to the field as well as a rich source of new insights and potential research agendas for those already engaged with the philosophy of economics.
One of our purposes here is to expose something of the elementary logical structure of abductive reasoning, and to do so in a way that helps orient theorists to the various tasks that a logic of abduction should concern itself with. We are mindful of criticisms that have been levelled against the very idea of a logic of abduction; so we think it prudent to proceed with a certain diffidence. That our own account of abduction is itself abductive is a methodological expression of this diffidence. A second objective is to test our conception of abduction's logical structure against some of the more promising going accounts of abductive reasoning. We offer our various suggestions in a benignly advisory way. The primary target of our advice is ourselves: it is meant as a guide to work we have yet to complete or, in some instances, to start. It is possible that our colleagues in the abduction research communities will find our counsel to be of some interest. But we repeat that our first concern is to try to get ourselves straight about what a logic of abduction should encompass.
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in such a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
Notwithstanding their technical virtuosity and growing presence in mainstream thinking, game theoretic logics have attracted a sceptical question: "Granted that logic can be done game theoretically, but what would justify the idea that this is the preferred way to do it?" A recent suggestion is that at least part of the desired support might be found in the Greek dialectical writings. If so, perhaps we could say that those works possess a kind of foundational significance. The relation of being foundational for is interesting in its own right. In this paper, I explore its ancient applicability to relevant, paraconsistent and nonmonotonic logics, before returning to the question of its ancestral tie, or want of one, to the modern logics of games.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources, and that abduction has an ignorance-preserving character. My principal purpose here is to tie abduction’s scarce-resource adjustment capacity to its ignorance preservation.
Traditionally, an enthymeme is an incomplete argument, made so by the absence of one or more of its constituent statements. An enthymeme resolution strategy is a set of procedures for finding those missing elements, thus reconstructing the enthymeme and restoring its meaning. It is widely held that a condition on the adequacy of such procedures is that statements restored to an enthymeme produce an argument that is good in some given respect in relation to which the enthymeme itself is bad. In previous work, we emphasized the role of parsimony in enthymeme resolution strategies and concomitantly downplayed the role of charity. In the present paper, we take the analysis of enthymemes a step further. We will propose that if the pragmatic features that attend the phenomenon of enthymematic communication are duly heeded, the very idea of reconstructing enthymemes loses much of its rationale, and their interpretation comes to be conceived in a new light.
Much of cognitive science seeks to provide principled descriptions of various kinds and aspects of rational behaviour, especially in beings like us or AI simulacra of beings like us. For the most part, these investigators presuppose an unarticulated common sense appreciation of the rationality that such behaviour consists in. On those occasions when they undertake to bring the relevant norms to the surface and to give an account of that to which they owe their legitimacy, these investigators tend to favour one or other of three approaches to the normativity question. They are the analyticity or truth-in-a-model approach; the pluralism approach; and the reflective equilibrium approach. All three of these approaches to the normativity question are seriously flawed, never mind that the first two have some substantial provenance among logicians and the third has enjoyed a flourishing philosophical career. Against these views, we propose a strong version of what might be called normatively immanent descriptivism. We attempt to elucidate its virtues and to deal with what appears to be its most central vulnerability, embodied in the plain fact that actual human behaviour is sometimes irrational.
A possible worlds treatment of the normal alethic modalities was, after classical model theory, logic’s most significant semantic achievement in the century just past. Kripke’s groundbreaking paper appeared in 1959 and, in the scant few succeeding years, its principal analytical tool, possible worlds, was adapted to serve a range of quite different-seeming purposes – from nonnormal logics, to epistemic and doxastic logics, deontic and temporal logics and, not much later, the logic of counterfactual conditionals. In short order, possible worlds acquired a twofold reputation which has steadily enlarged to the present day. They were celebrated for both their mathematical power and their sheer versatility. This sets the stage for what I want to do here. I wish to explore the extent to which the supposed versatility of a possible worlds semantics is justified. In so doing, I shall confine my attention to its role in (1) logics of counterfactual conditionals, and (2) logics of belief. The question I pose is, why and on what grounds should we think that the device of possible worlds turns the semantic trick for these logics? My answer is that they do not turn the trick for them. Whereupon a further question presses for attention. If possible worlds semantics don’t work there, why does virtually everyone think that they do? Answering this second question is risky. Who am I to say why virtually everyone thinks that the possible worlds approach is more successful than I do? Who has vouchsafed me these powers? I shall try to mitigate the riskiness of my answer by contextualizing the evaluation of this approach in the following ways. First, the triumph of possible worlds occurred in the midst of a powerful general trend in logical theory, especially in the past 60 years. In that period, logical theory became aggressively and widely pluralistic.
Second, the versatility – the sheer ubiquity – of possible worlds as a tool of semantic and philosophical analysis, gives to possible worlds a kind of hegemonic standing.
In all three of its manifestations—abusive, circumstantial and tu quoque—the role of the ad hominem is to raise a doubt about the opposite party’s case-making bona fides. Provided that it is both presumptive and provisional, drawing such a conclusion is not a logical mistake, hence not a fallacy on the traditional conception of it. More remarkable is the role of the ad hominem retort in seeking the reassurance of one’s opponent when, on the face of it, reassurance is precisely what he would seem to be ill-placed to give. Brief concluding remarks are given over to an examination of rival approaches to the ad hominem, especially those in which it is conceived of as a dialectical error.
Greek, Indian and Arabic Logic marks the initial appearance of the multi-volume Handbook of the History of Logic. Additional volumes will be published when ready, rather than in strict chronological order. Soon to appear is The Rise of Modern Logic: From Leibniz to Frege. Also in preparation are Logic From Russell to Gödel, The Emergence of Classical Logic, Logic and the Modalities in the Twentieth Century, and The Many-Valued and Non-Monotonic Turn in Logic. Further volumes will follow, including Mediaeval and Renaissance Logic and Logic: A History of its Central Concepts. In designing the Handbook of the History of Logic, the Editors have taken the view that the history of logic holds more than an antiquarian interest, and that a knowledge of logic's rich and sophisticated development is, in various respects, relevant to the research programmes of the present day. Ancient logic is no exception. The present volume attests to the distant origins of some of modern logic's most important features, such as can be found in the claim by the authors of the chapter on Aristotle's early logic that, from its infancy, the theory of the syllogism is an example of an intuitionistic, non-monotonic, relevantly paraconsistent logic. Similarly, in addition to its comparative earliness, what is striking about the best of the Megarian and Stoic traditions is their sophistication and originality. Logic is an indispensably important pivot of the Western intellectual tradition. But, as the chapters on Indian and Arabic logic make clear, logic's parentage extends more widely than any direct line from the Greek city states. It is hardly surprising, therefore, that for centuries logic has been an unfetteredly international enterprise, whose research programmes reach to every corner of the learned world.
Like its companion volumes, Greek, Indian and Arabic Logic is the result of a design that gives to its distinguished authors as much space as would be needed to produce highly authoritative chapters, rich in detail and interpretative reach. The aim of the Editors is to have placed before the relevant intellectual communities a research tool of indispensable value. Together with the other volumes, Greek, Indian and Arabic Logic will be essential reading for everyone with a curiosity about logic's long development, especially researchers, graduate and senior undergraduate students in logic in all its forms, argumentation theory, AI and computer science, cognitive psychology and neuroscience, linguistics, forensics, philosophy and the history of philosophy, and the history of ideas.
For scientific essentialists, the only logical possibilities of existence are the real (or metaphysical) ones, and such possibilities, they say, are relative to worlds. They are not a priori, and they cannot just be invented. Rather, they are discoverable only by the a posteriori methods of science. There are, however, many philosophers who think that real possibilities are knowable a priori, or that they can just be invented. Marc Lange [Lange 2004] thinks that they can be invented, and tries to use his inventions to argue that the essentialist theory of counterfactual conditionals developed in Scientific Essentialism [Ellis 2001, hereafter SE] is flawed.
Based on the premise that what is relevant, consistent, or true may change from context to context, a formal framework of relevance and context is proposed in which:
• contexts are mathematical entities;
• each context has its own language with relevant implication;
• the languages of distinct contexts are connected by embeddings;
• inter-context deduction is supported by bridge rules;
• databases are sets of formulae tagged with deductive histories and the contexts they belong to;
• abduction and revision are supported by a notion of consistency of formulae and sets of formulae which are relative to a context, and which can, in turn, be seen as constituents of agendas.
Consider the proposition, "Informal logic is a subdiscipline of philosophy". The best chance of showing this to be true is showing that informal logic is part of logic, which in turn is a part of philosophy. Part 1 is given over to the task of sorting out these connections. If successful, informal logic can indeed be seen as part of philosophy; but there is no question of an exclusive relationship. Part 2 is a critical appraisal of the suggestion that informal logic is applied epistemology. Part 3 examines the claim that informal logic has failed to penetrate into mainstream philosophy, and suggestions for amelioration are considered.
Logic’s historically central mission has been to provide formally precise descriptions of logical consequence. This was done with two broad expectations in mind. One was that a pre-theoretically recognizable concept of consequence would be present in the ensuing formalization. The other was that the formalization would be mathematically mature. The first expectation calls for conceptual adequacy. The other calls for technical virtuosity. The record of the past century and a third discloses a tension between the two. Accordingly, logicians have sought a reasoned, if delicate, rapprochement, one in which each expectation would be given its due, but well short of free sway. Recent developments have imperiled this perestroika. One is logic’s massive and often rivalrous pluralism, and the cheapening relativism to which it beckons. This is exacerbated by the long-acknowledged fact that the formal representations of logic distort the logical particles of natural language. The present paper discusses what might be done about this.
This is an examination of the dialectical structure of deep disagreements about matters not open to empirical check. A dramatic case in point is the Law of Non-Contradiction (LNC). Dialetheists are notoriously of the view that, in some few cases, LNC has a true negation. The traditional position on LNC is that it is non-negotiable. The standard reason for thinking it non-negotiable is that, being a first principle, there is nothing to negotiate. One of my purposes is to show that the first-principle defence of LNC is inadequate. A second purpose is to argue that it flows from this inadequacy that LNC stands or falls on economic considerations, much in the spirit of Quine's pragmatism about logic generally. This is a tactical victory for dialetheists. It gives them room to make the case against LNC on cost-benefit grounds. As things presently stand, no such case can be considered decisive. But, given that costs and benefits shift with changing circumstances, it is possible that a winning case for the dialetheist may present itself in the future. Notwithstanding the rivalry between consistentists and dialetheists, they share a common opponent. This is trivialism, the doctrine that everything whatever is true. It is an ironic alliance, inasmuch as the dialetheist's case against the consistentist can be adapted to a defence of trivialism. How damaging this turns out to be depends on the adequacy of the reasons for the dialetheist's rejection of trivialism. My further purpose is to show that the damage is slighter than dialetheists commonly believe.
When someone is asked to speak his mind, it is sometimes possible for him to furnish what his utterance appears to have omitted. In such cases we might say that he had a mind to speak. Sometimes, however, the opposite is true. Asked to speak his mind, our speaker finds that he has no mind to speak. When it is possible to speak one's mind and when not is largely determined by the kinds of beings we are and by the kinds of resources we are able to draw upon. In either case, not speaking one's mind is leaving something out whose articulation would or could matter for the purposes for which one was speaking in the first place. Inarticulation is no fleetingly contingent and peripheral phenomenon in human thinking and discourse. It is a substantial and dominant commonplace. In Part One I attempt to say something about what it is about the human agent that makes inarticulateness so rife. In Part Two, I consider various strategies for making the unarticulated explicit, and certain constraints on such processes. I shall suggest, among other things, that standard treatments of enthymematic reconstruction are fundamentally misconceived.
This is an examination of similarities and differences between two recent models of abductive reasoning. The one is developed in Atocha Aliseda’s Abductive Reasoning: Logical Investigations into the Processes of Discovery and Evaluation (2006). The other is advanced by Dov Gabbay and the present author in their The Reach of Abduction: Insight and Trial (2005). A principal difference between the two approaches is that in the Gabbay-Woods model, but not in the Aliseda model, abductive inference is ignorance-preserving. A further difference is that Aliseda reconstructs the abduction relation in a semantic tableaux environment, whereas the Woods-Gabbay model, while less systematic, is more general. Of particular note is the connection between abduction and legal reasoning.
True to the spirit of Topoi’s Untimely Reviews section, the present essay is a work of the counterfactual imagination. Suppose that Quine’s “Two Dogmas” had been written and published in the late 1990s rather than the early 1950s. What, in those circumstances, would philosophical commentary look like, especially against the marked developments in Quine’s philosophy in that same period? In short, how would Quine’s “Two Dogmas” stand up as a late 1990s paper rather than an early 1950s paper? Answering that question is my task here.
Epistemology and informal logic have overlapping and broadly similar subject matters. A principle of methodological symmetry is: philosophical theories of sufficiently similar subject matters should engage similar methods. Suppose the best way to do epistemology is in highly formalized ways, with a large role for mathematical methods. The symmetry principle suggests this is also the best way to do the logic of reasoning and argument, the subject matter of informal logic. A capitulation to mathematics is inimical to informal logicians, yet formal methods and mathematical models are an emerging force in epistemology. What is to be done? What’s sauce for the goose of epistemology is sauce for the gander of informal logic.
I discuss eight theses espoused or occasioned by Toulmin: (1) the validity standard is nearly always the wrong standard for real-life reasoning; (2) little in good reasoning is topic-neutral; (3) the probability calculus distorts much probabilistic reasoning; (4) scant resources have a benign influence on human reasoning; (5) theoretical progress and conceptual change are connected; (6) logic should investigate the cognitive aspects of reasoning and arguing; (7) ideal models are unsuitable for normativity; and (8) the role of the Can Do Principle.