One of our purposes here is to expose something of the elementary logical structure of abductive reasoning, and to do so in a way that helps orient theorists to the various tasks that a logic of abduction should concern itself with. We are mindful of criticisms that have been levelled against the very idea of a logic of abduction; so we think it prudent to proceed with a certain diffidence. That our own account of abduction is itself abductive is a methodological expression of this diffidence. A second objective is to test our conception of abduction's logical structure against some of the more promising going accounts of abductive reasoning. We offer our various suggestions in a benignly advisory way. The primary target of our advice is ourselves; it is meant as a guide to work we have yet to complete or, in some instances, start. It is possible that our colleagues in the abduction research communities will find our counsel to be of some interest. But we repeat that our first concern is to try to get ourselves straight about what a logic of abduction should encompass.
This is an examination of the dialectical structure of deep disagreements about matters not open to empirical check. A dramatic case in point is the Law of Non-Contradiction (LNC). Dialetheists are notoriously of the view that, in some few cases…
The use of models in the construction of scientific theories is as widespread as it is philosophically interesting (and, one might say, vexing). In neither philosophical nor scientific practice do we find a univocal concept of model. But there is one established usage to which we want to direct our particular attention in this paper, in which a model is constituted by the theorist's idealizations and abstractions. Idealizations are expressed by statements known to be false. Abstractions are achieved by suppressing what is known to be true. Idealizations over-represent empirical phenomena. Abstractions under-represent them. We might think of idealizations and abstractions as one another's duals. Either way, they are purposeful distortions of phenomena on the ground.
MacColl is the recent subject of three interesting theses. One is that he is the probable originator of pluralism in logic. The second is that his pluralism expresses an underlying instrumentalism. The third is that the first two help explain his post-1909 neglect. Although there are respects in which he is both a pluralist and an instrumentalist, I will suggest that it is difficult to find in MacColl's writings a pluralism which honours the threefold attribution of having been originated by him, having been rooted in an instrumentalism adapted to logic, and having been the occasion of his neglect.
The logic that was purpose-built to accommodate the hoped-for reduction of arithmetic gave to language a dominant and pivotal place. Flowing from the founding efforts of Frege, Peirce, and Whitehead and Russell, this was a logic that incorporated proof theory into syntax, and in so doing made of grammar a senior partner in the logicistic enterprise. The seniority was reinforced by soundness and completeness metatheorems, and, in time, Quine would quip that the "grammar [of logic] is linguistics on purpose" [Quine, 1970, p. 15] and that "logic chases truth up the tree of grammar" [Quine, 1970, p. 35]. Nor was the centrality of syntax lost with the Gödel…
The logic of fiction has been a stand-alone research programme only since the early 1970s. It is a fair question as to why in the first place fictional discourse would have drawn the interest of professional logicians. It is a question admitting of different answers. One is that, since fictional names are "empty", fiction is a primary datum for any logician seeking a suitably comprehensive logic of denotation. Another answer arises from the so-called incompleteness problem, exemplified by the fact (or apparent fact) that some fictional sentences – think of "Sherlock Holmes' mother was nick-named 'Polly'" – are neither true nor false. These are sentences to command the attention of logicians who work on non-bivalent logics. A further spur to logical engagement is the supposed fictionality of certain kinds of ideal models in science and certain classes of mathematical objects. No doubt, there are other features of fictional discourse that provide the logician with a natural entrée, but perhaps it would also be correct to say that fiction's biggest draw for logicians is that our quite common beliefs about the fictional constitute what Nicholas Rescher calls "aporetic clusters", so named after the Latinized Greek aporos for "impassable". An aporetic cluster is a set of claims such that…
A possible worlds treatment of the normal alethic modalities was, after classical model theory, logic's most significant semantic achievement in the century just past. Kripke's groundbreaking paper appeared in 1959 and, in the scant few succeeding years, its principal analytical tool, possible worlds, was adapted to serve a range of quite different-seeming purposes – from nonnormal logics, to epistemic and doxastic logics, deontic and temporal logics and, not much later, the logic of counterfactual conditionals. In short order, possible worlds acquired a twofold reputation which has steadily enlarged to the present day. They were celebrated for both their mathematical power and their sheer versatility. This sets the stage for what I want to do here. I wish to explore the extent to which the supposed versatility of a possible worlds semantics is justified. In so doing, I shall confine my attention to its role in (1) logics of counterfactual conditionals, and (2) logics of belief. The question I pose is, why and on what grounds should we think that the device of possible worlds turns the semantic trick for these logics? My answer is that it does not. Whereupon a further question presses for attention. If possible worlds semantics don't work there, why does virtually everyone think that they do? Answering this second question is risky. Who am I to say why virtually everyone thinks that the possible worlds approach is more successful than I do? Who has vouchsafed me these powers? I shall try to mitigate the riskiness of my answer by contextualizing the evaluation of this approach in the following ways. First, the triumph of possible worlds occurred in the midst of a powerful general trend in logical theory, especially in the past 60 years. In that period, logical theory became aggressively and widely pluralistic.
Second, the versatility – the sheer ubiquity – of possible worlds as a tool of semantic and philosophical analysis gives to possible worlds a kind of hegemonic standing.
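The possible-worlds device whose versatility is at issue here is easy to exhibit in miniature. The following Python toy is my own illustration, not the paper's: it evaluates a necessity operator over an accessibility relation, and a Lewis/Stalnaker-style counterfactual by checking the consequent at the "closest" antecedent-worlds. The worlds, atoms, and the similarity ordering are all invented for the example.

```python
# A toy Kripke-style model: worlds, accessibility, and a valuation.
worlds = {"w0", "w1", "w2"}
access = {"w0": {"w0", "w1"}, "w1": {"w1"}, "w2": {"w2"}}
val = {"w0": {"p"}, "w1": {"p", "q"}, "w2": {"q"}}
# An invented similarity ordering: distance of each world from w0.
dist = {"w0": 0, "w1": 1, "w2": 2}

def holds(atom, w):
    return atom in val[w]

def box(atom, w):
    # Necessity: atom is true at every world accessible from w.
    return all(holds(atom, v) for v in access[w])

def counterfactual(ant, cons, w):
    # "If ant were the case, cons would be": cons holds at the
    # closest world(s) where ant holds (vacuously true if none).
    ant_worlds = [v for v in worlds if holds(ant, v)]
    if not ant_worlds:
        return True
    d = min(dist[v] for v in ant_worlds)
    return all(holds(cons, v) for v in ant_worlds if dist[v] == d)

print(box("p", "w0"))                  # True: p holds at w0 and w1
print(counterfactual("q", "p", "w0"))  # True: closest q-world is w1, where p holds
```

The point of the sketch is only to show what "turning the semantic trick" would amount to mechanically; whether the device is adequate for counterfactuals and belief is precisely what the paper disputes.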
"A model is a work of fiction. There are the obvious idealizations of physics – infinite potentials, zero-time correlations, perfect rigid rods, and frictionless planes. But it would be a mistake to think entirely in terms of idealizations of properties we conceive of as limiting cases, to which we can approach closer and closer in reality. For some properties are not even approached in reality. They are pure fictions." (Nancy Cartwright)
Epistemology and informal logic have overlapping and broadly similar subject matters. A principle of methodological symmetry is: philosophical theories of sufficiently similar subject matters should engage similar methods. Suppose the best way to do epistemology is in highly formalized ways, with a large role for mathematical methods. The symmetry principle suggests this is also the best way to do the logic of reasoning and argument, the subject matter of informal logic. A capitulation to mathematics is inimical to informal logicians, yet formal methods and mathematical models are an emerging force in epistemology. What is to be done? What's sauce for the goose of epistemology is sauce for the gander of informal logic.
This volume serves as a detailed introduction for those new to the field as well as a rich source of new insights and potential research agendas for those already engaged with the philosophy of economics.
An agent-centered, goal-directed, resource-bound logic of human reasoning would do well to note that individual cognitive agency is typified by the comparative scantness of available cognitive resources and by abduction's ignorance-preserving character. My principal purpose here is to tie abduction's scarce-resource adjustment capacity to its ignorance preservation.
Logic's historically central mission has been to provide formally precise descriptions of logical consequence. This was done with two broad expectations in mind. One was that a pre-theoretically recognizable concept of consequence would be present in the ensuing formalization. The other was that the formalization would be mathematically mature. The first expectation calls for conceptual adequacy. The other calls for technical virtuosity. The record of the past century and a third discloses a tension between the two. Accordingly, logicians have sought a reasoned, if delicate, rapprochement, one in which each expectation would be given its due, but well short of free sway. Recent developments have imperiled this perestroika. One is logic's massive and often rivalrous pluralism, and the cheapening relativism to which it beckons. This is exacerbated by the long-acknowledged fact that the formal representations of logic distort the logical particles of natural language. The present paper discusses what might be done about this.
Traditionally, an enthymeme is an incomplete argument, made so by the absence of one or more of its constituent statements. An enthymeme resolution strategy is a set of procedures for finding those missing elements, thus reconstructing the enthymeme and restoring its meaning. It is widely held that a condition on the adequacy of such procedures is that statements restored to an enthymeme produce an argument that is good in some given respect in relation to which the enthymeme itself is bad. In previous work, we emphasized the role of parsimony in enthymeme resolution strategies and concomitantly downplayed the role of charity. In the present paper, we take the analysis of enthymemes a step further. We will propose that if the pragmatic features that attend the phenomenon of enthymematic communication are duly heeded, the very idea of reconstructing enthymemes loses much of its rationale, and their interpretation comes to be conceived in a new light.
Enthymemes are traditionally defined as arguments in which some elements are left unstated. It is an empirical fact that enthymemes are both enormously frequent and appropriately understood in everyday argumentation. Why is it so? We outline an answer that dispenses with the so-called "principle of charity", which is the standard notion underlying most works on enthymemes. In contrast, we suggest that a different force drives enthymematic argumentation—namely, parsimony, i.e. the tendency to optimize resource consumption, in light of the agent's goals. On this view, the frequent use of enthymemes does not indicate sub-optimal performance of arguers, requiring appeals to charity for their redemption. On the contrary, it is seen as a highly adaptive argumentation strategy, given the need of everyday reasoners to optimize their cognitive resources. Considerations of parsimony also affect enthymeme reconstruction, i.e. the process by which the interpreter makes sense of the speaker's enthymemes. Far from being driven by any pro-social cooperative instinct, interpretative efforts are aimed at extracting valuable information at reasonable costs from available sources. Thus, there is a tension between parsimony and charity, insofar as the former is a non-social constraint for self-regulation of one's behaviour, whereas the latter implies a pro-social attitude. We will argue that some versions of charity are untenable for enthymeme interpretation, while others are compatible with the view defended here, but still require parsimony to expose the ultimate reasons upon which a presumption of fair treatment in enthymeme reconstruction is founded.
The God of the Biblical and patristic tradition, though perhaps incomplete, possesses properties including those that involve genidentity or C-connections with us. Thus God's existence is at least possible. Using a modified version of Parsons's elaboration of Meinong's theory of objects, we find that God exists if we do. But we also find that much else exists if we do; rather too much for confident belief.
There are passages in Fallacies suggesting a skeptical attitude to the very idea of inductive arguments, hence to the existence of inductive fallacies. Although the passages are brief and few in number, it would appear that Hamblin's resistance stems from doubts about the existence of relations of inductive consequence. This paper attempts to find a case in which such skepticism might plausibly be grounded. The case it proposes is highly conjectural, but important if true. Its greater importance lies in the threat it creates for the whole class of nonmonotonic logics.
True to the spirit of Topoi's Untimely Reviews section, the present essay is a work of the counterfactual imagination. Suppose that Quine's "Two Dogmas" had been written and published in the late 1990s rather than the early 1950s. What, in those circumstances, would philosophical commentary look like, especially against the marked developments in Quine's philosophy in that same period? In short, how would Quine's "Two Dogmas" stand up as a late 1990s paper rather than an early 1950s paper? Answering that question is my task here.
Semantic theorists of fiction typically seek to explain our semantic relations to the fictional within the more general context of theories of reference, privileging an explanation of the semantic over the psychological. In this article, we defend the reverse dependence. By clarifying our psychological relations to the fictional, we will find a guide for how to develop a semantics of fiction. A sketch of that semantics follows.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to "jump" in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are done by various mechanisms which are strongly dependent on context and knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3, where we introduce five resource principles. We show that these principles lead to some well known nonmonotonic systems such as Nute's defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles.
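The "jump" the abstract describes can be sketched mechanically. The following Python toy is my own illustration, not the authors' formal system: a defeasible rule (in the rough spirit of Nute-style defeasible logic) licenses a conclusion unless a defeater is among the known facts, so that adding information can retract the conclusion. The Tweety example is the stock illustration, not drawn from this paper.

```python
# Known facts, and defeasible rules of the form: body -> head, defeaters.
facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "flies(tweety)", {"penguin(tweety)"}),
]

def jump(facts):
    # Fire each defeasible rule whose body holds and for which
    # no defeater is among the known facts: a nonmonotonic "jump".
    concl = set(facts)
    for body, head, defeaters in rules:
        if body <= concl and not (defeaters & concl):
            concl.add(head)
    return concl

print(jump(facts))                        # tweety is presumed to fly
print(jump(facts | {"penguin(tweety)"}))  # the jump is retracted
```

The retraction on new information is exactly what makes the inference nonmonotonic, and the cheapness of the check (fire a rule unless an exception is already known) is one way of reading the paper's resource-based motivation.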
In criminal cases at common law, juries are permitted to convict on wholly circumstantial evidence even in the face of a reasonable case for acquittal. This generates the highly counterintuitive—if not absurd—consequence that there being reason to think that the accused didn't do it is not reason to doubt that he did. This is the no-reason-to-doubt problem. It has a technical solution provided that the evidence on which it is reasonable to think that the accused didn't do it is a different subset of the total evidence from that on which there is no reason to doubt that he did do it. It lies in the adversarial nature of criminal proceedings in the common law tradition that the subsets of the total evidence on which counsel base their opposing arguments are themselves different from and often incompatible with one another. While this solves the no-reason-to-doubt problem, it does so at the cost of triggering a second problem just as bad. It is the no-rival problem, according to which incompatible theories of the case based on incompatible subsets of the evidence cannot be rivals of one another. If neither party's case contradicts the other's then, by the burden of proof requirement, criminal convictions are impossible. Once the dilemma has been generated, the object of the paper is to determine how it might be escaped.
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
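Read purely symbolically, the abductive task both architectures compute, finding alternative abductive explanations over a Horn clause program, can be sketched in a few lines. The following Python toy is my own illustration under invented atoms (`rain`, `sprinkler`, `wet`); it does not reproduce the paper's neural networks, only the logical task: find the subset-minimal sets of abducible atoms from which the observation is derivable.

```python
from itertools import chain, combinations

# A Horn clause program: each rule is (body, head).
program = [({"rain"}, "wet"), ({"sprinkler"}, "wet")]
abducibles = {"rain", "sprinkler"}

def closure(facts):
    # Forward-chain the Horn program to a fixed point.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in program:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def explanations(observation):
    # All subset-minimal sets of abducibles that derive the observation.
    subsets = chain.from_iterable(
        combinations(sorted(abducibles), r) for r in range(len(abducibles) + 1))
    found = [set(s) for s in subsets if observation in closure(s)]
    return [e for e in found if not any(f < e for f in found)]

print(explanations("wet"))  # [{'rain'}, {'sprinkler'}]
```

The brute-force enumeration here is exponential in the number of abducibles; the parallel, learned computation of such alternatives is exactly what motivates the neural treatment in the paper.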
In all three of its manifestations—abusive, circumstantial and tu quoque—the role of the ad hominem is to raise a doubt about the opposite party's case-making bona fides. Provided that it is both presumptive and provisional, drawing such a conclusion is not a logical mistake, hence not a fallacy on the traditional conception of it. More remarkable is the role of the ad hominem retort in seeking the reassurance of one's opponent when, on the face of it, reassurance is precisely what he would seem to be ill-placed to give. Brief concluding remarks are given over to an examination of rival approaches to the ad hominem, especially those in which it is conceived of as a dialectical error.
E. C. W. Krabbe characterizes a metadialogue as a dialogue about a dialogue, which, in turn, is characterized as a ground-level dialogue. Krabbe raises a number of interesting questions about this distinction, of which the most pressing is whether the difference between ground-level and metadialogues can be drawn in a principled and suitably general way. In this note, I develop the idea that something counts as a metadialogue to the extent that it stands to its ground-level counterpart in a relation of irrelevance. The irrelevance in question subsumes a triple of subconcepts: strategic relevance, agenda-relevance and irredundancy-relevance.
This is an examination of similarities and differences between two recent models of abductive reasoning. The one is developed in Atocha Aliseda's Abductive Reasoning: Logical Investigations into the Processes of Discovery and Evaluation (2006). The other is advanced by Dov Gabbay and the present author in their The Reach of Abduction: Insight and Trial (2005). A principal difference between the two approaches is that in the Gabbay-Woods model, but not in the Aliseda model, abductive inference is ignorance-preserving. A further difference is that Aliseda reconstructs the abduction relation in a semantic tableaux environment, whereas the Woods-Gabbay model, while less systematic, is more general. Of particular note is the connection between abduction and legal reasoning.
Based on the premise that what is relevant, consistent, or true may change from context to context, a formal framework of relevance and context is proposed in which:
• contexts are mathematical entities;
• each context has its own language with relevant implication;
• the languages of distinct contexts are connected by embeddings;
• inter-context deduction is supported by bridge rules;
• databases are sets of formulae tagged with deductive histories and the contexts they belong to;
• abduction and revision are supported by a notion of consistency of formulae and sets of formulae which is relative to a context, and which can, in turn, be seen as a constituent of agendas.
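Two items on that list, bridge rules between contexts and formulae tagged with their deductive histories, can be given a minimal operational sketch. The following Python toy is my own illustration under invented context names and formulae; it is not the paper's formal framework, only the shape of inter-context deduction it lists.

```python
# Each context holds its own set of formulae.
contexts = {
    "physics": {"f=ma"},
    "law": {"contract_valid"},
}
# Bridge rules: (source context, formula) licenses (target context, formula).
bridges = [(("physics", "f=ma"), ("engineering", "use_f=ma"))]

def apply_bridges(contexts):
    # Carry conclusions across contexts, tagging each imported formula
    # with a record of where it came from (its deductive history).
    out = {k: set(v) for k, v in contexts.items()}
    for (src, phi), (tgt, psi) in bridges:
        if phi in out.get(src, set()):
            out.setdefault(tgt, set()).add((psi, f"via {src}:{phi}"))
    return out

result = apply_bridges(contexts)
print(result["engineering"])  # the imported formula, tagged with its origin
```

The tagging is the point: because each imported formula carries its provenance, consistency can be judged relative to a context rather than globally, as the abstract's last item requires.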
For scientific essentialists, the only logical possibilities of existence are the real (or metaphysical) ones, and such possibilities, they say, are relative to worlds. They are not a priori, and they cannot just be invented. Rather, they are discoverable only by the a posteriori methods of science. There are, however, many philosophers who think that real possibilities are knowable a priori, or that they can just be invented. Marc Lange [Lange 2004] thinks that they can be invented, and tries to use his inventions to argue that the essentialist theory of counterfactual conditionals developed in Scientific Essentialism [Ellis 2001, hereafter SE] is flawed.
Greek, Indian and Arabic Logic marks the initial appearance of the multi-volume Handbook of the History of Logic. Additional volumes will be published when ready, rather than in strict chronological order. Soon to appear is The Rise of Modern Logic: From Leibniz to Frege. Also in preparation are Logic From Russell to Gödel, The Emergence of Classical Logic, Logic and the Modalities in the Twentieth Century, and The Many-Valued and Non-Monotonic Turn in Logic. Further volumes will follow, including Mediaeval and Renaissance Logic and Logic: A History of its Central Concepts. In designing the Handbook of the History of Logic, the Editors have taken the view that the history of logic holds more than an antiquarian interest, and that a knowledge of logic's rich and sophisticated development is, in various respects, relevant to the research programmes of the present day. Ancient logic is no exception. The present volume attests to the distant origins of some of modern logic's most important features, such as can be found in the claim by the authors of the chapter on Aristotle's early logic that, from its infancy, the theory of the syllogism is an example of an intuitionistic, non-monotonic, relevantly paraconsistent logic. Similarly, in addition to its comparative earliness, what is striking about the best of the Megarian and Stoic traditions is their sophistication and originality. Logic is an indispensably important pivot of the Western intellectual tradition. But, as the chapters on Indian and Arabic logic make clear, logic's parentage extends more widely than any direct line from the Greek city states. It is hardly surprising, therefore, that for centuries logic has been an unfetteredly international enterprise, whose research programmes reach to every corner of the learned world.
Like its companion volumes, Greek, Indian and Arabic Logic is the result of a design that gives to its distinguished authors as much space as would be needed to produce highly authoritative chapters, rich in detail and interpretative reach. The aim of the Editors is to have placed before the relevant intellectual communities a research tool of indispensable value. Together with the other volumes, Greek, Indian and Arabic Logic will be essential reading for everyone with a curiosity about logic's long development, especially researchers, graduate and senior undergraduate students in logic in all its forms, argumentation theory, AI and computer science, cognitive psychology and neuroscience, linguistics, forensics, philosophy and the history of philosophy, and the history of ideas.
In a world plagued by disagreement and conflict one might expect that the exact sciences of logic and mathematics would provide a safe harbor. In fact these disciplines are rife with internal divisions between different, often incompatible, systems. Do these disagreements admit of resolution? Can such resolution be achieved without disturbing the assumption that the theorems of logic and mathematics state objective truths about the real world? In this original and historically rich book John Woods explores apparently intractable disagreements in logic and the foundations of mathematics and sets out conflict resolution strategies that evade or disarm these stalemates. An important sub-theme of the book is the extent to which pluralism in logic and the philosophy of mathematics undermines realist assumptions. This book makes an important contribution to such areas of philosophy as logic, philosophy of language and argumentation theory. It will also be of interest to mathematicians and computer scientists.
When someone is asked to speak his mind, it is sometimes possible for him to furnish what his utterance appears to have omitted. In such cases we might say that he had a mind to speak. Sometimes, however, the opposite is true. Asked to speak his mind, our speaker finds that he has no mind to speak. When it is possible to speak one's mind and when not is largely determined by the kinds of beings we are and by the kinds of resources we are able to draw upon. In either case, not speaking one's mind is leaving something out whose articulation would or could matter for the purposes for which one was speaking in the first place. Inarticulation is no fleetingly contingent and peripheral phenomenon in human thinking and discourse. It is a substantial and dominant commonplace. In Part One I attempt to say something about what it is about the human agent that makes inarticulateness so rife. In Part Two, I consider various strategies for making the unarticulated explicit, and certain constraints on such processes. I shall suggest, among other things, that standard treatments of enthymematic reconstruction are fundamentally misconceived.
Perelman and Olbrechts-Tyteca write in The New Rhetoric that, "The first half of this chapter is devoted to the analysis of the relations that establish reality by resort to the particular case. The latter can play a wide variety of roles; as an example, it makes generalization possible. . . ." I will suggest that no fallacy theorist or philosopher of science who has a serious interest in bringing the fallacy of hasty generalization to theoretical heel should omit consideration of these wise Belgian words. Although these very words appear to endorse the fallacy outright, they do no such thing in fact. It proves instructive to learn why.
Consider the proposition, "Informal logic is a subdiscipline of philosophy". The best chance of showing this to be true is showing that informal logic is part of logic, which in turn is a part of philosophy. Part 1 is given over to the task of sorting out these connections. If successful, informal logic can indeed be seen as part of philosophy; but there is no question of an exclusive relationship. Part 2 is a critical appraisal of the suggestion that informal logic is applied epistemology. Part 3 examines the claim that informal logic has failed to penetrate into mainstream philosophy, and suggestions for amelioration are considered.
A slippery slope argument is an argument to this twofold effect. First, that if a policy or practice P is permitted, then we lack the dialectical resources to demonstrate that a similar policy or practice P* is not permissible. Since P* is indeed not permissible, we should not endorse policy or practice P. At the heart of such arguments is the idea of dialectical impotence, the inability to stop the acceptance of apparently small deviations from a heretofore secure policy or practice from leading to apparently large and unacceptable deviations. Using examples of analogical arguments and sorites arguments I examine this phenomenon in the context of collapsing taboos.