One of our purposes here is to expose something of the elementary logical structure of abductive reasoning, and to do so in a way that helps orient theorists to the various tasks that a logic of abduction should concern itself with. We are mindful of criticisms that have been levelled against the very idea of a logic of abduction; so we think it prudent to proceed with a certain diffidence. That our own account of abduction is itself abductive is a methodological expression of this diffidence. A second objective is to test our conception of abduction’s logical structure against some of the more promising going accounts of abductive reasoning. We offer our various suggestions in a benignly advisory way. The primary targets of our advice are ourselves, meant as guides to work we have yet to complete or, in some instances, start. It is possible that our colleagues in the abduction research communities will find our counsel to be of some interest. But we repeat that our first concern is to try to get ourselves straight about what a logic of abduction should encompass.
This is an examination of the dialectical structure of deep disagreements about matters not open to empirical check. A dramatic case in point is the Law of Non-Contradiction (LNC). Dialetheists are notoriously of the view that, in some few cases, it fails.
The use of models in the construction of scientific theories is as widespread as it is philosophically interesting (and, one might say, vexing). In neither philosophical nor scientific practice do we find a univocal concept of model. But there is one established usage to which we want to direct our particular attention in this paper, in which a model is constituted by the theorist’s idealizations and abstractions. Idealizations are expressed by statements known to be false. Abstractions are achieved by suppressing what is known to be true. Idealizations over-represent empirical phenomena. Abstractions under-represent them. We might think of idealizations and abstractions as one another’s duals. Either way, they are purposeful distortions of phenomena on the ground.
MacColl is the recent subject of three interesting theses. One is that he is the probable originator of pluralism in logic. The second is that his pluralism expresses an underlying instrumentalism. The third is that the first two help explain his post-1909 neglect. Although there are respects in which he is both a pluralist and an instrumentalist, I will suggest that it is difficult to find in MacColl’s writings a pluralism which honours the threefold attribution of having been originated by him, having been rooted in an instrumentalism adapted to logic, and having been the occasion of his neglect.
In times past there was a celebrated, and somewhat mythical, disagreement between William James and W.K. Clifford. Clifford thought that our cognitive ends were best advanced by a determined effort to avoid error. James thought that our cognitive flourishing was ineliminably linked to a venturing forth for truth. Each carries its own procedural implications. For James, it was Nothing Ventured, Nothing Gained. For Clifford it was Nothing Ventured, Nothing Lost. Of course, these are caricatures; but we know what’s meant, at least roughly. The Clifford-James divide carves up Quine’s philosophical architecture in an important way. Quine is a Jamesian about science and a Cliffordian about philosophy. Quine knows that science will get nowhere if it is not allowed to make mistakes, lots of them. Like Peirce and Dewey, Quine is a fallibilist about science. He thinks that the correct, indeed the best, methods for science are those that get things wrong with a notable frequency. Even so, there are two considerations which make these the right procedures. One is that a condition of getting things right in science is getting things wrong, though not the same things at the same time. Another is that the error-susceptible…
The logic that was purpose-built to accommodate the hoped-for reduction of arithmetic gave to language a dominant and pivotal place. Flowing from the founding efforts of Frege, Peirce, and Whitehead and Russell, this was a logic that incorporated proof theory into syntax, and in so doing made of grammar a senior partner in the logicist enterprise. The seniority was reinforced by soundness and completeness metatheorems, and, in time, Quine would quip that the “grammar [of logic] is linguistics on purpose” [Quine, 1970, p. 15] and that “logic chases truth up the tree of grammar” [Quine, 1970, p. 35]. Nor was the centrality of syntax lost with the Gödel…
The logic of fiction has been a stand-alone research programme only since the early 1970s. It is a fair question as to why in the first place fictional discourse would have drawn the interest of professional logicians. It is a question admitting of different answers. One is that, since fictional names are “empty”, fiction is a primary datum for any logician seeking a suitably comprehensive logic of denotation. Another answer arises from the so-called incompleteness problem, exemplified by the fact (or apparent fact) that some fictional sentences – think of “Sherlock Holmes’ mother was nick-named ‘Polly’” – are neither true nor false. These are sentences to command the attention of logicians who work on non-bivalent logics. A further spur to logical engagement is the supposed fictionality of certain kinds of ideal models in science and certain classes of mathematical objects. No doubt, there are other features of fictional discourse that provide the logician with a natural entrée, but perhaps it would also be correct to say that fiction’s biggest draw for logicians is that our quite common beliefs about the fictional constitute what Nicholas Rescher calls “aporetic clusters”, so named after the Latinized Greek aporos for “impassable”. An aporetic cluster is a set of claims such that…
A possible worlds treatment of the normal alethic modalities was, after classical model theory, logic’s most significant semantic achievement in the century just past. Kripke’s groundbreaking paper appeared in 1959 and, in the scant few succeeding years, its principal analytical tool, possible worlds, was adapted to serve a range of quite different-seeming purposes – from nonnormal logics, to epistemic and doxastic logics, deontic and temporal logics and, not much later, the logic of counterfactual conditionals. In short order, possible worlds acquired a twofold reputation which has steadily enlarged to the present day. They were celebrated for both their mathematical power and their sheer versatility. This sets the stage for what I want to do here. I wish to explore the extent to which the supposed versatility of a possible worlds semantics is justified. In so doing, I shall confine my attention to its role in (1) logics of counterfactual conditionals, and (2) logics of belief. The question I pose is, why and on what grounds should we think that the device of possible worlds turns the semantic trick for these logics? My answer is that it does not. Whereupon a further question presses for attention. If possible worlds semantics doesn’t work there, why does virtually everyone think that it does? Answering this second question is risky. Who am I to say why virtually everyone thinks that the possible worlds approach is more successful than I do? Who has vouchsafed me these powers? I shall try to mitigate the riskiness of my answer by contextualizing the evaluation of this approach in the following ways. First, the triumph of possible worlds occurred in the midst of a powerful general trend in logical theory, especially in the past 60 years. In that period, logical theory became aggressively and widely pluralistic.
Second, the versatility – the sheer ubiquity – of possible worlds as a tool of semantic and philosophical analysis gives them a kind of hegemonic standing.
“A model is a work of fiction. There are the obvious idealizations of physics – infinite potentials, zero-time correlations, perfect rigid rods, and frictionless planes. But it would be a mistake to think entirely in terms of idealizations of properties we conceive of as limiting cases, to which we can approach closer and closer in reality. For some properties are not even approached in reality. They are pure fictions.” – Nancy Cartwright
Semantic theorists of fiction typically look for an account of our semantic relations to the fictional within general-purpose theories of reference, privileging an explanation of the semantic over the psychological. In this paper, we counsel a reverse dependency. In sorting out our psychological relations to the fictional, there is useful guidance about how to proceed with the semantics of fiction. A sketch of the semantics follows.
Enthymemes are traditionally defined as arguments in which some elements are left unstated. It is an empirical fact that enthymemes are both enormously frequent and appropriately understood in everyday argumentation. Why is it so? We outline an answer that dispenses with the so-called “principle of charity”, which is the standard notion underlying most works on enthymemes. In contrast, we suggest that a different force drives enthymematic argumentation—namely, parsimony, i.e. the tendency to optimize resource consumption in light of the agent’s goals. On this view, the frequent use of enthymemes does not indicate sub-optimal performance of arguers, requiring appeals to charity for their redemption. On the contrary, it is seen as a highly adaptive argumentation strategy, given the need of everyday reasoners to optimize their cognitive resources. Considerations of parsimony also affect enthymeme reconstruction, i.e. the process by which the interpreter makes sense of the speaker’s enthymemes. Far from being driven by any pro-social cooperative instinct, interpretative efforts are aimed at extracting valuable information at reasonable costs from available sources. Thus, there is a tension between parsimony and charity, insofar as the former is a non-social constraint for self-regulation of one’s behaviour, whereas the latter implies a pro-social attitude. We will argue that some versions of charity are untenable for enthymeme interpretation, while others are compatible with the view defended here, but still require parsimony to expose the ultimate reasons upon which a presumption of fair treatment in enthymeme reconstruction is founded.
The God of the Biblical and patristic tradition, though perhaps incomplete, possesses properties including those that involve genidentity or C-connections with us. Thus God's existence is at least possible. Using a modified version of Parsons's elaboration of Meinong's theory of objects, we find that God exists if we do. But we also find that much else exists if we do; rather too much for confident belief.
Logic’s historically central mission has been to provide formally precise descriptions of logical consequence. This was done with two broad expectations in mind. One was that a pre-theoretically recognizable concept of consequence would be present in the ensuing formalization. The other was that the formalization would be mathematically mature. The first expectation calls for conceptual adequacy. The other calls for technical virtuosity. The record of the past century and a third discloses a tension between the two. Accordingly, logicians have sought a reasoned, if delicate, rapprochement, one in which each expectation would be given its due, but well short of free sway. Recent developments have imperiled this perestroika. One is logic’s massive and often rivalrous pluralism, and the cheapening relativism to which it beckons. This is exacerbated by the long-acknowledged fact that the formal representations of logic distort the logical particles of natural language. The present paper discusses what might be done about this.
Traditionally, an enthymeme is an incomplete argument, made so by the absence of one or more of its constituent statements. An enthymeme resolution strategy is a set of procedures for finding those missing elements, thus reconstructing the enthymeme and restoring its meaning. It is widely held that a condition on the adequacy of such procedures is that statements restored to an enthymeme produce an argument that is good in some given respect in relation to which the enthymeme itself is bad. In previous work, we emphasized the role of parsimony in enthymeme resolution strategies and concomitantly downplayed the role of charity. In the present paper, we take the analysis of enthymemes a step further. We will propose that if the pragmatic features that attend the phenomenon of enthymematic communication are duly heeded, the very idea of reconstructing enthymemes loses much of its rationale, and their interpretation comes to be conceived in a new light.
There are passages in Fallacies suggesting a skeptical attitude to the very idea of inductive arguments, hence to the existence of inductive fallacies. Although the passages are brief and few in number, it would appear that Hamblin’s resistance stems from doubts about the existence of relations of inductive consequence. This paper attempts to find a case in which such skepticism might plausibly be grounded. The case it proposes is highly conjectural, but important if true. Its greater importance lies in the threat it creates for the whole class of nonmonotonic logics.
Formal nonmonotonic systems try to model the phenomenon that common sense reasoners are able to “jump” in their reasoning from assumptions Δ to conclusions C without there being any deductive chain from Δ to C. Such jumps are made by various mechanisms which are strongly dependent on context and on knowledge of how the actual world functions. Our aim is to motivate these jump rules as inference rules designed to optimise survival in an environment with scant resources of effort and time. We begin with a general discussion and quickly move to Section 3, where we introduce five resource principles. We show that these principles lead to some well known nonmonotonic systems such as Nute’s defeasible logic. We also give several examples of practical reasoning situations to illustrate our principles.
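Purely as an illustration of the shape such jump rules take (this is not Nute's actual formal system; the single rule and the atoms below are hypothetical stand-ins), a defeasible rule can be read as licensing a conclusion from what is known unless a known defeater blocks it:

```python
# A minimal sketch of defeasible "jumping": a rule (premises, conclusion,
# defeaters) licenses its conclusion when its premises hold among the
# known facts and no defeater is already known. Illustrative only.

def jump(facts, defeasible_rules):
    """Return the facts plus every conclusion licensed by an undefeated rule."""
    conclusions = set(facts)
    for premises, conclusion, defeaters in defeasible_rules:
        # One pass suffices for this tiny example; a real engine would iterate.
        if premises <= conclusions and not (defeaters & conclusions):
            conclusions.add(conclusion)
    return conclusions

rules = [
    ({"bird"}, "flies", {"penguin"}),  # birds normally fly
]

print(jump({"bird"}, rules))             # "flies" is jumped to
print(jump({"bird", "penguin"}, rules))  # the jump is defeated
```

The nonmonotonicity shows in the second call: adding the fact "penguin" to the assumptions retracts the conclusion "flies", which no deductive consequence relation would allow.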
Abduction is or subsumes a process of inference. It entertains possible hypotheses and it chooses hypotheses for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
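On the symbolic side only, and with an entirely hypothetical program and atom names (this is a toy sketch, not the paper's neural-symbolic machinery), abductive explanation over propositional Horn clauses can be pictured as a search for sets of abducibles whose addition to the program entails the observation:

```python
from itertools import chain, combinations

# Toy propositional abduction: which sets of abducibles, assumed as facts,
# make the observation derivable from the Horn-clause program?

def entails(program, facts, goal):
    """Forward-chain rules of the form (body, head) from `facts`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in program:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return goal in derived

def explanations(program, abducibles, observation):
    """All subsets of the abducibles whose assumption entails the observation."""
    subsets = chain.from_iterable(
        combinations(abducibles, r) for r in range(len(abducibles) + 1))
    return [set(s) for s in subsets if entails(program, s, observation)]

program = [({"rain"}, "wet"), ({"sprinkler"}, "wet")]
print(explanations(program, ["rain", "sprinkler"], "wet"))
```

The brute-force enumeration of candidate hypothesis sets here is exactly the kind of computation that, on the paper's proposal, a neural architecture would carry out in parallel rather than by exhaustive search.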
This is an examination of similarities and differences between two recent models of abductive reasoning. The one is developed in Atocha Aliseda’s Abductive Reasoning: Logical Investigations into the Processes of Discovery and Evaluation (2006). The other is advanced by Dov Gabbay and the present author in their The Reach of Abduction: Insight and Trial (2005). A principal difference between the two approaches is that in the Gabbay-Woods model, but not in the Aliseda model, abductive inference is ignorance-preserving. A further difference is that Aliseda reconstructs the abduction relation in a semantic tableaux environment, whereas the Woods-Gabbay model, while less systematic, is more general. Of particular note is the connection between abduction and legal reasoning.
Based on the premise that what is relevant, consistent, or true may change from context to context, a formal framework of relevance and context is proposed in which
• contexts are mathematical entities;
• each context has its own language with relevant implication;
• the languages of distinct contexts are connected by embeddings;
• inter-context deduction is supported by bridge rules;
• databases are sets of formulae tagged with deductive histories and the contexts they belong to;
• abduction and revision are supported by a notion of consistency of formulae and sets of formulae which are relative to a context, and which can, in turn, be seen as constituents of agendas.
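A schematic, entirely hypothetical rendering of two of the points above (contexts as entities in their own right, connected by bridge-style embeddings of formulae) might look as follows; the class and the translation function are illustrative stand-ins, not the framework's formal definitions:

```python
# Sketch: contexts hold their own formulae; a bridge embeds one context's
# formulae into another's language via a translation. Hypothetical names.

class Context:
    def __init__(self, name):
        self.name = name
        # formulae tagged, by membership here, with the context they belong to
        self.formulae = set()

    def assert_formula(self, f):
        self.formulae.add(f)

def bridge(src, dst, translate):
    """Embed every formula of `src` into `dst` under `translate`."""
    for f in src.formulae:
        dst.assert_formula(translate(f))

physics = Context("physics")
law = Context("law")
physics.assert_formula("speed(car, 140)")
bridge(physics, law, lambda f: f"evidence({f})")
print(law.formulae)  # the physics claim, re-expressed in the law context
```

The point of the sketch is only structural: the two contexts never share a language directly; all traffic between them goes through the embedding.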
For scientific essentialists, the only logical possibilities of existence are the real (or metaphysical) ones, and such possibilities, they say, are relative to worlds. They are not a priori, and they cannot just be invented. Rather, they are discoverable only by the a posteriori methods of science. There are, however, many philosophers who think that real possibilities are knowable a priori, or that they can just be invented. Marc Lange [Lange 2004] thinks that they can be invented, and tries to use his inventions to argue that the essentialist theory of counterfactual conditionals developed in Scientific Essentialism [Ellis 2001, hereafter SE] is flawed.
Greek, Indian and Arabic Logic marks the initial appearance of the multi-volume Handbook of the History of Logic. Additional volumes will be published when ready, rather than in strict chronological order. Soon to appear is The Rise of Modern Logic: From Leibniz to Frege. Also in preparation are Logic From Russell to Gödel, The Emergence of Classical Logic, Logic and the Modalities in the Twentieth Century, and The Many-Valued and Non-Monotonic Turn in Logic. Further volumes will follow, including Mediaeval and Renaissance Logic and Logic: A History of its Central Concepts. In designing the Handbook of the History of Logic, the Editors have taken the view that the history of logic holds more than an antiquarian interest, and that a knowledge of logic's rich and sophisticated development is, in various respects, relevant to the research programmes of the present day. Ancient logic is no exception. The present volume attests to the distant origins of some of modern logic's most important features, such as can be found in the claim by the authors of the chapter on Aristotle's early logic that, from its infancy, the theory of the syllogism is an example of an intuitionistic, non-monotonic, relevantly paraconsistent logic. Similarly, in addition to its comparative earliness, what is striking about the best of the Megarian and Stoic traditions is their sophistication and originality. Logic is an indispensably important pivot of the Western intellectual tradition. But, as the chapters on Indian and Arabic logic make clear, logic's parentage extends more widely than any direct line from the Greek city states. It is hardly surprising, therefore, that for centuries logic has been an unfetteredly international enterprise, whose research programmes reach to every corner of the learned world.
Like its companion volumes, Greek, Indian and Arabic Logic is the result of a design that gives to its distinguished authors as much space as would be needed to produce highly authoritative chapters, rich in detail and interpretative reach. The aim of the Editors is to have placed before the relevant intellectual communities a research tool of indispensable value. Together with the other volumes, Greek, Indian and Arabic Logic will be essential reading for everyone with a curiosity about logic's long development, especially researchers, graduate and senior undergraduate students in logic in all its forms, argumentation theory, AI and computer science, cognitive psychology and neuroscience, linguistics, forensics, philosophy and the history of philosophy, and the history of ideas.
In a world plagued by disagreement and conflict one might expect that the exact sciences of logic and mathematics would provide a safe harbor. In fact these disciplines are rife with internal divisions between different, often incompatible, systems. Do these disagreements admit of resolution? Can such resolution be achieved without disturbing the assumption that the theorems of logic and mathematics state objective truths about the real world? In this original and historically rich book John Woods explores apparently intractable disagreements in logic and the foundations of mathematics and sets out conflict-resolution strategies that evade or disarm these stalemates. An important sub-theme of the book is the extent to which pluralism in logic and the philosophy of mathematics undermines realist assumptions. This book makes an important contribution to such areas of philosophy as logic, philosophy of language and argumentation theory. It will also be of interest to mathematicians and computer scientists.
When someone is asked to speak his mind, it is sometimes possible for him to furnish what his utterance appears to have omitted. In such cases we might say that he had a mind to speak. Sometimes, however, the opposite is true. Asked to speak his mind, our speaker finds that he has no mind to speak. When it is possible to speak one's mind and when not is largely determined by the kinds of beings we are and by the kinds of resources we are able to draw upon. In either case, not speaking one's mind is leaving something out whose articulation would or could matter for the purposes for which one was speaking in the first place. Inarticulation is no fleetingly contingent and peripheral phenomenon in human thinking and discourse. It is a substantial and dominant commonplace. In Part One I attempt to say something about what it is about the human agent that makes inarticulateness so rife. In Part Two, I consider various strategies for making the unarticulated explicit, and certain constraints on such processes. I shall suggest, among other things, that standard treatments of enthymematic reconstruction are fundamentally misconceived.
Perelman and Olbrechts-Tyteca write in The New Rhetoric that, “The first half of this chapter is devoted to the analysis of the relations that establish reality by resort to the particular case. The latter can play a wide variety of roles; as an example, it makes generalization possible. . . .” I will suggest that no fallacy theorist or philosopher of science who has a serious interest in bringing the fallacy of hasty generalization to theoretical heel should omit consideration of these wise Belgian words. Although these very words appear to endorse the fallacy outright, they do no such thing in fact. It proves instructive to learn why.
A slippery slope argument is an argument to this twofold effect. First, that if a policy or practice P is permitted, then we lack the dialectical resources to demonstrate that a similar policy or practice P* is not permissible. Since P* is indeed not permissible, we should not endorse policy or practice P. At the heart of such arguments is the idea of dialectical impotence: the inability to stop the acceptance of apparently small deviations from a heretofore secure policy or practice from leading to apparently large and unacceptable deviations. Using examples of analogical arguments and sorites arguments, I examine this phenomenon in the context of collapsing taboos.
An account of analogical characterization is developed in which the following things are claimed. (1) Analogical predications are irreflexive, asymmetrical, atransitive and non-inversive. (2) Analogues A and B share role-similarity descriptions sufficiently abstract to overcome the differences between A and B. Analogies pivot on the point of limited similarity and substantial, even radical, difference. (3) The semantical theory for sentences making analogical attributions requires a distinction between (sentential) meaning as truth conditions and (sentential) meaning as a functional compound of the meanings of contained lexical items. Analogical sentences possess both kinds of meaning. They are true via their truth conditions and would be false via their lexical meanings. The distinctive feature of the lexical meaning of analogical sentences is the tightness of constraints on closure. The implications of analogical sentences, given their lexical meanings, though there, aren't drawn. It is in this sense that analogies are made and not found.
In an attempt to overcome the traditional casual neglect of the study of the informal fallacies, we here treat one fallacy, the ad baculum, at an adequate theoretical level in order to determine how it may best be understood as a fallacy. We conclude, after following through a number of plausible routes of tracking down the essential fallaciousness of the ad baculum, that the type of phenomenon so typically presented as the ad baculum in the textbooks is not, so far as we can tell, an instance of a logical fallacy.
Current philosophical trends in North America are again raising the issue as to whether or not there can be ‘moral experts’. An expert is defined here as one who predicts and explains better than the layman in a particular domain on the basis of his specialized underlying knowledge of it. This analysis is then applied to the domain of morality. Special attention is given to the claim that moral philosophers are professionally more capable of critically thinking through the nature of moral problems. It is argued that philosophers tend to neglect the area of actual argumentation about specific moral issues, and that it is here, at the point of contact with living moral experience and empirical research into it, that the possibility of ‘moral expertise’ lies.