A number of authors have argued that Peano Arithmetic supplemented with a logical validity predicate is inconsistent in much the same manner as is PA supplemented with an unrestricted truth predicate. In this paper I show that, on the contrary, there is no genuine paradox of logical validity: a completely general logical validity predicate can be coherently added to PA, and the resulting system is consistent. In addition, this observation leads to a number of novel, and important, insights into the nature of logical validity itself.
What are people who disagree about logic disagreeing about? The paper argues that (in a wide range of cases) they are primarily disagreeing about how to regulate their degrees of belief. An analogy is drawn between beliefs about validity and beliefs about chance: both sorts of belief serve primarily to regulate degrees of belief about other matters, but in both cases the concepts have a kind of objectivity nonetheless.
We consider notions of truth and logical validity defined in various recent constructions of Hartry Field. We try to explicate his notion of determinate truth by clarifying the path-dependent hierarchies of his determinateness operator.
The purpose of this paper is to explore the issue of how the validity of the parallel inference is possible in view of its deep semantic-syntactic structure. I first present a philosophical interpretation of the ancient Mohist treatment of the parallel inference concerning its semantic-syntactic structure. Then, to formally and accurately capture the later Mohist point in this connection for the sake of giving a general condition for the validity of the parallel inference, I suggest a modern logical treatment via an expanded predicate logic account.
Recanati takes for granted the conveyance conception of linguistic communication, although it is not very clear exactly where he lies on the spectrum of possible variations. Even if we disavow all such conceptions of linguistic communication, there will be a place for semantic theory in articulating normative concepts such as logical consistency and logical validity. An approach to semantics focused on such normative concepts is illustrated using the example of “It’s raining”. It is argued that Recanati’s conception of semantics as involving the pragmatics of saturation and modulation cannot account for the logical properties of “It’s raining”.
A natural language argument may be valid in at least two nonequivalent senses: it may be interpretationally or representationally valid (Etchemendy in The concept of logical consequence. Harvard University Press, Cambridge, 1990). Interpretational and representational validity can both be formally exhibited by classical first-order logic. However, as these two notions of informal validity differ extensionally and first-order logic fixes one determinate extension for the notion of formal validity (or consequence), some arguments must be formalized by unrelated, nonequivalent formalizations in order to formally account for their interpretational or representational validity, respectively. As a consequence, arguments must be formalized subject to different criteria of adequate formalization depending on which variant of informal validity is to be revealed. This paper develops different criteria that formalizations of an argument have to satisfy in order to exhibit the latter’s interpretational or representational validity.
The traditional view that all logical truths are metaphysically necessary has come under attack in recent years. The contrary claim is prominent in David Kaplan’s work on demonstratives, and Edward Zalta has argued that logical truths that are not necessary appear in modal languages supplemented only with some device for making reference to the actual world (and thus independently of whether demonstratives like ‘I’, ‘here’, and ‘now’ are present). If this latter claim can be sustained, it strikes close to the heart of the traditional view. I begin this paper by discussing and refuting Zalta’s argument in the context of a language for propositional modal logic with an actuality connective (section 1). This involves showing that his argument in favor of real world validity, his preferred explication of logical truth, is fallacious. Next (section 2) I argue for an alternative explication of logical truth called general validity. Since the rule of necessitation preserves general validity, the argument of section 2 provides a reason for affirming the traditional view. Finally (section 3) I show that the intuitive idea behind the discredited notion of real world validity finds legitimate expression in an object language connective for deep necessity.
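The contrast between the two explications of logical truth can be stated schematically (a standard textbook formulation of the two notions, not the author's own notation; @ is the actuality connective and w* the model's designated actual world):

```latex
\text{Real-world validity:}\quad \models_{rw} \varphi \;\iff\; \text{for every model } M,\; M, w^* \Vdash \varphi
\\
\text{General validity:}\quad \models_{g} \varphi \;\iff\; \text{for every model } M \text{ and every world } w,\; M, w \Vdash \varphi
\\
% Example: @p \to p is real-world valid (at w^* the truth of @p and p coincide)
% but not generally valid, so necessitation fails for real-world validity:
\models_{rw} @p \to p \qquad \not\models_{rw} \Box(@p \to p)
```

The example illustrates why the choice matters for the traditional view: under real-world validity, @p → p is a logical truth whose necessitation is not, so some logical truths would fail to be necessary.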
Logic is formal in the sense that all arguments of the same form as logically valid arguments are also logically valid and hence truth-preserving. However, it is not known whether all arguments that are valid in the usual model-theoretic sense are truth-preserving. Tarski claimed that it could be proved that all arguments that are valid (in the sense of validity he contemplated in his 1936 paper on logical consequence) are truth-preserving. But he did not offer the proof. The question arises whether the usual model-theoretic sense of validity and Tarski's 1936 sense are the same. I argue in this paper that they probably are not, and that the proof Tarski had in mind, although unusable to prove that model-theoretically valid arguments are truth-preserving, can be used to prove that arguments valid in Tarski's 1936 sense are truth-preserving.
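Tarski's 1936 definition, as it is standardly reconstructed, and the truth-preservation question the abstract raises can be stated as follows (a schematic gloss, not the paper's own formulation):

```latex
% Tarski (1936), standard reconstruction of logical consequence:
K \models X \;\iff\; \text{every model of the class } K \text{ of sentences is also a model of } X
\\
% The open question, for the usual model-theoretic notion of validity:
K \models X \;\overset{?}{\Longrightarrow}\; \bigl(\text{all members of } K \text{ are true} \Rightarrow X \text{ is true}\bigr)
```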
Does general validity or real world validity better represent the intuitive notion of logical truth for sentential modal languages with an actuality connective? In (Philosophical Studies 130:436–459, 2006) I argued in favor of general validity, and I criticized the arguments of Zalta (Journal of Philosophy 85:57–74, 1988) for real world validity. But in Nelson and Zalta (Philosophical Studies 157:153–162, 2012) Michael Nelson and Edward Zalta criticize my arguments and claim to have established the superiority of real world validity. Section 1 of the present paper introduces the problem and sets out the basic issues. In Sect. 2 I consider three of Nelson and Zalta’s arguments and find all of them deficient. In Sect. 3 I note that Nelson and Zalta direct much of their criticism at a phrase (‘true at a world from the point of view of some distinct world as actual’) I used only inessentially in Hanson (Philosophical Studies 130:436–459, 2006), and that their account of the philosophical foundations of modal semantics leaves them ill equipped to account for the plausibility of modal logics weaker than S5. Along the way I make several general suggestions for ways in which philosophical discussions of logical matters (especially, but not limited to, discussions of truth and logical truth for languages containing modal and indexical terms) might be facilitated and made more productive.
We can distinguish two non-equivalent ways in which a natural language argument can be valid: it can be interpretationally or representationally valid. However, there is just one notion of classical first-order validity for formal languages: truth-preservation in all classical first-order models. To ease the tension, Baumgartner suggests that we should understand interpretational and representational validity as imposing different adequacy conditions on formalizations of natural language arguments. I argue against this proposal. To that end, I first show that Baumgartner’s definition of representational validity is extensionally inadequate. I present a number of natural language arguments that we pre-theoretically hold to be representationally valid, but are not representationally valid according to Baumgartner’s definition. I then point to two further untenable features of Baumgartner’s definitions: according to Baumgartner’s definition of a representationally correct formalization, we cannot arrive at formalizations in a recursive way, and Baumgartner’s definition of representational validity is non-monotonic. I conclude that interpretational and representational validity cannot be understood as merely imposing different adequacy conditions on formalizations. If we want to capture our interpretational and representational intuitions, we need two different formal definitions of validity.
In order to capture our intuitions about the logical consistency of sentences and the logical validity of arguments, a semantics for a natural language has to allow for the fact that different occurrences of a single bare demonstrative, such as “this”, may refer to different objects. But it is not obvious how to formulate a semantic theory in order to achieve this result. This paper first criticizes several proposals: that we should formulate our semantics as a semantics for tokens, not expressions; Kaplan’s idea that syntax associates a demonstration with each occurrence of a demonstrative; Braun’s idea that a context may specify shifts in context across the evaluation of the expressions in a sentence; and Predelli’s idea that we should countenance different classes of contexts. Finally, a solution is proposed that allows that a natural language persists across the addition of basic lexical items but defines logical properties in terms of language stages. A surprising result is that we do not need to think of demonstratives as taking different referents in different situations.
Deflationists about truth seek to undermine debates about the nature of truth by arguing that the truth predicate is merely a device that allows us to express a certain kind of generality. I argue that a parallel approach is available in the case of logical consequence. Just as deflationism about truth offers an alternative to accounts of truth's nature in terms of correspondence or justification, deflationism about consequence promises an alternative to model-theoretic or proof-theoretic accounts of consequence's nature. I then argue, against considerations put forward by Field and Beall, that Curry's paradox no more rules out deflationism about consequence than the liar paradox rules out deflationism about truth.
A theory of reference may be either an analysis of reference or merely an account of the correct use of the verb "refer". If we define the validity of arguments in the standard way, in terms of assignments of individuals and sets to the nonlogical vocabulary of the language, then we will be committed to seeking an analysis of reference. Those who prefer a metalinguistic account, therefore, will desire an alternative to standard semantics. One alternative is the Quinean conception of logical validity as essentially a matter of logical form. Another alternative is Leblanc's truth-value semantics. But these prove to be either inadequate for purposes of metatheory or philosophically unsatisfactory. This paper shows how validity (i.e., semantic consequence) may be defined in a way that avoids the problems facing these other alternatives to standard semantics and also permits a metalinguistic account of reference. The validity of arguments is treated as a matter of logical form, but validity for forms is defined on analogy with the definition of semantic consequence in truth-value semantics. (A more radical kind of semantics without reference is the context logical approach represented in several of my other publications.)
In this paper the informativeness account of assertion (Pagin in Assertion. Oxford University Press, Oxford, 2011) is extended to account for inference. I characterize the conclusion of an inference as asserted conditionally on the assertion of the premises. This gives a notion of conditional assertion (distinct from the standard notion related to the affirmation of conditionals). Validity and logical validity of an inference are characterized in terms of the application of a method that preserves informativeness, and contrasted with consequence and logical consequence, which are defined in terms of truth preservation. The proposed account is compared with that of Prawitz (Logica yearbook 2008, pp. 175–192. College Publications, London, 2009).
Though it is standardly assumed that supervaluationism applied to vagueness is committed to global validity, Achille Varzi (2007) argues that the supervaluationist should take seriously the idea of adopting local validity instead. Varzi’s motivation for the adoption of local validity is largely based on two objections against the global notion: that it brings some counterexamples to classically valid rules of inference and that it is inconsistent with unrestricted higher-order vagueness. In this discussion I review these objections and point out ways to address them not considered in Varzi’s paper.
The need to distinguish between logical and extra-logical varieties of inference, entailment, validity, and consistency has played a prominent role in meta-ethical debates between expressivists and descriptivists. But, to date, the importance that matters of logical form play in these distinctions has been overlooked. That’s a mistake given the foundational place that logical form plays in our understanding of the difference between the logical and the extra-logical. This essay argues that descriptivists are better positioned than their expressivist rivals to provide the needed account of logical form, and so better able to capture the needed distinctions. This finding is significant for several reasons: First, it provides a new argument against expressivism. Second, it reveals that descriptivists can make use of this new argument only if they are willing to take a controversial—but plausible—stand on claims about the nature and foundations of logic.
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland has recently pointed out (2012, Validity as a primitive. Analysis 72: 421–30), no theory which interprets Q closed under the standard structural rules can neither define nor express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
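The kind of triviality at issue can be sketched via the validity Curry paradox, in a standard reconstruction (not necessarily the note's own proof). Suppose Val expresses validity, π is a self-referential sentence equivalent to Val(⟨π⟩, ⟨⊥⟩), and the theory is closed under validity detachment (VD), validity proof (VP), and the usual structural rules:

```latex
\pi \vdash \mathrm{Val}(\langle\pi\rangle,\langle\bot\rangle) && \text{(definition of } \pi\text{)}
\\
\pi,\ \pi \vdash \bot && \text{(VD: } A,\ \mathrm{Val}(\langle A\rangle,\langle B\rangle) \vdash B\text{)}
\\
\pi \vdash \bot && \text{(structural contraction)}
\\
\vdash \mathrm{Val}(\langle\pi\rangle,\langle\bot\rangle) && \text{(VP: if } A \vdash B \text{ then } \vdash \mathrm{Val}(\langle A\rangle,\langle B\rangle)\text{)}
\\
\vdash \pi && \text{(definition of } \pi\text{)}
\\
\vdash \bot && \text{(cut with } \pi \vdash \bot\text{)}
```

The derivation shows why closure under the standard structural rules, rather than the arithmetic alone, carries the weight in the limitative result.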
What formulas are tense-logically valid depends on the structure of time, for example on whether it has a beginning. Logicians have investigated what formulas correspond to what physical hypotheses about time. Analogously, we can investigate what formulas of modal logic correspond to what metaphysical hypotheses about necessity. It is widely held that physical hypotheses about time may be contingent. If so, tense-logical validity may be contingent. In contrast, validity in modal logic is typically taken to be non-contingent, as reflected by the general acceptance of the so-called “rule of necessitation.” But as has been argued by various authors in recent years, metaphysical hypotheses may likewise be contingent. If, in particular, hypotheses about the extent of possibility are contingent, we should expect modal-logical validity to be contingent too. Let “contingentism” be the view that everything that is not ruled out by logic is possible. I shall investigate what the right system of modal logic is, if contingentism is true. Given plausible assumptions, the system contains the McKinsey principle, and is thus not even contained in S5. It also contains simple and elegant iteration principles for the contingency operator: something is contingent if and only if it is contingently contingent.
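The two principles the abstract names can be written out (the McKinsey axiom is standard; the contingency operator is given here under its usual definition, which the paper may or may not adopt):

```latex
% McKinsey principle (not a theorem of S5, so a logic containing it
% is not contained in S5):
\Box\Diamond\varphi \to \Diamond\Box\varphi
\\
% Contingency operator (usual definition) and the iteration principle
% "contingent iff contingently contingent":
\nabla\varphi \;:=\; \Diamond\varphi \wedge \Diamond\neg\varphi
\qquad
\nabla\varphi \leftrightarrow \nabla\nabla\varphi
```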
An instantia is a technique to refute others' arguments, found in many tracts from the latter half of the twelfth century. An instantia has (or appears to have) the same form as the argument to be refuted, and its falsity is more evident than that of the argument. Precursors of instantiae are among the teachings of masters active in the first half of the century. These masters produce counter-arguments against various inferential forms in order to examine their validity. But the aim of producing counter-arguments changes in the latter half of the century into refuting others' arguments to win in debate by any means available. Logicians of that period do not care whether the counter-arguments (instantiae) are sophistical or not, viz. whether the falsity of instantiae is or is not due to a flaw common to the argument to be refuted. Many instantiae they produce involve logical entanglements into which they themselves have little clear insight. Some instantiae and the attempts to explain them grow into the new theories in the “terminist texts” around 1200 A.D., when instantia literature itself disappears. Some instantiae and the issues they raise have no place in terminist texts, and sink into oblivion.
Personal reflections on the philosophical career of Henry Johnstone, B.S. Haverford College, 1942, and Ph.D. Harvard, 1950, professor at Williams College 1948–1952 and Pennsylvania State University, 1952–2000. Founder and editor of Philosophy and Rhetoric, Johnstone wrote eight books, including two logic texts, three monographs, and over 150 articles or reviews. The focus is on his efforts to resolve problems stemming from the conflict between the logical empiricism Johnstone embraced in his dissertation and the arguments of his absolute idealist colleagues at Williams, efforts he pursued in Philosophy and Argument (1959) and Validity and Rhetoric in Philosophical Argument (1978).
Kreisel’s set-theoretic problem is the problem as to whether any logical consequence of ZFC is ensured to be true. Kreisel and Boolos both proposed an answer, taking truth to mean truth in the background set-theoretic universe. This article advocates another answer, which lies at the level of models of set theory, so that truth remains the usual semantic notion. The article is divided into three parts. It first analyzes Kreisel’s set-theoretic problem and proposes one way in which any model of set theory can be compared to a background universe and shown to contain internal models. It then defines logical consequence with respect to a model of ZFC, solves the model-scaled version of Kreisel’s set-theoretic problem, and presents various further results bearing on internal models. Finally, internal models are presented as accessible worlds, leading to an internal modal logic in which internal reflection corresponds to modal reflexivity, and resplendency corresponds to modal axiom 4.
It is one thing for a given proposition to follow or to not follow from a given set of propositions and it is quite another thing for it to be shown either that the given proposition follows or that it does not follow.* Using a formal deduction to show that a conclusion follows and using a countermodel to show that a conclusion does not follow are both traditional practices recognized by Aristotle and used down through the history of logic. These practices presuppose, respectively, a criterion of validity and a criterion of invalidity, each of which has been extended and refined by modern logicians: deductions are studied in formal syntax (proof theory) and countermodels are studied in formal semantics (model theory). The purpose of this paper is to compare these two criteria to the corresponding criteria employed in Boole’s first logical work, The Mathematical Analysis of Logic (1847). In particular, this paper presents a detailed study of the relevant metalogical passages and an analysis of Boole’s symbolic derivations. It is well known, of course, that Boole’s logical analysis of compound terms (involving ‘not’, ‘and’, ‘or’, ‘except’, etc.) contributed to the enlargement of the class of propositions and arguments formally treatable in logic. The present study shows, in addition, that Boole made significant contributions to the study of deductive reasoning. He identified the role of logical axioms (as opposed to inference rules) in formal deductions, and he conceived of the idea of an axiomatic deductive system (which yields logical truths by itself and which yields consequences when applied to arbitrary premises). Nevertheless, surprisingly, Boole’s attempt to implement his idea of an axiomatic deductive system involved striking omissions: Boole does not use his own formal deductions to establish validity.
Boole does give symbolic derivations, several of which are vitiated by “Boole’s Solutions Fallacy”: the fallacy of supposing that a solution to an equation is necessarily a logical consequence of the equation. This fallacy seems to have led Boole to confuse equational calculi (i.e., methods for generating solutions) with deduction procedures (i.e., methods for generating consequences). The methodological confusion is closely related to the fact, shown in detail below, that Boole had adopted an unsound criterion of validity. It is also shown that Boole totally ignored the countermodel criterion of invalidity. Careful examination of the text does not reveal with certainty a test for invalidity which was adopted by Boole. However, we have isolated a test that he seems to use in this way and we show that this test is ineffectual in the sense that it does not serve to identify invalid arguments. We go beyond the simple goal stated above. Besides comparing Boole’s earliest criteria of validity and invalidity with those traditionally (and still generally) employed, this paper also investigates the framework and details of The Mathematical Analysis of Logic.
We investigate the philosophical significance of the existence of different semantic systems with respect to which a given deductive system is sound and complete. Our case study will be Corcoran's deductive system D for Aristotelian syllogistic and some of the different semantic systems for syllogistic that have been proposed in the literature. We shall prove that they are not equivalent, in spite of D being sound and complete with respect to each of them. Beyond the specific case of syllogistic, the goal is to offer a general discussion of the relations between informal notions—in this case, an informal notion of deductive validity—and logical apparatuses such as deductive systems and (model-theoretic or other) semantic systems that aim at offering technical, formal accounts of informal notions. Specifically, we will be interested in Kreisel's famous 'squeezing argument'; we shall ask ourselves what a plurality of semantic systems (understood as classes of mathematical structures) may entail for the cogency of specific applications of the squeezing argument. More generally, the analysis brings to the fore the need for criteria of adequacy for semantic systems based on mathematical structures. Without such criteria, the idea that the gap between informal and technical accounts of validity can be bridged is put under pressure.
Compare two conceptions of validity: under an example of a modal conception, an argument is valid just in case it is impossible for the premises to be true and the conclusion false; under an example of a topic-neutral conception, an argument is valid just in case there are no arguments of the same logical form with true premises and a false conclusion. This taxonomy of positions suggests a project in the philosophy of logic: the reductive analysis of the modal conception of logical consequence to the topic-neutral conception. Such a project would dispel the alleged obscurity of the notion of necessity employed in the modal conception in favour of the clarity of an account of logical consequence given in terms of tractable notions of logical form, universal generalization and truth simpliciter. In a series of publications, John Etchemendy has characterized the model-theoretic definition of logical consequence as truth preservation in all models as intended to provide just such an analysis. In this paper, I will argue that Aristotle intends to provide an account of a modal conception of logical consequence in topic-neutral terms and so is engaged in a project comparable to the one described above. That Aristotle would be engaged in this sort of project is controversial. Under the standard reading of the Prior Analytics, Aristotle does not and cannot provide an account of logical consequence. Rather, he must take the validity of the first figure syllogisms (such as the syllogism known by its medieval mnemonic ‘Barbara’: A belongs to all B; B belongs to all C; so A belongs to all C) as obvious and not needing justification; he then establishes the validity of the other syllogisms by showing that they stand in a suitable relation to the first figure syllogisms. I will argue that Aristotle does attempt to provide an account of logical consequence—namely, by appeal to certain mereological theorems.
For example, he defends the status of Barbara as a syllogism by appeal to the transitivity of mereological containment. There are, as I will discuss, reasons to doubt the success of this account. But the attempt is not implausible given certain theses Aristotle holds in semantics, mereology and the theory of relations.
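The mereological reading of Barbara can be put schematically (a modern gloss for orientation, not Aristotle's own notation: the first line gives Barbara in first-order garb, the second the containment reading on which it reduces to transitivity):

```latex
\forall x\,(Bx \to Ax),\ \ \forall x\,(Cx \to Bx) \;\models\; \forall x\,(Cx \to Ax)
\\
B \sqsubseteq A,\ \ C \sqsubseteq B \;\Longrightarrow\; C \sqsubseteq A
```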
Charles Pigden has argued for a logical Is/Ought gap on the grounds of the conservativeness of logic. I offer a counter-example which shows that Pigden’s argument is unsound and that there need be no logical gap between Is-premises and an Ought-conclusion. My counter-example is an argument which is logically valid, has only Is-premises and an Ought-conclusion, does not purport to violate the conservativeness of logic, and does not rely on controversial assumptions about Aristotelian biology or 'institutional facts'.
We present and discuss various formalizations of Modal Logics in Logical Frameworks based on Type Theories. We consider both Hilbert- and Natural Deduction-style proof systems for representing both truth (local) and validity (global) consequence relations for various Modal Logics. We introduce several techniques for encoding the structural peculiarities of necessitation rules in the typed λ-calculus metalanguage of the Logical Frameworks. These formalizations readily yield proof editors for Modal Logics when implemented in Proof Development Environments, such as Coq or LEGO.
In this paper I discuss a prevailing view by which logical terms determine forms of sentences and arguments and therefore the logical validity of arguments. This view is common to those who hold that there is a principled distinction between logical and nonlogical terms and those holding relativistic accounts. I adopt the Tarskian tradition by which logical validity is determined by form, but reject the centrality of logical terms. I propose an alternative framework for logic where logical terms no longer play a distinctive role. This account employs a new semantic notion.
Alethic pluralism holds that there are many truth properties. The view has been challenged to make sense of the notion of logical validity, understood as necessary truth preservation, when inferences involving different areas of discourse are concerned. I argue that the solution proposed by Edwards to solve the analogous problem of mixed compounds can straightforwardly be adapted to give alethic pluralists a viable account of validity as well.
The aim of the book is to show that the ’five ways’ of Thomas Aquinas, i.e., his five arguments to prove the existence of God, are logically correct arguments by the standards of modern predicate logic. In the first chapter this is done by commenting on the two preliminary articles preceding the five ways, in which Thomas Aquinas points out that, on the one hand, the existence of God is not self-evident to us and, on the other hand, that, as in some scientific explanations, the mere existence of a cause for an effect which is evidently known to us can be proved. In the second chapter every argument is translated into the symbolic form of predicate logic and its logical validity is shown. Additionally a detailed and critical discussion of the premises of each argument is given. (publisher)
Ontologically minimal truth law semantics are provided for various branches of formal logic (classical propositional logic, S5 modal propositional logic, intuitionistic propositional logic, classical elementary predicate logic, free logic, and elementary arithmetic). For all of them logical validity/truth is defined in an ontologically minimal way, that is, not via truth value assignments or interpretations. Semantical soundness and completeness are proved (in an ontologically minimal way) for a calculus of classical elementary predicate logic.
According to a prevalent view among philosophers, formal logic is the philosopher’s main tool to assess the validity of arguments, i.e. the philosopher’s ars iudicandi. By drawing on a famous dispute between Russell and Strawson over the validity of a certain kind of argument – of arguments whose premises feature definite descriptions – this paper casts doubt on the accuracy of the ars iudicandi conception. Rather than settling the question whether the contentious arguments are valid or not, Russell and Strawson, upon discussing the proper logical analysis of definite descriptions, merely contrast converse informal validity assessments rendered explicit by nonequivalent logical formalizations.
A formula is a contingent logical truth when it is true in every model M but, for some model M, false at some world of M. We argue that there are such truths, given the logic of actuality. Our argument turns on defending Tarski’s definition of truth and logical truth, extended so as to apply to modal languages with an actuality operator. We argue that this extension is the philosophically proper account of validity. We counter recent arguments to the contrary presented in Hanson’s ‘Actuality, Necessity, and Logical Truth’ (Philos Stud 130:437–459, 2006).
The essay addresses the well-known idea that there has to be a place for intuition, thought of as a kind of non-inferential rational insight, in the epistemology of basic logic if our knowledge of its principles is non-empirical and is to allow of any finite, non-circular reconstruction. It is argued that the error in this idea consists in its overlooking the possibility that there is, properly speaking, no knowledge of the validity of principles of basic logic. When certain important distinctions are observed, for instance that between recognising that Modus Ponens is sound and recognising that it is proof against the competent discovery of basic counterexamples, the case for thinking that there is indeed no space for genuine recognition of the validity of Modus Ponens becomes increasingly impressive. It is argued however that, the impossibility of knowledge notwithstanding, we are, in an important sense, entitled to take it that Modus Ponens is sound, and that this notion of entitlement can help break the trichotomy (intuition, inference, experience) which imprisons our ordinary thinking about logical knowledge.
This special issue collects together nine new essays on logical consequence: the relation obtaining between the premises and the conclusion of a logically valid argument. The present paper is a partial, and opinionated, introduction to the contemporary debate on the topic. We focus on two influential accounts of consequence, the model-theoretic and the proof-theoretic, and on the seeming platitude that valid arguments necessarily preserve truth. We briefly discuss the main objections these accounts face, as well as Hartry Field’s contention that such objections show consequence to be a primitive, indefinable notion, and that we must reject the claim that valid arguments necessarily preserve truth. We suggest that the accounts in question have the resources to meet the objections standardly thought to herald their demise and make two main claims: (i) that consequence, as opposed to logical consequence, is the epistemologically significant relation philosophers should be mainly interested in; and (ii) that consequence is a paradoxical notion if truth is.
In this paper, I seek to undermine G. A. Cohen's polemical use of a metaethical claim he makes in his article, 'Facts and Principles', by arguing that that use requires an unsustainable equivocation between epistemic and logical grounding. I begin by distinguishing three theses that Cohen has offered during the course of his critique of Rawls and contractualism more generally, the foundationalism about grounding thesis, the justice as non-regulative thesis, and the justice as all-encompassing thesis, and briefly argue that they are analytically independent of each other. I then offer an outline of the foundationalism about grounding thesis, characterising it, as Cohen does, as a demand of logic. That thesis claims that whenever a normative principle is dependent on a fact, it is so dependent in virtue of some other principle. I then argue that although this is true as a matter of logic, it cannot, as Cohen admits, be true of actual justifications, since logic cannot tell us anything about the truth, as opposed to the validity, of arguments. Facts about a justification cannot then be decisive for whether or not a given argument violates the foundationalism about grounding thesis. As long as, independently of actual justifications, theorists can point to plausible logically grounding principles, as I argue contractualists can, Cohen's thesis lacks critical bite.
Since the time of Aristotle's students, interpreters have considered Prior Analytics to be a treatise about deductive reasoning, more generally, about methods of determining the validity and invalidity of premise-conclusion arguments. People studied Prior Analytics in order to learn more about deductive reasoning and to improve their own reasoning skills. These interpreters understood Aristotle to be focusing on two epistemic processes: first, the process of establishing knowledge that a conclusion follows necessarily from a set of premises (that is, on the epistemic process of extracting information implicit in explicitly given information) and, second, the process of establishing knowledge that a conclusion does not follow. Despite the overwhelming tendency to interpret the syllogistic as formal epistemology, it was not until the early 1970s that it occurred to anyone to think that Aristotle may have developed a theory of deductive reasoning with a well worked-out system of deductions comparable in rigor and precision with systems such as propositional logic or equational logic familiar from mathematical logic. When modern logicians in the 1920s and 1930s first turned their attention to the problem of understanding Aristotle's contribution to logic in modern terms, they were guided both by the Frege-Russell conception of logic as formal ontology and at the same time by a desire to protect Aristotle from possible charges of psychologism. They thought they saw Aristotle applying the informal axiomatic method to formal ontology, not as making the first steps into formal epistemology. They did not notice Aristotle's description of deductive reasoning. Ironically, the formal axiomatic method (in which one explicitly presents not merely the substantive axioms but also the deductive processes used to derive theorems from the axioms) is incipient in Aristotle's presentation.
Partly in opposition to the axiomatic, ontically-oriented approach to Aristotle's logic and partly as a result of attempting to increase the degree of fit between interpretation and text, logicians in the 1970s working independently came to remarkably similar conclusions to the effect that Aristotle indeed had produced the first system of formal deductions. They concluded that Aristotle had analyzed the process of deduction and that his achievement included a semantically complete system of natural deductions including both direct and indirect deductions. Where the interpretations of the 1920s and 1930s attribute to Aristotle a system of propositions organized deductively, the interpretations of the 1970s attribute to Aristotle a system of deductions, or extended deductive discourses, organized epistemically. The logicians of the 1920s and 1930s take Aristotle to be deducing laws of logic from axiomatic origins; the logicians of the 1970s take Aristotle to be describing the process of deduction and in particular to be describing deductions themselves, both those deductions that are proofs based on axiomatic premises and those deductions that, though deductively cogent, do not establish the truth of the conclusion but only that the conclusion is implied by the premise-set. Thus, two very different and opposed interpretations had emerged, interestingly both products of modern logicians equipped with the theoretical apparatus of mathematical logic. The issue at stake between these two interpretations is the historical question of Aristotle's place in the history of logic and of his orientation in philosophy of logic. This paper affirms Aristotle's place as the founder of logic taken as formal epistemology, including the study of deductive reasoning. A by-product of this study of Aristotle's accomplishments in logic is a clarification of a distinction implicit in discourses among logicians: that between logic as formal ontology and logic as formal epistemology.
It is often assumed that the supervaluationist theory of vagueness is committed to a global notion of logical consequence, in contrast with the local notion characteristic of modal logics. There are at least two problems related to the global notion of consequence. First, it brings some counterexamples to classically valid patterns of inference. Second, it is subject to an objection related to higher-order vagueness. This paper explores a third notion of logical consequence, and discusses its adequacy for the supervaluationist theory. The paper proceeds in two steps. In the first step, the paper provides a deductive notion of consequence for global validity using the tableaux method. In the second step, the paper provides a notion of logical consequence which is an alternative to global validity, and discusses (i) whether it is acceptable to the supervaluationist and (ii) whether it plays a better role in a theory of vagueness in the face of the problems related to the global notion.
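For readers unfamiliar with the contrast, the two standard notions can be sketched as follows; the notation is illustrative and not drawn from the paper itself. Writing $s \Vdash \varphi$ for "$\varphi$ is true at admissible precisification $s$":

```latex
% Local validity: truth is preserved at each precisification taken separately.
\Gamma \models_{\mathrm{loc}} \varphi
  \;\iff\; \forall s \,\bigl( (\forall \gamma \in \Gamma)\, s \Vdash \gamma
      \;\Rightarrow\; s \Vdash \varphi \bigr)

% Global validity: supertruth (truth at all precisifications) is preserved.
\Gamma \models_{\mathrm{glob}} \varphi
  \;\iff\; \bigl( (\forall \gamma \in \Gamma)(\forall s)\, s \Vdash \gamma \bigr)
      \;\Rightarrow\; (\forall s)\, s \Vdash \varphi
```

On a language with a "determinately" operator, global validity licenses the inference from $\varphi$ to $\mathrm{Det}\,\varphi$, which is why rules such as conditional proof and contraposition can fail globally while holding locally, the sort of counterexamples to classically valid patterns that the abstract mentions.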
Rejecting structural contraction has been proposed as a strategy for escaping semantic paradoxes. The challenge for its advocates has been to make intuitive sense of how contraction might fail. I offer a way of doing so, based on a “naive” interpretation of the relation between structure and logical vocabulary in a sequent proof system. The naive interpretation of structure motivates the most common way of blaming Curry-style paradoxes on illicit contraction. By contrast, the naive interpretation will not as easily motivate one recent noncontractive approach to the Liar paradox.
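As an illustrative sketch of the standard diagnosis (textbook material, not drawn from the paper), here is the contraction rule and its role in Curry's paradox. Let $C$ be a sentence interderivable with $C \rightarrow \bot$:

```latex
% Structural (left) contraction: duplicate occurrences of a premise may be merged.
\frac{\Gamma,\, A,\, A \vdash B}{\Gamma,\, A \vdash B}\;(\mathrm{Contr})

% Curry derivation, given  C \dashv\vdash (C \to \bot):
%   C,\, C \vdash \bot        one occurrence of C supplies the conditional,
%                             the other its antecedent
%   C \vdash \bot             by Contr
%   \vdash C \to \bot         by conditional proof, hence  \vdash C
%   \vdash \bot               by modus ponens
% Rejecting Contr blocks the step from  C, C \vdash \bot  to  C \vdash \bot,
% which is the "most common way of blaming Curry-style paradoxes on illicit
% contraction" that the abstract describes.
```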