In this paper, we distinguish two versions of Curry's Paradox: c-Curry, the standard conditional-Curry paradox, and v-Curry, a validity-involving version of Curry's Paradox that isn't automatically solved by solving c-Curry. A unified treatment of Curry's Paradox thus calls for a unified treatment of both c-Curry and v-Curry. If, as is often thought, the c-Curry paradox is to be solved via non-classical logic, then v-Curry may require a lesson about the structure—indeed, the substructure—of the validity relation itself.
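The contrast can be made vivid with a standard sketch of the v-Curry derivation. The labels VP (validity proof) and VD (validity detachment) follow common usage in the literature; the formulation below is illustrative, not a quotation from any particular system:

```latex
% v-Curry: let V be a sentence equivalent to Val(<V>, <A>), where Val is a
% validity predicate governed by:
%   (VP)  if  X |- Y,  then  |- Val(<X>, <Y>)
%   (VD)  X, Val(<X>, <Y>) |- Y
\begin{align*}
&V, V \vdash A
  && \text{by VD, since } V \dashv\vdash \mathrm{Val}(\langle V\rangle,\langle A\rangle)\\
&V \vdash A
  && \text{structural contraction}\\
&\vdash \mathrm{Val}(\langle V\rangle,\langle A\rangle)
  && \text{by VP}\\
&\vdash V
  && \text{since } V \dashv\vdash \mathrm{Val}(\langle V\rangle,\langle A\rangle)\\
&\vdash A
  && \text{cut on } V
\end{align*}
```

Note that the derivation uses only structural rules together with VP and VD, and no conditional at all; this is why a non-classical treatment of the conditional that solves c-Curry leaves v-Curry untouched.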
Since Saul Kripke’s influential work in the 1970s, the revisionary approach to semantic paradox—the idea that semantic paradoxes must be solved by weakening classical logic—has been increasingly popular. In this paper, we present a new revenge argument to the effect that the main revisionary approaches breed new paradoxes that they are unable to block.
This article offers an overview of inferential role semantics. We aim to provide a map of the terrain as well as to challenge some of the inferentialist's standard commitments. We begin by introducing inferentialism and placing it in the wider context of contemporary philosophy of language. §2 focuses on what is standardly considered both the most important test case for and the most natural application of inferential role semantics: the case of the logical constants. We discuss some of the (alleged) benefits of logical inferentialism, chiefly with regard to the epistemology of logic, and consider a number of objections. §3 introduces and critically examines the most influential and most fully developed form of global inferentialism: Robert Brandom's inferentialism about linguistic and conceptual content in general. Finally, in §4 we consider a number of general objections to IRS and consider possible responses on the inferentialist's behalf.
Contextualist approaches to the Liar Paradox postulate the occurrence of a context shift in the course of the Liar reasoning. In particular, according to the contextualist proposal advanced by Charles Parsons and Michael Glanzberg, the Liar sentence L doesn’t express a true proposition in the initial context of reasoning c, but expresses a true one in a new, richer context c', where more propositions are available for expression. On the further assumption that Liar sentences involve propositional quantifiers whose domains may vary with context, the Liar reasoning is blocked. But why should context shift? We argue that the paradox involves principles of contextualist reflection that explain, by analogy with well-known reflection principles for arithmetic, why context must shift from c to c' in the course of the Liar reasoning. This provides a diagnosis of the Liar Paradox—one that equally applies to two revenge arguments against contextualist approaches, one recently advanced by Andrew Bacon, the other mentioned by Charles Parsons and more recently revived by Cory Juhl.
The revisionary approach to semantic paradox is commonly thought to have a somewhat uncomfortable corollary, viz. that, on pain of triviality, we cannot affirm that all valid arguments preserve truth (Beall 2007, 2009; Field 2008, 2009). We show that the standard arguments for this conclusion all break down once (i) the structural rule of contraction is restricted and (ii) the way the premises are aggregated---so that they can be said to jointly entail a given conclusion---is appropriately understood. In addition, we briefly rehearse some reasons for restricting structural contraction.
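For readers unfamiliar with the structural rule at issue, contraction licenses collapsing repeated occurrences of a premise. The sequent formulation below is the standard one; nothing in it is specific to any particular revisionary system:

```latex
% Structural contraction: two occurrences of a premise collapse into one.
\[
\frac{\Gamma, A, A \vdash B}{\Gamma, A \vdash B}
\ (\text{Contraction})
\]
% Once contraction is restricted, premise aggregation becomes sensitive to
% multiplicity: \Gamma, A, A \vdash B no longer guarantees
% \Gamma, A \vdash B, so "the premises jointly entail B" must be read
% multiset-sensitively.
```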
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland has recently pointed out (2012, Validity as a primitive. Analysis 72: 421–30), no theory which interprets Q and is closed under the standard structural rules can define or even express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
We cast doubts on the suggestion, recently made by Graham Priest, that glut theorists may express disagreement with the assertion of A by denying A. We show that, if denial is to serve as a means to express disagreement, it must be exclusive, in the sense of being correct only if what is denied is false only. Hence, it can’t be expressed in the glut theorist’s language, essentially for the same reasons why Boolean negation can’t be expressed in such a language either. We then turn to an alternative proposal, recently defended by Beall (in Analysis 73(3):438–445, 2013; Rev Symb Log, 2014), for expressing truth and falsity only, and hence disagreement. According to this, the exclusive semantic status of A, that A is either true or false only, can be conveyed by adding to one’s theory a shrieking rule of the form A ∧ ¬A ⊢ ⊥, where ⊥ entails triviality. We argue, however, that the proposal doesn’t work either. The upshot is that glut theorists can express either denial or disagreement, but not both. Along the way, we offer a bilateral logic of exclusive denial for glut theorists—an extension of the logic commonly called LP.
We present a revenge argument for non-reflexive theories of semantic notions – theories which restrict the rule of assumption, or initial sequents of the form φ ⊩ φ. Our strategy follows the general template articulated in Murzi and Rossi: we proceed via the definition of a notion of paradoxicality for non-reflexive theories, which in turn breeds paradoxes that standard non-reflexive theories are unable to block.
It is sometimes held that rules of inference determine the meaning of the logical constants: the meaning of, say, conjunction is fully determined by either its introduction or its elimination rules, or both; similarly for the other connectives. In a recent paper, Panu Raatikainen (2008) argues that this view - call it logical inferentialism - is undermined by some "very little known" considerations by Carnap (1943) to the effect that "in a definite sense, it is not true that the standard rules of inference" themselves suffice to "determine the meanings of [the] logical constants" (p. 2). In a nutshell, Carnap showed that the rules allow for non-normal interpretations of negation and disjunction. Raatikainen concludes that "no ordinary formalization of logic ... is sufficient to `fully formalize' all the essential properties of the logical constants" (ibid.). We suggest that this is a mistake. Pace Raatikainen, intuitionists like Dummett and Prawitz need not worry about Carnap's problem. And although bilateral solutions for classical inferentialists - as proposed by Timothy Smiley and Ian Rumfitt - seem inadequate, it is not excluded that classical inferentialists may be in a position to address the problem too.
Deflationists argue that ‘true’ is merely a logico-linguistic device for expressing blind ascriptions and infinite generalisations. For this reason, some authors have argued that deflationary truth must be conservative, i.e. that a deflationary theory of truth for a theory S must not entail sentences in S’s language that are not already entailed by S. However, it has been forcefully argued that any adequate theory of truth for S must be non-conservative and that, for this reason, truth cannot be deflationary (1998, pp. 493–521; Ketland, Mind 108:69–94, 1999). We consider two defences of conservative deflationism, respectively proposed by Waxman (2017, pp. 429–463) and Tennant (2002, pp. 551–582), and argue that they are both unsuccessful. In Waxman’s hands, deflationists are committed either to a non-purely expressive notion of truth, or to a conception of mathematics that does not allow them to justifiably exclude non-conservative theories of truth. Tennant’s conservative deflationism fares no better: if deflationist truth must be conservative over arithmetic, it can be shown to collapse into a non-conservative variety of deflationism.
According to logical inferentialists, the meanings of logical expressions are fully determined by the rules for their correct use. Two key proof-theoretic requirements on admissible logical rules, harmony and separability, directly stem from this thesis—requirements, however, that standard single-conclusion and assertion-based formalizations of classical logic provably fail to satisfy (2011, pp. 1035–1051). On the plausible assumption that our logical practice is both single-conclusion and assertion-based, it seemingly follows that classical logic, unlike intuitionistic logic, can’t be accounted for in inferentialist terms. In this paper, I challenge orthodoxy and introduce an assertion-based and single-conclusion formalization of classical propositional logic that is both harmonious and separable. In the framework I propose, classicality emerges as a structural feature of the logic.
Beall and Murzi (2013, pp. 143–165) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules. As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (2017, pp. 1–19) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi, is a generalisation of Kripke’s notion of grounded truth, and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic, we take the notion of naïve validity to be a legitimate semantic notion that points to genuine expressive limitations of fully structural revisionary approaches.
This chapter introduces inferential role semantics (IRS) and some of the challenges it faces. It also introduces inferentialism and places it into the wider context of contemporary philosophy of language. The chapter focuses on what is standardly considered both the most important test case for and the most natural application of IRS: logical inferentialism, the view that the meanings of the logical expressions are fully determined by the basic rules for their correct use, and that to understand a logical expression is to use it in accordance with the appropriate rules. It discusses some of the benefits of logical inferentialism, chiefly with regard to the epistemology of logic, and considers a number of objections. The chapter critically examines Robert Brandom's inferentialism about linguistic and conceptual content in general. Finally, it considers a number of general objections to IRS and possible responses on the inferentialist's behalf.
On a widespread naturalist view, the meanings of mathematical terms are determined, and can only be determined, by the way we use mathematical language—in particular, by the basic mathematical principles we’re disposed to accept. But it’s mysterious how this can be so, since, as is well known, minimally strong first-order theories are non-categorical and so are compatible with countless non-isomorphic interpretations. As for second-order theories: though they typically enjoy categoricity results—for instance, Dedekind’s categoricity theorem for second-order PA and Zermelo’s quasi-categoricity theorem for second-order ZFC—these results require full second-order logic. So appealing to these results seems only to push the problem back, since the principles of second-order logic are themselves non-categorical: those principles are compatible with restricted interpretations of the second-order quantifiers on which Dedekind’s and Zermelo’s results are no longer available. In this paper, we provide a naturalist-friendly, non-revisionary solution to an analogous but seemingly more basic problem—Carnap’s Categoricity Problem for propositional and first-order logic—and show that our solution generalizes, giving us full second-order logic and thereby securing the categoricity or quasi-categoricity of second-order mathematical theories. Briefly, the first-order quantifiers have their intended interpretation, we claim, because we’re disposed to follow the quantifier rules in an open-ended way. As we show, given this open-endedness, the interpretation of the quantifiers must be permutation-invariant and so, by a theorem recently proved by Bonnay and Westerståhl, must be the standard interpretation.
Analogously for the second-order case: we prove, by generalizing Bonnay and Westerståhl’s theorem, that the permutation invariance of the interpretation of the second-order quantifiers, guaranteed once again by the open-endedness of our inferential dispositions, suffices to yield full second-order logic.
It is often argued that fully structural theories of truth and related notions are incapable of expressing a nonstratified notion of defectiveness. We argue that recently much-discussed non-contractive theories suffer from the same expressive limitation, provided they identify the defective sentences with the sentences that yield triviality if they are assumed to satisfy structural contraction.
Logical orthodoxy has it that classical first-order logic, or some extension thereof, provides the right extension of the logical consequence relation. However, together with naïve but intuitive principles about semantic notions such as truth, denotation, satisfaction, and possibly validity and other naïve logical properties, classical logic quickly leads to inconsistency, and indeed triviality. At least since the publication of Kripke’s ‘Outline of a Theory of Truth’, an increasingly popular diagnosis has been to restore consistency, or at least non-triviality, by restricting some classical rules. Our modest aim in this note is to briefly introduce the main strands of the current debate on paradox and logical revision, and point to some of the potential challenges revisionary approaches might face, with reference to the nine contributions to the present volume. For a recent introduction to non-classical theories of truth and other semantic notions, see the excellent Beall a.
Paul Horwich (1990) once suggested restricting the T-Schema to the maximally consistent set of its instances. But Vann McGee (1992) proved that there are multiple incompatible such sets, none of which, given minimal assumptions, is recursively axiomatizable. The analogous view for set theory---that Naïve Comprehension should be restricted according to consistency maxims---has recently been defended by Laurence Goldstein (2006; 2013). It can be traced back to W. V. O. Quine (1951), who held that Naïve Comprehension embodies the only really intuitive conception of set and should be restricted as little as possible. The view might even have been held by Ernst Zermelo (1908), who, according to Penelope Maddy (1988), subscribed to a ‘one step back from disaster’ rule of thumb: if a natural principle leads to contradiction, the principle should be weakened just enough to block the contradiction. We prove a generalization of McGee’s Theorem, and use it to show that the situation for set theory is the same as that for truth: there are multiple incompatible sets of instances of Naïve Comprehension, none of which, given minimal assumptions, is recursively axiomatizable. This shows that the view adumbrated by Goldstein, Quine and perhaps Zermelo is untenable.
In this paper, I focus on some intuitionistic solutions to the Paradox of Knowability. I first consider the relatively little-discussed idea that, on an intuitionistic interpretation of the conditional, there is no paradox to start with. I show that this proposal only works if proofs are thought of as tokens, and suggest that anti-realists themselves have good reasons for thinking of proofs as types. I then turn to more standard intuitionistic treatments, as proposed by Timothy Williamson and, most recently, Michael Dummett. Intuitionists can either point out the intuitionistic invalidity of the inference from the claim that all truths are knowable to the insane conclusion that all truths are known, or they can outright demur from asserting the existence of forever-unknown truths, perhaps questioning—as Dummett now suggests—the applicability of the Principle of Bivalence to a certain class of empirical statements. I argue that if intuitionists reject strict finitism—the view that all truths are knowable by beings just like us—the prospects for either proposal look bleak.
This special issue collects together nine new essays on logical consequence: the relation obtaining between the premises and the conclusion of a logically valid argument. The present paper is a partial, and opinionated, introduction to the contemporary debate on the topic. We focus on two influential accounts of consequence, the model-theoretic and the proof-theoretic, and on the seeming platitude that valid arguments necessarily preserve truth. We briefly discuss the main objections these accounts face, as well as Hartry Field’s contention that such objections show consequence to be a primitive, indefinable notion, and that we must reject the claim that valid arguments necessarily preserve truth. We suggest that the accounts in question have the resources to meet the objections standardly thought to herald their demise and make two main claims: (i) that consequence, as opposed to logical consequence, is the epistemologically significant relation philosophers should be mainly interested in; and (ii) that consequence is a paradoxical notion if truth is.
In a series of recent papers, Corine Besson argues that dispositionalist accounts of logical knowledge conflict with ordinary reasoning. She cites cases in which, rather than applying a logical principle to deduce certain implications of our antecedent beliefs, we revise some of those beliefs in the light of their unpalatable consequences. She argues that such instances of, in Gilbert Harman’s phrase, ‘reasoned change in view’ cannot be accommodated by the dispositionalist approach, and that we would do well to conceive of logical knowledge as a species of propositional knowledge instead. In this paper, we propose a dispositional account that is more general than the one Besson considers, viz. one that does not merely apply to beliefs, and claim that dispositionalists have the resources to account for reasoned change in view. We then raise what we take to be more serious challenges for the dispositionalist view, and sketch some lines of response dispositionalists might offer.
John MacFarlane has recently presented a novel argument in support of truth-relativism. According to this, contextualists fail to accommodate retrospective reassessments of propositional contents when it comes to languages rich enough to express actuality. The aim of this note is twofold: first, to argue that the argument can be effectively rejected, since it rests on an inadequate conception of actuality; second, to offer a more plausible account of actuality in branching time, along the lines of Lewis (1970/1983).
A well-known proof by Alonzo Church, first published in 1963 by Frederic Fitch, purports to show that all truths are knowable only if all truths are known. This is the Paradox of Knowability. If we take it, quite plausibly, that we are not omniscient, the proof appears to undermine metaphysical doctrines committed to the knowability of truth, such as semantic anti-realism. Since its rediscovery by Hart and McGinn (1976), many solutions to the paradox have been offered. In this article, we present a new proof to the effect that not all truths are knowable, which rests on different assumptions from those of the original argument published by Fitch. We highlight the general form of the knowability paradoxes, and argue that anti-realists who favour either a hierarchical or an intuitionistic approach to the Paradox of Knowability are confronted with a dilemma: they must either give up anti-realism or opt for a highly controversial interpretation of the principle that every truth is knowable.
I argue that the standard anti-realist argument from manifestability to intuitionistic logic is either unsound or invalid. Strong interpretations of the manifestability of understanding are falsified by the existence of blindspots for knowledge. Weaker interpretations are either too weak, or gerrymandered and ad hoc. Either way, they present no threat to classical logic.
Richard Heck has recently drawn attention to a new version of the Liar Paradox, one which relies on logical resources so weak as to suggest that it may not admit of any “truly satisfying, consistent solution”. I argue that this conclusion is too strong. Heck's Liar reduces to absurdity principles that are already rejected by consistent paracomplete theories of truth, such as Kripke's and Field's. Moreover, the new Liar gives us no reason to think that (versions of) these principles cannot be consistently retained once the structural rule of contraction is restricted. I suggest that revisionary logicians have independent reasons for restricting such a rule.
Anti-realists typically contend that truth is epistemically constrained. Truth, they say, cannot outstrip our capacity to know. Some anti-realists are also willing to make a further claim: if truth is epistemically constrained, classical logic is to be given up in favour of intuitionistic logic. Here we shall be concerned with one argument in support of this thesis - Crispin Wright's Basic Revisionary Argument, first presented in his Truth and Objectivity. We argue that the reasoning involved in the argument, if correct, validates a parallel argument that leads to conclusions that are unacceptable to classicists and intuitionists alike.
The Surprise Exam Paradox is well-known: a teacher announces that there will be a surprise exam the following week; the students argue by an intuitively sound reasoning that this is impossible; and yet they can be surprised by the teacher. We suggest that a solution can be found scattered in the literature, in part anticipated by Wright and Sudbury, informally developed by Sorensen, and more recently discussed, and dismissed, by Williamson. In a nutshell, the solution consists in realising that the teacher's announcement is a blindspot that can only be known if the week is at least two days long. Along the way, we criticise Williamson's own treatment of the paradox. In Williamson's view, the Surprise is similar to the Paradox of the Glimpse and, because of their similarities, both paradoxes ought to receive a uniform treatment, one that involves locating an illicit application of the KK Principle. We argue that there is no deep analogy between the Surprise and the Glimpse and that, even if there were, the Surprise reasoning reaches a paradoxical conclusion before the KK Principle is used. Rather, in both the Surprise and the Glimpse, the blame should be put on other epistemic principles: respectively, a knowledge-retention principle and a margin-for-error principle.
I discuss Prawitz’s claim that a non-reliabilist answer to the question “What is a proof?” compels us to reject the standard Bolzano-Tarski account of validity, and to account for the meaning of a sentence in broadly verificationist terms. I sketch what I take to be a possible way of resisting Prawitz’s claim---one that concedes the anti-reliabilist assumption from which Prawitz’s argument proceeds.