Timothy Williamson famously offered an argument from these Tarskian principles in favor of bivalence. I show, dwelling on (Andjelkovic & Williamson, 2000), that the argument depends on a contentious formulation of the Tarskian principles about truth (and falsity), which the supervaluationist can reject without jeopardizing the Tarskian insight. In the mentioned paper, Andjelkovic and Williamson argue that, even if the appropriate formulation seems to make room for failure of bivalence in borderline cases, this appearance is illusory, once one grants an independent further principle involving biconditionals. I finally argue that this formulation is, however, contentious in a similar manner.
In this paper, I focus on some intuitionistic solutions to the Paradox of Knowability. I first consider the relatively little discussed idea that, on an intuitionistic interpretation of the conditional, there is no paradox to start with. I show that this proposal only works if proofs are thought of as tokens, and suggest that anti-realists themselves have good reasons for thinking of proofs as types. I then turn to more standard intuitionistic treatments, as proposed by Timothy Williamson and, most recently, Michael Dummett. Intuitionists can either point out the intuitionistic invalidity of the inference from the claim that all truths are knowable to the insane conclusion that all truths are known, or they can outright demur from asserting the existence of forever-unknown truths, perhaps questioning—as Dummett now suggests—the applicability of the Principle of Bivalence to a certain class of empirical statements. I argue that if intuitionists reject strict finitism—the view that all truths are knowable by beings just like us—the prospects for either proposal look bleak.
In this paper we aim to disentangle the thesis that the future is open from theses that often get associated or even conflated with it. In particular, we argue that the open future thesis is compatible with both the unrestricted principle of bivalence and determinism with respect to the laws of nature. We also argue that whether or not the future (and indeed the past) is open has no consequences as to the existence of (past and) future ontology.
On standard versions of supervaluationism, truth is equated with supertruth, and does not satisfy bivalence: some truth-bearers are neither true nor false. In this paper I want to confront a well-known worry about this, recently put by Wright as follows: ‘The downside . . . rightly emphasized by Williamson . . . is the implicit surrender of the T-scheme’. I will argue that such a cost is not high: independently motivated philosophical distinctions support the surrender of the T-scheme, and suggest acceptable approximations.
I consider two related objections to the claim that the law of excluded middle does not imply bivalence. One objection claims that the truth predicate captured by supervaluation semantics is not properly motivated. The second objection says that even if it is, LEM still implies bivalence. I show that LEM does not imply bivalence in a supervaluational language. I also argue that considering supertruth as truth can be reasonably motivated.
Determining whether the law of excluded middle requires bivalence depends upon whether we are talking about sentences or propositions. If we are talking about sentences, neither side has a decisive case. If we are talking of propositions, there is a strong argument on the side of those who say the excluded middle does require bivalence. I argue that all challenges to this argument can be met.
This paper is an attack on the Dummett-Prawitz view that the principle of bivalence has a crucial double significance, metaphysical and meaning theoretical. On the one hand it is said that holding bivalence valid is what characterizes a realistic view, i.e. a view in metaphysics, and on the other hand it is said that there are meaning theoretical arguments against its acceptability. I argue that these two aspects are incompatible. If the failure of validity of bivalence depends on properties of linguistic meaning, then there are no metaphysical consequences to be drawn. The case for this view is straightforward as long as we are discussing a language different from our own. But it seems that the distinction between failure because of meaning and failure because of reality cannot be applied to our own language, simply because our own language is just what we use to represent reality. I argue that this impression is illusory. In order to draw a conclusion about reality, meaning must be connected with truth in a non-trivial way, and precisely this cannot be done in the language for which the meaning theory itself is correct.
The success of the pragmatic account of truth is often thought to founder on the principle of bivalence: the principle which holds that every genuine statement in the indicative mood is either true or false. For pragmatists must, it seems, claim that the principle does not hold for theoretical statements and observation statements about the past. That is, it seems that pragmatists must deny objective truth-values to these perfectly respectable sorts of hypotheses. In this paper, after examining three pragmatist attitudes towards bivalence, I shall suggest that the pragmatist's proper stance is to treat bivalence as a regulative assumption of inquiry.
Timothy Williamson, in various places, has put forward an argument that is supposed to show that denying bivalence is absurd. This paper is an examination of the logical force of this argument, which is found wanting.
It is highly intuitive that the future is open and the past is closed now—whereas it is unsettled now whether there will be a fourth world war, it is settled that there was a first. Recently, it has become increasingly popular to claim that the intuitive openness of the future implies that contingent statements about the future, such as ‘There will be a sea battle tomorrow,’ are nonbivalent (neither true nor false). In this paper, we argue that the non-bivalence of future contingents is at odds with our pre-theoretic intuitions about the openness of the future. These intuitions are revealed by our pragmatic judgments concerning the correctness and incorrectness of assertions of future contingents. We argue that the pragmatic data together with a plausible account of assertion shows that in many cases we take future contingents to be true (or to be false), though we take the future to be open in relevant respects. It follows that appeals to intuition to support the non-bivalence of future contingents are untenable. Intuition favours bivalence.
Writers such as Stalnaker and Dummett have argued that specific features of subjunctive conditional statements undermine the principle of bivalence. This paper is concerned with rebutting such claims. 1. It is shown how subjunctive conditionals pose a prima facie threat to bivalence, and how this threat can be dissolved by a distinction between the results of negating a subjunctive conditional and of negating its consequent. To make this distinction is to side with Lewis against Stalnaker in a dispute about possible worlds semantics for such conditionals, and reasons are given for doing so. 2. These arguments are extended to answer Dummett's claim that behaviourist and phenomenalist analyses in terms of subjunctive conditionals violate bivalence. This answer is shown to be compatible with the principle that hypothetical statements are true only in virtue of categorical facts.
Let us begin with a word about what our topic is not. There is a familiar kind of argument for an epistemic view of vagueness in which one claims that denying bivalence introduces logical puzzles and complications that are not easily overcome. One then points out that, by ‘going epistemic’, one can preserve bivalence—and thus evade the complications. James Cargile presented an early version of this kind of argument [Cargile 1969], and Tim Williamson seemingly makes a similar point in his paper, ‘Vagueness and Ignorance’, when he says that ‘classical logic and semantics are vastly superior to . . . alternatives in simplicity, power, past success, and integration with theories in other domains’, and contends that this provides some grounds for not treating vagueness in this way [Williamson 1996: 279]. Obviously an argument of this kind invites a rejoinder about the puzzles and complications that the epistemic view introduces. Here are two quick examples. First, postulating, as the epistemicist does, linguistic facts no speaker of the language could possibly know, and which have no causal link to actual or possible speech behaviour, is accompanied by a litany of disadvantages—as the reader can imagine. Second, since Williamson’s preferred explanation of our failure to know the exact boundary of vague predicates is precisely that speakers don’t know the relevant linguistic facts—and this because they might be changing constantly, in subtle ways—we risk preserving classical logic at the cost of giving up its usefulness. If this epistemic view is correct, then for all we know, every supposedly valid argument might involve a fallacy of equivocation between what the words in the premises mean and, milliseconds later, what the (homophonous but not known-to-be synonymous) words in the conclusion mean. It looks, then, like there will be complexities whichever way one goes.
In which case, whether this familiar kind of argument ends up being compelling depends very much on detailed issues concerning which complications are introduced by which logics for vagueness.
This paper outlines an approach to the principle of bivalence based on very general, but still elementary, semantic considerations. The principle of bivalence states that (a) “every sentence is either true or false”. Clearly, some logics are bivalent while others are not. A more general formulation of (a) uses the concept of designated and non-designated logical values and is captured by (b) “every sentence is either designated or non-designated”. Yet this formulation seems trivial, because the concept of non-designated value is negative. In order to refine the analysis, the class of anti-designated values has been distinguished. The non-trivial version of the principle of bivalence is expressed by (c) “every sentence is either designated or anti-designated”. The last part of the paper mentions some extralogical reasons for considering the principle of bivalence with truth being a designated value as intimately connected to human thinking and behavior.
Evaluative processes have their roots in early evolutionary history, as survival is dependent on an organism’s ability to identify and respond appropriately to positive, rewarding or otherwise salubrious stimuli as well as to negative, noxious, or injurious stimuli. Consequently, evaluative processes are ubiquitous in the animal kingdom and are represented at multiple levels of the nervous system, including the lowest levels of the neuraxis. While evolution has sculpted higher level evaluative systems into complex and sophisticated information-processing networks, they do not come to replace, but rather to interact with more primitive lower level representations. Indeed, there are basic features of the underlying neuroarchitectural plan for evaluative processes that are common across levels of organization—including that of evaluative bivalence.
The aim of this paper is rather modest: we do not intend to reconstruct Aristotle’s theory of truth (although we are convinced that there is such a thing), and we will not try to settle the issue concerning Bivalence in Aristotle. We merely want, on the one hand, to argue for the consistency between the main Aristotelian texts on truth and a possible rejection of Bivalence; and on the other hand, to investigate the conditions of a possible counterexample to Bivalence. The motivation for this research is also very specific. We are interested in the apparent violation of Bivalence introduced by vague predicates, and in particular we want to respond to a family of arguments put forward by T. Williamson in support of the idea that allowing for exceptions to Bivalence would be incoherent. We have focused on these arguments for two reasons. On the one hand, what is allegedly threatened by a denial of Bivalence is no less than the very “nature of truth or falsity”. On the other hand, Aristotle is explicitly mentioned as one of the defenders of this “natural” conception of truth, and we are reminded about the connection between Aristotle’s theory and Tarski’s semantic conception. These arguments, therefore, give us an occasion to explore Aristotle’s analysis of the nature of truth and falsity, and to examine its connection with the Tarskian conception of truth. In particular, we would like to question the assumption, which has become a commonplace in the field of analytical philosophy, that Aristotle’s notion of truth can be encoded in the pair of disquotational biconditionals that derive from Tarski’s “T schema”.
My purpose in this paper is to argue that the classical notion of entailment is not suitable for non-bivalent logics, to propose an appropriate alternative and to suggest a generalized entailment notion suitable to bivalent and non-bivalent logics alike. In classical two-valued logic, one cannot infer a false statement from one that is not false, any more than one can infer from a true statement a statement that is not true. In classical logic, in fact, preserving truth and preserving non-falsity are one and the same thing. They are not the same in non-bivalent logics, however, and I will argue that the classical notion of entailment that preserves only truth is not strong enough for such a logic. I will show that if we retain the classical notion of entailment in a logic that has three values, true, false and a third value in between, an inconsistency can be derived that can be resolved only by measures that seriously disable the logic. I will show this for a logic designed to allow for semantic presuppositions, then I will show that we get the same result in any three-valued logic with the same value ordering. I will finally suggest how the notion of entailment should be generalized so that this problem may be avoided. The strengthened notion of entailment I am proposing is a conservative extension of the classical notion that preserves not only truth but the order of all values in a logic, so that the value of an entailed statement must always be at least as great as the value of the sequence of statements entailing it. A notion of entailment this strong or stronger will, I believe, be found to be applicable to non-classical logics generally. In the opinion of Dana Scott, no really workable three-valued logic has yet been developed. It is hard to disagree with this. A workable three-valued logic could perhaps be developed, however, if we had a notion of entailment suitable to non-bivalent logics.
Putative resolutions of the sorites paradox in which the major premise is declared false or illegitimate, including Max Black's treatment in terms of the alleged illegitimacy of vague attributions to borderline cases, are rejected on semantical grounds. The resort to a non-bivalent logic of representational "accuracy" with a continuum of accuracy values is shown to resolve the paradox, and the identification of accuracy values as truth values is defended as compatible with the central insight of the correspondence theory of truth and with the practical legitimacy of most applications of ordinary, bivalent logic to statements involving vague predicates.
In chapter 9 of De Interpretatione, Aristotle offers a defense of free will against the threat of fatalism. According to the traditional interpretation, Aristotle concedes the validity of the fatalist's arguments and then proceeds to reject the Principle of Bivalence in order to avoid the fatalist's conclusion. Assuming that the traditional interpretation is right on this point, it remains to be seen why Aristotle felt compelled to reject such an intuitive semantic principle rather than challenge the fatalist's inference from truth to necessity. The answer, I contend, lies in Aristotle's theory of truth and truthmakers.
It is generally agreed that vague predicates like ‘red’, ‘rich’, ‘tall’, and ‘bald’, have borderline cases of application. For instance, a cloth patch whose color lies midway between a definite red and a definite orange is a borderline case for ‘red’, and an American man five feet eleven inches in height is (arguably) a borderline case for ‘tall’. The proper analysis of borderline cases is a matter of dispute, but most theorists of vagueness agree at least in the thought that borderline cases for vague predicate ‘ ’ are items whose satisfaction of ‘ ’ is in some sense unclear or problematic: it is unclear whether or not the patch is red, unclear whether or not the man is tall. For example, Lynda Burns cites a widespread view as holding that borderline cases “are not definitely within the positive or negative extension of the predicate. … Borderline cases are seen as falling within a gap between the cases of definite application of the predicate and cases of definite application of its negation” (1995, 30). Michael Tye writes that the “concept of a borderline case is the concept of a case that is neither definitely in nor definitely out” (1994b, 18).
In De Interpretatione 6-9, Aristotle considers three logical principles: the principle of bivalence, the law of excluded middle, and the rule of contradictory pairs (according to which of any contradictory pair of statements, exactly one is true and the other false). Surprisingly, Aristotle accepts none of these without qualification. I offer a coherent interpretation of these chapters as a whole, while focusing special attention on two sorts of statements that are of particular interest to Aristotle: universal statements not made universally and future particular statements. With respect to the former, I argue that Aristotle takes them to be indeterminate and so to violate the rule of contradictory pairs. With respect to the latter, the subject of the much discussed ninth chapter, I argue that the rule of contradictory pairs, and not the principle of bivalence, is the focus of Aristotle's refutation. Nevertheless, Aristotle rejects bivalence for future particular statements.
A view often expressed is that to classify the liar sentence as neither true nor false is satisfactory for the simple liar but not for the strengthened liar. I argue that in fact it is equally unsatisfactory for both liars. I go on to discuss whether, nevertheless, Kripke's theory of truth represents an advance on that of Tarski.
In their paper “Vagueness, Ignorance, and Margins for Error” Kenton Machina and Harry Deutsch criticize the epistemic theory of vagueness. This paper answers their objections. The main issues discussed are: the relation between meaning and use; the principle of bivalence; the ontology of vaguely specified classes; the proper form of margin for error principles; iterations of epistemic operators and semantic compositionality; the relation or lack of it between quantum mechanics and theories of vagueness.
The Knowability Paradox purports to show that the controversial but not patently absurd hypothesis that all truths are knowable entails the implausible conclusion that all truths are known. The notoriety of this argument owes to the negative light it appears to cast on the view that there can be no verification-transcendent truths. We argue that it is overly simplistic to formalize the views of contemporary verificationists like Dummett, Prawitz or Martin-Löf using the sort of propositional modal operators which are employed in the original derivation of the Paradox. Instead we propose that the central tenet of verificationism is most accurately formulated as follows: if φ is true, then there exists a proof of φ. Building on the work of Artemov (Bull Symb Log 7(1): 1-36, 2001), a system of explicit modal logic with proof quantifiers is introduced to reason about such statements. When the original reasoning of the Paradox is developed in this setting, we reach not a contradiction, but rather the conclusion that there must exist non-constructed proofs. This outcome is evaluated relative to the controversy between Dummett and Prawitz about proof existence and bivalence.
According to Quine, in any disagreement over basic logical laws the contesting parties must mean different things by the connectives or quantifiers implicated in those laws; when a deviant logician ‘tries to deny the doctrine he only changes the subject’. The standard (Heyting) semantics for intuitionism offers some confirmation for this thesis, for it represents an intuitionist as attaching quite different senses to the connectives than does a classical logician. All the same, I think Quine was wrong, even about the dispute between classicists and intuitionists. I argue for this by presenting an account of consequence, and a cognate semantic theory for the language of the propositional calculus, which (a) respects the meanings of the connectives as embodied in the familiar classical truth-tables, (b) does not presuppose Bivalence, and with respect to which (c) the rules of the intuitionist propositional calculus are sound and complete. Thus the disagreement between classicists and intuitionists, at least, need not stem from their attaching different senses to the connectives; one may deny the doctrine without changing the subject. The basic notion of my semantic theory is truth at a possibility, where a possibility is a way that (some) things might be, but which differs from a possible world in that the way in question need not be fully specific or determinate. I compare my approach with a previous theory of truth at a possibility due to Lloyd Humberstone, and with a previous attempt to refute Quine’s thesis due to John McDowell.
Open future is incompatible with realism about possible worlds. Since realistically conceived (concrete or abstract) possible worlds are maximal in the sense that they contain/represent the full history of a possible spacetime, past and future included, if such a world is actual now, the future is fully settled now, which rules out openness. The kind of metaphysical indeterminacy required for open future is incompatible with the kind of maximality which is built into the concept of possible worlds. The paper discusses various modal realist responses and argues that they provide ersatz openness only, or they lead to incoherence, or they render the resulting theory inadequate as a theory of modality. The paper also considers various accounts of the open future, including rejection of bivalence, supervaluationism, and the ‘thin red line’ view (TRL), and claims that a version of (TRL) can avoid the incompatibility problem, but only at the cost of deflating the notion of openness.
An account of the logic of bivalent languages with truth-value gaps is given. This account is keyed to the use of tables introduced by S. C. Kleene. The account has two guiding ideas. First, that the bivalence property insures that the language satisfies classical logic. Second, that the general concepts of a valid sentence and an inconsistent sentence are understood, respectively, as those of a sentence which is not false in any model and a sentence which is not true in any model. What recommends this approach is (1) its relative simplicity, and (2) the fact that it leaves the fundamental features of classical logic intact.
This paper deals with the problem of future contingents, and focuses on two classical logical principles, excluded middle and bivalence. One may think that different attitudes are to be adopted towards these two principles in order to solve the problem. According to what seems to be a widely held hypothesis, excluded middle must be accepted while bivalence must be rejected. The paper goes against that line of thought. In the first place, it shows how the rejection of bivalence leads to implausible consequences if excluded middle is accepted. In the second place, it addresses the question of why one should reject bivalence, and finds no satisfactory answer.
A doctrine that occurs intermittently in Quine’s work is that there is no extra-theoretic truth. This paper explores this doctrine, and argues that on its best interpretation it is inconsistent with three views Quine also accepts: bivalence, mathematical Platonism, and the disquotational account of truth.
Classical logic rests on the assumption that there are two mutually exclusive and jointly exhaustive truth values. This assumption has always been surrounded by philosophical controversy. Doubts have been raised about its legitimacy, and hence about the legitimacy of classical logic. Usually, the assumption is stated in the form of a general principle, namely the principle that every proposition is either true or false. Then, the philosophical controversy is often framed in terms of the question whether every proposition is either true or false. The main purpose of the paper is to show that there is something wrong in this way of putting things. The point is that the common way of understanding the controversial assumption is misconceived, as it rests on a wrong picture of propositions. In the first part of the paper I outline this picture and I argue against it. In the second part I sketch a different picture of propositions and I suggest how this leads to conceive the issue of classical logic in different terms.
In this paper I undertake an examination of Chrysippus' argument in Cicero's De Fato 20 for the view that everything has a cause, by discussing in what sense it is fatalist and whether the kind of fatalism it implies encourages idleness. A novel interpretation is offered of Chrysippus' refutation of the Idle Argument at Eusebius, Praep. ev. 6.8.28. In particular, I argue that for Chrysippus the connection between co-fated events is analytic: to determine which future events are co-fated with present ones, it is sufficient to analyse the concepts that are used to describe the former.