This paper is concerned with the debate between substantival and relational theories of space-time, and discusses two difficulties that beset the relationalist: a difficulty posed by field theories, and another difficulty (discussed at greater length) called the problem of quantities. A main purpose of the paper is to argue that possibility cannot always be used as a surrogate for ontology, and in particular that there is no hope of using possibility to solve the problem of quantities.
The paper outlines a view of normativity that combines elements of relativism and expressivism, and applies it to normative concepts in epistemology. The result is a kind of epistemological anti-realism, which denies that epistemic norms can be (in any straightforward sense) correct or incorrect; it does allow some to be better than others, but takes this to be goal-relative and is skeptical of the existence of best norms. It discusses the circularity that arises from the fact that we need to use epistemic norms to gather the facts with which to evaluate epistemic norms; relatedly, it discusses how epistemic norms can rationally evolve. It concludes with some discussion of the impact of this view on "ground level" epistemology.
There are quite a few theses about logic that are in one way or another pluralist: they hold (i) that there is no uniquely correct logic, and (ii) that because of this, some or all debates about logic are illusory, or need to be somehow reconceived as not straightforwardly factual. Pluralist theses differ markedly over the reasons offered for there being no uniquely correct logic. Some such theses are more interesting than others, because they more radically affect how we are initially inclined to understand debates about logic. Can one find a pluralist thesis that is high on the interest scale, and also true?
The paper tries to spell out a connection between deductive logic and rationality, against Harman's arguments that there is no such connection, and also against the thought that any such connection would preclude rational change in logic. One might not need to connect logic to rationality if one could view logic as the science of what preserves truth by a certain kind of necessity (or by necessity plus logical form); but the paper points out a serious obstacle to any such view.
1. Background. At least from the time of the ancient Greeks, most philosophers have held that some of our knowledge is independent of experience, or “a priori”. Indeed, a major tenet of the rationalist tradition in philosophy was that a great deal of our knowledge had this character: even Kant, a critic of some of the overblown claims of rationalism, thought that the structure of space could be known a priori, as could many of the fundamental principles of physics; and Hegel is reputed to have claimed to have deduced on a priori grounds that the number of planets is exactly five.
There are many reasons why one might be tempted to reject certain instances of the law of excluded middle. And it is initially natural to take ‘reject’ to mean ‘deny’, that is, ‘assert the negation of’. But if we assert the negation of a disjunction, we certainly ought to assert the negation of each disjunct (since the disjunction is weaker than the disjuncts). So asserting…
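The inference this excerpt relies on (from the negation of a disjunction to the negation of each disjunct) is one direction of De Morgan, and it holds intuitionistically as well as classically; a minimal sketch in Lean:

```lean
-- Sketch of the inference the abstract relies on: from ¬(A ∨ B),
-- derive ¬A and ¬B. Each negated disjunct follows because the
-- disjunct would immediately yield the disjunction.
example (A B : Prop) (h : ¬(A ∨ B)) : ¬A ∧ ¬B :=
  ⟨fun a => h (Or.inl a), fun b => h (Or.inr b)⟩
```

Since this direction survives most weakenings of classical logic, denying an instance of excluded middle A ∨ ¬A would commit one to denying both A and ¬A, which is the pressure the excerpt is building toward.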
Both in dealing with the semantic paradoxes and in dealing with vagueness and indeterminacy, there is some temptation to weaken classical logic: in particular, to restrict the law of excluded middle. The reasons for doing this are somewhat different in the two cases. In the case of the semantic paradoxes, a weakening of classical logic (presumably involving a restriction of excluded middle) is required if we are to preserve the naive theory of truth without inconsistency. In the case of vagueness and indeterminacy, there is no worry about inconsistency; but a central intuition is that we must reject the factual status of certain sentences, and it is hard to see how we can do that while claiming that the law of excluded middle applies to those sentences. So despite the different routes, we have a similar conclusion in the two cases.
1. Of what use is the concept of causation? Bertrand Russell [1912-13] argued that it is not useful: it is “a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.” His argument for this was that the kind of physical theories that we have come to regard as fundamental leave no place for the notion of causation: not only does the word ‘cause’ not appear in the advanced sciences, but the laws that these sciences state are incompatible with causation as we normally understand it. But Nancy Cartwright has argued that abandoning the concept of causation would cripple science; her conclusion was based not on fundamental physics, but on more ordinary science such as the search for the causes of cancer. She argues that Russell was right that the fundamental theories of modern physics say nothing, even implicitly, about causation, and concludes on this basis that such theories are incomplete. It is with this cluster of issues that I will begin my discussion.
Are there questions for which 'there is no determinate fact of the matter' as to which answer is correct? Most of us think so, but there are serious difficulties in maintaining the view, and in explaining the idea of determinateness in a satisfactory manner. The paper argues that to overcome the difficulties, we need to reject the law of excluded middle; and it investigates the sense of 'rejection' that is involved. The paper also explores the logic that is required if we reject excluded middle, with special emphasis on the conditional. There is also discussion of higher order indeterminacy (in several different senses) and of penumbral connections; and there is a suggested definition of determinateness in terms of the conditional and a discussion of the extent to which the notion of determinateness is objective. And there are suggestions about a unified treatment of vagueness and the semantic paradoxes.
A correspondence theory of truth explains truth in terms of various correspondence relations (e.g., reference) between words and the extralinguistic world. What are the consequences of Quine's doctrine of indeterminacy for correspondence theories? In "Ontological Relativity" Quine implicitly claims that correspondence theories are impossible; that is what the doctrine of 'relative reference' amounts to. But Quine's doctrine of relative reference is incoherent. Those who think the indeterminacy thesis valid should not try to relativize reference; they should abandon the relation and replace it by certain more general correspondence relations between words and extralinguistic objects. Doing so will not interfere with the task of defining truth in terms of correspondence relations.
Discussion of Chapter 5 of Stephen Schiffer's "The Things We Mean", in which Schiffer advances two novel theses: 1. Vagueness (and indeterminacy more generally) is a psychological phenomenon; 2. It is indeterminate whether classical logic applies in situations where vagueness matters.
It is “the received wisdom” that any intuitively natural and consistent resolution of a class of semantic paradoxes immediately leads to other paradoxes just as bad as the first. This is often called the “revenge problem”. Some proponents of the received wisdom draw the conclusion that there is no hope of any natural treatment that puts all the paradoxes to rest: we must either live with the existence of paradoxes that we are unable to treat, or adopt artificial and ad hoc means to avoid them. Others (“dialetheists”) argue that we can put the paradoxes to rest, but only by licensing the acceptance of some contradictions (presumably in a paraconsistent logic that prevents the contradictions from spreading everywhere).
Bayesian decision theory can be viewed as the core of psychological theory for idealized agents. To get a complete psychological theory for such agents, you have to supplement it with input and output laws. On a Bayesian theory that employs strict conditionalization, the input laws are easy to give. On a Bayesian theory that employs Jeffrey conditionalization, there appears to be a considerable problem with giving the input laws. However, Jeffrey conditionalization can be reformulated so that the problem disappears, and in fact the reformulated version is more natural and easier to work with on independent grounds.
Consider the following argument: (1) Bertrand Russell was old at age 3×10^18 nanoseconds (that’s about 95 years); (2) he wasn’t old at age 0 nanoseconds; (3) so there is a number N such that he was old at N nanoseconds and not old at k nanoseconds for any k < N.
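The classical reasoning that licenses step (3) is the least-number principle; a sketch, writing Old(n) for "old at n nanoseconds":

```latex
% Premises: Old(3\times 10^{18}) and \neg Old(0).
% Classically, \{\,n : Old(n)\,\} is nonempty by the first premise,
% so by the least-number principle it has a least element N; the
% second premise gives N > 0, and minimality of N yields
\exists N\,\bigl(\,Old(N)\;\wedge\;\forall k < N\ \neg Old(k)\,\bigr)
```

The conclusion posits a sharp one-nanosecond cutoff for oldness, which is why such sorites arguments put pressure on classical principles (in particular excluded middle, on which the least-number reasoning depends).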
Naive truth theory is, roughly, the theory of truth that in classical logic leads to well-known paradoxes (such as the Liar paradox and the Curry paradox). One response to these paradoxes is to weaken classical logic by restricting the law of excluded middle and introducing a conditional not defined from the other connectives in the usual way. In "New Grounds for Naive Truth Theory", Steve Yablo develops a new version of this response, and cites three respects in which he deems it superior to a version that I’ve advocated in several papers. I think he’s right that my version was non-optimal in some of these respects (one and a half of them, to be precise); however, Yablo’s own account seems to me to have some undesirable features as well. In this paper I will explore some variations on his account, and end up tentatively advocating a synthesis of his account and mine (one that is somewhat closer to mine than to his).
The paper shows how we can add a truth predicate to arithmetic (or formalized syntactic theory), and keep the usual truth schema Tr(⟨A⟩) ↔ A (understood as the conjunction of Tr(⟨A⟩) → A and A → Tr(⟨A⟩)). We also keep the full intersubstitutivity of Tr(⟨A⟩) with A in all contexts, even inside of an →. Keeping these things requires a weakening of classical logic; I suggest a logic based on the strong Kleene truth tables, but with → as an additional connective, and where the effect of classical logic is preserved in the arithmetic or formal syntax itself. Section 1 is an introduction to the problem and some of the difficulties that must be faced, in particular as to the logic of the →; Section 2 gives a construction of an arithmetically standard model of a truth theory; Section 3 investigates the logical laws that result from this; and Section 4 provides some philosophical commentary.
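For reference, the strong Kleene tables mentioned here are the standard three-valued ones (values 1 = true, ½ = undefined, 0 = false, with 1 the sole designated value); this is the textbook presentation, not the paper's own notation, and the → of the abstract is precisely the connective *not* definable from these:

```latex
% Strong Kleene valuations over \{0, \tfrac12, 1\}:
\|\neg A\| = 1 - \|A\|, \qquad
\|A \wedge B\| = \min(\|A\|, \|B\|), \qquad
\|A \vee B\| = \max(\|A\|, \|B\|)
% Excluded middle fails: if \|A\| = \tfrac12 then
% \|A \vee \neg A\| = \tfrac12, which is undesignated.
```

The Kleene material conditional ¬A ∨ B fails to validate even A → A when ‖A‖ = ½, which is why a primitive conditional must be added on top of these tables.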
It might be thought that we could argue for the consistency of a mathematical theory T within T, by giving an inductive argument that all theorems of T are true and inferring consistency. By Gödel's second incompleteness theorem any such argument must break down, but just how it breaks down depends on the kind of theory of truth that is built into T. The paper surveys the possibilities, and suggests that some theories of truth give far more intuitive diagnoses of the breakdown than do others. The paper concludes with some morals about the nature of validity and about a possible alternative to the idea that mathematical theories are indefinitely extensible.
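The blocked argument has a simple shape; as a sketch (with Prov_T for provability in T and Tr for T's truth predicate):

```latex
% The intended induction: every axiom of T is true, and the rules of
% T preserve truth, so every theorem is true; since some sentence
% (e.g. 0 = 1) is not true, it is not a theorem, i.e. T is consistent:
\forall A\,\bigl(\mathrm{Prov}_T(\langle A\rangle) \to
  \mathrm{Tr}(\langle A\rangle)\bigr)
\;\Rightarrow\; \neg\,\mathrm{Prov}_T(\langle 0{=}1\rangle)
% By G\"odel's second incompleteness theorem, T (if consistent)
% cannot prove its own consistency, so some step must fail inside T;
% which step fails depends on the theory of truth built into T.
```

The diagnostic question the paper pursues is exactly which link in this chain a given truth theory severs.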
Tim Maudlin’s Truth and Paradox is terrific. In some sense its solution to the paradoxes is familiar—the book advocates an extension of what’s called the Kripke-Feferman theory (although the definition of validity it employs disguises this fact). Nonetheless, the perspective it casts on that solution is completely novel, and Maudlin uses this perspective to try to make the prima facie unattractive features of this solution seem palatable, indeed inescapable. Moreover, the book deals with many important issues that most writers on the paradoxes never deal with, including issues about the application of the Gödel theorems to powerful theories that include our theory of truth. The book includes intriguing excursions into general metaphysics, e.g. on the nature of logic, facts, vagueness, and much more; and it’s lucid and lively, a pleasure to read. It will interest a wide range of philosophers.
The paper offers a solution to the semantic paradoxes, one in which (1) we keep the unrestricted truth schema "True(⟨A⟩) ↔ A", and (2) the object language can include its own metalanguage. Because of the first feature, classical logic must be restricted, but full classical reasoning applies in "ordinary" contexts, including standard set theory. The more general logic that replaces classical logic includes a principle of substitutivity of equivalents, which with the truth schema leads to the general intersubstitutivity of True(⟨A⟩) with A within the language. The logic is also shown to have the resources required to represent the way in which sentences (like the Liar sentence and the Curry sentence) that lead to paradox in classical logic are "defective". We can in fact define a hierarchy of "defectiveness" predicates within the language. Contrary to claims that any solution to the paradoxes just breeds further paradoxes ("revenge problems") involving defectiveness predicates, there is a general consistency/conservativeness proof that shows that talk of truth and the various "levels of defectiveness" can all be made coherent together within a single object language.
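The Curry sentence mentioned here is what forces the conditional of such a logic to be nonclassical; a sketch of the classical derivation that must be blocked (this is the standard presentation of Curry's paradox, not the paper's own exposition):

```latex
% Curry sentence: C \leftrightarrow (\mathrm{True}(\langle C\rangle) \to A),
% for an arbitrary sentence A.
% Classical derivation of A:
% 1. Assume \mathrm{True}(\langle C\rangle); by intersubstitutivity, C,
%    i.e. \mathrm{True}(\langle C\rangle) \to A; with the assumption, A.
% 2. Discharge the assumption (conditional proof):
%    \mathrm{True}(\langle C\rangle) \to A, i.e. C.
% 3. Hence \mathrm{True}(\langle C\rangle), and by 2, A.
% Since A was arbitrary, this proves everything; blocking it requires
% restricting rules such as conditional proof / contraction for \to.
```

Note that this derivation never uses excluded middle or negation, which is why restricting excluded middle alone does not suffice and the → needs its own nonclassical treatment.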