I started out as a student of physics, hard-working, interested, but alas, not ‘in love’ with my subject. Then logic struck, and having become interested in this subject for various reasons – including the fascinating personality of my first teacher – I switched after my candidate’s program, to take two master’s degrees, in mathematics and in philosophy. The beauty of mathematics was clear to me at once, with the amazing power, surprising twists, and indeed the music, of abstract arguments. As our professor of Analysis wrote at the time in our study guide, “Mathematics is about the delight in the purity of trains of thought”, and old-fashioned though this phrasing sounded in the revolutionary 1960s, it did resonate with me. Then I had the privilege of being taught set-theoretic topology by a group of brilliant students around De Groot, our leading expert at the time, who worked with Moore’s method of discovering a subject for oneself. Topology unfolded from a few definitions and examples to real theorems that we had to prove ourselves – and the take-home exam cost sleepless nights, as it included proving some results from scratch which, as it turned out later, came from a recent dissertation. Only at the very end did De Groot appear, to give one lecture on Tychonoff’s Theorem in which an application was made of the Axiom of Choice, a sacred act only to be performed by tenured full professors.
We will present a new lottery-style paradox on counterfactuals and chance. The upshot will be: combining natural assumptions on (i) the truth values of ordinary counterfactuals, (ii) the conditional chances of possible but non-actual events, (iii) the manner in which (i) and (ii) relate to each other, and (iv) a fragment of the logic of counterfactuals leads to disaster. In contrast with the usual lottery-style paradoxes, logical closure under conjunction—that is, in this case, the rule of Agglomeration of (consequents of) counterfactuals—will not play a role in the derivation and will not be entailed by our premises either. We will sketch four obvious but problematic ways out of the dilemma, and we will end up with a new resolution strategy that is non-obvious but (as we hope) less problematic: contextualism about what counts as a proposition. This proposal will not just save us from the paradox, it will also save each premise in at least some context, and it will be motivated by independent considerations from measure theory and probability theory.
We show that finitely axiomatized first-order theories that involve some criterion of identity for entities of a category C can be reformulated as conjunctions of a non-triviality statement and a criterion of identity for entities of category C again. From this, we draw two conclusions: First, criteria of identity can be very strong deductively. Second, although the criteria of identity that are constructed in the proof of the theorem are not good ones intuitively, it is difficult to say what exactly is wrong with them once the modern metaphysical view of identity criteria is presupposed.
This article suggests that scientific philosophy, especially mathematical philosophy, might be one important way of doing philosophy in the future. Along the way, the article distinguishes between different types of scientific philosophy; it mentions some of the scientific methods that can serve philosophers; it aims to undermine some worries about mathematical philosophy; and it tries to make clear why in certain cases the application of mathematical methods is necessary for philosophical progress.
We argue that giving up on the closure of rational belief under conjunction comes with a substantial price. Either rational belief is closed under conjunction, or else the epistemology of belief has a serious diachronic deficit over and above the synchronic failures of conjunctive closure. The argument for this, which can be viewed as a sequel to the preface paradox, is called the ‘review paradox’; it is presented in four distinct, but closely related versions.
The so-called Paradox of Serious Possibility is usually regarded as showing that the standard axioms of belief revision do not apply to belief sets that are introspectively closed. In this article we argue to the contrary: we suggest a way of dissolving the Paradox of Serious Possibility so that introspective statements are taken to express propositions in the standard sense, which may thus be proper members of belief sets, and accordingly the normal axioms of belief revision apply to them. Instead, the paradox is avoided by making explicit, for any occurrence of an introspective modality in the object language, the belief state to which this occurrence refers; this will make it impossible for any doxastic modality to refer to two distinct belief sets within one and the same context of doxastic appraisal. By this move the standard derivation of a contradiction from the theory of belief revision in the presence of introspectively closed belief sets does not go through any more, and indeed the premisses of the Paradox of Serious Possibility become jointly consistent once they are reformulated with our amended introspective modalities only. Additionally, we present a probabilistic version of the Paradox of Serious Possibility which can be avoided in a perfectly analogous manner.
This article explores ways in which the Revision Theory of Truth can be expressed in the object language. In particular, we investigate the extent to which semantic deficiency, stable truth, and nearly stable truth can be so expressed, and we study different axiomatic systems for the Revision Theory of Truth.
This is part B of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual to be true even in the presence of relevant exception worlds, as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis’s work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other one being truth-based as determined by our probabilistic semantics; these degrees cannot always coincide, due to a new triviality result for counterfactuals, and they should not be identified in the light of their different interpretation and pragmatic purpose. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, that is, the thesis that most ordinary counterfactuals are false.
Famously, Frank P. Ramsey suggested a test for the acceptability of conditionals. Recently, David Chalmers and Alan Hájek (2007) have criticized a qualitative variant of the Ramsey test for indicative conditionals. In this paper we argue for the following three claims: (i) Chalmers and Hájek are right that the variant of the Ramsey test that they attack is not the correct way of spelling out an acceptability test for indicative conditionals. But there is a suppositional variant of the Ramsey test which is still stated in purely qualitative terms, which avoids the problems, and which looks correct. (ii) While the variant of the Ramsey test that Chalmers and Hájek criticize is not correct, it is still a good approximation of a correct formulation of the Ramsey test which may be usefully employed in various contexts. (iii) The variant of the Ramsey test that Chalmers and Hájek suggest as a substitute for the deficient version of the Ramsey test is itself subject to worries similar to those raised by Chalmers and Hájek, if it is given a non-suppositional interpretation.
This is a personal, incomplete, and very informal take on the role of logic in general philosophy of science, which is aimed at a broader audience. We defend and advertise the application of logical methods in philosophy of science, starting with the beginnings in the Vienna Circle and ending with some more recent logical developments.
Rudolf Carnap's Der logische Aufbau der Welt (The Logical Structure of the World) is generally conceived of as being the failed manifesto of logical positivism. In this paper we will consider the following question: How much of the Aufbau can actually be saved? We will argue that there is an adaptation of the old system which satisfies many of the demands of the original programme. In order to defend this thesis, we have to show how a new 'Aufbau-like' programme may solve or circumvent the problems that affected the original Aufbau project. In particular, we are going to focus on how a new system may address the well-known difficulties in Carnap's Aufbau concerning abstraction, dimensionality, and theoretical terms.
We present a way of classifying the logically possible ways out of Gärdenfors' inconsistency or triviality result on belief revision with conditionals. For one of these ways—conditionals which are not descriptive but which only have an inferential role as being given by the Ramsey test—we determine which of the assumptions in three different versions of Gärdenfors' theorem turn out to be false. This is done by constructing ranked models in which such Ramsey-test conditionals are evaluated and which are subject to natural postulates on belief revision and acceptability sets for conditionals. Along the way we show that in contrast with what Gärdenfors himself proposed, there is no dichotomy of the form: either the Ramsey test has to be given up or the Preservation condition. Instead, both of them follow from our postulates.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: an epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: an epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
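For orientation: the quadratic inaccuracy measures singled out in the abstract above generalize the familiar Brier score. A minimal sketch in our own notation (the paper's official formulation may differ in detail):

```latex
% Inaccuracy of a credence function b at a world w, measured over
% propositions X_1, ..., X_n with fixed weights lambda_i > 0, where
% w(X_i) = 1 if X_i is true at w and 0 otherwise:
\[
  I(b, w) \;=\; \sum_{i=1}^{n} \lambda_i \,\bigl( b(X_i) - w(X_i) \bigr)^{2}.
\]
```

With all weights equal to 1, this is the Brier score: the squared Euclidean distance between the agent's credences and the omniscient truth-value assignment at w.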
In discussions about whether the Principle of the Identity of Indiscernibles (PII) is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view.
We show that a set of prima facie plausible assumptions on the relation of meaning resemblance – one of which is a compositionality postulate – is inconsistent. On this basis we argue that either there is no theoretically useful notion of semantic resemblance at all, or the traditional conception of the compositionality of meaning has to be adapted. In the former case, arguments put forward by Nelson Goodman and Paul Churchland in favor of the concept of meaning resemblance are defeated. In the other case, it must be possible to account for 'degrees of compositionality' or for other refinements of compositionality that are compatible with meaning resemblance.
We introduce an epistemic theory of truth according to which the same rational degree of belief is assigned to a sentence and to the statement that the sentence is true. It is shown that if epistemic probability measures are only demanded to be finitely additive (but not necessarily σ-additive), then such a theory is consistent even for object languages that contain their own truth predicate. As the proof of this result indicates, the theory can also be interpreted as deriving from a quantitative version of the Revision Theory of Truth.
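The core constraint of such an epistemic theory can be sketched in one line; the notation below is ours, and the restriction to finite additivity (rather than σ-additivity) is the crucial weakening the abstract mentions.

```latex
% P is a finitely additive epistemic probability measure on the sentences
% of an object language that contains its own truth predicate Tr:
\[
  P\bigl(\mathrm{Tr}(\ulcorner \varphi \urcorner)\bigr) \;=\; P(\varphi)
  \qquad \text{for every sentence } \varphi \text{ of the language.}
\]
```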
We investigate the conditions under which quasianalysis, i.e., Carnap's method of abstraction in his Aufbau, yields adequate results. In particular, we state both necessary and sufficient conditions for the so-called faithfulness and fullness of quasianalysis, and analyze adequacy as the conjunction of faithfulness and fullness. It is shown that there is no method of (re-)constructing properties from similarity that delivers adequate results in all possible cases, if the same set of individuals is presupposed for properties and for similarity, and if similarity is a relation of finite arity. The theory is applied to various examples, including Russell's construction of temporal instants and Carnap's constitution of the phenomenal counterparts to quality spheres. Our results explain why the former is adequate while the latter is bound to fail.
On the basis of impossibility results on probability, belief revision, and conditionals, it is argued that conditional beliefs differ from beliefs in conditionals qua mental states. Once this is established, it will be pointed out in what sense conditional beliefs are still conditional, even though they may lack conditional contents, and why it is permissible to still regard them as beliefs, although they are not beliefs in conditionals. Along the way, the main logical, dispositional, representational, and normative properties of conditional beliefs are studied, and it is explained how the failure to distinguish conditional beliefs from beliefs in conditionals can lead philosophical and empirical theories astray.
We investigate the research programme of dynamic doxastic logic (DDL) and analyze its underlying methodology. The Ramsey test for conditionals is used to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which we develop and compare axiomatically and semantically. The importance of Gärdenfors’s impossibility result on the Ramsey test is highlighted by a comparison with Arrow’s impossibility result on social choice. We end with an outlook on the prospects and the future of DDL.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
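As an illustration of why per-awakening betting considerations pull toward the 1/3 figure, here is a toy simulation. It is our construction for orientation only, not a model from Hitchcock's paper, and it deliberately sets aside the very feature of Beauty's epistemic situation that the abstract argues breaks the link between credences and betting odds.

```python
import random

def simulate(trials=100_000, seed=0):
    """Simulate the Sleeping Beauty setup: a fair coin is tossed once per
    trial; Heads yields one awakening, Tails yields two.

    Returns the fraction of awakenings at which the coin landed Heads.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(simulate())  # a value near 1/3
```

The fraction comes out near 1/3 because a Tails toss contributes two awakenings for every single Heads awakening, which is exactly why a per-awakening bet on Heads at even odds loses money on average.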
Werning applies a theorem by Hodges in order to put forward an argument against Quine’s thesis of the indeterminacy of translation (understood as a thesis on meaning, not on reference) and in favour of what Werning calls ‘semantic realism’. We show that the argument rests on two critical premises both of which are false. The reasons for these failures are explained and the actual place of this application of Hodges’ theorem within Quine’s philosophy of language is outlined.
Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.
What kinds of sentences with a truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
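The supervenience idea behind this dependence relation can be sketched as follows; the notation is ours (with val_X the classical valuation that takes X as the extension of the truth predicate) and only approximates the paper's official definition.

```latex
% phi depends on Phi iff phi's truth value is fixed by which members of
% Phi belong to the extension of the truth predicate:
\[
  \varphi \text{ depends on } \Phi
  \;:\Longleftrightarrow\;
  \forall X, Y \subseteq \mathit{Sent}\;
  \bigl( X \cap \Phi = Y \cap \Phi
  \;\Rightarrow\;
  \mathrm{val}_X(\varphi) = \mathrm{val}_Y(\varphi) \bigr).
\]
```

On this reading, a sentence is grounded when it depends, directly or via a chain of such dependencies, on the truth-predicate-free sentences alone.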
This monograph provides a new account of justified inference as a cognitive process. In contrast to the prevailing tradition in epistemology, the focus is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the cat nearby, which infers that the bird it sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze them. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, with a qualitative notion of reliability being employed. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief. This text will be of interest to epistemologists and logicians, to all computer scientists who work on nonmonotonic reasoning and neural networks, and to cognitive scientists.
In this paper we investigate two purely syntactical notions of circularity, which we call ‘self-application’ and ‘self-inclusion’. A language containing self-application allows linguistic items to be applied to themselves. In a language allowing for self-inclusion there are expressions which include themselves as a proper part. We introduce axiomatic systems of syntax which include identity criteria and existence axioms for such expressions. The consistency of these axiom systems will be shown by providing a variety of different models – these models being our circular languages. Finally we will show what a possible semantics for these circular languages could look like.
If □ is conceived as an operator, i.e., an expression that, applied to a formula, yields another formula, the expressive power of the language is severely restricted when compared to a language where □ is conceived as a predicate, i.e., an expression that yields a formula if it is applied to a term. This consideration favours the predicate approach. The predicate view, however, is threatened mainly by two problems: some obvious predicate systems are inconsistent, and possible-worlds semantics for predicates of sentences has not been developed very far. By introducing possible-worlds semantics for the language of arithmetic plus the unary predicate □, we tackle both problems. Given a frame (W, R) consisting of a set W of worlds and a binary relation R on W, we investigate whether we can interpret □ at every world in such a way that □⌜A⌝ holds at a world w ∈ W if and only if A holds at every world v ∈ W such that wRv. The arithmetical vocabulary is interpreted by the standard model at every world. Several ‘paradoxes’ (like Montague’s Theorem, Gödel’s Second Incompleteness Theorem, McGee’s Theorem on the ω-inconsistency of certain truth theories, etc.) show that many frames, e.g., reflexive frames, do not allow for such an interpretation. We present sufficient and necessary conditions for the existence of a suitable interpretation of □ at any world. Sound and complete semi-formal systems, corresponding to the modal systems K and K4, for the class of all possible-worlds models for predicates and all transitive possible-worlds models are presented. We apply our account also to nonstandard models of arithmetic and to other languages than the language of arithmetic.
The difficulties with formalizing the intensional notions of necessity, knowability and omniscience, and rational belief are well-known. If these notions are formalized as predicates applying to (codes of) sentences, then from apparently weak and uncontroversial logical principles governing these notions, outright contradictions can be derived. Tense logic is one of the best understood and most extensively developed branches of intensional logic. In tense logic, the temporal notions future and past are formalized as sentential operators rather than as predicates. The question therefore arises whether the notions that are investigated in tense logic can be consistently formalized as predicates. In this paper it is shown that the answer to this question is negative. The logical treatment of the notions of future and past as predicates gives rise to paradoxes due to the specific interplay between both notions. For this reason, the tense paradoxes that will be presented are not identical to the paradoxes referred to above.
This is the second part of a paper dealing with truth and translation. In Part A a revised version of Tarski's Convention T has been presented, which explicitly refers to a translation mapping from the object language to the metalanguage; the vague notion of a translation has been replaced by a precise definition. At the end of Part A it has been shown that there exist interpreted languages which allow for vicious self-reference but which nevertheless contain their own truth predicate – this is possible if truth is based on a nonstandard translation mapping. However, this result has only been proved for languages without quantifiers. In Part B we now extend the result to first-order languages, and we show that this can be done in three different ways. In each case, the addition of a truth predicate to an interpreted language with a high degree of expressiveness leads to changes in the ontology of the language.
This paper deals with the class of axiomatic theories of truth for semantically closed languages, where the theories do not allow for standard models; i.e., those theories cannot be interpreted as referring to the natural number codes of sentences only (for an overview of axiomatic theories of truth in general, see Halbach). We give new proofs for two well-known results in this area, and we also prove a new theorem on the nonstandardness of a certain theory of truth. The results indicate that the proof strategies for all the theorems on the nonstandardness of such theories are "essentially" of the same kind of structure.
The aim of this paper is to give a certain algebraic account of truth: we want to define what we mean by De Morgan-valued truth models and show their existence even in the case of semantical closure: that is, languages may contain their own truth predicate if they are interpreted by De Morgan-valued models. Before we can prove this result, we review some basic facts concerning De Morgan-valued models in general, and we introduce a notion of truth both on the object- and on the metalanguage level appropriate for such models. The definitions and the existence theorem are extensions of Kripke's, Woodruff's, and Visser's concepts and results concerning three- and four-valued truth models.