If we try to evaluate the sentence on line 1 we find ourselves going in an unending cycle. For this reason alone we may conclude that the sentence is not true. Moreover we are driven to this conclusion by an elementary argument: If the sentence is true then what it asserts is true, but what it asserts is that the sentence on line 1 is not true. Consequently the sentence on line 1 is not true. But when we write this true conclusion on line 2 we find ourselves repeating the very same sentence. It seems that we are unable to deny the truth of the sentence on line 1 without asserting it at the same time.
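For readers who want the reasoning pattern in compact form, here is a minimal formal rendering of the Liar described above; the notation (L for the sentence on line 1, True for the truth predicate) is ours, not the paper's.

% L is the sentence on line 1; it asserts its own untruth.
\[ L \;\leftrightarrow\; \neg\,\mathrm{True}(\ulcorner L \urcorner) \]
% Together with the T-schema instance True(⌜L⌝) ↔ L, each of
% True(⌜L⌝) and ¬True(⌜L⌝) yields the other, which is the unending
% cycle, and the self-defeating denial, that the abstract describes.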
The goal of this paper is a comprehensive analysis of basic reasoning patterns that are characteristic of vague predicates. The analysis leads to rigorous reconstructions of the phenomena within formal systems. Two basic features are dealt with. One is tolerance: the insensitivity of predicates to small changes in the objects of predication (a one-increment of a walking distance is a walking distance). The other is the existence of borderline cases. The paper shows why these should be treated as different, though related phenomena. Tolerance is formally reconstructed within a proposed framework of contextual logic, leading to a solution of the Sorites paradox. Borderline-vagueness is reconstructed using certain modality operators; the set-up provides an analysis of higher order vagueness and a derivation of scales of degrees for the property in question.
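For concreteness, the tolerance pattern and the resulting Sorites chain can be displayed schematically as follows, using the abstract's walking-distance example; the notation is ours and is meant only to exhibit the reasoning pattern, not the paper's contextual-logic reconstruction of it.

% W(n): a distance of n increments is a walking distance.
% Tolerance: one increment never makes a difference.
\[ \forall n\,\bigl(W(n) \rightarrow W(n+1)\bigr) \]
% From W(0) and k applications of tolerance we get W(k) for every k,
% however large; this is the Sorites inference the contextual framework blocks.
\[ W(0),\;\; \forall n\,\bigl(W(n) \rightarrow W(n+1)\bigr) \;\vdash\; W(k) \]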
In his classic book “The Foundations of Statistics” Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are however required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countable unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems, without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
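For orientation, the content of the representation theorem can be stated compactly as follows; the notation is ours (Savage's own formulation differs in detail), with S the set of states, P the derived finitely additive probability on events, u the utility on consequences, and f, g acts.

% The preference relation on acts is represented by expected utility:
\[ f \succsim g \quad\Longleftrightarrow\quad \int_S u(f(s))\,dP(s) \;\ge\; \int_S u(g(s))\,dP(s) \]
% For acts with finitely many consequences the integrals reduce to finite sums
% of the form \sum_i P(E_i)\,u(c_i), where the act yields consequence c_i on event E_i.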
The semantic paradoxes, whose paradigm is the Liar, played a crucial role at a crucial juncture in the development of modern logic. In his seminal 1908 paper, Russell outlined a system, soon to become that of the Principia Mathematica, whose main goal was the solution of the logical paradoxes, both semantic and set-theoretic. Russell did not distinguish between the two and his theory of types was designed to solve both kinds in the same uniform way. Set theorists, however, were content to treat only the set-theoretic paradoxes, putting aside the semantic ones as a non-mathematical concern. This separation was explicitly proposed, eighteen years after Russell’s paper, by Ramsey, though he, like Russell, advocated a system that addresses both kinds. Since then, the semantic paradoxes have been viewed within the perspective of the theory of truth, where they have occupied a respectable niche, but one of rather specialized interest.
There are three sections in this paper. The first is a philosophical discussion of the general problem of reasoning under limited deductive capacity. The second sketches a rigorous way of assigning probabilities to statements in pure arithmetic; motivated by the preceding discussion, it can nonetheless be read separately. The third is a philosophical discussion that highlights the shifting contextual character of subjective probabilities and beliefs.
Non-standard models were introduced by Skolem, first for set theory, then for Peano arithmetic. In the former, Skolem found support for an anti-realist view of absolutely uncountable sets. But in the latter he saw evidence for the impossibility of capturing the intended interpretation by purely deductive methods. In the history of mathematics the concept of a non-standard model is new. An analysis of some major innovations – the discovery of irrationals, the use of negative and complex numbers, the modern concept of function, and non-Euclidean geometry – reveals them as essentially different from the introduction of non-standard models. Yet non-Euclidean geometry, which is discussed at some length, is relevant to the present concern, for it raises the issue of intended interpretation. The standard model of natural numbers is the best candidate for an intended interpretation that cannot be captured by a deductive system. Next, I suggest, is the concept of a well-ordered set, and then, perhaps, the concept of a constructible set. One may have doubts about a realistic conception of the standard natural numbers, but such doubts cannot gain support from non-standard models. Attempts to utilize non-standard models for an anti-realist position in mathematics, which appeal to meaning-as-use, or to arguments of the kind proposed by Putnam, fail through irrelevance, or lead to incoherence. Robinson’s skepticism, on the other hand, is a coherent position, though one that gives up on providing a detailed philosophical account. The last section enumerates various uses of non-standard models.
In a recent paper S. McCall adds another link to a chain of attempts to enlist Gödel’s incompleteness result as an argument for the thesis that human reasoning cannot be construed as being carried out by a computer. McCall’s paper is undermined by a technical oversight. My concern however is not with the technical point. The argument from Gödel’s result to the no-computer thesis can be made without following McCall’s route; it is then straighter and more forceful. Yet the argument fails in an interesting and revealing way. And it leaves a remainder: if some computer does in fact simulate all our mathematical reasoning, then, in principle, we cannot fully grasp how it works. Gödel’s result also points out a certain essential limitation of self-reflection. The resulting picture parallels, not accidentally, Davidson’s view of psychology, as a science that in principle must remain “imprecise”, not fully spelt out. What is intended here by “fully grasp”, and how all this is related to self-reflection, will become clear at the end of this comment.
The technique of minimizing information (infomin) has been commonly employed as a general method for both choosing and updating a subjective probability function. We argue that, in a wide class of cases, the use of infomin methods fails to cohere with our standard conception of rational degrees of belief. We introduce the notion of a deceptive updating method and argue that non-deceptiveness is a necessary condition for rational coherence. Infomin has been criticized on the grounds that there are no higher order probabilities that ‘support’ it, but the appeal to higher order probabilities is a substantial assumption that some might reject. Our elementary arguments from deceptiveness do not rely on this assumption. While deceptiveness implies lack of higher order support, the converse does not, in general, hold, which indicates that deceptiveness is a more objectionable property. We offer a new proof of the claim that infomin updating of any strictly-positive prior with respect to conditional-probability constraints is deceptive. In the case of expected-value constraints, infomin updating of the uniform prior is deceptive for some random variables but not for others. We establish both a necessary condition and a sufficient condition (which extends the scope of the phenomenon beyond cases previously considered) for deceptiveness in this setting. Along the way, we clarify the relation which obtains between the strong notion of higher order support, in which the higher order probability is defined over the full space of first order probabilities, and the apparently weaker notion, in which it is defined over some smaller parameter space. We show that under certain natural assumptions, the two are equivalent. Finally, we offer an interpretation of Jaynes, according to which his own appeal to infomin methods avoids the incoherencies discussed in this paper.
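As background, infomin updating is standardly understood as selecting, among the distributions that satisfy the new constraints, the one that minimizes relative entropy (Kullback-Leibler divergence) from the prior; the notation below is ours.

% p: the prior; C: the set of distributions satisfying the constraints
% (conditional-probability or expected-value constraints in the paper's cases).
\[ q^{*} \;=\; \arg\min_{q \in C}\; D(q \,\|\, p), \qquad D(q \,\|\, p) \;=\; \sum_i q_i \log \frac{q_i}{p_i} \]
% When C is the single constraint q(E) = 1 and p is strictly positive,
% the minimizer is ordinary conditionalization of p on E.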
We trace self-reference phenomena to the possibility of naming functions by names that belong to the domain over which the functions are defined. A naming system is a structure of the form (D, type, { }), where D is a non-empty set; for every a ∈ D which is a name of a k-ary function, {a}: D^k → D is the function named by a, and type(a) is the type of a, which tells us if a is a name and, if it is, the arity of the named function. Under quite general conditions we get a fixed point theorem, whose special cases include the fixed point theorem underlying Gödel's proof, Kleene's recursion theorem and many other theorems of this nature, including the solution to simultaneous fixed point equations. Partial functions are accommodated by including “undefined” values; we investigate different systems arising out of different ways of dealing with them. Many-sorted naming systems are suggested as a natural approach to general computability with many data types over arbitrary structures. The first part of the paper is a historical reconstruction of the way Gödel probably derived his proof from Cantor's diagonalization, through the semantic version of Richard's paradox. The incompleteness proof, including the fixed point construction, results from a natural line of thought, thereby dispelling the appearance of a “magic trick”. The analysis goes on to show how Kleene's recursion theorem is obtained along the same lines.
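A minimal sketch of the best-known special case mentioned above, the diagonal (fixed point) lemma underlying Gödel's proof, may help fix ideas; the formulation and notation here are ours, not the paper's general naming-system version.

% Work in arithmetic with a Gödel numbering ⌜·⌝ of formulas.
% Given a formula phi(x) with one free variable, let diag(n) compute the
% Gödel number of the sentence obtained by substituting the numeral of n
% into the formula whose Gödel number is n. Put psi(x) := phi(diag(x)),
% let k = ⌜psi(x)⌝, and set G := psi(k). Then the fixed point equation
\[ G \;\leftrightarrow\; \varphi(\ulcorner G \urcorner) \]
% is provable. Taking phi(x) to express "x is not provable" gives the Gödel
% sentence; carrying out the same diagonal step with indices of computable
% functions gives Kleene's recursion theorem.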
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states), and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known and highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous, including simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system, which yields the representation theorem without the constant act assumption. We need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
Self-reference in semantics, which leads to well-known paradoxes, is a thoroughly researched subject. The phenomenon can appear also in decision theoretic situations. There is a structural analogy between the two and, more interestingly, an analogy between principles concerning truth and those concerning rationality. The former can serve as a guide for clarifying the latter. Both the analogies and the disanalogies are illuminating.
Dummett’s The Logical Foundations of Metaphysics (LFM) outlines an ambitious project that has been at the core of his work during the last forty years. The project is built around a particular conception of the theory of meaning (or philosophy of language), according to which such a theory should constitute the cornerstone of philosophy and, in particular, provide answers to various metaphysical questions. The present paper is intended as a critical evaluation of some of the main features of that approach. My negative answer to the title’s question notwithstanding, I find Dummett’s analyses, which both inform and are guided by his project, of very high value. Among the subjects to be discussed here, which relate to but are not fully reflected in the title, are the concept of a full-blooded theory (in section 4), Davidson’s program (in section 5), and holism, to which the last third of this paper (section 6) is devoted. That section can be read independently; to some extent, this is also true of sections 4 and 5 taken together.
The paper outlines a project in the philosophy of mathematics based on a proposed view of the nature of mathematical reasoning. It also contains a brief evaluative overview of the discipline and some historical observations; here it points out and illustrates the division between the philosophical dimension, where questions of realism and the status of mathematics are treated, and the more descriptive and looser dimension of epistemic efficiency, which has to do with ways of organizing the mathematical material. The paper’s concern is with the first. The grand tradition in the philosophy of mathematics goes back to the foundational debates at the end of the 19th and the first decades of the 20th century. Logicism went together with a realistic view of actual infinities; rejection of, or skepticism about, actual infinities derived from conceptions that were Kantian in spirit. Yet questions about the nature of mathematical reasoning should be distinguished from questions about realism (the extent of objective, knowledge-independent mathematical truth). Logicism is now dead. Recent attempts to revive it are based on a redefinition of “logic”, which exploits the flexibility of the concept; they yield no interesting insight into the nature of mathematics. A conception of mathematical reasoning, broadly speaking along Kantian lines, need not imply anti-realism and can be pursued and investigated, leaving questions of realism open. Using some concrete examples of non-formal mathematical proofs, the paper proposes that mathematics is the study of forms of organization, a concept that should be taken as primitive, rather than interpreted in terms of set-theoretic structures. For set theory itself is a study of a particular form of organization, albeit one that provides a modeling for the other known mathematical systems. In a nutshell: “We come to know mathematical truths through becoming aware of the properties of some of the organizational forms that underlie our world. This is possible, due to a capacity we have: to reflect on some of our own practices and the ways of organizing our world, and to realize what they imply.”
Contextuality is trivially pervasive: all human experience takes place in endlessly changing environments and inexorably moving time frames. In order to have any meaning, the changing items must be placed within a more stable setting, a framework that is not subject to the same kind of contextual change. Total contextuality collapses into chaos, or becomes ineffable. While basic learning is highly contextual (one learns by example), what is learned transcends the examples used in the learning. Perhaps, in a similar manner, artistic expression transcends context by fully embracing it. In any case, a philosophical account of contextuality is itself stated in a more absolute mode, not necessarily a picture from an “eternal” view point, but at least one that avoids the contextuality which it describes.
This short sketch of Gödel’s incompleteness proof shows how it arises naturally from Cantor’s diagonalization method [1891]. It renders Gödel’s proof and its relation to the semantic paradoxes transparent. Some historical details, which are often ignored, are pointed out. We also make some observations on circularity and draw brief comparisons with natural language. The sketch does not include the messy details of the arithmetization of the language, but the motives for it are made obvious. We suggest this as a more efficient way to teach the topic than what is found in the standard textbooks. For the sake of self-containment Cantor’s original diagonalization is included. A broader and more technical perspective on diagonalization is given in [Gaifman 2005]. In [1891] Cantor presented a new type of argument that shows that the set of all binary sequences (sequences of the form a0, a1, …, an, …, where each ai is either 0 or 1) is not denumerable, that is, cannot be arranged in a sequence where the index ranges over the natural numbers. Let A0, A1, …, An, … be a sequence of binary sequences, say An = an,0, an,1, …, an,i, …. Define a new sequence A* = b0, b1, …, bn, … by putting bn = 1 − an,n; then A* differs from each An at the n-th place, so A* cannot occur anywhere in the given sequence.
This short sketch of Gödel’s incompleteness proof shows how it arises naturally from Cantor’s diagonalization method [1891]. It renders the proof of the so-called fixed point theorem transparent. We also point out various historical details and make some observations on circularity and some comparisons with natural language. The sketch does not include the messy details of the arithmetization of the language, but the motive for arithmetization and what it should accomplish are made obvious. We suggest this as a way to teach the incompleteness results to students who have had a basic course in logic, which is more efficient than the standard textbooks. For the sake of self-containment Cantor’s original diagonalization is included. A broader and more technical perspective on diagonalization is given in [Gaifman 2005]. Motivated partly by didactic considerations, the present paper presents things somewhat differently. It also includes various points concerning natural language and circularity that appear only here.