According to Paul Snowdon, one directly perceives an object x iff one is in a position to make a true demonstrative judgement of the form “That is x”. Whenever one perceives an object x indirectly (or dependently, as Snowdon puts it), there exists an item y (not identical to x) such that one can count as demonstrating x only if one acknowledges that y bears a certain relation to x. In this paper I argue that what we hear directly are sounds, and that material objects (such as violins and goldfinches) are only indirectly heard. However, there are cases of auditory object perception that should count as direct: some blind persons’ ears are so sensitive to the way sound waves are modified by things in their surroundings that they can detect objects such as other persons, fences or trees. Interestingly, objects localized in this way make themselves felt via a kind of pressure in the perceiver’s face (which is why the phenomenon is commonly called “facial vision”); the perception is phenomenally quite different from hearing. Since, to some degree, most people are able to conclude from the way it sounds that, say, they are standing at the foot of a concrete wall (when there is enough traffic noise around), we can imagine situations where two persons perceive the same wall, one indirectly (demonstratively apprehending sounds) and the other directly (demonstratively apprehending nothing but the wall). These cases invite us to discuss the role phenomenology plays in determining whether an object is perceived directly or indirectly.
Was Benjamin Franklin the old John Dewey or the new Socrates? While this might strike the reader as an absurd question, scholars have supplied plausible answers. James Campbell takes the position that he was the old Dewey—or, at least, a nascent Deweyan pragmatist. Franklin biographer Walter Isaacson agrees, claiming that Franklin "laid the foundation for the most influential of America's homegrown philosophies, pragmatism" (491). Lorraine Pangle, on the other hand, defends the view that Franklin's thought and writings were distinctly Socratic. I would like to accomplish two objectives in this article that might initially appear incompatible: one, to doubt whether the question is a good one and, two, to assume the ..
This article builds on a case study of how teacher education students may actually learn racism through their program. It employs an analysis of how new racism is operationalized in today's sociopolitical contexts. Field placements and the knowledge taught about various groups are critiqued as major teacher education reform efforts that particularly facilitate the teaching of racism. The article seeks to examine and theorize this occurrence through an analysis of new invisible forms of racism, power, and whiteness. Finally, it explores how this racism can be unlearned by reanalyzing teacher reform efforts and choosing to purposefully center programs on a systematic analysis of how these invisible operations shape programs and unintended program outcomes.
This volume is a collection of essays presented at the 31st International Wittgenstein Symposium, Kirchberg, in August 2008. It has the character of a high-quality journal issue. There is no introduction, and the papers do not all directly bear on the topic of the original conference, which was "Reduction and Elimination in Philosophy and the Sciences". In what follows, I offer a short description of each paper, and add critical remarks in some cases.
When Ian Hacking won the Holberg International Memorial Prize 2009 his candidature was said to strengthen the legitimacy of the prize after years of controversy. Ole Jacob Madsen, Johannes Servan and Simen Andersen Øyen have talked to Ian Hacking about current questions in the philosophy and history of science.
Rudolf Carnap's Der logische Aufbau der Welt (The Logical Structure of the World) is generally conceived of as being the failed manifesto of logical positivism. In this paper we will consider the following question: How much of the Aufbau can actually be saved? We will argue that there is an adaptation of the old system which satisfies many of the demands of the original programme. In order to defend this thesis, we have to show how a new 'Aufbau-like' programme may solve or circumvent the problems that affected the original Aufbau project. In particular, we are going to focus on how a new system may address the well-known difficulties in Carnap's Aufbau concerning abstraction, dimensionality, and theoretical terms.
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view. We want to thank Leon Horsten, Jeff Ketland, Øystein Linnebo, John Mayberry, Richard Pettigrew, and Philip Welch for valuable comments on drafts of this paper. We are especially grateful to Fraser MacBride for correcting our interpretation of two of his papers and for other helpful comments.
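The kind of graph-theoretic case gestured at above can be illustrated with a minimal sketch (the code and names are ours, not the paper's): the complete graph on two vertices admits a non-trivial automorphism swapping its vertices, so the two places satisfy exactly the same structural properties.

```python
import itertools

def automorphisms(vertices, edges):
    """Yield every permutation of the vertices that preserves the edge set."""
    edge_set = {frozenset(e) for e in edges}
    for perm in itertools.permutations(vertices):
        mapping = dict(zip(vertices, perm))
        if {frozenset((mapping[a], mapping[b])) for a, b in edges} == edge_set:
            yield mapping

# Two vertices joined by a single edge: swapping "a" and "b" is a
# non-trivial automorphism, so the two places are structurally indiscernible.
autos = list(automorphisms(("a", "b"), [("a", "b")]))
```

Both the identity and the swap are returned, which is the formal sense in which nothing internal to the structure distinguishes the two places.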
This thesis seeks to advance our understanding of what intuitions are. I argue that there is a class of mental states deserving of the label ‘intuition’, and which is a good candidate for a psychological kind, a kind which cuts the mind at its natural joints. These mental states are experiences of a certain kind. In particular, they are experiences with representational content, and with a certain phenomenal character.
A simple reductive view of intuition holds that intuition is a type of belief. The fact that an agent who intuits that p sometimes believes that p is false is often thought to demonstrate that the simple reductive view is false. I show that this argument is inconclusive, but also that an argument for the same conclusion can be rebuilt using the notion of rational criticisability. I then use that notion to argue that perception is also not reducible to belief, and that neither intuition nor perception is reducible to credence.
In some philosophical arguments an important role is played by the claim that certain situations differ from each other with respect to phenomenology. One class of such arguments are minimal pair arguments. These have been used to argue that there is cognitive phenomenology, that high-level properties are represented in perceptual experience, that understanding has phenomenology, and more. I argue that facts about our mental lives systematically block such arguments, reply to a range of objections, and apply my critique to some examples from the literature.
Most theories of conditionals and attitudes do not analyze either phenomenon in terms of the other. A few view attitude reports as a species of conditionals (e.g. Stalnaker 1984, Heim 1992). Based on evidence from Kalaallisut, this paper argues for the opposite thesis: conditionals are a species of attitude reports. The argument builds on prior findings that conditionals are modal topic-comment structures (e.g. Haiman 1978, Bittner 2001), and that in mood-based Kalaallisut the English future (e.g. Ole will win) translates into a factual report of a prospect-oriented attitudinal state (e.g. expectation or anxiety; see Bittner 2005). It is argued that in conditionals the antecedent introduces a topical subdomain of an input modal base (Kratzer 1981) and requires the consequent to comment. The comment is a factual report of an attitude to the topical antecedent sub-domain. [Revised version published 2011 as "Time and modality without tenses or modals".]
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004), in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, together with the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds, owing to a peculiar feature of her epistemic situation.
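The betting setup behind Hitchcock's thirder calculation can be sketched as a toy model (assuming the usual protocol with one bet offered at every waking; the function names are ours):

```python
import random

def expected_profit(price, trials=100_000, seed=0):
    """Beauty buys a $1 bet on Heads at `price` on every waking.
    Heads: one waking, so one winning bet; Tails: two wakings, two losing bets."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < 0.5:      # Heads: single waking, bet pays $1
            total += 1 - price
        else:                       # Tails: two wakings, both bets lose
            total -= 2 * price
    return total / trials

# Analytically: 0.5 * (1 - p) + 0.5 * (-2 * p) = 0  gives  p = 1/3,
# the fair betting price that Hitchcock equates with Beauty's credence.
fair_price = 1 / 3
```

The abstract's point is precisely that this equation of fair price with credence fails here: because a Tails outcome settles two bets, the odds are skewed by the payout structure independently of what Beauty believes.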
On the basis of impossibility results on probability, belief revision, and conditionals, it is argued that conditional beliefs differ from beliefs in conditionals qua mental states. Once this is established, it will be pointed out in what sense conditional beliefs are still conditional, even though they may lack conditional contents, and why it is permissible to still regard them as beliefs, although they are not beliefs in conditionals. Along the way, the main logical, dispositional, representational, and normative properties of conditional beliefs are studied, and it is explained how the failure to distinguish conditional beliefs from beliefs in conditionals can lead philosophical and empirical theories astray.
What kinds of sentences containing a truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
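The flavour of groundedness at issue can be illustrated with a toy Kripke-style least-fixed-point construction (a sketch of the general idea only, not the paper's own supervenience-based definition; the example sentences and names are ours):

```python
def least_fixed_point(sentences, steps=10):
    """Iterate the truth predicate's extension/anti-extension from empty sets.
    Each sentence is a function of (extension, anti_extension) returning
    True, False, or None (not yet settled)."""
    ext, anti = set(), set()
    for _ in range(steps):
        new_ext = {n for n, f in sentences.items() if f(ext, anti) is True}
        new_anti = {n for n, f in sentences.items() if f(ext, anti) is False}
        if (new_ext, new_anti) == (ext, anti):
            break
        ext, anti = new_ext, new_anti
    return ext, anti

toy = {
    "A": lambda e, a: True,  # a non-semantic truth, e.g. "0 = 0"
    "G": lambda e, a: True if "A" in e else (False if "A" in a else None),  # "T('A')"
    "L": lambda e, a: True if "L" in a else (False if "L" in e else None),  # liar
}
ext, anti = least_fixed_point(toy)
# "A" and "G" stabilize as true; the liar "L" never enters either set.
```

"G" depends (indirectly) only on the non-semantic "A" and so receives a stable truth value, while the liar depends on itself and remains ungrounded.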
Logical pluralism has been in vogue since JC Beall and Greg Restall 2006 articulated and defended a new pluralist thesis. Recent criticisms such as Priest 2006a and Field 2009 have suggested that there is a relationship between their type of logical pluralism and the meaning-variance thesis for logic. This is the claim, often associated with Quine 1970, that a change of logic entails a change of meaning. Here we explore the connection between logical pluralism and meaning-variance, both in general and for Beall and Restall's theory specifically. We argue that contrary to what Beall and Restall claim, their type of pluralism is wedded to meaning-variance. We then develop an alternative form of logical pluralism that circumvents at least some forms of meaning-variance.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
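The quadratic inaccuracy measures singled out here include the familiar Brier score; a minimal sketch of that measure (the function name is ours):

```python
def brier_inaccuracy(credences, truths):
    """Quadratic (Brier) inaccuracy of a credence function at a world:
    the sum of squared gaps between each credence and the truth value
    (1 for a true proposition, 0 for a false one)."""
    return sum((c - t) ** 2 for c, t in zip(credences, truths))

# Credence 0.8 in a truth and 0.3 in a falsehood:
score = brier_inaccuracy([0.8, 0.3], [1, 0])   # 0.04 + 0.09 = 0.13
```

An omniscient credence function scores 0, and the norm Accuracy directs the agent to keep this quantity as low as possible.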
I argue against Montero’s claim that Conservation of Energy (CoE) has nothing to do with Physicalism. I reject her reconstruction of the argument from CoE against interactionist dualism, and offer instead an alternative reconstruction that better captures the intuitions of those who believe that there is a conflict between interactionist dualism and CoE.
We investigate the research programme of dynamic doxastic logic (DDL) and analyze its underlying methodology. The Ramsey test for conditionals is used to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which we develop and compare axiomatically and semantically. The importance of Gärdenfors’s impossibility result on the Ramsey test is highlighted by a comparison with Arrow’s impossibility result on social choice. We end with an outlook on the prospects and the future of DDL.
We present a way of classifying the logically possible ways out of Gärdenfors' inconsistency or triviality result on belief revision with conditionals. For one of these ways—conditionals which are not descriptive but which only have an inferential role as being given by the Ramsey test—we determine which of the assumptions in three different versions of Gärdenfors' theorem turn out to be false. This is done by constructing ranked models in which such Ramsey-test conditionals are evaluated and which are subject to natural postulates on belief revision and acceptability sets for conditionals. Along the way we show that in contrast with what Gärdenfors himself proposed, there is no dichotomy of the form: either the Ramsey test has to be given up or the Preservation condition. Instead, both of them follow from our postulates.
The model-theoretic analysis of the concept of logical consequence has come under heavy criticism in the last couple of decades. The present work looks at an alternative approach to logical consequence where the notion of inference takes center stage. Formally, the model-theoretic framework is exchanged for a proof-theoretic framework. It is argued that contrary to the traditional view, proof-theoretic semantics is not revisionary, and should rather be seen as a formal semantics that can supplement model theory. Specifically, there are formal resources to provide a proof-theoretic semantics for both intuitionistic and classical logic. We develop a new perspective on proof-theoretic harmony for logical constants which incorporates elements from the substructural era of proof theory. We show that there is a semantic lacuna in the traditional accounts of harmony. A new theory of how inference rules determine the semantic content of logical constants is developed. The theory weds proof-theoretic and model-theoretic semantics by showing how proof-theoretic rules can induce truth-conditional clauses in Boolean and many-valued settings. It is argued that such a new approach to how rules determine meaning will ultimately assist our understanding of the a priori nature of logic.
I started out as a student of physics, hard-working, interested, but alas, not ‘in love’ with my subject. Then logic struck, and having become interested in this subject for various reasons – including the fascinating personality of my first teacher – I switched after my candidate’s program to take two master’s degrees, in mathematics and in philosophy. The beauty of mathematics was clear to me at once, with the amazing power, surprising twists, and indeed the music, of abstract arguments. As our professor of Analysis wrote at the time in our study guide, “Mathematics is about the delight in the purity of trains of thought”, and old-fashioned though this phrasing sounded in the revolutionary 1960s, it did resonate with me. Then I had the privilege of being taught set-theoretic topology by a group of brilliant students around De Groot, our leading expert at the time, who worked with Moore’s method of discovering a subject for oneself. Topology unfolded from a few definitions and examples to real theorems that we had to prove ourselves – and the take-home exam cost sleepless nights, as it included proving from scratch some results which (as it turned out later) came from a recent dissertation. Only at the very end did De Groot appear, to give one lecture on Tychonoff’s Theorem in which an application was made of the Axiom of Choice, a sacral act only to be performed by tenured full professors.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
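Jeffrey's generalization of conditionalization can be sketched as follows (a standard formulation; the example worlds and function names are ours). Rigidity is the feature that probabilities conditional on each partition cell are left untouched by the update:

```python
def jeffrey_update(prior, partition, new_cell_probs):
    """Jeffrey conditionalization: rescale the prior within each partition
    cell so that the cells receive the stipulated new probabilities, while
    the probabilities conditional on each cell are preserved (Rigidity)."""
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] * q / p_cell
    return posterior

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
partition = [("w1", "w2"), ("w3",)]            # E vs. not-E
posterior = jeffrey_update(prior, partition, [0.8, 0.2])
# w1 -> 0.32, w2 -> 0.48, w3 -> 0.20
```

Note that ordinary conditionalization on E is the special case where E receives new probability 1; the abstract's claim is that the general rule does not follow from the Accuracy norm unless Rigidity is assumed at the outset.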
It is sometimes held that rules of inference determine the meaning of the logical constants: the meaning of, say, conjunction is fully determined by either its introduction or its elimination rules, or both; similarly for the other connectives. In a recent paper, Panu Raatikainen (2008) argues that this view - call it logical inferentialism - is undermined by some "very little known" considerations by Carnap (1943) to the effect that "in a definite sense, it is not true that the standard rules of inference" themselves suffice to "determine the meanings of [the] logical constants" (p. 2). In a nutshell, Carnap showed that the rules allow for non-normal interpretations of negation and disjunction. Raatikainen concludes that "no ordinary formalization of logic ... is sufficient to `fully formalize' all the essential properties of the logical constants" (ibid.). We suggest that this is a mistake. Pace Raatikainen, intuitionists like Dummett and Prawitz need not worry about Carnap's problem. And although bilateral solutions for classical inferentialists - as proposed by Timothy Smiley and Ian Rumfitt - seem inadequate, it is not excluded that classical inferentialists may be in a position to address the problem too.
Some authors have claimed that ante rem structuralism has problems with structures that have indiscernible places. In response, I argue that there is no requirement that mathematical objects be individuated in a non-trivial way. Metaphysical principles and intuitions to the contrary do not stand up to ordinary mathematical practice, which presupposes an identity relation that, in a sense, cannot be defined. In complex analysis, the two square roots of –1 are indiscernible: anything true of one of them is true of the other. I suggest that i functions like a parameter in natural deduction systems. I gave an early version of this paper at a workshop on structuralism in mathematics and science, held in the Autumn of 2006, at Bristol University. Thanks to the organizers, particularly Hannes Leitgeb, James Ladyman, and Øystein Linnebo, to my commentator Richard Pettigrew, and to the audience there. The paper also benefited considerably from a preliminary session at the Arché Research Centre at the University of St Andrews. I am indebted to my colleagues Craige Roberts, for help with the linguistics literature, and Ben Caplan and Gabriel Uzquiano, for help with the metaphysics. Thanks also to Hannes Leitgeb and Jeffrey Ketland for reading an earlier version of the manuscript and making helpful suggestions. I also benefited from conversations with Richard Heck, John Mayberry, Kevin Scharp, and Jason Stanley.
The so-called Paradox of Serious Possibility is usually regarded as showing that the standard axioms of belief revision do not apply to belief sets that are introspectively closed. In this article we argue to the contrary: we suggest a way of dissolving the Paradox of Serious Possibility on which introspective statements are taken to express propositions in the standard sense, which may thus be proper members of belief sets, so that the normal axioms of belief revision apply to them. Instead, the paradox is avoided by making explicit, for any occurrence of an introspective modality in the object language, the belief state to which this occurrence refers; this makes it impossible for any doxastic modality to refer to two distinct belief sets within one and the same context of doxastic appraisal. By this move the standard derivation of a contradiction from the theory of belief revision in the presence of introspectively closed belief sets no longer goes through, and indeed the premisses of the Paradox of Serious Possibility become jointly consistent once they are reformulated with our amended introspective modalities only. Additionally, we present a probabilistic version of the Paradox of Serious Possibility which can be avoided in a perfectly analogous manner.