In ‘A Non-Pragmatic Vindication of Probabilism’, Jim Joyce attempts to ‘depragmatize’ de Finetti’s prevision argument for the claim that our partial beliefs ought to satisfy the axioms of the probability calculus. In this paper, I adapt Joyce’s argument to give a non-pragmatic vindication of various versions of David Lewis’ Principal Principle, such as the version based on Isaac Levi's account of admissibility, Michael Thau and Ned Hall's New Principle, and Jenann Ismael's Generalized Principal Principle. Joyce enumerates properties that must be had by any measure of the distance from a set of partial beliefs to the set of truth values; he shows that, on any such measure, and for any set of partial beliefs that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. I replace truth values by objective chances in his argument; I show that for any set of partial beliefs that violates the probability axioms or a version of the Principal Principle, there is a set that satisfies them that is closer to every possible set of objective chances.
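A schematic gloss on the adapted dominance result may help; the divergence \(d\) and the set \(\mathcal{C}\) of possible chance functions are notation I have chosen for illustration, not drawn from the paper:
\[
\text{if } c \text{ violates the probability axioms or the Principal Principle, then there is a } c^* \text{ satisfying both with } d(ch, c^*) < d(ch, c) \text{ for every } ch \in \mathcal{C}.
\]
That is, some coherent credence function lies closer than \(c\) to every possible assignment of objective chances.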
In the philosophy of mathematics, indispensability arguments aim to show that we are justified in believing that abstract mathematical objects exist. I wish to defend a particular objection to such arguments that has become increasingly popular recently. It is called instrumental nominalism. I consider the recent versions of this view and conclude that it has yet to be given an adequate formulation. I provide such a formulation and show that it can be used to answer the indispensability arguments. There are two main indispensability arguments in the literature, though one has received nearly all of the attention. They correspond to two ways in which we use mathematics in science and in everyday life. We use mathematical language to help us describe non-mathematical reality; and we use mathematical reasoning to help us perform inferences concerning non-mathematical reality using only a feasible amount of cognitive power. The former use is the starting point of the Quine-Putnam indispensability argument ([Quine, 1980a], [Quine, 1980b], [Quine, 1981a], [Quine, 1981b], [Putnam, 1979a], [Putnam, 1979b]); the latter provides the basis for Ketland’s more recent argument ([Ketland, 2005]). I begin by considering the Quine-Putnam argument and introduce instrumental nominalism to defuse it. Then I show that Ketland’s argument can be defused in a similar way.
Richard Pettigrew offers an extended investigation into a particular way of justifying the rational principles that govern our credences. The main principles that he justifies are the central tenets of Bayesian epistemology, though many other related principles are discussed along the way. Pettigrew looks to decision theory in order to ground his argument. He treats an agent's credences as if they were a choice she makes between different options, gives an account of the purely epistemic utility enjoyed by different sets of credences, and then appeals to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility set out here is the veritist's: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, Pettigrew conducts his investigation within the version of epistemic utility theory known as accuracy-first epistemology.
Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.
The sharpest corner of the cutting edge of recent epistemology is to be found in Richard Pettigrew’s Accuracy and the Laws of Credence. In this fine book Pettigrew argues that a certain kind of accuracy-based value monism entails that rational credence manifests a host of features emphasized by anti-externalists in epistemology. Specifically, he demonstrates how a particular version of accuracy-based value monism—to be discussed at length below—when combined with some not implausible views about how epistemic value and rationality relate to one another, ensures that rational credence manifests many of the structural properties emphasized by those who give evidence pride of place in the theory of rationality. A major goal of Pettigrew’s book, then, is to make clear how accuracy-based value monism fits together with the phenomena used by those who argue against accuracy-based externalism.
Beliefs come in different strengths. An agent's credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.
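To see the strategy in action, consider one standard instance; the Brier score is a familiar measure of epistemic disutility (inaccuracy), though the survey covers a wider class. For credences \(c\) over propositions \(X_1, \ldots, X_n\),
\[
\mathfrak{I}(c, w) = \sum_{i=1}^{n} \big(w(X_i) - c(X_i)\big)^2,
\]
where \(w(X_i)\) is 1 if \(X_i\) is true at world \(w\) and 0 otherwise. Pair this with the decision-theoretic norm of Dominance, which deems \(c\) irrational if some \(c^*\) satisfies \(\mathfrak{I}(c^*, w) < \mathfrak{I}(c, w)\) at every world \(w\). It is a classical result that the credence functions escaping such dominance are precisely the probability functions, and so this pairing yields Probabilism.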
To appear in Szabó Gendler, T. & J. Hawthorne (eds.), Oxford Studies in Epistemology, volume 6. We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
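Stated as a formula (in notation of my choosing): where \(c_1, \ldots, c_n\) are the credence functions of the group's members, Linear Pooling requires that the group's credence function \(c_G\) satisfy
\[
c_G(X) = \sum_{i=1}^{n} \alpha_i \, c_i(X) \quad \text{for every proposition } X,
\]
for some weights \(\alpha_1, \ldots, \alpha_n \geq 0\) with \(\sum_i \alpha_i = 1\).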
Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James' claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states – that is, full belief, full disbelief, and suspension of judgment. Then I will show how the account plays out for graded doxastic states – that is, credences. The latter part of the paper thus answers a question left open in Pettigrew.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
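For orientation, the most familiar formulation of the norm says that an agent who learns \(E\) (and nothing stronger) should move from her prior \(c\) to the posterior
\[
c'(A) = c(A \mid E) = \frac{c(A \wedge E)}{c(E)}, \quad \text{provided } c(E) > 0.
\]
Deterministic Updating, as I gloss it here, is roughly the assumption that an agent's updating plan assigns a single posterior to each possible piece of evidence, rather than permitting a choice among several; this gloss is mine, and the paper's precise statement may differ.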
(This is for the series Elements of Decision Theory published by Cambridge University Press and edited by Martin Peterson.) Our beliefs come in degrees. I believe some things more strongly than I believe others. I believe very strongly that global temperatures will continue to rise during the coming century; I believe slightly less strongly that the European Union will still exist in 2029; and I believe much less strongly that Cardiff is east of Edinburgh. My credence in something is a measure of the strength of my belief in it; it represents my level of confidence in it. These are the states of mind we report when we say things like ‘I’m 20% confident I switched off the gas before I left’ or ‘I’m 99.9% confident that it is raining outside’. There are laws that govern these credences. For instance, I shouldn't be more confident that sea levels will rise by over 2 metres in the next 100 years than I am that they'll rise by over 1 metre, since the latter is true if the former is. This book is about a particular way we might try to establish these laws of credence: the Dutch Book arguments. (For briefer overviews of these arguments, see Alan Hájek’s entry in the Oxford Handbook of Rational and Social Choice and Susan Vineberg’s entry in the Stanford Encyclopedia of Philosophy.) We begin, in Chapter 2, with the standard formulation of the various Dutch Book arguments that we'll consider: arguments for Probabilism, Countable Additivity, Regularity, and the Principal Principle. In Chapter 3, we subject this standard formulation to rigorous stress-testing, and make some small adjustments so that it can withstand various objections. What we are left with is still recognisably the orthodox Dutch Book argument. In Chapter 4, we set out the Dutch Strategy argument for Conditionalization. In Chapters 5 and 6, we consider two objections to Dutch Book arguments that cannot be addressed by making small adjustments. Instead, we must completely redesign those arguments, replacing them with ones that share a general approach but few specific details. In Chapter 7, we consider a further objection to which I do not have a response. In Chapter 8, we ask what happens to the Dutch Book arguments if we change certain features of the basic framework in which we've been working: first, we ask how Dutch Book arguments fare when we consider credences in self-locating propositions, such as It is Monday; second, we lift the assumption that the background logic is classical and explore Dutch Book arguments for non-classical logics; third, we lift the assumption that an agent's credal state can be represented by a single assignment of numerical values to the propositions she considers. In Chapter 9, we present the mathematical results that underpin these arguments.
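To illustrate the style of argument the book examines, here is a worked Dutch Book for a violation of the law just stated; the numbers are mine. Let \(A\) be 'sea levels rise by over 2 metres in the next 100 years' and \(B\) be 'sea levels rise by over 1 metre in that time', so that \(A\) entails \(B\), and suppose an agent has \(c(A) = 0.5 > 0.3 = c(B)\). Taking £\(c(X)\) to be her fair price for a bet that pays £1 if \(X\) and nothing otherwise, a bookie sells her the bet on \(A\) for £0.50 and buys the bet on \(B\) from her for £0.30, leaving her £0.20 down before the bets settle. If \(A\) (and hence \(B\)) is true, the bets' payoffs cancel and she loses £0.20; if \(B\) is true but \(A\) is false, she pays out £1 and loses £1.20; if both are false, she again loses £0.20. She loses money however the world turns out.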
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk, and on the alternative to the orthodoxy that Lara Buchak proposes: risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.
In “A Nonpragmatic Vindication of Probabilism”, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don't, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one's credences to one's evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce's argument; the monist position avoids them. I endorse the latter.
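On the second of these proposals, for instance, overall cognitive value might take a weighted form along the following lines; the schema and symbols are mine, offered only to fix ideas:
\[
\mathfrak{V}(c, w) = \lambda\, \mathfrak{A}(c, w) + (1 - \lambda)\, \mathfrak{E}(c),
\]
where \(\mathfrak{A}\) measures the accuracy of \(c\) at world \(w\), \(\mathfrak{E}\) measures how well \(c\) matches the evidence, and \(0 < \lambda < 1\) sets the exchange rate between the two goals. On such a view, a non-probabilistic \(c\) that is accuracy-dominated by some \(c^*\) may nonetheless exceed \(c^*\) in overall value by fitting the evidence better, which is the sort of trouble the pluralist positions make for Joyce's argument.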
Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she's also got credences in 999 million other propositions. Phoebe's credences are all very accurate. Each of Daphne's credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe's or Daphne's? It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.
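The analogy can be made vivid with toy numbers of my own choosing. Score each credence's accuracy on a scale from 0 to 1, and suppose each of Phoebe's \(10^6\) credences scores 0.95 while each of Daphne's \(10^9\) credences scores 0.51. Then total accuracy favours Daphne, since \(0.51 \times 10^9 = 5.1 \times 10^8\) far exceeds \(0.95 \times 10^6 = 9.5 \times 10^5\), while average accuracy favours Phoebe (0.95 against 0.51). This is exactly the way total and average utilitarianism come apart over the Repugnant Conclusion in population ethics.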
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Ian Hacking (1967), I develop a version of Bayesianism that permits logical ignorance. This includes an account of the synchronic norms that govern a logically ignorant individual at any given time, as well as an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
With his Humean thesis on belief, Leitgeb seeks to say how beliefs and credences ought to interact with one another. To argue for this thesis, he enumerates the roles beliefs must play and the properties they must have if they are to play them, together with norms that beliefs and credences intuitively must satisfy. He then argues that beliefs can play these roles and satisfy these norms if, and only if, they are related to credences in the way set out in the Humean thesis. I begin by raising questions about the roles that Leitgeb takes beliefs to play and the properties he thinks they must have if they are to play them successfully. After that, I question the assumption that, if there are categorical doxastic states at all, then there is just one kind of them—to wit, beliefs—such that the states of that kind must play all of these roles and conform to all of these norms. Instead, I will suggest, if there are categorical doxastic states, there may be many different kinds of such states, where the states of each kind play some of the roles Leitgeb takes belief to play and satisfy some of the norms he lists. As I will argue, the usual reasons for positing categorical doxastic states alongside credences all tell equally in favour of accepting a plurality of kinds of them. This is the thesis I dub pluralism about belief states.
We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgement aggregation principle, linear pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this chapter, I give an argument for linear pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
In a recent paper in this journal, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson (HLWW) argue that the principal principle entails the principle of indifference. In this article, I argue that it does not. Lewis’s version of the principal principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW base their argument on certain intuitions concerning when one proposition is admissible for another: Conditions 1 and 2. There are two ways of reading their argument, depending on how you understand the status of these conditions. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW show that, given an account of admissibility on which Conditions 1 and 2 hold, the principal principle entails the principle of indifference. I argue that, on either reading of the argument, it fails. First, I argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats Reading 1. Next, I argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats Reading 2 of the HLWW argument. Thus, the argument fails.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013; Wronski & Godziszewski 2017; Wronski 2018), which I call the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
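The disagreement is easy to state schematically; the notation here is mine. Ramsey's Thesis says the agent's fair price for a bet that pays £\(S\) if \(X\) and nothing otherwise is £\(S \cdot c(X)\). Maximise Subjective Expected Utility says she should pay £\(x\) for that bet just in case doing so has at least as great an expected utility as declining, that is, just in case
\[
c(X)\, u(S - x) + (1 - c(X))\, u(-x) \geq u(0),
\]
where \(u\) is her utility function. If \(u\) is linear in money, this reduces to \(x \leq S \cdot c(X)\) and the two theses agree; for risk-averse or risk-seeking \(u\), they can come apart, and that gap is what drives the Expected Utility Objection.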
In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well.
In a recent paper in this journal, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson (HLWW) argue that the principal principle entails the principle of indifference. In this paper, I argue that it does not. Lewis’s version of the principal principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW base their argument on certain intuitions concerning when one proposition is admissible for another: Conditions 1 and 2. There are two ways of reading their argument, depending on how you understand the status of these conditions. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW show that, given an account of admissibility on which Conditions 1 and 2 hold, the principal principle entails the principle of indifference. I argue that, on either reading of the argument, it fails. First, I argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats Reading 1. Next, I argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats Reading 2 of the HLWW argument. Thus, the argument fails.
There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential probabilities. But exactly what constraint does a deference requirement place on an agent's credences? In this paper we consider three answers, inspired by three principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Of the three deference principles we consider, we argue that two of the options face insuperable difficulties. The third, on the other hand, fares well, at least when it is applied in a particular way.
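Two of the styles of constraint at issue can be stated schematically; I write \(P_E\) for the expert's probability function, and these formulations are illustrative glosses rather than quotations from the paper. An immodest, Principal-Principle-style constraint says
\[
c(A \mid P_E = p) = p(A),
\]
while a New-Principle-style variant, designed to tolerate experts who doubt their own trustworthiness, says
\[
c(A \mid P_E = p) = p(A \mid P_E = p).
\]
The choice between such formulations matters most when the expert can have opinions about herself, as objective chance arguably does in self-undermining cases.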
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
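For reference, Jeffrey's rule covers cases where experience redistributes the agent's credences over a partition \(\{E_1, \ldots, E_n\}\) without making any cell certain: the updated credence function is
\[
c'(A) = \sum_{i=1}^{n} c(A \mid E_i)\, c'(E_i).
\]
Rigidity is the requirement that the update preserve the conditional credences, i.e. \(c'(A \mid E_i) = c(A \mid E_i)\) for each \(i\); given the new credences in the cells of the partition, Jeffrey's rule is equivalent to imposing Rigidity.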
In formal epistemology, we use mathematical methods to explore the questions of epistemology and rational choice. What can we know? What should we believe and how strongly? How should we act based on our beliefs and values? We begin by modelling phenomena like knowledge, belief, and desire using mathematical machinery, just as a biologist might model the fluctuations of a pair of competing populations, or a physicist might model the turbulence of a fluid passing through a small aperture. Then, we explore, discover, and justify the laws governing those phenomena, using the precision that mathematical machinery affords. For example, we might represent a person by the strengths of their beliefs, and we might measure these using real numbers, which we call credences. Having done this, we might ask what the norms are that govern that person when we represent them in that way. How should those credences hang together? How should the credences change in response to evidence? And how should those credences guide the person’s actions? This is the approach of the first six chapters of this handbook. In the second half, we consider different representations—the set of propositions a person believes; their ranking of propositions by their plausibility. And in each case we ask again what the norms are that govern a person so represented. Or, we might represent them as having both credences and full beliefs, and then ask how those two representations should interact with one another. This handbook is incomplete, as such ventures often are. Formal epistemology is a much wider topic than we present here. One omission, for instance, is social epistemology, where we consider not only individual believers but also the epistemic aspects of their place in a social world. Michael Caie’s entry on doxastic logic touches on one part of this topic, but there is much more. Relatedly, there is no entry on epistemic logic, nor any on knowledge more generally. There are still more gaps. These omissions should not be taken as ideological choices. This material is missing, not because it is any less valuable or interesting, but because we failed to secure it in time. Rather than delay publication further, we chose to go ahead with what is already a substantial collection. We anticipate a further volume in the future that will cover more ground. Why an open access handbook on this topic? A number of reasons. The topics covered here are large and complex and need the space allowed by the sort of 50-page treatment that many of the authors give. We also wanted to show that, using free and open software, one can overcome a major hurdle facing open access publishing, even on topics with complex typesetting needs. With the right software, one can produce attractive, clear publications at reasonably low cost. Indeed, this handbook was created on a budget of exactly £0 (≈ $0). Our thanks to PhilPapers for serving as publisher, and to the authors: we are enormously grateful for the effort they put into their entries.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
In this paper, I describe and motivate a new species of mathematical structuralism, which I call Instrumental Nominalism about Set-Theoretic Structuralism. As the name suggests, this approach takes standard Set-Theoretic Structuralism of the sort championed by Bourbaki and removes its ontological commitments by taking an instrumental nominalist approach to that ontology of the sort described by Joseph Melia and Gideon Rosen. I argue that this avoids all of the problems that plague other versions of structuralism.
Beliefs come in different strengths. What are the norms that govern these strengths of belief? Let an agent's belief function at a particular time be the function that assigns, to each of the propositions about which she has an opinion, the strength of her belief in that proposition at that time. Traditionally, philosophers have claimed that an agent's belief function at any time ought to be a probability function (Probabilism), and that she ought to update her belief function upon obtaining new evidence by conditionalizing on that evidence (Conditionalization). Until recently, the central arguments for these claims have been pragmatic. But these putative justifications fail to identify what is epistemically irrational about violating Probabilism or Conditionalization. A new approach, which I will call epistemic utility theory, attempts to remedy this. It treats beliefs as epistemic acts; and it appeals to the notion of an epistemic utility function, which measures how epistemically valuable a particular belief function is for a particular way the world might be. It then formulates fundamental epistemic norms that are analogous to the fundamental practical norms that underlie decision theory. I survey the results obtained so far in this young research project, and present a sustained critique of certain assumptions that have been made by a number of philosophers working in this area.
I offer a new interpretation of Aristotle's philosophy of geometry, which he presents in greatest detail in Metaphysics M 3. On my interpretation, Aristotle holds that the points, lines, planes, and solids of geometry belong to the sensible realm, but not in a straightforward way. Rather, by considering Aristotle's second attempt to solve Zeno's Runner Paradox in Book VIII of the Physics, I explain how such objects exist in the sensibles in a special way. I conclude by considering the passages that lead Jonathan Lear to his fictionalist reading of Met. M 3, and I argue that Aristotle is here describing useful heuristics for the teaching of geometry; he is not pronouncing on the meaning of mathematical talk.
If numbers were identified with any of their standard set-theoretic realizations, then they would have various non-arithmetical properties that mathematicians are reluctant to ascribe to them. Dedekind and later structuralists conclude that we should refrain from ascribing to numbers such ‘foreign’ properties. We first rehearse why it is hard to provide an acceptable formulation of this conclusion. Then we investigate some forms of abstraction meant to purge mathematical objects of all ‘foreign’ properties. One form is inspired by Frege; the other by Dedekind. We argue that both face problems.
Questions about the relation between identity and discernibility are important both in philosophy and in model theory. We show how a philosophical question about identity and discernibility can be ‘factorized’ into a philosophical question about the adequacy of a formal language to the description of the world, and a mathematical question about discernibility in this language. We provide formal definitions of various notions of discernibility and offer a complete classification of their logical relations. Some new and surprising facts are proved; for instance, that weak discernibility corresponds to discernibility in a language with constants for every object, and that weak discernibility is the most discerning nontrivial discernibility relation.
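As one example of the notions classified, here is the standard definition of weak discernibility, stated in notation of my choosing: objects \(a\) and \(b\) are weakly discernible in a language \(L\) just in case there is an \(L\)-formula \(\varphi(x, y)\) such that
\[
\varphi(a, b) \wedge \neg \varphi(a, a),
\]
that is, just in case some irreflexive relation expressible in \(L\) holds between them. A familiar example from this literature: two fermions in the singlet state are weakly discerned by the relation 'has spin opposite to', even though no one-place formula distinguishes them.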
Anyone familiar with Richard Kraut's work in ancient philosophy will be excited to see him putting aside the dusty tomes of the ancients and delving into ethics first-hand. He does not disappoint. His book is a lucid and wide-ranging discussion that provides at least the core of an ethical theory and an appealing set of answers to a range of ethical questions. Kraut aims to provide an alternative to utilitarianism that preserves the good-centred nature of that theory. He claims that all justification ‘proceeds by way of good and bad’ and that the only way for something to be good or bad is for it to be good or bad for some living thing. He is adamant that this does not commit him to utilitarianism, nor to downplaying considerations such as promise-keeping or special relationships. On Kraut's view, such factors can make it the case that I have more reason to perform one action than another, but it is a condition of my having any reason to perform an action that it does some good or impedes some harm. At one point, Kraut seems to dissent from this, claiming that ‘the strength of a practical reason varies according to ….’
Following strict rules of interpretation, this book focuses on the ideas in Plato's early and middle dialogues that lie within the fields now called logic and methodology, specifically elenchus, dialectic, and the method of hypothesis.
Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
Beliefs come in different strengths. What are the norms that govern these strengths of belief? Let an agent's belief function at a particular time be the function that assigns, to each of the propositions about which she has an opinion, the strength of her belief in that proposition at that time. Traditionally, philosophers have claimed that an agent's belief function at any time ought to be a probability function, and that she ought to update her belief function upon obtaining new evidence by conditionalizing on that evidence. Until recently, the central arguments for these claims have been pragmatic. But these putative justifications fail to identify what is epistemically irrational about violating Probabilism or Conditionalization. A new approach, which I will call epistemic utility theory, attempts to remedy this. It treats beliefs as epistemic acts; and it appeals to the notion of an epistemic utility function, which measures how epistemically valuable a particular belief function is for a particular way the world might be. It then formulates fundamental epistemic norms that are analogous to the fundamental practical norms that underlie decision theory. I survey the results obtained so far in this young research project, and present a sustained critique of certain assumptions that have been made by a number of philosophers working in this area.