Greaves and Wallace argue that conditionalization maximizes expected accuracy. In this paper I show that their result only applies to a restricted range of cases. I then show that the update procedure that maximizes expected accuracy in general is one in which, upon learning P, we conditionalize, not on P, but on the proposition that we learned P. After proving this result, I provide further generalizations and show that much of the accuracy-first epistemology program is committed to KK-like iteration principles and to the existence of a class of propositions that rational agents will be certain of if and only if they are true.
Recently, several epistemologists have defended an attractive principle of epistemic rationality, which we shall call Ur-Prior Conditionalization. In this essay, I ask whether we can justify this principle by appealing to the epistemic goal of accuracy. I argue that any such accuracy-based argument will be in tension with Evidence Externalism, i.e., the view that an agent's evidence may entail non-trivial propositions about the external world. This is because any such argument will crucially require the assumption that, independently of all empirical evidence, it is rational for an agent to be certain that her evidence will always include truths, and that she will always have perfect introspective access to her own evidence. This assumption is in tension with Evidence Externalism. I go on to suggest that even if we don't accept Evidence Externalism, the prospects for any accuracy-based justification for Ur-Prior Conditionalization are bleak.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, is especially promising.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
I show that David Lewis’s principal principle is not preserved under Jeffrey conditionalization. Using this observation, I argue that Lewis’s reason for rejecting the desire as belief thesis and Adams’s thesis applies also to his own principal principle. 1 Introduction 2 Adams’s Thesis, the Desire as Belief Thesis, and the Principal Principle 3 Jeffrey Conditionalization 4 The Principal Principle Is Not Preserved under Jeffrey Conditionalization 5 Inadmissible Experiences.
This paper discusses simultaneous belief updates. I argue here that modeling such belief updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and so, contrary to what many probabilists have thought, simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
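As a concrete illustration of the rule discussed in several of the abstracts above, here is a minimal sketch of Jeffrey conditionalization over a toy four-world space; the worlds, partition, and all probability values are invented for illustration only:

```python
# Jeffrey conditionalization on a partition {E_1, ..., E_n}:
# P_new(w) = q_i * P_old(w | E_i) for each world w in cell E_i,
# where q_i is the new probability assigned to cell E_i.

def jeffrey_update(prior, partition, new_cell_probs):
    """Redistribute probability across partition cells while keeping
    the prior's ratios within each cell (the 'rigidity' condition)."""
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        cell_mass = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = q * prior[w] / cell_mass
    return posterior

# Hypothetical toy example: experience shifts credence in E from 0.6 to 0.8.
prior = {"w1": 0.4, "w2": 0.2, "w3": 0.3, "w4": 0.1}
partition = [("w1", "w2"), ("w3", "w4")]   # {E, not-E}
posterior = jeffrey_update(prior, partition, [0.8, 0.2])
# Within each cell the posterior preserves the prior's ratios,
# e.g. posterior["w1"]/posterior["w2"] == prior["w1"]/prior["w2"] == 2.
```

The rigidity within cells is what makes successive Jeffrey updates composable, which is the formal point at issue when modeling simultaneous updates by successive ones.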
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
This discussion note examines a recent argument for the principle that any counterfactual with true components is itself true. That argument rests upon two widely accepted principles of counterfactual logic to which the paper presents counterexamples. The conclusion speculates briefly upon the wider lessons that philosophers should draw from these examples for the semantics of counterfactuals.
Boghossian’s (2003) proposal to conditionalize concepts as a way to secure their legitimacy in disputable cases applies well not just to pejoratives – for which Boghossian first proposed it – but also to thick ethical concepts. It actually has important advantages when dealing with some worries raised by the application of thick ethical terms, and the truth and facticity of corresponding statements. In this paper, I will try to show, however, that thick ethical concepts present a specific case, whose analysis requires a somewhat different reconstruction from that which Boghossian offers. A proper account of thick ethical concepts should be able to explain how ‘evaluated’ and ‘evaluation’ are connected.
I explain what exactly constrains presupposition projection in compound sentences and argue that the presuppositions that do not project are conditionalized, giving rise to inferable conditional presuppositions. I combine elements of two existing accounts which, together with an additional, independently motivated assumption, make it possible to construct an analysis that makes correct predictions. The core of my proposal is as follows: When a speaker felicitously utters a compound sentence whose constituent clauses require presuppositions, the hearer will infer that the speaker presupposes those propositions, unless the sentence contains some element that makes the hearer realize that, if the speaker actually presupposed them, she would be either uninformative or inconsistent in her beliefs. In these cases, the propositions that would have been presupposed, had the clauses been uttered in isolation, will not be presupposed, i.e. the clausal presuppositions will not project.
According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent’s epistemic utility is to depend both upon the actual state of the world and on the agent’s credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
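The result described above requires the authors' proof, but the flavor of an expected-accuracy argument can be conveyed by a toy numerical check. Everything below is invented for illustration: a three-world space, the Brier score as the inaccuracy measure, and one arbitrary rival update rule; the check shows only that, in this instance, the conditionalized credence has lower expected inaccuracy than the rival:

```python
# Strict conditionalization on evidence E (a set of worlds):
# zero out worlds outside E and renormalize.
def conditionalize(prior, evidence):
    pe = sum(prior[w] for w in evidence)
    return {w: (prior[w] / pe if w in evidence else 0.0) for w in prior}

# Brier inaccuracy: squared distance from the omniscient credence
# (1 at the actual world, 0 elsewhere).
def brier_inaccuracy(credence, actual_world):
    return sum((credence[w] - (1.0 if w == actual_world else 0.0)) ** 2
               for w in credence)

# Expected inaccuracy by the agent's own prior lights, given that E is true.
def expected_inaccuracy(credence, prior, evidence):
    pe = sum(prior[w] for w in evidence)
    return sum((prior[w] / pe) * brier_inaccuracy(credence, w)
               for w in evidence)

prior = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
evidence = {"w1", "w2"}                      # we learn: the world is w1 or w2
posterior = conditionalize(prior, evidence)  # {"w1": 0.625, "w2": 0.375, ...}
rival = {"w1": 0.5, "w2": 0.5, "w3": 0.0}    # an arbitrary alternative update
assert expected_inaccuracy(posterior, prior, evidence) <= \
       expected_inaccuracy(rival, prior, evidence)
```

A single numerical case is of course no substitute for the general theorem, which quantifies over all update rules and all (suitably constrained) epistemic utility functions.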
Expected accuracy arguments have been used by several authors (Leitgeb and Pettigrew, and Greaves and Wallace) to support the diachronic principle of conditionalization, in updates where there are only finitely many possible propositions to learn. I show that these arguments can be extended to infinite cases, giving an argument not just for conditionalization but also for principles known as ‘conglomerability’ and ‘reflection’. This shows that the expected accuracy approach is stronger than has been realized. I also argue that we should be careful to distinguish diachronic update principles from related synchronic principles for conditional probability.
Are counterfactuals with true antecedents and consequents automatically true? That is, is Conjunction Conditionalization: if (X & Y), then (X > Y) valid? Stalnaker and Lewis think so, but many others disagree. We note here that the extant arguments for Conjunction Conditionalization are unpersuasive, before presenting a family of more compelling arguments. These arguments rely on some standard theorems of the logic of counterfactuals as well as a plausible and popular semantic claim about certain semifactuals. Denying Conjunction Conditionalization, then, requires rejecting other aspects of the standard logic of counterfactuals, or else our intuitive picture of semifactuals.
Van Fraassen famously endorses the Principle of Reflection as a constraint on rational credence, and argues that Reflection is entailed by the more traditional principle of Conditionalization. He draws two morals from this alleged entailment. First, that Reflection can be regarded as an alternative to Conditionalization – a more lenient standard of rationality. And second, that commitment to Conditionalization can be turned into support for Reflection. Van Fraassen also argues that Reflection implies Conditionalization, thus offering a new justification for Conditionalization. I argue that neither principle entails the other, and thus neither can be used to motivate the other in the way van Fraassen says. There are ways to connect Conditionalization to Reflection, but these connections depend on poor assumptions about our introspective access, and are not tight enough to draw the sorts of conclusions van Fraassen wants. Upon close examination, the two principles seem to be getting at two quite independent epistemic norms.
One can have no prior credence whatsoever (not even zero) in a temporally indexical claim. This fact saves the principle of conditionalization from potential counterexample and undermines the Elga and Arntzenius/Dorr arguments for the thirder position and Lewis' argument for the halfer position on the Sleeping Beauty Problem, thereby supporting the double-halfer position.
In this paper, I argue for a view largely favorable to the Thirder view: when Sleeping Beauty wakes up on Monday, her credence in the coin’s landing heads is less than 1/2. Let’s call this “the Lesser view.” For my argument, I (i) criticize Strict Conditionalization as the rule for changing de se credences; (ii) develop a new rule; and (iii) defend it by Gaifman’s Expert Principle. Finally, I defend the Lesser view by making use of this new rule.
Bayesian Conditionalization is a widely used proposal for how to update one’s beliefs upon the receipt of new evidence. This is in part because of its attention to the totality of one’s evidence, which often includes facts about what one’s new evidence is and how one has come to have it. However, an increasingly popular position in epistemology holds that one may gain new evidence, construed as knowledge, without being in a position to know that one has gained this evidence. These are cases of KK-Failure, cases where one knows p but is not in a position to know that one knows p. This paper assumes that certain KK-Failures are possible and argues that Conditionalization goes wrong in those cases.
Bayesian decision theory can be viewed as the core of psychological theory for idealized agents. To get a complete psychological theory for such agents, you have to supplement it with input and output laws. On a Bayesian theory that employs strict conditionalization, the input laws are easy to give. On a Bayesian theory that employs Jeffrey conditionalization, there appears to be a considerable problem with giving the input laws. However, Jeffrey conditionalization can be reformulated so that the problem disappears, and in fact the reformulated version is more natural and easier to work with on independent grounds.
In “Generalized Conditionalization and the Sleeping Beauty Problem,” Anna Mahtani and I offer a new argument for thirdism that relies on what we call “generalized conditionalization.” Generalized conditionalization goes beyond conventional conditionalization in two respects: first, by sometimes deploying a space of synchronic, essentially temporal, candidate-possibilities that are not “prior” possibilities; and second, by allowing for the use of preliminary probabilities that arise by first bracketing, and then conditionalizing upon, “old evidence.” In “Beauty and Conditionalization: Reply to Horgan and Mahtani,” Joel Pust replies to the Horgan/Mahtani argument, raising several objections. In my view his objections do not undermine the argument, but they do reveal a need to provide several further elaborations of it—elaborations that I think are independently plausible. In this paper I will address his objections, by providing the elaborations that I think they prompt.
Colin Howson (1995) offers a counter-example to the rule of conditionalization. I will argue that the counter-example doesn't hit its target. The problem is that Howson mis-describes the total evidence the agent has. In particular, Howson overlooks how the restriction that the agent learn 'E and nothing else' interacts with the de se evidence 'I have learnt E'.
We present a new argument for the claim that in the Sleeping Beauty problem, the probability that the coin comes up heads is 1/3. Our argument depends on a principle for the updating of probabilities that we call ‘generalized conditionalization’, and on a species of generalized conditionalization we call ‘synchronic conditionalization on old information’. We set forth a rationale for the legitimacy of generalized conditionalization, and we explain why our new argument for thirdism is immune to two attacks that Pust (Synthese 160:97–101, 2008) has leveled at other arguments for thirdism.
I re-examine Coherence Arguments (Dutch Book Arguments, No Arbitrage Arguments) for diachronic constraints on Bayesian reasoning. I suggest replacing the usual game-theoretic coherence condition with a new decision-theoretic condition ('Diachronic Sure Thing Principle'). The new condition meets a large part of the standard objections against the Coherence Argument and frees it, in particular, from a commitment to additive utilities. It also facilitates the proof of the Converse Dutch Book Theorem. I first apply the improved Coherence Argument to van Fraassen's (1984) Reflection principle. I then point out the failure of a Coherence Argument that is intended to support Conditionalization as a naive, universal, update rule. I also point out that Reflection is incompatible with the universal use of Conditionalization thus interpreted. The Coherence Argument therefore defeats the naive view on Bayesian learning that it was originally designed to justify.
David Chalmers has recently argued that Bayesian conditionalization is a constraint on conceptual constancy, and that this constraint, together with “standard Bayesian considerations about evidence and updating,” is incompatible with the Quinean claim that every belief is rationally revisable. Chalmers’s argument presupposes that the sort of conceptual constancy that is relevant to Bayesian conditionalization is the same as the sort of conceptual constancy that is relevant to the claim that every belief is rationally revisable. To challenge this presupposition I explicate a sort of “conceptual role” constancy that a rational subject could take to be necessary and sufficient for a rule of Bayesian conditionalization to govern her belief updating, and show that a rational subject may simultaneously commit herself to updating her beliefs in accord with such a rule and accept the claim that every belief is rationally revisable.
This paper shows that any view of future contingent claims that treats such claims as having indeterminate truth values or as simply being false implies probabilistic irrationality. This is because such views of the future imply violations of reflection, special reflection and conditionalization.
This paper discusses how to update one’s credences based on evidence that has initial probability 0. I advance a diachronic norm, Kolmogorov Conditionalization, that governs credal reallocation in many such learning scenarios. The norm is based upon Kolmogorov’s theory of conditional probability. I prove a Dutch book theorem and converse Dutch book theorem for Kolmogorov Conditionalization. The two theorems establish Kolmogorov Conditionalization as the unique credal reallocation rule that avoids a sure loss in the relevant learning scenarios.
Conditionalization is an intuitive and popular epistemic principle. By contrast, the Reflection principle is well known to have some very unappealing consequences. But van Fraassen argues that Conditionalization entails Reflection, so that proponents of Conditionalization must accept Reflection and its consequences. Van Fraassen also argues that Reflection implies Conditionalization, thus offering a new justification for Conditionalization. I argue that neither principle entails the other, and thus neither can be used to motivate the other in the way van Fraassen says. I also propose a replacement for Reflection that accounts for the intuitions that made Reflection appealing, but doesn’t lead to Reflection’s bad consequences.
Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one's beliefs. Furthermore, it seems that conditionalization and probabilism follow from a wide range of measures of inaccuracy. However, we argue that among the measures in the literature, there are some from which one can prove conditionalization, others from which one can prove probabilism, and none from which one can prove both. Hence at present, the accuracy-based approach cannot underwrite both conditionalization and probabilism.
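One property that is standardly demanded of a measure of inaccuracy in this literature is (strict) propriety: by an agent's own lights, her current credence should uniquely minimize expected inaccuracy, so that the measure never rewards changing credence without new evidence. A minimal numerical check for the single-proposition Brier score, with an invented credence value and a simple grid search standing in for the calculus argument:

```python
# p-expected Brier inaccuracy of announcing credence r in one proposition:
# with probability p the proposition is true (penalty (1-r)^2),
# with probability 1-p it is false (penalty r^2).
def expected_brier(p, r):
    return p * (1 - r) ** 2 + (1 - p) * r ** 2

p = 0.7                                  # the agent's actual credence (invented)
grid = [i / 100 for i in range(101)]     # candidate announcements 0.00 .. 1.00
best = min(grid, key=lambda r: expected_brier(p, r))
# best == 0.7: reporting one's own credence minimizes expected inaccuracy
```

The grid search mirrors the calculus fact that d/dr [p(1-r)² + (1-p)r²] = 0 exactly at r = p; measures lacking this property cannot ground the kind of accuracy arguments the abstract above discusses.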
Horgan and Mahtani (Erkenntnis 78: 333–351, 2013) present a new argument for the 1/3 answer to the Sleeping Beauty problem resting on a principle for updating probabilities which they call “generalized conditionalization.” They allege that this new argument is immune to two attacks which have been recently leveled at other arguments for thirdism. I argue that their new argument rests on a probability distribution which is (a) no more justified than an alternative distribution favoring a different answer to the problem, and (b) ultimately unjustified. I go on to show that generalized conditionalization cannot be applied in the manner suggested, given the cogency of the aforementioned attacks on thirder arguments. Hence, the new argument fails to advance the case for the 1/3 answer.
An approach to testing theories describing a multiverse, one that has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, would assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme, and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.
Philosophers investigating the interpretation and use of conditional sentences have long been intrigued by the intuitive correspondence between the probability of a conditional ‘if A, then C’ and the conditional probability of C, given A. Attempts to account for this intuition within a general probabilistic theory of belief, meaning and use have been plagued by a danger of trivialization, which has proven to be remarkably recalcitrant and absorbed much of the creative effort in the area. But there is a strategy for avoiding triviality that has been known for almost as long as the triviality results themselves. What is lacking is a straightforward integration of this approach in a larger framework of belief representation and dynamics. This paper discusses some of the issues involved and proposes an account of belief update by conditionalization.
Because the addition of the conditional premise tends to increase modus ponens (MP) inferences, Oaksford & Chater argue that the additional knowledge is assimilated to world knowledge before the Ramsey test is carried out to evaluate P(q|p), so that the process of applying the Ramsey test could become indistinguishable from the process of applying the second-step conditionalization.
Probabilism in epistemology does not have to be of the Bayesian variety. The probabilist represents a person's opinion as a probability function; the Bayesian adds that rational change of opinion must take the form of conditionalizing on new evidence. I will argue that this is the correct procedure under certain special conditions. Those special conditions are important, and instantiated for example in scientific experimentation, but hardly universal. My argument will be related to the much maligned Reflection Principle (van Fraassen, 1984, 1995), and partly inspired by the work of Brian Skyrms (1987).