Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of *evidence*—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected-accuracy-based justification for planning to *conditionalize* on this evidence. This planning-oriented justification extends to some cases where you do not have transparent evidence, in the sense that your beliefs are not perfectly discriminating with respect to any non-trivial facts. In other cases, accuracy considerations do not tell you to plan to conditionalize on any information at all, but rather to plan to follow a different updating rule. Even in the absence of evidence, accuracy considerations can guide your doxastic plan.
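A minimal numerical sketch of the expected-accuracy justification gestured at here, under assumptions of my own (a four-world toy space, Brier inaccuracy, and a two-cell evidence partition; none of this is the paper's model): among plans that assign a posterior to each evidence cell, the conditionalization plan has the lowest expected inaccuracy.

```python
# Toy check: among doxastic plans that assign a posterior to each
# evidence cell, conditionalization minimizes expected Brier
# inaccuracy. Worlds, prior, and partition are all made up.
prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
partition = [{"w1", "w2"}, {"w3", "w4"}]  # the "evidence" cells

def brier(credence, actual_world):
    """Squared-error inaccuracy of a credence function at a world."""
    return sum((credence.get(w, 0.0) - (1.0 if w == actual_world else 0.0)) ** 2
               for w in prior)

def expected_inaccuracy(plan):
    """plan maps each cell (as a frozenset) to a posterior credence."""
    return sum(prior[w] * brier(plan[frozenset(cell)], w)
               for cell in partition for w in cell)

# Plan 1: conditionalize the prior on whichever cell turns out true.
conditionalization = {}
for cell in partition:
    p_cell = sum(prior[w] for w in cell)
    conditionalization[frozenset(cell)] = {w: prior[w] / p_cell for w in cell}

# Plan 2: an arbitrary rival that ignores the prior's proportions.
rival = {frozenset(partition[0]): {"w1": 0.5, "w2": 0.5},
         frozenset(partition[1]): {"w3": 0.9, "w4": 0.1}}

print(expected_inaccuracy(conditionalization))  # lower
print(expected_inaccuracy(rival))               # higher
```

Because the Brier score is strictly proper, the cell-by-cell conditional credences win this comparison against any rival plan, not just the one shown.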
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Hacking (1987) and White (2000) influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse (at least in a suitably idealized setting). This convergence is no accident: we present two theorems showing that in this setting, *any* updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
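For orientation, the familiar reason the St. Petersburg gamble resists ordinary valuation, sketched with the standard payoff schedule (not taken from the paper): the gamble pays 2^n with probability 2^-n, so every further term adds exactly 1 to the expected value, and the partial sums exceed any real number.

```python
# Partial expected values of the St. Petersburg gamble: payoff 2**n
# with probability 2**-n for n = 1, 2, 3, ...  Each term contributes
# exactly 1, so the partial sums grow without bound.
def partial_expected_value(num_terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, num_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_value(n))  # prints 10.0, 100.0, 1000.0
```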
Many people do not know or believe there is a God, and many experience a sense of divine absence. Are these (and other) “divine hiddenness” facts evidence against the existence of God? Using Bayesian tools, we investigate *evidential arguments from divine hiddenness*, and respond to two objections to such arguments. The first objection says that the problem of hiddenness is just a special case of the problem of evil, and so if one has responded to the problem of evil then hiddenness has no additional bite. The second objection says that, while hiddenness may be evidence against generic theism, it is not evidence against more specific conceptions of God, and thus hiddenness poses no epistemic challenge to a theist who holds one of these more specific conceptions. Our investigation leaves open just how strong the evidence from hiddenness really is, but we aim to clear away some important reasons for thinking hiddenness is of no evidential significance at all.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally, we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
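To see why conditionalization is a substantive constraint on aggregation rules, a toy check under assumptions of my own (three worlds, two agents, equal weights; not an example from the paper): arithmetic averaging fails to commute with conditionalization, while renormalized geometric averaging commutes.

```python
# Toy check: conditionalizing a pooled credence vs. pooling the
# conditionalized credences. Arithmetic averaging disagrees between
# the two orders; geometric averaging agrees.
p = {"w1": 0.5, "w2": 0.25, "w3": 0.25}
q = {"w1": 0.1, "w2": 0.6, "w3": 0.3}
E = {"w1", "w2"}  # the evidence learned

def normalize(f):
    total = sum(f.values())
    return {w: v / total for w, v in f.items()}

def condition(f, event):
    return normalize({w: f[w] for w in event})

def arithmetic(f, g):
    return {w: (f[w] + g[w]) / 2 for w in f}

def geometric(f, g):
    return normalize({w: (f[w] * g[w]) ** 0.5 for w in f})

for pool in (arithmetic, geometric):
    pooled_then_conditioned = condition(pool(p, q), E)
    conditioned_then_pooled = pool(condition(p, E), condition(q, E))
    print(pool.__name__, pooled_then_conditioned, conditioned_then_pooled)
```

The snippet only exhibits the two-agent contrast; the paper's results concern which rules satisfy constraints of this kind in general.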
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
Could space consist entirely of extended regions, without any regions shaped like points, lines, or surfaces? Peter Forrest and Frank Arntzenius have independently raised a paradox of size for space like this, drawing on a construction of Cantor’s. I present a new version of this argument and explore possible lines of response.
I examine what the mathematical theory of random structures can teach us about the probability of Plenitude, a thesis closely related to David Lewis's modal realism. Given some natural assumptions, Plenitude is reasonably probable a priori, but in principle it can be (and plausibly it has been) empirically disconfirmed—not by any general qualitative evidence, but rather by our de re evidence.
“There are no gaps in logical space,” David Lewis writes, giving voice to a sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a *pattern* idea: “Any pattern of instantiation is metaphysically possible.” Another is a *cut and paste* idea: “For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects.” We use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
I examine three ‘anti-object’ metaphysical views: nihilism, generalism, and anti-quantificationalism. After setting aside nihilism, I argue that generalists should be anti-quantificationalists. Along the way, I attempt to articulate what a ‘metaphysically perspicuous’ language might even be.
David Lewis holds that a single possible world can provide more than one way things could be. But what are possible worlds good for if they come apart from ways things could be? We can make sense of this if we go in for a metaphysical understanding of what the world is. The world does not include everything that is the case—only the genuine facts. Understood this way, Lewis's “cheap haecceitism” amounts to a kind of metaphysical anti-haecceitism: it says there aren't any genuine facts about individuals over and above their qualitative roles.
We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show that some of these logics are non-degenerate.
Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? *Fanaticism* says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better. I consider two related recent arguments for Fanaticism: Beckstead and Thomas's argument from *strange dependence on space and time*, and Wilkinson's *Indology* argument. While both arguments are instructive, neither is persuasive. In fact, the general principles that underwrite the arguments (a *separability* principle in the first case, and a *reflection* principle in the second) are *inconsistent* with Fanaticism. In both cases, though, it is possible to rehabilitate arguments for Fanaticism based on restricted versions of those principles. The situation is unstable: plausible general principles tell *against* Fanaticism, but restrictions of those same principles (with strengthened auxiliary assumptions) *support* Fanaticism. All of the consistent views that emerge are very strange.
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
Suppose that all non-qualitative facts are grounded in qualitative facts. I argue that this view naturally comes with a picture in which trans-world identity is indeterminate. But this in turn leads to either pervasive indeterminacy in the non-qualitative, or else contingency in what facts about modality and possible worlds are determinate.
“Pragmatic encroachers” about knowledge generally advocate two ideas: (1) you can rationally act on what you know; (2) knowledge is harder to achieve when more is at stake. Charity Anderson and John Hawthorne have recently argued that these two ideas may not fit together so well. I extend their argument by working out what “high stakes” would have to mean for the two ideas to line up, using decision theory.
Some hold that the lesson of Russell’s paradox and its relatives is that mathematical reality does not form a ‘definite totality’ but rather is ‘indefinitely extensible’. There can always be more sets than there ever are. I argue that certain contact puzzles are analogous to Russell’s paradox this way: they similarly motivate a vision of physical reality as iteratively generated. In this picture, the divisions of the continuum into smaller parts are ‘potential’ rather than ‘actual’. Besides the intrinsic interest of this metaphysical picture, it has important consequences for the debate over absolute generality. It is often thought that ‘indefinite extensibility’ arguments at best make trouble for mathematical platonists; but the contact arguments show that nominalists face the same kind of difficulty, if they recognize even the metaphysical possibility of the picture I sketch.
The counterpart theorist has a problem: there is no obvious way to understand talk about actuality in terms of counterparts. Fara and Williamson have charged that this obstacle cannot be overcome. Here I defend the counterpart theorist by offering systematic interpretations of a quantified modal language that includes an actuality operator. Centrally, I disentangle the counterpart relation from a related notion, a ‘representation relation’. The relation of possible things to the actual things they represent is variable, and an adequate account of modal language must keep track of the way it is systematically shifted by modal operators. I apply my account to resolve several puzzles about counterparts and actuality. In technical appendices, I prove some important logical results about this ‘representational’ counterpart system and its relationship to other modal systems.
The problem of evil is the most prominent argument against the existence of God. Skeptical theists contend that it is not a good argument. Their reasons for this contention vary widely, involving such notions as CORNEA, epistemic appearances, 'gratuitous' evils, 'levering' evidence, and the representativeness of goods. We aim to dispel some confusions about these notions, in particular by clarifying their roles within a probabilistic epistemology. In addition, we develop new responses to the problem of evil from both the phenomenal conception of evidence and the knowledge-first view of evidence.
This paper explores the idea that it is instrumentally valuable to learn normative truths. We consider an argument for "normative hedging" based on this principle, and examine the structure of decision-making under moral uncertainty that arises from it. But it also turns out that the value of normative information is inconsistent with the principle that learning *empirical* truths is instrumentally valuable. We conclude with a brief comment on "metanormative regress."
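For context, the classical value-of-information principle that the paper's starting idea parallels, in a toy decision problem of my own construction (states, acts, and utilities are made up; this is the standard Good-style result, not the paper's argument): an agent never expects to do worse by learning the true state before acting.

```python
# Value-of-information sketch: deciding after learning the true state
# is (weakly) better in expectation than deciding now.
prior = {"s1": 0.5, "s2": 0.5}
utility = {("act_a", "s1"): 10, ("act_a", "s2"): 0,
           ("act_b", "s1"): 4,  ("act_b", "s2"): 6}
acts = ["act_a", "act_b"]

def expected_utility(act, credence):
    return sum(credence[s] * utility[(act, s)] for s in credence)

# Decide now: pick the act with the highest prior expected utility.
eu_now = max(expected_utility(a, prior) for a in acts)

# Decide after learning: in each state, take the best act for that
# state, weighted by the prior probability of that state.
eu_after = sum(prior[s] * max(utility[(a, s)] for a in acts) for s in prior)

print(eu_now, eu_after)  # 5.0 <= 8.0
```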
The fine-tuning argument purports to show that particular aspects of fundamental physics provide evidence for the existence of God. We contend that the argument is legitimate, even though many doubts about its legitimacy persist; a number of common misgivings about the fine-tuning argument rest on misunderstandings. In this paper we go over several major misapprehensions and explain why they do not undermine the basic cogency of the fine-tuning argument.
How should the opinion of a group be related to the opinions of the group members? In this article, we will defend a package of four norms – coherence, locality, anonymity and unanimity. Existing results show that there is no tenable procedure for aggregating outright beliefs or for aggregating credences that meet these criteria. In response, we consider the prospects for aggregating credal pairs – pairs of prior probabilities and evidence. We show that there is a method of aggregating credal pairs that possesses all four virtues.
Jeffrey (1983) proposed a generalization of conditioning as a means of updating probability distributions when new evidence drives no event to certainty. His rule requires the stability of certain conditional probabilities through time. We tested this assumption (“invariance”) from the psychological point of view. In Experiment 1, participants offered probability estimates for events in Jeffrey’s candlelight example. Two further scenarios were investigated in Experiment 2, one in which invariance seems justified, the other in which it does not. Results were in rough conformity with Jeffrey’s (1983) principle.
Bulang He [1,2], Jeffrey M Hamdorf [2]. 1: Liver and Kidney Transplant Unit, Sir Charles Gairdner Hospital, Perth, WA, Australia; 2: School of Surgery, The University of Western Australia, Perth, WA, Australia. Aims: The aim of this paper was to review the current status of laparoscopic/robotic kidney transplant and evaluate its feasibility and safety in comparison with conventional standard "open" kidney transplant. Methods: An electronic search of PubMed, Embase, and the Cochrane Library database was performed to identify papers published between January 1980 and June 2013 that reported on laparoscopic/robotic kidney transplantation. The terms "laparoscopic kidney/renal transplant" and "robotic kidney/renal transplant" were used. Cross-referencing was also used to find further publications. Only English-language reports were selected and accepted for descriptive analysis. Results: A total of 17 papers and abstracts were retrieved. There were two case-control studies of small volume. High-level evidence comparing the safety and efficacy of laparoscopic/robotic kidney transplant with conventional open kidney transplant was not available at the time of this review. Conclusion: The limited published data suggest that laparoscopic/robotic kidney transplant may offer the advantages of less pain, better cosmesis, a possibly shorter hospital stay, and fewer wound complications, without compromising graft function. Accordingly, some immunosuppressive agents, such as sirolimus, might be able to be commenced earlier after laparoscopic/robotic kidney transplant. Techniques vary at this early stage; a uniform operative technique may be established in the near future. With refinement of laparoscopic devices, this technique may be widely employed. Further studies will be needed to demonstrate the advantages of laparoscopic/robotic kidney transplant over conventional open kidney transplant. Keywords: laparoscopic surgery, robotic surgery.
Given the personalist's latitudinarian conception of rationality, what is progress toward wisdom? One answer lies in C. I. Lewis's concept of the "congruence" of propositions: propositions so related that the antecedent probability of any one of them is increased if the remainder can be assumed. This effect can be modelled in the probability calculus with due attention to the temporal sequencing of our learning of contingent propositions without ever becoming certain of them, as Jeffrey proposes. A diachronic bootstrapping effect is obtained for Ockham's razor and for premises of arguments about a god's existence. As a theory's probability rises with increased evidence, the probability of our earlier evidence rises too.
In the age of the `return to the empirical' in which the theoretical disputes of an earlier era seem to have fallen silent, we seek to excavate the intellectual conditions for reviving theoretical debate, for it is upon this recovery that deeper understanding of the nature and purpose of empirical social science depends. We argue against the all too frequent turn to ontology, whereby critical realists have sought an epistemological guarantor of sociological validity. We seek, to the contrary, to crystallize a culturally-based, hermeneutic account of a rational social science. Derived from disputes within the sociology of culture, on the one hand, and the long-standing concerns of interpretive philosophy, on the other, we offer a cultural-sociological approach to epistemology. We view the production of truth in social science as a reading of a meaningful social world, and as a performance of truth-claims that is constrained by evidence, but whose success depends on other contextual factors. We conclude that the rationality of social science can be achieved only by forgoing ontology. Theories are abstractions of investigators' meanings that allow the interpretation of social meanings in turn, whether those are actions, relations, or structures. Successful explanations are those that intertwine these meaning structures of investigators and actors in an effective way.
According to the theory Russell defends in The Analysis of Mind, ‘true memories’ (roughly, memories that are not remembering-hows) are recollections of past events accompanied by a feeling of familiarity. While memory images play a vital role in this account, Russell does not pay much attention to the fact that imagery plays different roles in different sorts of memory. In most cases that Russell considers, memory is based on an image that serves as a datum (image-based memories), but there are other cases in which a memory judgment requires an image without being based on it (answer-memories). A good example of the former is when a person, asked what the colour of the sea was yesterday afternoon, recalls an image and forms a judgment on this basis. In the second case she may recognize the sea and entertain a memory image of it without ‘reading off’ the memory judgment from this picture. That is, the image does not prompt the memory judgment but is itself part of the propositional content of answer-memories. Since in this latter case the feeling of familiarity is constitutive of the recollection but cannot serve as its explanans, answer-memories do not conform to Russell’s account. According to Lindsay Judson this is not a vice of the theory, since Russell never meant to extend it to answer-memories. Despite having a certain appeal of benevolence, Judson’s interpretation is not supported by textual evidence. Siding with David Pears, I will argue that Russell did not properly differentiate between image-based memory and answer-memory, and illegitimately extended his theory to the latter.
There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, bounds which collapse to x as y goes to 1. A theorem for these bounds -- bounds for probable modus ponens -- entails a principle for updating on possibly uncertain evidence subject to these bounds that is a generalization of the principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate when updating by 'probability kinematics' or 'Jeffrey-conditioning' is, and apparently in countless other cases as well. A more complicated theorem due to Carl Wagner -- bounds for probable modus tollens -- registers narrowest bounds for P(not h) when P(not e) = y and P(e/h) = x. This theorem serves another principle for updating on possibly uncertain evidence that might be termed 'contraditioning', though this way of updating seems in practice to be frequently inappropriate. It is definitely not a way of putting down a theory -- for example, a random-chance theory.
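A worked derivation of the modus ponens bounds, using only the law of total probability (the slash notation for conditional probability follows the abstract):

```latex
% Bounds for probable modus ponens: given P(e) = y and P(h/e) = x.
\begin{align*}
P(h) &= P(h/e)\,P(e) + P(h/\lnot e)\,P(\lnot e)
      = xy + P(h/\lnot e)\,(1 - y).\\
\intertext{Since $0 \le P(h/\lnot e) \le 1$, the narrowest bounds are}
xy &\le P(h) \le xy + (1 - y),
\end{align*}
% and both bounds collapse to x as y goes to 1.
```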
Computability and Logic has become a classic because of its accessibility to students without a mathematical background and because it covers not simply the staple topics of an intermediate logic course, such as Gödel's incompleteness theorems, but also a large number of optional topics, from Turing's theory of computability to Ramsey's theorem. This 2007 fifth edition has been thoroughly revised by John Burgess. Including a selection of exercises, adjusted for this edition, at the end of each chapter, it offers a simpler treatment of the representability of recursive functions, a traditional stumbling block for students on the way to the Gödel incompleteness theorems. This updated edition is also accompanied by a website as well as an instructor's manual.
Richard Jeffrey espoused an antifoundationalist variant of Bayesian thinking that he termed ‘Radical Probabilism’. Radical Probabilism denies both the existence of an ideal, unbiased starting point for our attempts to learn about the world and the dogma of classical Bayesianism that the only justified change of belief is one based on the learning of certainties. Probabilistic judgment is basic and irreducible. Bayesian conditioning is appropriate when interaction with the environment yields new certainty of belief in some proposition but leaves one’s conditional beliefs untouched (the ‘Rigidity’ condition). Although Richard Jeffrey denied the general applicability of this condition, one of his main contributions to probabilistic thinking is a form of belief updating—now typically called ‘Jeffrey conditioning’ or ‘probability kinematics’—that is appropriate in circumstances in which Rigidity is satisfied, but where the interaction causes one to reevaluate one’s probability judgments over some partition of the possibility space without conferring certainty on any particular element. The most familiar occasion for Jeffrey conditioning is receipt of uncertain evidence: things partially perceived or remembered. But it also serves to illuminate belief updating occasioned by a change in one’s degrees of conditional belief, a kind of belief change largely ignored by classical Bayesianism. I argue that such changes in conditional belief can also be basic (in the sense of not being analyzable as a consequence of conditioning on factual information) and offer a kinematical model for a particular kind of change in conditional belief. Both are applied to changes in preference. Finally, I argue that Rigidity can fail when changes of belief give inferential grounds for changes in conditional belief (and vice versa). These failures show that conditioning methods are properly regarded, not as valid rules of inference, but as tools in the ‘art of judgment’.
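A minimal sketch of Jeffrey conditioning as described here, assuming a toy world space and partition of my own (the helper name `jeffrey_update` is illustrative, not from the paper): each partition cell is rescaled to its new probability while credences conditional on the cells stay fixed, which is exactly the Rigidity condition.

```python
# Jeffrey conditioning (probability kinematics): shift the probability
# of each partition cell to a new value q, holding probabilities
# conditional on each cell fixed (Rigidity).
def jeffrey_update(prior, partition, new_cell_probs):
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        p_cell = sum(prior[w] for w in cell)  # old probability of the cell
        for w in cell:
            posterior[w] = prior[w] / p_cell * q
    return posterior

# A dim glimpse of a cloth shifts the probabilities of its colours
# without making any colour certain. Worlds are (colour, fabric) pairs.
prior = {("red", "silk"): 0.2, ("red", "wool"): 0.1,
         ("green", "silk"): 0.3, ("green", "wool"): 0.4}
by_colour = [{("red", "silk"), ("red", "wool")},
             {("green", "silk"), ("green", "wool")}]
posterior = jeffrey_update(prior, by_colour, [0.7, 0.3])
# Rigidity check: P(silk | red) is 2/3 both before and after.
print(posterior)
```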
Merging of opinions results underwrite Bayesian rejoinders to complaints about the subjective nature of personal probability. Such results establish that sufficiently similar priors achieve consensus in the long run when fed the same increasing stream of evidence. Initial subjectivity, the line goes, is of mere transient significance, giving way to intersubjective agreement eventually. Here, we establish a merging result for sets of probability measures that are updated by Jeffrey conditioning. This generalizes a number of different merging results in the literature. We also show that such sets converge to a shared, maximally informed opinion. Convergence to a maximally informed opinion is a (weak) Jeffrey conditioning analogue of Bayesian “convergence to the truth” for conditional probabilities. Finally, we demonstrate the philosophical significance of our study by detailing applications to the topics of dynamic coherence, imprecise probabilities, and probabilistic opinion pooling.
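The classical merging phenomenon described in the opening sentences can be exhibited in a toy beta-binomial simulation (my own illustrative setup; the paper's result generalizes merging to sets of measures updated by Jeffrey conditioning, which this sketch does not attempt):

```python
# Two agents with quite different Beta priors over a coin's bias watch
# the same growing stream of flips; their posterior means draw together.
import random

random.seed(0)
true_bias = 0.6
priors = {"agent_1": (2.0, 8.0), "agent_2": (8.0, 2.0)}  # Beta(a, b)

heads = 0
for n in range(1, 1001):
    heads += random.random() < true_bias
    if n in (1, 10, 100, 1000):
        # Beta-binomial posterior mean: (a + heads) / (a + b + n)
        means = {name: (a + heads) / (a + b + n)
                 for name, (a, b) in priors.items()}
        # The gap between the two posterior means shrinks toward 0.
        print(n, means["agent_1"], means["agent_2"])
```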
Suppose that several individuals who have separately assessed prior probability distributions over a set of possible states of the world wish to pool their individual distributions into a single group distribution, while taking into account jointly perceived new evidence. They have the option of first updating their individual priors and then pooling the resulting posteriors, or first pooling their priors and then updating the resulting group prior. If the pooling method that they employ is such that they arrive at the same final distribution in both cases, the method is said to be externally Bayesian, a property first studied by Madansky. We show that a pooling method for discrete distributions is externally Bayesian if and only if it commutes with Jeffrey conditioning, parameterized in terms of certain ratios of new to old odds, as in Wagner, rather than in terms of the posterior probabilities of members of the disjoint family of events on which such conditioning originates.
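To make "externally Bayesian" concrete, here is a numerical check of the best-known example, equal-weight geometric pooling with a shared likelihood (the numbers are mine, and this is the plain Bayes-conditioning special case rather than the paper's odds-ratio-parameterized Jeffrey conditioning):

```python
# Externally Bayesian check: update-then-pool equals pool-then-update
# for equal-weight geometric pooling and a shared likelihood.
p = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
q = {"s1": 0.2, "s2": 0.2, "s3": 0.6}
likelihood = {"s1": 0.9, "s2": 0.5, "s3": 0.1}  # the shared new evidence

def normalize(f):
    total = sum(f.values())
    return {s: v / total for s, v in f.items()}

def bayes(f):
    return normalize({s: f[s] * likelihood[s] for s in f})

def geometric_pool(f, g):
    return normalize({s: (f[s] * g[s]) ** 0.5 for s in f})

print(bayes(geometric_pool(p, q)))         # pool, then update
print(geometric_pool(bayes(p), bayes(q)))  # update, then pool: same
```

The agreement is no accident: both orders are proportional to sqrt(p·q) times the likelihood, and the normalization constants wash out.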
Jeffrey updating is a natural extension of Bayesian updating to cases where the evidence is uncertain. But the resulting degrees of belief appear to be sensitive to the order in which the uncertain evidence is acquired, a rather un-Bayesian looking effect. This order dependence results from the way in which basic Jeffrey updating is usually extended to sequences of updates. The usual extension seems very natural, but there are other plausible ways to extend Bayesian updating that maintain order-independence. I will explore three models of sequential updating, the usual extension and two alternatives. I will show that the alternative updating schemes derive from extensions of the usual rigidity requirement, which is at the heart of Jeffrey updating. Finally, I will establish necessary and sufficient conditions for order-independent updating, and show that extended rigidity is closely related to these conditions.
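The order effect at issue is easy to exhibit numerically (toy prior, cross-cutting partitions, and weights of my own choosing; the `jeffrey_update` helper just implements basic Jeffrey conditioning):

```python
# Two successive Jeffrey updates on cross-cutting partitions, applied
# in both orders: the usual sequential extension is order-dependent.
def jeffrey_update(prior, partition, new_cell_probs):
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] / p_cell * q
    return posterior

prior = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}
E = [{"w1", "w2"}, {"w3", "w4"}]  # first uncertain observation
F = [{"w1", "w3"}, {"w2", "w4"}]  # second uncertain observation

e_then_f = jeffrey_update(jeffrey_update(prior, E, [0.7, 0.3]), F, [0.6, 0.4])
f_then_e = jeffrey_update(jeffrey_update(prior, F, [0.6, 0.4]), E, [0.7, 0.3])
print(e_then_f["w1"], f_then_e["w1"])  # differ: about 0.387 vs 0.371
```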
Some philosophers respond to Leibniz’s “shift” argument against absolute space by appealing to antihaecceitism about possible worlds, using David Lewis’s counterpart theory. But separated from Lewis’s distinctive system, it is difficult to understand what this doctrine amounts to or how it bears on the Leibnizian argument. In fact, the best way of making sense of the relevant kind of antihaecceitism concedes the main point of the Leibnizian argument, pressing us to consider alternative spatiotemporal metaphysics.
The existence of mereological sums can be derived from an abstraction principle in a way analogous to numbers. I draw lessons for the thesis that “composition is innocent” from neo-Fregeanism in the philosophy of mathematics.
David Builes presents a paradox concerning how confident you should be that any given member of an infinite collection of fair coins landed heads, conditional on the information that they were all flipped and only finitely many of them landed heads. We argue that if you should have any conditional credence at all, it should be 1/2.
This paper seeks to defend the following conclusions: The program advanced by Carnap and other necessarians for probability logic has little to recommend it except for one important point. Credal probability judgments ought to be adapted to changes in evidence or states of full belief in a principled manner in conformity with the inquirer’s confirmational commitments—except when the inquirer has good reason to modify his or her confirmational commitment. Probability logic ought to spell out the constraints on rationally coherent confirmational commitments. In the case where credal judgments are numerically determinate, confirmational commitments correspond to Carnap’s credibility functions, mathematically represented by so-called confirmation functions. Serious investigation of the conditions under which confirmational commitments should be changed ought to be a prime target for critical reflection. The necessarians were mistaken in thinking that confirmational commitments are immune to legitimate modification altogether. But their personalist or subjectivist critics went too far in suggesting that we might dispense with confirmational commitments. There is room for serious reflection on conditions under which changes in confirmational commitments may be brought under critical control. Undertaking such reflection need not become embroiled in the anti-inductivism that has characterized the work of Popper, Carnap and Jeffrey and narrowed the focus of students of logical and methodological issues pertaining to inquiry.
Over the last two decades, scientific accounts of religion have received a great deal of scholarly and popular attention both because of their intrinsic interest and because they are widely seen as constituting a threat to the religion they analyse. The Believing Primate aims to describe and discuss these scientific accounts as well as to assess their implications. The volume begins with essays by leading scientists in the field, describing these accounts and discussing evidence in their favour. Philosophical and theological reflections on these accounts follow, offered by leading philosophers, theologians, and scientists. This diverse group of scholars address some fascinating underlying questions: Do scientific accounts of religion undermine the justification of religious belief? Do such accounts show religion to be an accidental by-product of our evolutionary development? And, whilst we seem naturally disposed toward religion, would we fare better or worse without it? Bringing together dissenting perspectives, this provocative collection will serve to freshly illuminate ongoing debate on these perennial questions.
Jeffrey conditioning tells an agent how to update her priors so as to grant a given probability to a particular event. Weighted averaging tells an agent how to update her priors on the basis of testimonial evidence, by changing to a weighted arithmetic mean of her priors and another agent’s priors. We show that, in their respective settings, these two seemingly so different updating rules are axiomatized by essentially the same invariance condition. As a by-product, this sheds new light on the question of how weighted averaging should be extended to deal with cases where the other agent reveals only parts of her probability distribution. The combination of weighted averaging and Jeffrey conditioning is a comprehensive updating rule to deal with such cases, which is again axiomatized by invariance under embedding. We conclude that, even though one may dislike Jeffrey conditioning or weighted averaging, the two make a natural pair when a policy for partial testimonial evidence is needed.
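One natural reading of the combined rule for partial testimonial evidence, offered as my own illustrative reconstruction rather than the paper's formal definition: average your probabilities with the other agent's over the partition she actually reveals, then Jeffrey-condition your full prior to match the averaged partition probabilities.

```python
# Sketch: partial testimony reveals the other agent's probabilities
# only over a partition; average on that partition, then use Jeffrey
# conditioning to propagate the result through one's full prior.
def jeffrey_update(prior, partition, new_cell_probs):
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] / p_cell * q
    return posterior

my_prior = {"w1": 0.4, "w2": 0.2, "w3": 0.1, "w4": 0.3}
partition = [{"w1", "w2"}, {"w3", "w4"}]
their_cell_probs = [0.2, 0.8]   # all that the testimony reveals
weight = 0.5                    # how much the other agent counts

my_cell_probs = [sum(my_prior[w] for w in cell) for cell in partition]
averaged = [(1 - weight) * mine + weight * theirs
            for mine, theirs in zip(my_cell_probs, their_cell_probs)]
print(jeffrey_update(my_prior, partition, averaged))
```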
Ginger Schultheis offers a novel and interesting argument against epistemic permissivism. While we think that her argument is ultimately uncompelling, we think its faults are instructive. We explore the relationship between epistemic permissivism, Margin-for-Error principles, and an epistemological version of Dominance reasoning.
Belief-revision models of knowledge describe how to update one’s degrees of belief associated with hypotheses as one considers new evidence, but they typically do not say how probabilities become associated with meaningful hypotheses in the first place. Here we consider a variant of the Skyrms–Lewis signaling game (Lewis, Convention, Harvard University Press, Cambridge, 1969; Skyrms, Signals: Evolution, Learning, and Information, Oxford University Press, New York, 2010) where simple descriptive language and predictive practice and associated basic expectations coevolve. Rather than assigning prior probabilities to hypotheses in a fixed language and then conditioning on new evidence, the agents begin with no meaningful language or expectations, then evolve to have expectations conditional on their descriptions as they evolve to have meaningful descriptions for the purpose of successful prediction. The model, then, provides a simple but concrete example of how the process of evolving a descriptive language suitable for inquiry might also provide agents with conditional expectations that reflect the type and degree of predictive success in fact afforded by their evolved predictive practice. This illustrates one way in which the traditional problem of priors may simply fail to apply to one’s model of evolving inquiry.
Laurie Paul has recently argued that transformative experiences pose a problem for decision theory. According to Paul, agents facing transformative experiences do not possess the states required for decision theory to formulate its prescriptions. Agents facing transformative experiences are impoverished relative to their decision problems, and decision theory doesn’t know what to do with impoverished agents. Richard Pettigrew takes Paul’s challenge seriously. He grants that decision theory cannot handle decision problems involving transformative experiences. To deal with the problems posed by transformative experiences, Pettigrew proposes two alterations to decision theory. The first alteration is meant to handle the problem posed by epistemically transformative experiences, and the second alteration is meant to handle the problem posed by personally transformative experiences. I argue that Pettigrew’s proposed alterations are untenable. Pettigrew’s novel decision theory faces both formal and philosophical problems. It is doubtful that Pettigrew can formulate the sort of decision theory he wants, and further doubtful that he should want such a decision theory in the first place. Moreover, the issues with Pettigrew’s proposed alterations help reveal issues with Paul’s initial challenge to decision theory. I suggest that transformative experiences should not be taken to pose a problem for decision theory, but should instead be taken to pose a topic for ethics.
We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural way for them to receive imprecise probabilities. The mathematics of non-measurable sets is arcane, but its epistemological import is far-reaching; even apparently mundane propositions are liable to be affected by non-measurability. The phenomenon of non-measurability dramatically reshapes the dialectic between critics and proponents of imprecise credence. Non-measurability offers natural rejoinders to prominent critics of imprecise credence. Non-measurability even reverses some of the critics’ arguments—by the very lights that have been used to argue against imprecise credences, imprecise credences are better than precise credences.
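For readers unfamiliar with the phenomenon the argument turns on, here is the standard Vitali construction of a non-measurable set, sketched from textbook material (not the paper's own exposition):

```latex
% The Vitali set: no translation-invariant, countably additive
% probability on [0,1) can assign it a value.
\begin{itemize}
  \item Call $x, y \in [0,1)$ equivalent iff $x - y \in \mathbb{Q}$;
        let $V$ pick one representative from each equivalence class.
  \item For each rational $q \in \mathbb{Q} \cap [0,1)$, let
        $V_q = \{\, v + q \bmod 1 : v \in V \,\}$. The sets $V_q$ are
        pairwise disjoint and their union is $[0,1)$.
  \item Translation invariance forces $\mu(V_q) = \mu(V)$ for every $q$,
        so countable additivity gives
        $1 = \sum_q \mu(V_q) = \sum_q \mu(V)$,
        which fails whether $\mu(V) = 0$ or $\mu(V) > 0$.
\end{itemize}
```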