I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many have rejected this thesis because it seems to entail that defendants can be found liable solely on the basis of statistical evidence. I argue that this inference is invalid. I do so by developing a view, called Legal Causalism, that combines Thomson's (1986) causal analysis of evidence with recent work in formal theories of causal inference. On this view, legal standards of proof can be reduced to probabilities, but deriving these probabilities involves more than just statistics.
In many assessment problems—aptitude testing, hiring decisions, appraisals of the risk of recidivism, evaluation of the credibility of testimonial sources, and so on—the fair treatment of different groups of individuals is an important goal. But individuals can be legitimately grouped in many different ways. Using a framework and fairness constraints explored in research on algorithmic fairness, I show that eliminating certain forms of bias across groups for one way of classifying individuals can make it impossible to eliminate such bias across groups for another way of dividing people up. And this point generalizes if we require merely that assessments be approximately bias-free. Moreover, even if the fairness constraints are satisfied for some given partitions of the population, the constraints can fail for the coarsest common refinement, that is, the partition generated by taking intersections of the elements of these coarser partitions. This shows that these prominent fairness constraints admit the possibility of forms of intersectional bias.
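As a toy illustration of the intersection point above (the abstract does not fix a particular fairness constraint, so this Python sketch uses statistical parity, i.e. equal positive-prediction rates, with invented group counts): a predictor can satisfy the constraint across a gender partition and across an age partition while violating it on every cell of their common refinement.

    # Hypothetical counts: counts[(gender, age)] = (group size, number predicted positive)
    from itertools import product

    counts = {
        ("men", "young"):   (50, 30),   # rate 0.60
        ("men", "old"):     (50, 20),   # rate 0.40
        ("women", "young"): (50, 20),   # rate 0.40
        ("women", "old"):   (50, 30),   # rate 0.60
    }

    def rate(cells):
        n = sum(counts[c][0] for c in cells)
        k = sum(counts[c][1] for c in cells)
        return k / n

    genders, ages = ("men", "women"), ("young", "old")
    for g in genders:                       # parity across the gender partition
        print(g, rate([(g, a) for a in ages]))          # both 0.50
    for a in ages:                          # parity across the age partition
        print(a, rate([(g, a) for g in genders]))       # both 0.50
    for g, a in product(genders, ages):     # fails on the common refinement
        print(g, a, rate([(g, a)]))                     # 0.60 / 0.40 / 0.40 / 0.60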
“Double-halfers” think that throughout the Sleeping Beauty Scenario, Beauty ought to maintain a credence of 1/2 in the proposition that the fair coin toss governing the experimental protocol comes up heads. Titelbaum (2012) introduces a novel variation on the standard scenario, one involving an additional coin toss, and claims that the double-halfer is committed to the absurd and embarrassing result that Beauty’s credence in an indexical proposition concerning the outcome of a future fair coin toss is not 1/2. I argue that there is no reason to regard the credence required by the double-halfer as any less acceptable than the one deemed required by Titelbaum.
Schurz (2019, ch. 4) argues that probabilistic accounts of induction fail. In particular, he criticises probabilistic accounts of induction that appeal to direct inference principles, including subjective Bayesian approaches (e.g., Howson 2000) and objective Bayesian approaches (see, e.g., Williamson 2017). In this paper, I argue that Schurz’ preferred direct inference principle, namely Reichenbach’s Principle of the Narrowest Reference Class, faces formidable problems in a standard probabilistic setting. Furthermore, the main alternative direct inference principle, Lewis’ Principal Principle, is also hard to reconcile with standard probabilism. So, I argue, standard probabilistic approaches cannot appeal to direct inference to explicate the logic of induction. However, I go on to defend a non-standard objective Bayesian account of induction: I argue that this approach can both accommodate direct inference and provide a viable account of the logic of induction. I then defend this account against Schurz’ criticisms.
According to an infinite frequency principle, it is rational, under certain conditions, to set your credence in an outcome to the limiting frequency of that outcome if the experiment were repeated indefinitely. I argue that most infinite frequency principles are undesirable in at least one of the following ways: accepting the principle would lead you to accept bets with sure losses; the principle gives no guidance in the case of deterministic experiments like coin tosses; and the principle relies on a metaphysical property, ‘chanciness’, whose necessary and sufficient conditions are unknown. I show that a frequency principle that is based on the principal principle suffers from problems related to the definition of ‘chance’ or ‘chanciness’, which could lead to all three of the above problems. I introduce a version of the infinite frequency principle that does not rely on a notion of chance or chanciness and does not suffer from any of these problems.
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent they grant such a conception of probability is viable at all, revert back to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
A Boltzmann Brain, haphazardly formed through the unlikely but still possible random assembly of physical particles, is a conscious brain having experiences just like an ordinary person. The skeptical possibility of being a Boltzmann Brain is an especially gripping one: scientific evidence suggests our actual universe’s full history may ultimately contain countless short-lived Boltzmann Brains with experiences just like yours or mine. I propose a solution to the skeptical challenge posed by these countless actual Boltzmann Brains. My key idea is roughly this: the skeptical argument that you’re one of the Boltzmann Brains requires you to make a statistical inference, but the Principle of Total Evidence blocks you from making the inference. I discuss how my solution contrasts with a recent suggestion, made by Sean Carroll and David Chalmers, for how to address the skeptical challenge posed by Boltzmann Brains. And I discuss how my solution handles certain relevant concerns about what to do when we have higher-order evidence indicating that our first-order evidence is misleading.
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
Direct inferences identify certain probabilistic credences or confirmation-function-likelihoods with values of objective chances or relative frequencies. The best known version of a direct inference principle is David Lewis’s Principal Principle. Certain kinds of statements undermine direct inferences. Lewis calls such statements inadmissible. We show that on any Bayesian account of direct inference several kinds of intuitively innocent statements turn out to be inadmissible. This may pose a significant challenge to Bayesian accounts of direct inference. We suggest some ways in which these challenges may be addressed.
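For reference, the best-known formulation of the Principal Principle mentioned here (stated in a standard textbook form, not necessarily in the authors' own notation) is:

    Cr(A | ⟨the chance of A at t is x⟩ ∧ E) = x,   provided E is admissible at t,

where Cr is a reasonable initial credence function; inadmissible statements E are precisely those for which this identity is no longer warranted.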
John D. Norton’s “Material Theory of Induction” has been one of the most intriguing recent additions to the philosophy of induction. Norton’s account appears to be a notably natural account of actual inductive practices, although his theory has attracted considerable criticism. I detail several novel issues for his theory but argue that supplementing the Material Theory with a theory of direct inference could address these problems. I argue that if this combination is possible, a stronger theory of inductive reasoning emerges, which has a more propitious answer to the Problem of Induction.
I here aim to show that a particular approach to the problem of induction, which I will call “induction by direct inference”, comfortably handles Goodman’s problem of induction. I begin the article by describing induction by direct inference. After introducing induction by direct inference, I briefly introduce the Goodman problem, and explain why it is, prima facie, an obstacle to the proposed approach. I then show how one may address the Goodman problem, assuming one adopts induction by direct inference as an approach to the problem of induction. In particular, I show that a relatively standard treatment of what some have called the “Reference Class problem” addresses the Goodman Problem. Indeed, plausible and relatively standard principles of direct inference yield the conclusion that the Goodman inference (involving the grue predicate) is defeated, so it is unnecessary to invoke considerations of ‘projectibility’ in order to address the Goodman problem. I conclude the article by discussing the generality of the proposed approach, in dealing with variants of Goodman’s example.
The paper takes a closer look at the role of knowledge and evidence in legal theory. In particular, the paper examines a puzzle arising from the evidential standard Preponderance of the Evidence and its application in civil procedure. Legal scholars have argued since at least the 1940s that the rule of the Preponderance of the Evidence gives rise to a puzzle concerning the role of statistical evidence in judicial proceedings, sometimes referred to as the Problem of Bare Statistical Evidence. While this puzzle has led to the development of a multitude of accounts and approaches in the legal literature, I argue here that the problem can be resolved fairly straightforwardly within a knowledge-first framework.
We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism.
It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While the preceding assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between the two direct inferences is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper finally concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
We report a series of experiments examining whether people ascribe knowledge for true beliefs based on probabilistic evidence. Participants were less likely to ascribe knowledge for beliefs based on probabilistic evidence than for beliefs based on perceptual evidence or testimony providing causal information. Denial of knowledge for beliefs based on probabilistic evidence did not arise because participants viewed such beliefs as unjustified, nor because such beliefs leave open the possibility of error. These findings rule out traditional philosophical accounts for why probabilistic evidence does not produce knowledge. The experiments instead suggest that people deny knowledge because they distrust drawing conclusions about an individual based on reasoning about the population to which it belongs, a tendency previously identified by “judgment and decision making” researchers. Consistent with this, participants were more willing to ascribe knowledge for beliefs based on probabilistic evidence that is specific to a particular case.
In this article it is argued that the standard theoretical account of risk in the contemporary literature, which is cast along probabilistic lines, is flawed, in that it is unable to account for a particular kind of risk. In its place a modal account of risk is offered. Two applications of the modal account of risk are then explored. First, to epistemology, via the defence of an anti-risk condition on knowledge in place of the normal anti-luck condition. Second, to legal theory, where it is shown that this account of risk can cast light on the debate regarding the extent to which a criminal justice system can countenance the possibility of wrongful convictions.
There are currently two robust traditions in philosophy dealing with doxastic attitudes: the tradition that is concerned primarily with all-or-nothing belief, and the tradition that is concerned primarily with degree of belief or credence. This paper concerns the relationship between belief and credence for a rational agent, and is directed at those who may have hoped that the notion of belief can either be reduced to credence or eliminated altogether when characterizing the norms governing ideally rational agents. It presents a puzzle which lends support to two theses. First, that there is no formal reduction of a rational agent’s beliefs to her credences, because belief and credence are each responsive to different features of a body of evidence. Second, that if our traditional understanding of our practices of holding each other responsible is correct, then belief has a distinctive role to play, even for ideally rational agents, that cannot be played by credence. The question of which avenues remain for the credence-only theorist is considered.
There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential probabilities. But exactly what constraint does a deference requirement place on an agent's credences? In this paper we consider three answers, inspired by three principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Of the three deference principles we consider, we argue that two of the options face insuperable difficulties. The third, on the other hand, fares well, at least when it is applied in a particular way.
The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, P_PRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, P_PRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities.
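Schematically, and hedging on the article's exact notation, the contrast described above is:

    Bayesian conditionalization:     P_new(α) = P_PRI(α | ∧E)                 (requires a prior conditional on the complete evidence ∧E)
    Defeasible conditionalization:   P_new(α) = P_PRI(α | E'),  E' a proper part of ∧E   (held defeasibly, subject to the defeating conditions the article specifies)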
Nearly 20 years after the Shonubi case and an extended discussion in the Anglophone world on the admissibility and probative force of statistical evidence, the labour courts of Germany seem not to have learned a simple lesson: aleatory probabilities are not informative for the individual in question. In this paper I argue that innumeracy (that is, the lack of ability to understand and apply simple numerical concepts) is underestimated, if not ignored, within both German jurisprudence and German legal theory. I examine the pseudo-scientific methods of analyzing the evidence used in the recent GEMA case by the labour courts of Berlin-Brandenburg and the Federal Labour Court, and show that the insistence on applying reference-class evidence to an individual case ends up being not only theoretically unacceptable but also socially harmful.
The Lottery Paradox is generally thought to point at a conflict between two intuitive principles, to wit, that high probability is sufficient for rational acceptability, and that rational acceptability is closed under logical derivability. Gilbert Harman has offered a solution to the Lottery Paradox that allows one to stick to both of these principles. The solution requires the principle that acceptance licenses conditionalization. The present study shows that adopting this principle alongside the principle that high probability is sufficient for rational acceptability gives rise to another paradox.
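For readers who want the numbers behind the Lottery Paradox referred to here, a minimal sketch (a hypothetical 1000-ticket lottery with exactly one winner and a 0.99 acceptance threshold):

    # Each claim "ticket i loses" is highly probable, yet their conjunction is certainly false,
    # so high probability plus closure under derivability yields a contradiction.
    n, threshold = 1000, 0.99
    p_single_loss = 1 - 1 / n          # 0.999, above the threshold
    p_all_lose = 0.0                   # exactly one ticket wins
    print(p_single_loss >= threshold)  # True: each individual claim is acceptable
    print(p_all_lose >= threshold)     # False: the derivable conjunction is not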
The thesis that high probability suffices for rational belief, while initially plausible, is known to face the Lottery Paradox. The present paper proposes an amended version of that thesis which escapes the Lottery Paradox. The amendment is argued to be plausible on independent grounds.
Kyburg’s opposition to the subjective Bayesian theory, and in particular to its advocates’ indiscriminate and often questionable use of Dutch Book arguments, is documented and much of it strongly endorsed. However, it is argued that an alternative version, proposed both by de Finetti, at various times during his long career, and by Ramsey, is less vulnerable to Kyburg’s misgivings. This is a logical interpretation of the formalism, one which, it is argued, is both more natural and also avoids other, widely made objections to Bayesianism.
We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams’ conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geo-logic, which fills the entire unit cube, and then we project the upper surfaces of the geo-logical cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical principle of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
The article begins by describing two longstanding problems associated with direct inference. One problem concerns the role of uninformative frequency statements in inferring probabilities by direct inference. A second problem concerns the role of frequency statements with gerrymandered reference classes. I show that past approaches to the problem associated with uninformative frequency statements yield the wrong conclusions in some cases. I propose a modification of Kyburg’s approach to the problem that yields the right conclusions. Past theories of direct inference have postponed treatment of the problem associated with gerrymandered reference classes by appealing to an unexplicated notion of projectability. I address the lacuna in past theories by introducing criteria for being a relevant statistic. The prescription that only relevant statistics play a role in direct inference corresponds to the sort of projectability constraints envisioned by past theories.
The objective Bayesian view of proof (or logical probability, or evidential support) is explained and defended: that the relation of evidence to hypothesis (in legal trials, science etc.) is a strictly logical one, comparable to deductive logic. This view is distinguished from the thesis, which had some popularity in law in the 1980s, that legal evidence ought to be evaluated using numerical probabilities and formulas. While numbers are not always useful, a central role is played in uncertain reasoning by the ‘proportional syllogism’, or argument from frequencies, such as ‘nearly all aeroplane flights arrive safely, so my flight is very likely to arrive safely’. Such arguments raise the ‘problem of the reference class’, arising from the fact that an individual case may be a member of many different classes in which frequencies differ. For example, if 15 per cent of swans are black and 60 per cent of fauna in the zoo is black, what should I think about the likelihood of a swan in the zoo being black? The nature of the problem is explained, and legal cases where it arises are given. It is explained how recent work in data mining on the relevance of features for prediction provides a solution to the reference class problem.
In concrete applications of probability, statistical investigation gives us knowledge of some probabilities, but we generally want to know many others that are not directly revealed by our data. For instance, we may know prob(P/Q) (the probability of P given Q) and prob(P/R), but what we really want is prob(P/Q&R), and we may not have the data required to assess that directly. The probability calculus is of no help here. Given prob(P/Q) and prob(P/R), it is consistent with the probability calculus for prob(P/Q&R) to have any value between 0 and 1. Is there any way to make a reasonable estimate of the value of prob(P/Q&R)? A related problem occurs when probability practitioners adopt undefended assumptions of statistical independence simply on the basis of not seeing any connection between two propositions. This is common practice, but its justification has eluded probability theorists, and researchers are typically apologetic about making such assumptions. Is there any way to defend the practice? This paper shows that on a certain conception of probability—nomic probability—there are principles of "probable probabilities" that license inferences of the above sort. These are principles telling us that although certain inferences from probabilities to probabilities are not deductively valid, nevertheless the second-order probability of their yielding correct results is 1. This makes it defeasibly reasonable to make the inferences. Thus I argue that it is defeasibly reasonable to assume statistical independence when we have no information to the contrary. And I show that there is a function Y(r, s, a) such that if prob(P/Q) = r, prob(P/R) = s, and prob(P/U) = a (where U is our background knowledge) then it is defeasibly reasonable to expect that prob(P/Q&R) = Y(r, s, a). Numerous other defeasible inferences are licensed by similar principles of probable probabilities. This has the potential to greatly enhance the usefulness of probabilities in practical application.
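The abstract does not display the Y-function itself; the Python sketch below assumes it is the familiar independence-based combination obtained by multiplying likelihood ratios against the background value a (the paper's exact definition may differ). It at least exhibits the intended behaviour: agreeing reference classes reinforce each other, and a class that matches the background changes nothing.

    def Y(r, s, a):
        # Estimate prob(P/Q&R) from prob(P/Q)=r, prob(P/R)=s, prob(P/U)=a,
        # assuming Q and R bear on P independently relative to the background U.
        num = r * s * (1 - a)
        return num / (num + (1 - r) * (1 - s) * a)

    print(Y(0.7, 0.7, 0.5))   # ~0.845: two agreeing reference classes reinforce
    print(Y(0.7, 0.5, 0.5))   # 0.7: a class matching the background is idle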
One argument for the thirder position on the Sleeping Beauty problem rests on direct inference from objective probabilities. In this paper, I consider a particularly clear version of this argument by John Pollock and his colleagues (The Oscar Seminar 2008). I argue that such a direct inference is defeated by the fact that Beauty has an equally good reason to conclude on the basis of direct inference that the probability of heads is 1/2. Hence, neither thirders nor halfers can find direct support in an appeal to objective probabilities.
In a recent article, Joel Pust argued that direct inferences based on reference properties of differing arity are incommensurable, and so direct inference cannot be used to resolve the Sleeping Beauty problem. After discussing the defects of Pust's argument, I offer reasons for thinking that direct inferences based on reference properties of differing arity are commensurable, and that we should prefer direct inferences based on logically stronger reference properties, regardless of arity.
Probabilistic inferences from frequencies, such as "Most Quakers are pacifists; Nixon is a Quaker, so probably Nixon is a pacifist", suffer from the problem that an individual is typically a member of many "reference classes" (such as Quakers, Republicans, Californians, etc.) in which the frequency of the target attribute varies. How should one choose the best class or combine the information? The article argues that the problem can be solved by the feature selection methods used in contemporary Big Data science: the correct reference class is that determined by the features relevant to the target, and relevance is measured by correlation (that is, a feature is relevant if it makes a difference to the frequency of the target).
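A minimal Python sketch of the relevance test described above, with invented data and a hypothetical cut-off: a feature counts as relevant when conditioning on it shifts the frequency of the target attribute, and the reference class is then fixed by the relevant features only.

    # Hypothetical records: feature values plus whether the target (pacifist) holds.
    people = [
        {"quaker": True,  "republican": False, "californian": True,  "pacifist": True},
        {"quaker": True,  "republican": False, "californian": False, "pacifist": True},
        {"quaker": True,  "republican": True,  "californian": True,  "pacifist": True},
        {"quaker": True,  "republican": True,  "californian": False, "pacifist": False},
        {"quaker": False, "republican": True,  "californian": True,  "pacifist": False},
        {"quaker": False, "republican": True,  "californian": False, "pacifist": False},
        {"quaker": False, "republican": False, "californian": True,  "pacifist": False},
        {"quaker": False, "republican": False, "californian": False, "pacifist": True},
    ]

    def freq(target, rows):
        rows = list(rows)
        return sum(r[target] for r in rows) / len(rows)

    base = freq("pacifist", people)                        # 0.5 overall
    relevant = []
    for feature in ("quaker", "republican", "californian"):
        shifted = freq("pacifist", (r for r in people if r[feature]))
        if abs(shifted - base) > 0.1:                      # hypothetical cut-off
            relevant.append(feature)
        print(feature, round(shifted, 2))                  # 0.75, 0.25, 0.5

    print(relevant)   # ['quaker', 'republican']: being Californian makes no difference
    # Reference class determined by the relevant features only (Nixon-style case):
    print(freq("pacifist", (r for r in people if all(r[f] for f in relevant))))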
The “reference class problem” is a serious challenge to the use of statistical evidence that arises in a wide variety of cases, including toxic torts, property valuation, and even drug smuggling. At its core, it observes that statistical inferences depend critically on how people, events, or things are classified. As there is (purportedly) no principle for privileging certain categories over others, statistics become manipulable, undermining the very objectivity and certainty that make statistical evidence valuable and attractive to legal actors. In this Essay, I propose a practical solution to the reference class problem by drawing on model selection theory from the statistics literature. The solution has potentially wide-ranging and significant implications for statistics in the law. Not only does it remove another barrier to the use of statistics in legal decisionmaking, but it also suggests a concrete framework by which litigants can present, evaluate, and contest statistical evidence.
the symmetry of our evidential situation. If our confidence is best modeled by a standard probability function this means that we are to distribute our subjective probability or credence sharply and evenly over possibilities among which our evidence does not discriminate. Once thought to be the central principle of probabilistic reasoning by great..
Bayesians take “definite” or “single-case” probabilities to be basic. Definite probabilities attach to closed formulas or propositions. We write them here using small caps: PROB(P) and PROB(P/Q). Most objective probability theories begin instead with “indefinite” or “general” probabilities (sometimes called “statistical probabilities”). Indefinite probabilities attach to open formulas or propositions. We write indefinite probabilities using lower case “prob” and free variables: prob(Bx/Ax). The indefinite probability of an A being a B is not about any particular A, but rather about the property of being an A. In this respect, its logical form is the same as that of relative frequencies. For instance, we might talk about the probability of a human baby being female. That probability is about human babies in general — not about individuals. If we examine a baby and determine conclusively that she is female, then the definite probability of her being female is 1, but that does not alter the indefinite probability of human babies in general being female. Most objective approaches to probability tie probabilities to relative frequencies in some way, and the resulting probabilities have the same logical form as the relative frequencies. That is, they are indefinite probabilities. The simplest theories identify indefinite probabilities with relative frequencies. It is often objected that such “finite frequency theories” are inadequate because our probability judgments often diverge from relative frequencies. For example, we can talk about a coin being fair (and so the indefinite probability of a flip landing heads is 0.5) even when it is flipped only once and then destroyed (in which case the relative frequency is either 1 or 0). For understanding such indefinite probabilities, it has been suggested that we need a notion of probability that talks about possible instances of properties as well as actual instances.
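A tiny illustration of the definite/indefinite contrast, with hypothetical numbers: on the simplest (finite frequency) theory, the indefinite probability prob(female x / baby x) is a relative frequency over the class, while conclusively examining one baby drives the definite probability for that individual to 1 without touching the indefinite probability.

    babies = ["F", "M", "F", "F", "M", "F", "M", "F"]     # invented sample
    indefinite = babies.count("F") / len(babies)          # prob(female x / baby x) = 0.625
    definite_for_b = 1.0                                  # PROB(b is female) after examining b
    print(indefinite, definite_for_b)                     # 0.625 1.0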
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
Probability theory is important not least because of its relevance for decision making, which is to say, its relevance for the single case. The frequency theory of probability on its own is irrelevant in the single case. However, Howson and Urbach argue that Bayesianism can solve the frequentist's problem: frequentist-probability information is relevant to Bayesians. The present paper shows that Howson and Urbach's solution cannot work, and indeed that no Bayesian solution can work. There is no way to make frequentist probability relevant.
The well-known Monty Hall problem has a clear solution if one deals with a long enough series of individual games. However, the situation is different if one switches to probabilities in a single case. This paper presents an argument for Monty Hall situations with two players (not just one, as is usual). It leads to a quite general conclusion: one cannot apply probabilistic considerations (for or against any of the strategies) to isolated single cases. If one does that, one cannot but violate a very plausible non-arbitrariness condition and is led into a Moore-paradoxical incoherence. Even though arguments for switching are correct as applied to series of games, they don’t say anything useful about what rationality demands in a single case.
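The clear solution for a long series of standard, single-player games is easy to exhibit by simulation; this Python sketch is only that long-run baseline, not the paper's two-player argument.

    import random

    def play(switch, doors=3):
        car, pick = random.randrange(doors), random.randrange(doors)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = random.choice([d for d in range(doors) if d not in (pick, car)])
        if switch:
            pick = next(d for d in range(doors) if d not in (pick, opened))
        return pick == car

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print("switch" if switch else "stay", wins / trials)   # roughly 2/3 vs roughly 1/3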
Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A” infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. To adequately formalize inductive probabilistic reasoning is an interesting topic for artificial intelligence, as an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations made, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred model semantics. A main objective of our approach is to ensure that for both entailment relations complete proof systems exist. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results are presented that show that in several important aspects the resulting logic behaves just like a logic based on real-valued probabilities alone.
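A small Python sketch of the two patterns just named, with invented numbers: direct inference appears as the limiting case of Jeffrey's rule in which the evidential partition cell "a is an A" gets probability 1.

    def jeffrey(p_target_given_cells, new_cell_probs):
        # Jeffrey's rule: new P(B) = sum over i of old P(B | E_i) * new P(E_i).
        return sum(p * q for p, q in zip(p_target_given_cells, new_cell_probs))

    # Statistical background (hypothetical): 70% of As are Bs, 20% of non-As are Bs.
    p_b_given = [0.7, 0.2]                    # conditional on [A, not-A]

    print(jeffrey(p_b_given, [1.0, 0.0]))     # certain that a is an A  -> 0.7 (direct inference)
    print(jeffrey(p_b_given, [0.8, 0.2]))     # only 80% sure a is an A -> 0.6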
In this unique monograph, based on years of extensive work, Chatterjee presents the historical evolution of statistical thought from the perspective of various approaches to statistical induction. Developments in statistical concepts and theories are discussed alongside philosophical ideas on the ways we learn from experience.
Adam Elga takes the Sleeping Beauty example to provide a counter-example to Reflection, since on Sunday Beauty assigns probability 1/2 to H, and she is certain that on Monday she will assign probability 1/3. I will show that there is a natural way for Bas van Fraassen to defend Reflection in the case of Sleeping Beauty, building on van Fraassen’s treatment of forgetting. This will allow me to identify a lacuna in Elga’s argument for 1/3. I will then argue, however, that not all is well with Reflection: there is a problem with van Fraassen’s treatment of forgetting. Ultimately I will agree with Elga’s 1/3 answer. David Lewis maintains that the answer is 1/2; I will argue that cases of forgetting can be used to show that the premiss of Lewis’s argument for 1/2 is false.
On December 10, 1991, Charles Shonubi, a Nigerian citizen but a resident of the USA, was arrested at John F. Kennedy International Airport for the importation of heroin into the United States. Shonubi's modus operandi was “balloon swallowing.” That is, heroin was mixed with another substance to form a paste and this paste was sealed in balloons which were then swallowed. The idea was that once the illegal substance was safely inside the USA, the smuggler would pass the balloons and recover the heroin. On the date of his arrest, Shonubi was found to have swallowed 103 balloons containing a total of 427.4 grams of heroin. There was little doubt about Shonubi's guilt. In fact, there was considerable evidence that he had made at least seven prior heroin-smuggling trips to the USA (although he was not tried for these). In October 1992 Shonubi was convicted in a United States District Court for possessing and importing heroin. Although the conviction was only for crimes associated with Shonubi's arrest date of December 10, 1991, the sentencing judge, Jack B. Weinstein, also made a finding that Shonubi had indeed made seven prior drug-smuggling trips to the USA. The interesting part of this case was in the sentencing. According to the federal sentencing guidelines, the sentence in cases such as this should depend on the total quantity of heroin involved. This instruction was interpreted rather broadly…
The logical interpretation of probability, or “objective Bayesianism” – the theory that (some) probabilities are strictly logical degrees of partial implication – is defended. The main argument against it is that it requires the assignment of prior probabilities, and that any attempt to determine them by symmetry via a “principle of insufficient reason” inevitably leads to paradox. Three replies are advanced: that priors are imprecise or of little weight, so that disagreement about them does not matter, within limits; that it is possible to distinguish reasonable from unreasonable priors on logical grounds; and that in real cases disagreement about priors can usually be explained by differences in the background information. It is argued also that proponents of alternative conceptions of probability, such as frequentists, Bayesians and Popperians, are unable to avoid committing themselves to the basic principles of logical probability.
Bishop Butler [Butler, 1736] said that probability was the very guide of life. But what interpretations of probability can serve this function? It isn’t hard to see that empirical (frequency) views won’t do, and many recent writers, for example John Earman, who has said that Bayesianism is “the only game in town”, have been persuaded by various Dutch Book arguments that only subjective probability will perform the function required. We will defend the thesis that probability construed in this way offers very little guidance, Dutch Book arguments notwithstanding. We will sketch a way out of the impasse.
In Chapter I of his celebrated Foundations of Probability, A. N. Kolmogorov proposed an axiomatic treatment of the mathematical theory of probability—the approach that assimilated probability theory into measure theory. Kolmogorov followed his statement of the axioms with an account of how “we apply the theory of probability to the actual world of experiments.”