  1. Mikel Aickin (2000). Connecting Dempster–Shafer Belief Functions with Likelihood-Based Inference. Synthese 123 (3):347-364.
    The Dempster–Shafer approach to expressing belief about a parameter in a statistical model is not consistent with the likelihood principle. This inconsistency has been recognized for some time, and manifests itself as a non-commutativity, in which the order of operations (combining belief, combining likelihood) makes a difference. It is proposed here that requiring the expression of belief to be committed to the model (and to certain of its submodels) makes likelihood inference very nearly a special case of the Dempster–Shafer theory.
  2. Frank Arntzenius, Adam Elga & John Hawthorne (2004). Bayesianism, Infinite Decisions, and Binding. Mind 113 (450):251-283.
    We pose and resolve several vexing decision theoretic puzzles. Some are variants of existing puzzles, such as 'Trumped' (Arntzenius and McCarthy 1997), 'Rouble trouble' (Arntzenius and Barrett 1999), 'The airtight Dutch book' (McGee 1999), and 'The two envelopes puzzle' (Broome 1995). Others are new. A unified resolution of the puzzles shows that Dutch book arguments have no force in infinite cases. It thereby provides evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be countably (...)
  3. Luc Bovens & Stephan Hartmann (2006). An Impossibility Result for Coherence Rankings. Philosophical Studies 128 (1):77-91.
    If we receive information from multiple independent and partially reliable information sources, then whether we are justified in believing these information items is affected by how reliable the sources are, by how well the information coheres with our background beliefs and by how internally coherent the information is. We consider the following question. Is coherence a separable determinant of our degree of belief, i.e. is it the case that the more coherent the new information is, the more justified we are (...)
  4. Jake Chandler (2010). Contrastive Support: Some Competing Accounts. Synthese (1):1-10.
    I outline four competing probabilistic accounts of contrastive evidential support and weigh various considerations that might help arbitrate between them. The upshot of the discussion is that the so-called ‘Law of Likelihood’ is to be preferred to any of the alternatives considered.
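    As a worked illustration of the ‘Law of Likelihood’ mentioned above: evidence E favors H1 over H2 just in case Pr(E|H1) > Pr(E|H2), with the likelihood ratio measuring the strength of the favoring. The following minimal sketch uses a made-up coin-bias scenario; the hypotheses and data are illustrative assumptions, not drawn from the paper.

      from math import comb

      def binomial_likelihood(k, n, p):
          # Probability of observing k heads in n tosses under bias p.
          return comb(n, k) * p**k * (1 - p)**(n - k)

      # Illustrative evidence: 7 heads in 10 tosses; two rival bias hypotheses.
      k, n = 7, 10
      l1 = binomial_likelihood(k, n, 0.7)   # Pr(E|H1), H1: bias 0.7
      l2 = binomial_likelihood(k, n, 0.5)   # Pr(E|H2), H2: fair coin
      # Law of Likelihood: E favors H1 over H2 iff Pr(E|H1) > Pr(E|H2).
      print(f"Pr(E|H1) = {l1:.4f}, Pr(E|H2) = {l2:.4f}, likelihood ratio = {l1/l2:.2f}")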
  5. Giulianella Coletti & Barbara Vantaggi (2006). Representability of Ordinal Relations on a Set of Conditional Events. Theory and Decision 60 (2-3):137-174.
  6. S. DeVito (1997). A Gruesome Problem for the Curve-Fitting Solution. British Journal for the Philosophy of Science 48 (3):391-396.
    This paper is a response to Forster and Sober's [1994] solution to the curve-fitting problem. If their solution is correct, it will provide us with a solution to the New Riddle of Induction as well as provide a basis for choosing realism over conventionalism. Examining this solution is also important as Forster and Sober incorporate it in much of their other philosophical work (see Forster [1995a, b, 1994] and Sober [1996, 1995, 1993]). I argue that Forster and Sober's solution is (...)
  7. Scott DeVito (1997). A Gruesome Problem for the Curve-Fitting Solution. British Journal for the Philosophy of Science 48 (3):391-396.
    This paper is a response to Forster and Sober's [1994] solution to the curve-fitting problem. If their solution is correct, it will provide us with a solution to the New Riddle of Induction as well as provide a basis for choosing realism over conventionalism. Examining this solution is also important as Forster and Sober incorporate it in much of their other philosophical work (see Forster [1995a, b, 1994] and Sober [1996, 1995, 1993]). I argue that Forster and Sober's solution is (...)
  8. Richard Dietz (2010). On Generalizing Kolmogorov. Notre Dame Journal of Formal Logic 51 (3):323-335.
    In his "From classical to constructive probability," Weatherson offers a generalization of Kolmogorov's axioms of classical probability that is neutral regarding the logic for the object-language. Weatherson's generalized notion of probability can hardly be regarded as adequate, as the example of supervaluationist logic shows. At least, if we model credences as betting rates, the Dutch-Book argument strategy does not support Weatherson's notion of supervaluationist probability, but various alternatives. Depending on whether supervaluationist bets are specified as (a) conditional bets (Cantwell), (b) (...)
  9. Frank Döring (2000). Conditional Probability and Dutch Books. Philosophy of Science 67 (3):391-409.
    There is no set Δ of probability axioms that meets the following three desiderata: (1) Δ is vindicated by a Dutch book theorem; (2) Δ does not imply regularity (and thus allows, among other things, updating by conditionalization); (3) Δ constrains the conditional probability q(·,z) even when the unconditional probability p(z) (=q(z,T)) equals 0. This has significant consequences for Bayesian epistemology, some of which are discussed.
  10. Igor Douven (1999). Inference to the Best Explanation Made Coherent. Philosophy of Science 66 (Supplement):S424-S435.
    Van Fraassen (1989) argues that Inference to the Best Explanation is incoherent in the sense that adopting it as a rule for belief change will make one susceptible to a dynamic Dutch book. The present paper argues against this. A strategy is described that allows us to infer to the best explanation free of charge.
  11. Kenny Easwaran (2013). Expected Accuracy Supports Conditionalization—and Conglomerability and Reflection. Philosophy of Science 80 (1):119-142.
  12. Kenny Easwaran (2011). Bayesianism II: Applications and Criticisms. Philosophy Compass 6 (5):321-332.
    In the first paper, I discussed the basic claims of Bayesianism (that degrees of belief are important, that they obey the axioms of probability theory, and that they are rationally updated by either standard or Jeffrey conditionalization) and the arguments that are often used to support them. In this paper, I will discuss some applications these ideas have had in confirmation theory, epistemology, and statistics, and criticisms of these applications.
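    Since the abstract above appeals to both standard and Jeffrey conditionalization, here is a minimal sketch of the two update rules on a toy finite space; the worlds, credences, and propositions are illustrative assumptions.

      # Toy space of four worlds with prior credences; A is a proposition (a set of
      # worlds) and {E, not-E} the partition the new experience bears on.
      prior = {"w1": 0.3, "w2": 0.2, "w3": 0.4, "w4": 0.1}
      A, E = {"w1", "w3"}, {"w1", "w2"}

      def prob(p, s):
          return sum(p[w] for w in s)

      def conditionalize(p, e):
          # Standard conditionalization: shift all credence onto e and renormalize.
          pe = prob(p, e)
          return {w: (p[w] / pe if w in e else 0.0) for w in p}

      def jeffrey(p, e, q):
          # Jeffrey conditionalization: new credence q on e and 1-q on its complement,
          # with the credences inside each cell rescaled proportionally.
          pe = prob(p, e)
          return {w: (q * p[w] / pe if w in e else (1 - q) * p[w] / (1 - pe)) for w in p}

      print(prob(conditionalize(prior, E), A))   # credence in A after learning E for certain (about 0.6)
      print(prob(jeffrey(prior, E, 0.8), A))     # credence in A after a Jeffrey shift to 0.8 on E (about 0.64)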
  13. Adam Elga (2013). The Puzzle of the Unmarked Clock and the New Rational Reflection Principle. Philosophical Studies 164 (1):127-139.
    The “puzzle of the unmarked clock” derives from a conflict between the following: (1) a plausible principle of epistemic modesty, and (2) “Rational Reflection”, a principle saying how one’s beliefs about what it is rational to believe constrain the rest of one’s beliefs. An independently motivated improvement to Rational Reflection preserves its spirit while resolving the conflict.
  14. Branden Fitelson (2011). Favoring, Likelihoodism, and Bayesianism. [REVIEW] Philosophy and Phenomenological Research 83 (3):666-672.
    This (brief) note is about the (evidential) “favoring” relation. Pre-theoretically, favoring is a three-place (epistemic) relation, between an evidential proposition E and two hypotheses H1 and H2. Favoring relations are expressed via locutions of the form: E favors H1 over H2. Strictly speaking, favoring should really be thought of as a four-place relation, between E, H1, H2, and a corpus of background evidence K. But, for present purposes (which won't address issues involving K), I will suppress the background corpus, so (...)
  15. Branden Fitelson (2007). Likelihoodism, Bayesianism, and Relational Confirmation. Synthese 156 (3):473-489.
    Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" (...)
  16. Malcolm Forster & Elliott Sober (1994). How to Tell When Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions. British Journal for the Philosophy of Science 45 (1):1-35.
    Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light (...)
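    The Akaike result described above trades off fit against the number of adjustable parameters: AIC = 2k − 2·ln(maximum likelihood), which for least-squares fitting with Gaussian errors reduces, up to an additive constant, to n·ln(RSS/n) + 2k. A minimal sketch on synthetic data follows; the data-generating line, noise level, and candidate polynomial families are illustrative assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 30)
      y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=x.size)   # true curve: a straight line

      def aic(degree):
          # Least-squares polynomial fit; AIC for Gaussian errors, up to an additive constant.
          coeffs = np.polyfit(x, y, degree)
          rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
          k = degree + 2   # polynomial coefficients plus the error variance
          return x.size * np.log(rss / x.size) + 2 * k

      for degree in range(5):
          print(degree, round(aic(degree), 2))   # lower AIC = higher estimated predictive accuracy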
  17. MR Forster (1999). Model Selection in Science: The Problem of Language Variance. British Journal for the Philosophy of Science 50 (1):83-102.
    Recent solutions to the curve-fitting problem, described in Forster and Sober ([1995]), trade off the simplicity and fit of hypotheses by defining simplicity as the paucity of adjustable parameters. Scott De Vito ([1997]) charges that these solutions are 'conventional' because he thinks that the number of adjustable parameters may change when the hypotheses are described differently. This he believes is exactly what is illustrated in Goodman's new riddle of induction, otherwise known as the grue problem. However, the 'number of adjustable (...)
  18. Alan Hájek (2011). Triviality Pursuit. Topoi 30 (1):3-15.
    The thesis that probabilities of conditionals are conditional probabilities has putatively been refuted many times by so-called ‘triviality results’, although it has also enjoyed a number of resurrections. In this paper I assault it yet again with a new such result. I begin by motivating the thesis and discussing some of the philosophical ramifications of its fluctuating fortunes. I will canvas various reasons, old and new, why the thesis seems plausible, and why we should care about its fate. I will (...)
  19. Stephan Hartmann & Luc Bovens (2006). An Impossibility Result for Coherence Rankings. Philosophical Studies 128 (1):77-91.
    If we receive information from multiple independent and partially reliable information sources, then whether we are justified in believing these information items is affected by how reliable the sources are, by how well the information coheres with our background beliefs and by how internally coherent the information is. We consider the following question. Is coherence a separable determinant of our degree of belief, i.e. is it the case that the more coherent the new information is, the more justified we are (...)
  20. Brian Hedden, Time-Slice Rationality.
  21. Kenneth Einar Himma (2002). Prior Probabilities and Confirmation Theory: A Problem with the Fine-Tuning Argument. [REVIEW] International Journal for Philosophy of Religion 51 (3):175-194.
    Fine-tuning arguments attempt to infer God’s existence from the empirical fact that life would not be possible if any of approximately two-dozen fundamental laws and properties of the universe had been even slightly different. In this essay, I consider a version that relies on the following principle: if an observation O is more likely to occur under hypothesis H1 than under hypothesis H2, then O supports accepting H1 over H2. I argue that this particular application of this principle is vulnerable (...)
  22. Colin Howson (2008). De Finetti, Countable Additivity, Consistency and Coherence. British Journal for the Philosophy of Science 59 (1):1-23.
    Many people believe that there is a Dutch Book argument establishing that the principle of countable additivity is a condition of coherence. De Finetti himself did not, but for reasons that are at first sight perplexing. I show that he rejected countable additivity, and hence the Dutch Book argument for it, because countable additivity conflicted with intuitive principles about the scope of authentic consistency constraints. These he often claimed were logical in nature, but he never attempted to relate this idea (...)
  23. Franz Huber (2014). New Foundations for Counterfactuals. Synthese 191 (10):2167-2193.
    Philosophers typically rely on intuitions when providing a semantics for counterfactual conditionals. However, intuitions regarding counterfactual conditionals are notoriously shaky. The aim of this paper is to provide a principled account of the semantics of counterfactual conditionals. This principled account is provided by what I dub the Royal Rule, a deterministic analogue of the Principal Principle relating chance and credence. The Royal Rule says that an ideal doxastic agent’s initial grade of disbelief in a proposition A, given that the (...)
  24. Michael Huemer (2009). Explanationist Aid for the Theory of Inductive Logic. British Journal for the Philosophy of Science 60 (2):345-375.
    A central problem facing a probabilistic approach to the problem of induction is the difficulty of sufficiently constraining prior probabilities so as to yield the conclusion that induction is cogent. The Principle of Indifference, according to which alternatives are equiprobable when one has no grounds for preferring one over another, represents one way of addressing this problem; however, the Principle faces the well-known problem that multiple interpretations of it are possible, leading to incompatible conclusions. I propose a partial solution to (...)
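    The standard illustration of the incompatibility mentioned above is the wine–water paradox: a mixture whose wine-to-water ratio lies between 1/3 and 3. Applying the Principle of Indifference to the ratio, and then to its reciprocal, assigns different probabilities to the same event. A minimal sketch of that calculation follows; the paradox is textbook material, used here only to illustrate the problem the abstract describes.

      from fractions import Fraction as F

      lo, hi = F(1, 3), F(3)   # the wine-to-water ratio r is known to lie in [1/3, 3]

      # Indifference over r: uniform density on [1/3, 3], so P(r <= 2) = (2 - 1/3) / (3 - 1/3).
      p_over_ratio = (F(2) - lo) / (hi - lo)

      # Indifference over the reciprocal s = 1/r, also in [1/3, 3]:
      # r <= 2 iff s >= 1/2, so P(r <= 2) = (3 - 1/2) / (3 - 1/3).
      p_over_reciprocal = (hi - F(1, 2)) / (hi - lo)

      print(p_over_ratio, p_over_reciprocal)   # 5/8 versus 15/16: same event, incompatible answers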
  25. Valeriano Iranzo (2009). Probabilidad Inicial y Éxito Probabilístico. Análisis Filosófico 29 (1):39-71.
    A controversial question in Bayesian confirmation theory is the status of prior probabilities. Although the dominant tendency among Bayesians is to hold that the only legitimate constraint on the values of these probabilities is formal consistency with the theorems of the mathematical theory of probability, other authors, proponents of what has come to be called "objective Bayesianism", defend the desirability of additional constraints. My proposal, within the framework of objective Bayesianism, takes up a suggestion (...)
  26. Benjamin C. Jantzen (forthcoming). Piecewise Versus Total Support: How to Deal with Background Information in Likelihood Arguments. Philosophical Explorations.
    The use of the Law of Likelihood (LL) as a general tool for assessing rival hypotheses has been criticized for its ambiguous treatment of background information. The LL endorses radically different answers depending on what information is designated as background versus evidence. I argue that once one distinguishes between two questions about evidentiary support, the ambiguity vanishes. I demonstrate this resolution by applying it to a debate over the status of the ‘fine-tuning argument’.
  27. I. A. Kieseppä (2001). Statistical Model Selection Criteria and the Philosophical Problem of Underdetermination. British Journal for the Philosophy of Science 52 (4):761-794.
    I discuss the philosophical significance of the statistical model selection criteria, in particular their relevance for philosophical problems of underdetermination. I present an easily comprehensible account of their simplest possible application and contrast it with their application to curve-fitting problems. I embed philosophers' earlier discussion concerning the situations in which the criteria yield implausible results into a more general framework. Among other things, I discuss a difficulty which is related to the so-called subfamily problem, and I show that it has (...)
  28. I. A. Kieseppä (2001). Statistical Model Selection Criteria and the Philosophical Problem of Underdetermination. British Journal for the Philosophy of Science 52 (4):761-794.
    I discuss the philosophical significance of the statistical model selection criteria, in particular their relevance for philosophical problems of underdetermination. I present an easily comprehensible account of their simplest possible application and contrast it with their application to curve-fitting problems. I embed philosophers' earlier discussion concerning the situations in which the criteria yield implausible results into a more general framework. Among other things, I discuss a difficulty which is related to the so-called subfamily problem, and I show that it has analogies (...)
  29. Maria Lasonen-Aarnio (2013). Disagreement and Evidential Attenuation. Noûs 47 (4):767-794.
    What sort of doxastic response is rational to learning that one disagrees with an epistemic peer who has evaluated the same evidence? I argue that even weak general recommendations run the risk of being incompatible with a pair of real epistemic phenomena, what I call evidential attenuation and evidential amplification. I focus on a popular and intuitive view of disagreement, the equal weight view. I take it to state that in cases of peer disagreement, a subject ought to end up (...)
  30. Keith Lehrer (1983). Rationality as Weighted Averaging. Synthese 57 (3):283-295.
    Weighted averaging is a method for aggregating the totality of information, both regimented and unregimented, possessed by an individual or group of individuals. The application of such a method may be warranted by a theorem of the calculus of probability, simple conditionalization, or Jeffrey's formula for probability kinematics, all of which average in terms of the prior probability of evidence statements. Weighted averaging may, however, be applied as a method of rational aggregation of the probabilities of diverse perspectives or persons (...)
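    A minimal sketch of weighted averaging (linear pooling) as described above: the aggregate credence in a proposition is the weighted sum of the individual credences, with weights summing to one. The perspectives and weights below are illustrative assumptions.

      def pool(credence_functions, weights):
          # Linear pooling: the weighted average of several probability assignments.
          assert abs(sum(weights) - 1.0) < 1e-9
          propositions = credence_functions[0].keys()
          return {a: sum(w * c[a] for w, c in zip(weights, credence_functions))
                  for a in propositions}

      # Three perspectives on the same proposition A, weighted by (say) judged reliability.
      perspectives = [{"A": 0.9}, {"A": 0.6}, {"A": 0.2}]
      print(pool(perspectives, [0.5, 0.3, 0.2]))   # aggregate credence in A of about 0.67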
  31. Hanti Lin & Kevin T. Kelly (2012). A Geo-Logical Solution to the Lottery Paradox, with Applications to Conditional Logic. Synthese 186 (2):531-575.
  32. Ranald R. Macdonald (2000). The Limits of Probability Modelling: A Serendipitous Tale of Goldfish, Transfinite Numbers, and Pieces of String. [REVIEW] Mind and Society 1 (2):17-38.
    This paper is about the differences between probabilities and beliefs and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed in rational numbers, they suppose the existence of veridical representations and, when viewed as parts of a probability model, they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events (...)
  33. Patrick Maher (2006). Book Review: David Christensen. Putting Logic in its Place: Formal Constraints on Rational Belief. [REVIEW] Notre Dame Journal of Formal Logic 47 (1):133-149.
  34. Anna Mahtani (2014). Dutch Books, Coherence, and Logical Consistency. Noûs 48 (3).
    In this paper I present a new way of understanding Dutch Book Arguments: the idea is that an agent is shown to be incoherent iff (s)he would accept as fair a set of bets that would result in a loss under any interpretation of the claims involved. This draws on a standard definition of logical inconsistency. On this new understanding, the Dutch Book Arguments for the probability axioms go through, but the Dutch Book Argument for Reflection fails. The question of (...)
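    As a worked illustration of the kind of sure-loss book the abstract describes: an agent whose credences in A and in not-A sum to more than 1 will accept, as individually fair, a pair of bets that loses money however things turn out. The numbers below are illustrative assumptions.

      # A unit bet on proposition X pays 1 if X is true and 0 otherwise; an agent with
      # credence c in X regards c as a fair price for that bet.
      credence_A, credence_not_A = 0.6, 0.6   # incoherent: the two credences sum to 1.2

      def net_payoff(a_is_true):
          # The agent buys both bets at its own fair prices.
          price_paid = credence_A + credence_not_A
          winnings = (1.0 if a_is_true else 0.0) + (0.0 if a_is_true else 1.0)
          return winnings - price_paid

      print(net_payoff(True), net_payoff(False))   # negative either way: a guaranteed loss of 0.2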
  35. Matthew W. Parker, More Trouble for Regular Probabilities.
    In standard probability theory, probability zero is not the same as impossibility. But many have suggested that only impossible events should have probability zero. This can be arranged if we allow infinitesimal probabilities, but infinitesimals do not solve all of the problems. We will see that regular probabilities are not invariant over rigid transformations, even for simple, bounded, countable, constructive, and disjoint sets. Hence, regular chances cannot be determined by space-time invariant physical laws, and regular credences cannot satisfy seemingly reasonable (...)
  36. Raghav Ramachandran, Arthur Ramer & Abhaya C. Nayak (2012). Probabilistic Belief Contraction. Minds and Machines 22 (4):325-351.
    Probabilistic belief contraction has been a much neglected topic in the field of probabilistic reasoning. This is due to the difficulty in establishing a reasonable reversal of the effect of Bayesian conditionalization on a probability distribution. We show that indifferent contraction, a solution proposed by Ramer to this problem through a judicious use of the principle of maximum entropy, is a probabilistic version of a full meet contraction. We then propose variations of indifferent contraction, using both the Shannon entropy measure (...)
  37. Susanna Rinard (2014). A New Bayesian Solution to the Paradox of the Ravens. Philosophy of Science 81 (1):81-100.
    The canonical Bayesian solution to the ravens paradox faces a problem: it entails that black non-ravens disconfirm the hypothesis that all ravens are black. I provide a new solution that avoids this problem. On my solution, black ravens confirm that all ravens are black, while non-black non-ravens and black non-ravens are neutral. My approach is grounded in certain relations of epistemic dependence, which, in turn, are grounded in the fact that the kind raven is more natural than the kind black. (...)
  38. William Roche (2012). A Weaker Condition for Transitivity in Probabilistic Support. European Journal for Philosophy of Science 2 (1):111-118.
    Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y|x) > Pr(y), y, in turn, probabilistically supports z, and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y|x) > Pr(y (...)
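    A concrete counterexample to transitivity of the kind the abstract mentions, built on a uniform distribution over five outcomes (a standard textbook-style construction introduced here for illustration, not taken from the paper): with x = {1,2,3}, y = {2,3,4}, z = {3,4,5}, x supports y and y supports z, yet x lowers the probability of z.

      from fractions import Fraction as F

      omega = {1, 2, 3, 4, 5}                      # five equiprobable outcomes
      x, y, z = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

      def pr(a):
          return F(len(a), len(omega))

      def pr_given(a, b):
          return F(len(a & b), len(b))             # conditional probability on a uniform space

      print(pr_given(y, x) > pr(y))   # True:  x supports y  (2/3 > 3/5)
      print(pr_given(z, y) > pr(z))   # True:  y supports z  (2/3 > 3/5)
      print(pr_given(z, x) > pr(z))   # False: x does not support z  (1/3 < 3/5)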
  39. William Roche (2012). Transitivity and Intransitivity in Evidential Support: Some Further Results. Review of Symbolic Logic 5 (2):259-268.
    Igor Douven establishes several new intransitivity results concerning evidential support. I add to Douven’s very instructive discussion by establishing two further intransitivity results and a transitivity result.
  40. William Roche & Tomoji Shogenji (2013). Confirmation, Transitivity, and Moore: The Screening-Off Approach. Philosophical Studies (3):1-21.
    It is well known that the probabilistic relation of confirmation is not transitive in that even if E confirms H1 and H1 confirms H2, E may not confirm H2. In this paper we distinguish four senses of confirmation and examine additional conditions under which confirmation in different senses becomes transitive. We conduct this examination both in the general case where H1 confirms H2 and in the special case where H1 also logically entails H2. Based on these analyses, we argue that (...)
  41. Jeffrey Sanford Russell, John Hawthorne & Lara Buchak (forthcoming). Groupthink. Philosophical Studies:1-23.
    How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather (...)
  42. Gerhard Schurz & Paul D. Thorn (2012). Reward Versus Risk in Uncertain Inference: Theorems and Simulations. Review of Symbolic Logic 5 (4):574-612.
    Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions that express high conditional probabilities. In this paper we investigate four prominent LP systems, the systems O, P, Z, and QC. These systems differ in the number of inferences they license (O ⊂ P ⊂ Z ⊂ QC). LP systems that license more inferences enjoy the possible reward of deriving more true and informative conclusions, but with this possible reward comes the risk of drawing more false or uninformative conclusions. In (...)
  43. Jan Sprenger, Surprise and Evidence in Statistical Model Checking.
    There is considerable confusion about the role of p-values in statistical model checking. To clarify that point, I introduce the distinction between measures of surprise and measures of evidence which come with different epistemological functions. I argue that p-values, often understood as measures of evidence against a null model, do not count as proper measures of evidence and are closer to measures of surprise. Finally, I sketch how the problem of old evidence may be tackled by acknowledging the epistemic role (...)
  44. Weng Hong Tang (2012). Regularity Reformulated. Episteme 9 (4):329-343.
    This paper focuses on the view that rationality requires that our credences be regular. I go through different formulations of the requirement, and show that they face several problems. I then formulate a version of the requirement that solves most, if not all, of these problems. I conclude by showing that an argument thought to support the requirement as traditionally formulated actually does not; if anything, the argument, slightly modified, supports my version of the requirement.
  45. Paul D. Thorn (2013). Cognitivist Probabilism. In Vit Punochar & Petr Svarny (eds.), The Logica Yearbook 2012. College Publications. 201-213.
    In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a (...)
  46. Paul D. Thorn (2007). The Trouble with Pollock’s Principle of Agreement. The Reasoner 1 (8):9-10.
  47. Paul D. Thorn & Gerhard Schurz (2014). A Utility Based Evaluation of Logico-Probabilistic Systems. Studia Logica 102 (4):867-890.
    Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant (...)
  48. Paul D. Thorn & Gerhard Schurz (2013). Ampliative Inference Under Varied Entropy Levels. In Christoph Beierle & Gabriele Kern-Isberner (eds.), Proceedings of the 4th Workshop on Dynamics of Knowledge and Belief (DKB-2013). Fakultät für Mathematik und Informatik, FernUniversität in Hagen. 77-88.
  49. Paul D. Thorn & Gerhard Schurz (2012). Meta-Induction and the Wisdom of Crowds. Analyse and Kritik 34 (2):339-366.
    Meta-induction, in its various forms, is an imitative prediction method, where the prediction methods and the predictions of other agents are imitated to the extent that those methods or agents have proven successful in the past. In past work, Schurz demonstrated the optimality of meta-induction as a method for predicting unknown events and quantities. However, much recent discussion, along with formal and empirical work, on the Wisdom of Crowds has extolled the virtue of diverse and independent judgment as essential to (...)
  50. Michael G. Titelbaum (2012). An Embarrassment for Double-Halfers. Thought 1 (2):146-151.
    “Double-halfers” think that throughout the Sleeping Beauty Problem, Beauty should keep her credence that a fair coin flip came up heads equal to 1/2. I introduce a new wrinkle to the problem that shows even double-halfers can't keep Beauty's credences equal to the objective chances for all coin-flip propositions. This leaves no way to deny that self-locating information generates an unexpected kind of inadmissible evidence.