Results for 'Bayesian calibration'

999 results found
  1. Objective Bayesian Calibration and the Problem of Non-convex Evidence.Gregory Wheeler - 2012 - British Journal for the Philosophy of Science 63 (4):841-850.
    Jon Williamson's Objective Bayesian Epistemology relies upon a calibration norm to constrain credal probability by both quantitative and qualitative evidence. One role of the calibration norm is to ensure that evidence works to constrain a convex set of probability functions. This essay brings into focus a problem for Williamson's theory when qualitative evidence specifies non-convex constraints.
    6 citations
  2. Making decisions with evidential probability and objective Bayesian calibration inductive logics.Mantas Radzvilas, William Peden & Francesco De Pretis - forthcoming - International Journal of Approximate Reasoning:1-37.
    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short-run? Under what circumstances do they do better or worse? What (...)
  3. Bayesian Statistics in Radiocarbon Calibration.Daniel Steel - 2001 - Philosophy of Science 68 (S3):S153-S164.
    Critics of Bayesianism often assert that scientists are not Bayesians. The widespread use of Bayesian statistics in the field of radiocarbon calibration is discussed in relation to this charge. This case study illustrates the willingness of scientists to use Bayesian statistics when the approach offers some advantage, while continuing to use orthodox methods in other contexts. The case of radiocarbon calibration, therefore, suggests a picture of statistical practice in science as eclectic and pragmatic rather than rigidly (...)
    3 citations
  4. Calibration and the Epistemological Role of Bayesian Conditionalization.Marc Lange - 1999 - Journal of Philosophy 96 (6):294-324.
  5. Bayesian statistics in radiocarbon calibration.Daniel Steel - 2001 - Proceedings of the Philosophy of Science Association 2001 (3):S153-S164.
    Critics of Bayesianism often assert that scientists are not Bayesians. The widespread use of Bayesian statistics in the field of radiocarbon calibration is discussed in relation to this charge. This case study illustrates the willingness of scientists to use Bayesian statistics when the approach offers some advantage, while continuing to use orthodox methods in other contexts. The case of radiocarbon calibration, therefore, suggests a picture of statistical practice in science as eclectic and pragmatic rather than rigidly (...)
    3 citations
  6. Calibration and the Epistemological Role of Bayesian Conditionalization.Marc Lange - 1998 - Mind 107 (427).
  7. Calibration and Convexity: Response to Gregory Wheeler.Jon Williamson - 2012 - British Journal for the Philosophy of Science 63 (4):851-857.
    This note responds to some criticisms of my recent book In Defence of Objective Bayesianism that were provided by Gregory Wheeler in his ‘Objective Bayesian Calibration and the Problem of Non-convex Evidence’.
    2 citations
  8. Failure of Calibration is Typical.Gordon Belot - 2013 - Statistics and Probability Letters 83:2316--2318.
    Schervish (1985b) showed that every forecasting system is noncalibrated for uncountably many data sequences that it might see. This result is strengthened here: from a topological point of view, failure of calibration is typical and calibration rare. Meanwhile, Bayesian forecasters are certain that they are calibrated---this invites worries about the connection between Bayesianism and rationality.
    4 citations
  9. Calibration, Validation, and Confirmation.Mathias Frisch - 2019 - In Claus Beisbart & Nicole J. Saam (eds.), Computer Simulation Validation: Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives. Springer Verlag. pp. 981-1004.
    This chapter examines the role of parameter calibration in the confirmation and validation of complex computer simulation models. I examine the question to what extent calibration data can confirm or validate the calibrated model, focusing in particular on Bayesian approaches to confirmation. I distinguish several different Bayesian approaches to confirmation and argue that complex simulation models exhibit a predictivist effect: Complex computer simulation models constitute a case in which predictive success, as opposed to the mere accommodation of evidence, (...)
    1 citation
  10. Climate Models, Calibration, and Confirmation.Katie Steele & Charlotte Werndl - 2013 - British Journal for the Philosophy of Science 64 (3):609-635.
    We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to (...)
    31 citations
  11. Bayesian Networks and the Problem of Unreliable Instruments.Luc Bovens & Stephan Hartmann - 2002 - Philosophy of Science 69 (1):29-72.
    We appeal to the theory of Bayesian Networks to model different strategies for obtaining confirmation for a hypothesis from experimental test results provided by less than fully reliable instruments. In particular, we consider (i) repeated measurements of a single test consequence of the hypothesis, (ii) measurements of multiple test consequences of the hypothesis, (iii) theoretical support for the reliability of the instrument, and (iv) calibration procedures. We evaluate these strategies on their relative merits under idealized conditions and show (...)
    28 citations
  12. Climate models, calibration, and confirmation.Charlotte Werndl & Katie Steele - 2013 - British Journal for the Philosophy of Science 64 (3):609-635.
    We argue that concerns about double-counting -- using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate -- deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
    20 citations
  13. Objective bayesian probabilistic logic.Jon Williamson - 2008
    This paper develops connections between objective Bayesian epistemology—which holds that the strengths of an agent’s beliefs should be representable by probabilities, should be calibrated with evidence of empirical probability, and should otherwise be equivocal—and probabilistic logic. After introducing objective Bayesian epistemology over propositional languages, the formalism is extended to handle predicate languages. A rather general probabilistic logic is formulated and then given a natural semantics in terms of objective Bayesian epistemology. The machinery of objective Bayesian nets (...)
    5 citations
  14. Equivocation for the Objective Bayesian.George Masterton - 2015 - Erkenntnis 80 (2):403-432.
    According to Williamson , the difference between empirical subjective Bayesians and objective Bayesians is that, while both hold reasonable credence to be calibrated to evidence, the objectivist also takes such credence to be as equivocal as such calibration allows. However, Williamson’s prescription for equivocation generates constraints on reasonable credence that are objectionable. Herein Williamson’s calibration norm is explicated in a novel way that permits an alternative equivocation norm. On this alternative account, evidence calibrated probability functions are recognised as (...)
    1 citation
  15. Visual aids improve diagnostic inferences and metacognitive judgment calibration.Rocio Garcia-Retamero, Edward T. Cokely & Ulrich Hoffrage - 2015 - Frontiers in Psychology 6:136977.
    Visual aids can improve comprehension of risks associated with medical treatments, screenings, and lifestyles. Do visual aids also help decision makers accurately assess their risk comprehension? That is, do visual aids help them become well calibrated? To address these questions, we investigated the benefits of visual aids displaying numerical information and measured accuracy of self-assessment of diagnostic inferences (i.e., metacognitive judgment calibration) controlling for individual differences in numeracy. Participants included 108 patients who made diagnostic inferences about three medical tests (...)
    7 citations
  16. TORC3: Token-Ring Clearing Heuristic for Currency Circulation.Julio Michael Stern, Carlos Humes, Marcelo de Souza Lauretto, Fabio Nakano, Carlos Alberto de Braganca Pereira & Guilherme Frederico Gazineu Rafare - 2012 - AIP Conference Proceedings 1490:179-188.
    Clearing algorithms are at the core of modern payment systems, facilitating the settling of multilateral credit messages with (near) minimum transfers of currency. Traditional clearing procedures use batch processing based on MILP - mixed-integer linear programming algorithms. The MILP approach demands intensive computational resources; moreover, it is also vulnerable to operational risks generated by possible defaults during the inter-batch period. This paper presents TORC3 - the Token-Ring Clearing Algorithm for Currency Circulation. In contrast to the MILP approach, TORC3 is a (...)
  17. Legacy Data, Radiocarbon Dating, and Robustness Reasoning.Alison Wylie - manuscript
    *PSA 2016, symposium on “Data in Time: Epistemology of Historical Data” organized by Sabina Leonelli, 5 November 2016* *See published version: "Radiocarbon Dating in Archaeology: Triangulation and Traceability" in Data Journeys in the Sciences (2020) - link below* Archaeologists put a premium on pressing “legacy data” into service, given the notoriously selective and destructive nature of their practices of data capture. Legacy data consist of material and records that have been assembled over decades, sometimes centuries, often by means and for purposes (...)
    1 citation
  18. Objective Bayesianism and the maximum entropy principle.Jürgen Landes & Jon Williamson - 2013 - Entropy 15 (9):3528-3591.
    Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective (...)
    18 citations
  19. Reliability for degrees of belief.Jeff Dunn - 2015 - Philosophical Studies 172 (7):1929-1952.
    We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to (...)
    26 citations
  20. Invariant Equivocation.Jürgen Landes & George Masterton - 2017 - Erkenntnis 82 (1):141-167.
    Objective Bayesians hold that degrees of belief ought to be chosen in the set of probability functions calibrated with one’s evidence. The particular choice of degrees of belief is via some objective, i.e., not agent-dependent, inference process that, in general, selects the most equivocal probabilities from among those compatible with one’s evidence. Maximising entropy is what drives these inference processes in recent works by Williamson and Masterton though they disagree as to what should have its entropy maximised. With regard to (...)
    4 citations
  21. Predictivism and old evidence: a critical look at climate model tuning.Mathias Frisch - 2015 - European Journal for Philosophy of Science 5 (2):171-190.
    Many climate scientists have made claims that may suggest that evidence used in tuning or calibrating a climate model cannot be used to evaluate the model. By contrast, the philosophers Katie Steele and Charlotte Werndl have argued that, at least within the context of Bayesian confirmation theory, tuning is simply an instance of hypothesis testing. In this paper I argue for a weak predictivism and in support of a nuanced reading of climate scientists’ concerns about tuning: there are cases, (...)
    15 citations
  22. The epistemological status of vision and its implications for design.Dhanraj Vishwanath - 2005 - Axiomathes 15 (3):399-486.
    Computational theories of vision typically rely on the analysis of two aspects of human visual function: (1) object and shape recognition (2) co-calibration of sensory measurements. Both these approaches are usually based on an inverse-optics model, where visual perception is viewed as a process of inference from a 2D retinal projection to a 3D percept within a Euclidean space schema. This paradigm has had great success in certain areas of vision science, but has been relatively less successful in understanding (...)
    7 citations
  23. Extending the Agent in QBism.Jacques Pienaar - 2020 - Foundations of Physics 50 (12):1894-1920.
    According to the subjective Bayesian interpretation of quantum mechanics, the instruments used to measure quantum systems are to be regarded as an extension of the senses of the agent who is using them, and quantum states describe the agent’s expectations for what they will experience through these extended senses. How can QBism then account for the fact that instruments must be calibrated before they can be used to ‘sense’ anything; some instruments are more precise than others; more precise instruments (...)
    5 citations
  24. The Diversity of Model Tuning Practices in Climate Science.Charlotte Werndl & Katie Steele - 2016 - Philosophy of Science 83 (5):113-114.
    Many examples of calibration in climate science raise no alarms regarding model reliability. We examine one example and show that, in employing Classical Hypothesis-testing, it involves calibrating a base model against data that is also used to confirm the model. This is counter to the "intuitive position". We argue, however, that aspects of the intuitive position are upheld by some methods, in particular, the general Cross-validation method. How Cross-validation relates to other prominent Classical methods such as the Akaike Information (...)
    3 citations
  25. Higher-Order Evidence and the Dynamics of Self-Location: An Accuracy-Based Argument for Calibrationism.Brett Topey - 2022 - Erkenntnis 89 (4):1407-1433.
    The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it’s claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a (...)
    1 citation
  26. Strategic Learning and its Limits.H. Peyton Young - 2004 - Oxford University Press UK.
    In this concise book based on his Arne Ryde Lectures in 2002, Young suggests a conceptual framework for studying strategic learning and highlights theoretical developments in the area. He discusses the interactive learning problem; reinforcement and regret; equilibrium; conditional no-regret learning; prediction, postdiction, and calibration; fictitious play and its variants; Bayesian learning; and hypothesis testing. Young's framework emphasizes the amount of information required to implement different types of learning rules, criteria for evaluating their performance, and alternative notions of (...)
    11 citations
  27. Justifying Objective Bayesianism on Predicate Languages.Jürgen Landes & Jon Williamson - 2015 - Entropy 17 (4):2459-2543.
    Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting (...)
    9 citations
  28. Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.Katie Steele & Charlotte Werndl - 2016 - British Journal for the Philosophy of Science:axw024.
    This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in (...)
    4 citations
  29. Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.Charlotte Werndl & Katie Steele - 2018 - British Journal for the Philosophy of Science 69 (2):351-375.
    This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in (...)
    1 citation
  30. A practical philosophy of complex climate modelling.Gavin A. Schmidt & Steven Sherwood - 2015 - European Journal for Philosophy of Science 5 (2):149-169.
    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project. We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naïve predictions. (...)
    8 citations
  31. Risk Assessment and Uncertainty.Kristin Shrader-Frechette - 1988 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1988:504-517.
    The "prevailing opinion" among decision theorists, according to John Harsanyi, is to use the Bayesian rule, even in situations of uncertainty. I want to argue that the prevailing opinion is wrong, at least in the case of societal risks under uncertainty. Admittedly Bayesian rules are better in many cases of individual risk or certainty. (Both Bayesian and maximin strategies are sometimes needed.) Although I shall not take the time to defend all these points in detail, I shall (...)
    1 citation
  32. Radiocarbon Dating in Archaeology: Triangulation and Traceability.Alison Wylie - 2020 - In Sabina Leonelli & Niccolò Tempini (eds.), Data Journeys in the Sciences. Springer. pp. 285-301.
    When radiocarbon dating techniques were applied to archaeological material in the 1950s they were hailed as a revolution. At last archaeologists could construct absolute chronologies anchored in temporal data backed by immutable laws of physics. This would make it possible to mobilize archaeological data across regions and time-periods on a global scale, rendering obsolete the local and relative chronologies on which archaeologists had long relied. As profound as the impact of 14C dating has been, it has had a long and (...)
    4 citations
  33. Bayesian Justification.Paul Weirich - 1994 - In Dag Prawitz & Dag Westerståhl (eds.), Logic and Philosophy of Science in Uppsala. Kluwer Academic Publishers. pp. 245.
     
  34. A Context‐Dependent Bayesian Account for Causal‐Based Categorization.Nicolás Marchant, Tadeg Quillien & Sergio E. Chaigneau - 2023 - Cognitive Science 47 (1):e13240.
    The causal view of categories assumes that categories are represented by features and their causal relations. To study the effect of causal knowledge on categorization, researchers have used Bayesian causal models. Within that framework, categorization may be viewed as dependent on a likelihood computation (i.e., the likelihood of an exemplar with a certain combination of features, given the category's causal model) or as a posterior computation (i.e., the probability that the exemplar belongs to the category, given its features). Across (...)
    1 citation
  35. Bayesian Perspectives on Mathematical Practice.James Franklin - 2020 - In Bharath Sririman (ed.), Handbook of the History and Philosophy of Mathematical Practice. Cham: Springer. pp. 2711-2726.
    Mathematicians often speak of conjectures as being confirmed by evidence that falls short of proof. For their own conjectures, evidence justifies further work in looking for a proof. Those conjectures of mathematics that have long resisted proof, such as the Riemann hypothesis, have had to be considered in terms of the evidence for and against them. In recent decades, massive increases in computer power have permitted the gathering of huge amounts of numerical evidence, both for conjectures in pure mathematics and (...)
  36. Calibration dilemmas in the ethics of distribution.Jacob M. Nebel & H. Orri Stefánsson - 2023 - Economics and Philosophy 39 (1):67-98.
    This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several “calibration dilemmas,” in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities. We first lay out a series of such dilemmas for prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that they are subject to even more forceful calibration dilemmas than prioritarian (...)
    3 citations
  37. bayesvl: Visually Learning the Graphical Structure of Bayesian Networks and Performing MCMC with 'Stan'.Quan-Hoang Vuong & Viet-Phuong La - 2019 - Open Science Framework 2019:01-47.
  38. Fine-tuning in the context of Bayesian theory testing.Luke A. Barnes - 2018 - European Journal for Philosophy of Science 8 (2):253-269.
    Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are “unnaturally” small, which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe’s ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on (...)
    5 citations
  39. The curve fitting problem: A bayesian rejoinder.Prasanta S. Bandyopadhyay & Robert J. Boik - 1999 - Philosophy of Science 66 (3):402.
    In the curve fitting problem two conflicting desiderata, simplicity and goodness-of-fit pull in opposite directions. To solve this problem, two proposals, the first one based on Bayes's theorem criterion (BTC) and the second one advocated by Forster and Sober based on Akaike's Information Criterion (AIC) are discussed. We show that AIC, which is frequentist in spirit, is logically equivalent to BTC, provided that a suitable choice of priors is made. We evaluate the charges against Bayesianism and contend that AIC approach (...)
    11 citations
  40. The Rules of Logic Composition for the Bayesian Epistemic e-Values.Wagner Borges & Julio Michael Stern - 2007 - Logic Journal of the IGPL 15 (5-6):401-420.
    In this paper, the relationship between the e-value of a complex hypothesis, H, and those of its constituent elementary hypotheses, Hj, j = 1… k, is analyzed, in the independent setup. The e-value of a hypothesis H, ev, is a Bayesian epistemic, credibility or truth value defined under the Full Bayesian Significance Testing mathematical apparatus. The questions addressed concern the important issue of how the truth value of H, and the truth function of the corresponding FBST structure M, (...)
    18 citations
  41. On having no reason: dogmatism and Bayesian confirmation.Peter Kung - 2010 - Synthese 177 (1):1 - 17.
    Recently in epistemology a number of authors have mounted Bayesian objections to dogmatism. These objections depend on a Bayesian principle of evidential confirmation: Evidence E confirms hypothesis H just in case Pr(H|E) > Pr(H). I argue using Keynes' and Knight's distinction between risk and uncertainty that the Bayesian principle fails to accommodate the intuitive notion of having no reason to believe. Consider as an example an unfamiliar card game: at first, since you're unfamiliar with the game, you (...)
    20 citations
  42. A normative framework for argument quality: argumentation schemes with a Bayesian foundation.Ulrike Hahn & Jos Hornikx - 2016 - Synthese 193 (6):1833-1873.
    In this paper, it is argued that the most fruitful approach to developing normative models of argument quality is one that combines the argumentation scheme approach with Bayesian argumentation. Three sample argumentation schemes from the literature are discussed: the argument from sign, the argument from expert opinion, and the appeal to popular opinion. Limitations of the scheme-based treatment of these argument forms are identified and it is shown how a Bayesian perspective may help to overcome these. At the (...)
    24 citations
  43. Hempel's Raven paradox: A lacuna in the standard bayesian solution.Peter B. M. Vranas - 2004 - British Journal for the Philosophy of Science 55 (3):545-560.
    According to Hempel's paradox, evidence (E) that an object is a nonblack nonraven confirms the hypothesis (H) that every raven is black. According to the standard Bayesian solution, E does confirm H but only to a minute degree. This solution relies on the almost never explicitly defended assumption that the probability of H should not be affected by evidence that an object is nonblack. I argue that this assumption is implausible, and I propose a way out for Bayesians. Introduction (...)
    23 citations
  44. Rationalizing predictions by adversarial information calibration.Lei Sha, Oana-Maria Camburu & Thomas Lukasiewicz - 2023 - Artificial Intelligence 315 (C):103828.
  45. Physical correlate theory versus the indirect calibration approach.Hans-Georg Geissler - 1983 - Behavioral and Brain Sciences 6 (2):316-317.
  46. Consequences of Calibration.Robert Williams & Richard Pettigrew - forthcoming - British Journal for the Philosophy of Science:14.
    Drawing on a passage from Ramsey's Truth and Probability, we formulate a simple, plausible constraint on evaluating the accuracy of credences: the Calibration Test. We show that any additive, continuous accuracy measure that passes the Calibration Test will be strictly proper. Strictly proper accuracy measures are known to support the touchstone results of accuracy-first epistemology, for example vindications of probabilism and conditionalization. We show that our use of Calibration is an improvement on previous such appeals by showing (...)
  47. Trust and the value of overconfidence: a Bayesian perspective on social network communication.Aron Vallinder & Erik J. Olsson - 2014 - Synthese 191 (9):1991-2007.
    The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we provide justifications for the basic assumptions behind the model, and we give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part of the paper we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment (...)
    13 citations
  48. Bayesian reverse-engineering considered as a research strategy for cognitive science.Carlos Zednik & Frank Jäkel - 2016 - Synthese 193 (12):3951-3985.
    Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic (...)
    21 citations
  49. Bayesian Philosophy of Science.Jan Sprenger & Stephan Hartmann - 2019 - Oxford and New York: Oxford University Press.
    How should we reason in science? Jan Sprenger and Stephan Hartmann offer a refreshing take on classical topics in philosophy of science, using a single key concept to explain and to elucidate manifold aspects of scientific reasoning. They present good arguments and good inferences as being characterized by their effect on our rational degrees of belief. Refuting the view that there is no place for subjective attitudes in 'objective science', Sprenger and Hartmann explain the value of convincing evidence in terms (...)
    40 citations
  50. The correlational structure of natural images and the calibration of spatial representations.Roland Baddeley - 1997 - Cognitive Science 21 (3):351-372.
    2 citations
Showing results 1–50 of 999