Results for 'Frequentist inference'

999 found
  1. Objectivity and conditionality in frequentist inference. David Cox & Deborah G. Mayo - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. pp. 276.
  2. On a new philosophy of frequentist inference: exchanges with David Cox and Deborah G. Mayo. Aris Spanos - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. pp. 315.
  3. Frequentist statistical inference without repeated sampling. Paul Vos & Don Holbert - 2022 - Synthese 200 (2):1-25.
    Frequentist inference typically is described in terms of hypothetical repeated sampling but there are advantages to an interpretation that uses a single random sample. Contemporary examples are given that indicate probabilities for random phenomena are interpreted as classical probabilities, and this interpretation of equally likely chance outcomes is applied to statistical inference using urn models. These are used to address Bayesian criticisms of frequentist methods. Recent descriptions of p-values, confidence intervals, and power are viewed through the (...)
  4. Frequentist statistics as a theory of inductive inference. Deborah G. Mayo & David Cox - 2006 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press.
    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described and a principle for evidential interpretation set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations are discussed (...)
  5. A frequentist interpretation of probability for model-based inductive inference. Aris Spanos - 2013 - Synthese 190 (9):1555-1585.
    The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction that dominates current practice. (...)
  6. Statistical inference without frequentist justifications. Jan Sprenger - 2010 - In M. Dorato & M. Suàrez (eds.), EPSA Epistemology and Methodology of Science. Springer. pp. 289-297.
    Statistical inference is often justified by long-run properties of the sampling distributions, such as the repeated sampling rationale. These are frequentist justifications of statistical inference. I argue, in line with existing philosophical literature, but against a widespread image in empirical science, that these justifications are flawed. Then I propose a novel interpretation of probability in statistics, the artefactual interpretation. I believe that this interpretation is able to bridge the gap between statistical probability calculations and rational decisions on (...)
  7. Error and inference: an outsider stand on a frequentist philosophy. Christian P. Robert - 2013 - Theory and Decision 74 (3):447-461.
    This paper is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, about their frequentist and philosophical perspective on testing of hypothesis and on the criticisms of alternatives like the Bayesian approach.
  8. Why Frequentists and Bayesians Need Each Other. Jon Williamson - 2013 - Erkenntnis 78 (2):293-318.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
  9. Prior Information in Frequentist Research Designs: The Case of Neyman’s Sampling Theory. Adam P. Kubiak & Paweł Kawalec - 2022 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 53 (4):381-402.
    We analyse the issue of using prior information in frequentist statistical inference. For that purpose, we scrutinise different kinds of sampling designs in Jerzy Neyman’s theory to reveal a variety of ways to explicitly and objectively engage with prior information. Further, we turn to the debate on sampling paradigms (design-based vs. model-based approaches) to argue that Neyman’s theory supports an argument for the intermediate approach in the frequentism vs. Bayesianism debate. We also demonstrate that Neyman’s theory, by allowing (...)
  10. New Perspectives on (Some Old) Problems of Frequentist Statistics. Deborah G. Mayo & David Cox - 2010 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. pp. 247.
  11. The Role of Randomization in Bayesian and Frequentist Design of Clinical Trial. Paola Berchialla, Dario Gregori & Ileana Baldi - 2019 - Topoi 38 (2):469-475.
    A key role in inference is played by randomization, which has been extensively used in clinical trials designs. Randomization is primarily intended to prevent the source of bias in treatment allocation by producing comparable groups. In the frequentist framework of inference, randomization allows also for the use of probability theory to express the likelihood of chance as a source for the difference of end outcome. In the Bayesian framework, its role is more nuanced. The Bayesian analysis of (...)
  12. Statistical Inference and the Replication Crisis. Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength (...)
  13. Bayesianism and inference to the best explanation. Valeriano Iranzo - 2008 - Theoria 23 (1):89-106.
    Bayesianism and Inference to the best explanation are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” and “frequentist-Bayesianism”. After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay endorses a role for explanatory value in the assessment (...)
  14. Severity and Trustworthy Evidence: Foundational Problems versus Misuses of Frequentist Testing. Aris Spanos - 2022 - Philosophy of Science 89 (2):378-397.
    For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_\theta$, the trustworthiness of the ensuing evidence depends crucially on the validity of the probabilistic assumptions comprising $\mathcal{M}_\theta$, the optimality of the inference procedures employed, and the adequateness of the sample size to learn from data by securing –. It is argued that the criticism of the postdata severity evaluation of testing results based on a small n by Rochefort-Maranda is meritless because it conflates (...)
  15. Is There a Free Lunch in Inference? Jeffrey N. Rouder, Richard D. Morey, Josine Verhagen, Jordan M. Province & Eric-Jan Wagenmakers - 2016 - Topics in Cognitive Science 8 (3):520-547.
    The field of psychology, including cognitive science, is vexed by a crisis of confidence. Although the causes and solutions are varied, we focus here on a common logical problem in inference. The default mode of inference is significance testing, which has a free lunch property where researchers need not make detailed assumptions about the alternative to test the null hypothesis. We present the argument that there is no free lunch; that is, valid testing requires that researchers test the (...)
  16. Strong Faithfulness and Uniform Consistency in Causal Inference. Jiji Zhang - unknown
    A fundamental question in causal inference is whether it is possible to reliably infer the manipulation effects from observational data. There are a variety of senses of asymptotic reliability in the statistical literature, among which the most commonly discussed frequentist notions are pointwise consistency and uniform consistency (see, e.g. Bickel, Doksum [2001]). Uniform consistency is in general preferred to pointwise consistency because the former allows us to control the worst case error bounds with a finite sample size. In (...)
     
  17. The epistemic consequences of pragmatic value-laden scientific inference. Adam P. Kubiak & Paweł Kawalec - 2021 - European Journal for Philosophy of Science 11 (2):1-26.
    In this work, we explore the epistemic import of the value-ladenness of Neyman-Pearson’s Theory of Testing Hypotheses by reconstructing and extending Daniel Steel’s argument for the legitimate influence of pragmatic values on scientific inference. We focus on how to properly understand N-P’s pragmatic value-ladenness and the epistemic reliability of N-P. We develop an account of the twofold influence of pragmatic values on N-P’s epistemic reliability and replicability. We refer to these two distinguished aspects as “direct” and “indirect”. We discuss (...)
  18. Review of Deborah G. Mayo, Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. [REVIEW] Adam La Caze - 2010 - Notre Dame Philosophical Reviews 2010 (7).
    Deborah Mayo's view of science is that learning occurs by severely testing specific hypotheses. Mayo expounded this thesis in her (1996) Error and the Growth of Experimental Knowledge (EGEK). This volume consists of a series of exchanges between Mayo and distinguished philosophers representing competing views of the philosophy of science. The tone of the exchanges is lively, edifying and enjoyable. Mayo's error-statistical philosophy of science is critiqued in the light of positions which place more emphasis on large-scale theories. The result (...)
  19. Alien Abduction: Inference to the Best Explanation. Peter Lipton - 2007 - Episteme 7:239.
  20. The objectivity of Subjective Bayesianism. Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
    Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases, evidence judgments can vary substantially between scientists, it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis (...)
  21. Testing a precise null hypothesis: the case of Lindley’s paradox. Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the null (...)
  22. Bernoulli’s golden theorem in retrospect: error probabilities and trustworthy evidence. Aris Spanos - 2021 - Synthese 199 (5-6):13949-13976.
    Bernoulli’s 1713 golden theorem is viewed retrospectively in the context of modern model-based frequentist inference that revolves around the concept of a prespecified statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, defining the inductive premises of inference. It is argued that several widely-accepted claims relating to the golden theorem and frequentist inference are either misleading or erroneous: (a) Bernoulli solved the problem of inference ‘from probability to frequency’, (...)
  23. Foundational Issues in Statistical Modeling: Statistical Model Specification. Aris Spanos - 2011 - Rationality, Markets and Morals 2:146-178.
    Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach, and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically (...)
     
  24. The reference class problem is your problem too. Alan Hájek - 2007 - Synthese 156 (3):563-585.
    The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the (...)
  25. Can the Behavioral Sciences Self-correct? A Social Epistemic Study. Felipe Romero - 2016 - Studies in History and Philosophy of Science Part A 60 (C):55-69.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may (...)
  26. On the correct interpretation of p values and the importance of random variables. Guillaume Rochefort-Maranda - 2016 - Synthese 193 (6):1777-1793.
    The p value is the probability under the null hypothesis of obtaining an experimental result that is at least as extreme as the one that we have actually obtained. That probability plays a crucial role in frequentist statistical inferences. But if we take the word ‘extreme’ to mean ‘improbable’, then we can show that this type of inference can be very problematic. In this paper, I argue that it is a mistake to make such an interpretation. Under minimal (...)
  27. The Support Interval. Eric-Jan Wagenmakers, Quentin F. Gronau, Fabian Dablander & Alexander Etz - 2020 - Erkenntnis 87 (2):589-601.
    A frequentist confidence interval can be constructed by inverting a hypothesis test, such that the interval contains only parameter values that would not have been rejected by the test. We show how a similar definition can be employed to construct a Bayesian support interval. Consistent with Carnap’s theory of corroboration, the support interval contains only parameter values that receive at least some minimum amount of support from the data. The support interval is not subject to Lindley’s paradox and provides (...)
  28. The Support Interval. Alexander Etz, Fabian Dablander, Quentin F. Gronau & Eric-Jan Wagenmakers - 2020 - Erkenntnis 87 (2):589-601.
    A frequentist confidence interval can be constructed by inverting a hypothesis test, such that the interval contains only parameter values that would not have been rejected by the test. We show how a similar definition can be employed to construct a Bayesian support interval. Consistent with Carnap’s theory of corroboration, the support interval contains only parameter values that receive at least some minimum amount of support from the data. The support interval is not subject to Lindley’s paradox and provides (...)
  29. Statistics-based research – a pig in a poke? James Penston - 2011 - Journal of Evaluation in Clinical Practice 17 (5):862-867.
  30. Karl Pearson and the Logic of Science: Renouncing Causal Understanding (the Bride) and Inverted Spinozism. Julio Michael Stern - 2018 - South American Journal of Logic 4 (1):219-252.
    Karl Pearson is the leading figure of XX century statistics. He and his co-workers crafted the core of the theory, methods and language of frequentist or classical statistics – the prevalent inductive logic of contemporary science. However, before working in statistics, K. Pearson had other interests in life, namely, in this order, philosophy, physics, and biological heredity. Key concepts of his philosophical and epistemological system of anti-Spinozism (a form of transcendental idealism) are carried over to his subsequent works on (...)
  31. Tuning Your Priors to the World. Jacob Feldman - 2013 - Topics in Cognitive Science 5 (1):13-34.
    The idea that perceptual and cognitive systems must incorporate knowledge about the structure of the environment has become a central dogma of cognitive theory. In a Bayesian context, this idea is often realized in terms of “tuning the prior”—widely assumed to mean adjusting prior probabilities so that they match the frequencies of events in the world. This kind of “ecological” tuning has often been held up as an ideal of inference, in fact defining an “ideal observer.” But widespread as (...)
  32. Unit Roots: Bayesian Significance Test. Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit (...)
  33. Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package. Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.
    The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn (NUTS) sampler. The package combines (...)
  34. The Jeffreys–Lindley paradox and discovery criteria in high energy physics. Robert D. Cousins - 2017 - Synthese 194 (2):395-432.
    The Jeffreys–Lindley paradox displays how the use of a p value in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis versus a composite alternative. The p value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood (...)
  35. Facts, Values and Quanta. D. M. Appleby - 2005 - Foundations of Physics 35 (4):627-668.
    Quantum mechanics is a fundamentally probabilistic theory (at least so far as the empirical predictions are concerned). It follows that, if one wants to properly understand quantum mechanics, it is essential to clearly understand the meaning of probability statements. The interpretation of probability has excited nearly as much philosophical controversy as the interpretation of quantum mechanics. 20th century physicists have mostly adopted a frequentist conception. In this paper it is argued that we ought, instead, to adopt a logical or (...)
  36. Causation, randomness, and pseudo-randomness in John Venn's logic of chance. Byron E. Wall - 2005 - History and Philosophy of Logic 26 (4):299-319.
    In 1866, the young John Venn published The Logic of Chance, motivated largely by the desire to correct what he saw as deep fallacies in the reasoning of historical determinists such as Henry Buckle and in the optimistic heralding of a true social science by Adolphe Quetelet. Venn accepted the inevitable determinism implied by the physical sciences, but denied that the stable social statistics cited by Buckle and Quetelet implied a similar determinism in human actions. Venn maintained that probability statements (...)
  37. Are ecology and evolutionary biology “soft” sciences? Massimo Pigliucci - 2002 - Annales Zoologici Fennici 39:87-98.
    Research in ecology and evolutionary biology (evo-eco) often tries to emulate the “hard” sciences such as physics and chemistry, but to many of its practitioners it feels more like the “soft” sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones like paleontology. Furthermore, evo-eco researchers have gotten stuck (...)
  38. Critical Notice of Evidence and Evolution: The Logic Behind the Science by Elliott Sober, Cambridge University Press, 2008. Ingo Brigandt - 2011 - Canadian Journal of Philosophy 41 (1):159-186.
    This essay discusses Elliott Sober’s Evidence and Evolution: The Logic Behind the Science. Valuable to both philosophers and biologists, Sober analyzes the testing of different kinds of evolutionary hypotheses about natural selection or phylogenetic history, including a thorough critique of intelligent design. Not least because of a discussion of different schools of hypothesis testing (Bayesianism, likelihoodism, and frequentism), with Sober favoring a pluralism where different inference methods are appropriate in different empirical contexts, the book has lessons for philosophy (...)
  39. Who Should Be Afraid of the Jeffreys-Lindley Paradox? Aris Spanos - 2013 - Philosophy of Science 80 (1):73-93.
    The article revisits the large n problem as it relates to the Jeffreys-Lindley paradox to compare the frequentist, Bayesian, and likelihoodist approaches to inference and evidence. It is argued that what is fallacious is to interpret a rejection of the null hypothesis as providing the same evidence for a particular alternative, irrespective of n; this is an example of the fallacy of rejection. Moreover, the Bayesian and likelihoodist approaches are shown to be susceptible to the fallacy of acceptance. The key difference (...)
  40. Gaussian Process Panel Modeling—Machine Learning Inspired Analysis of Longitudinal Panel Data. Julian D. Karch, Andreas M. Brandmaier & Manuel C. Voelkle - 2020 - Frontiers in Psychology 11.
    In this article, we extend the Bayesian nonparametric regression method Gaussian Process Regression to the analysis of longitudinal panel data. We call this new approach Gaussian Process Panel Modeling (GPPM). GPPM provides great flexibility because of the large number of models it can represent. It allows classical statistical inference as well as machine learning inspired predictive modeling. GPPM offers frequentist and Bayesian inference without the need to resort to Markov chain Monte Carlo-based approximations, which makes the approach (...)
  41. We are All Bayesian, Everyone is Not a Bayesian. Mattia Andreoletti & Andrea Oldofredi - 2019 - Topoi 38 (2):477-485.
    Medical research makes intensive use of statistics in order to support its claims. In this paper we make explicit an epistemological tension between the conduct of clinical trials and their interpretation: statistical evidence is sometimes discarded on the basis of an underlying Bayesian reasoning. We suggest that acknowledging the potentiality of Bayesian statistics might contribute to clarify and improve comprehension of medical research. Nevertheless, although Bayesianism may provide a better account of scientific inference with respect to the standard (...) approach, Bayesian statistics is rarely adopted in clinical research. The main reason lies in the supposed subjective elements characterizing this perspective. Hence, we discuss this objection by presenting the so-called Reference analysis, a formal method which has been developed in the context of objective Bayesian statistics in order to define priors which have a minimal or null impact on posterior probabilities. Furthermore, according to this method only available data are relevant sources of information, so that it resists the most common criticisms against Bayesianism.
  42. History and nature of the Jeffreys–Lindley paradox. Eric-Jan Wagenmakers & Alexander Ly - 2022 - Archive for History of Exact Sciences 77 (1):25-72.
    The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis $\mathcal{H}_0$ scales with $\sqrt{n}$ and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting $\mathcal{H}_0$ (...)
  43. What is probability and why does it matter. Zvonimir Šikić - 2014 - European Journal of Analytic Philosophy 10 (1):21-43.
    The idea that probability is a degree of rational belief seemed too vague for a foundation of a mathematical theory. It was certainly not obvious that degrees of rational belief had to be governed by the probability axioms as used by Laplace and other pre-statistical probabilists. The axioms seemed arbitrary in their interpretation. To eliminate the arbitrariness, the statisticians of the early 20th century drastically restricted the possible applications of the probability theory, by insisting that probabilities had to be (...)
  44. Mathematical statistics and metastatistical analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
    This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so called logic of statistical inference, a polemic that has been concomitant of the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it only uses deductive reasoning. Therefore I reject Fisher's inductive approach to the statistical estimation theory and adhere to Neyman's deductive one. On the (...)
  45.  28
    Mill's Conversion: The Herschel Connection.Brian Skyrms - 2018 - Philosophers' Imprint 18.
    Between the first and second editions of A System of Logic, John Stuart Mill underwent a startling conversion from an uncompromising frequentist philosophy of probability to a thoroughly Bayesian degree-of-belief view. The conversion was effected by correspondence with the eminent scientist Sir John Herschel, to whom Mill already owed what have become known as Mill's Methods of Experimental Inference. We present the relevant correspondence, and discuss the extent of Mill's conversion.
  46.  34
    From Evidential Support to a Measure of Corroboration.Jan Sprenger - unknown
    According to influential accounts of scientific method, e.g., critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. In comparison to rivaling accounts of scientific reasoning such as Bayesianism, these accounts are closer to crucial aspects of scientific practice. But despite the preeminence of hypothesis tests in statistical inference, their philosophical foundations are shaky. In particular, the interpretation of "insignificant results"---outcomes where the tested hypothesis has survived the test---poses a major epistemic challenge that is not sufficiently addressed by (...)
  47.  46
    How Strong is the Confirmation of a Hypothesis by Significant Data?Thomas Bartelborth - 2016 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 47 (2):277-291.
    The aim of the article is to propose a way to determine to what extent a hypothesis H is confirmed if it has successfully passed a classical significance test. Bayesians have already raised many serious objections against significance testing, but in doing so they have always had to rely on epistemic probabilities and a further Bayesian analysis, which are rejected by classical statisticians. Therefore, I will suggest a purely frequentist evaluation procedure for significance tests that should also be accepted (...)
  48. Two ways to rule out error: Severity and security.Kent Staley - unknown
    I contrast two modes of error-elimination relevant to evaluating evidence in accounts that emphasize frequentist reliability. The contrast corresponds to that between the use of a reliable inference procedure and the critical scrutiny of a procedure with regard to its reliability, in light of what is and is not known about the setting in which the procedure is used. I propose a notion of security as a category of evidential assessment for the latter. In statistical settings, robustness (...)
  49.  32
    Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data.Aris Spanos - 2015 - Journal of Economic Methodology 22 (2):171-196.
    The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by inducing discrepancies between the actual (...)
  50.  59
    Jon Williamson. In Defence of Objective Bayesianism. Oxford: Oxford University Press, 2010. ISBN 978-0-19-922800-3). Pp. vi + 185: Critical Studies/Book Reviews. [REVIEW]Christian Hennig - 2011 - Philosophia Mathematica 19 (2):219-225.
    The foundations of probability deal with the problem of modelling reasoning in the face of uncertainty by a mathematical calculus, usually the standard probability calculus. The three dominating schools in the foundations of probability interpret probabilities as limiting long-run frequencies conceived as an objective property of series of repeatable experiments, or rational betting rates for an individual to bet on the unknown outcome of experiments depending on the individual’s prior assessments updated by evidence, or rational betting rates to bet (...)
1 — 50 / 999