  1. Ernest W. Adams (1996). Four Probability-Preserving Properties of Inferences. Journal of Philosophical Logic 25 (1):1-24.
    Different inferences in probabilistic logics of conditionals 'preserve' the probabilities of their premisses to different degrees. Some preserve certainty, some high probability, some positive probability, and some minimum probability. In the first case conclusions must have probability 1 when premisses have probability 1, though they might have probability 0 when their premisses have any lower probability. In the second case, roughly speaking, if premisses are highly probable though not certain then conclusions must also be highly probable. In the third case (...)
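    The preservation properties described above can be connected to a standard uncertainty-sum bound often credited to Adams; the LaTeX sketch below states that bound as an illustration, not as the paper's own formulation.

```latex
% Uncertainty-sum bound (illustrative statement, with u(X) := 1 - Pr(X)).
% For a classically valid inference from premises P_1, ..., P_n to conclusion C:
\[
  u(C) \;\le\; \sum_{i=1}^{n} u(P_i), \qquad u(X) := 1 - \Pr(X).
\]
% Certainty preservation is the limiting case: if u(P_i) = 0 for every i,
% then u(C) = 0, i.e. Pr(C) = 1. High-probability preservation follows
% because small premise uncertainties keep the bound on u(C) small.
```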
  2. Paul Anand (1993). Foundations of Rational Choice Under Risk. Oxford University Press.
  3. Andre Ariew (2007). Under the Influence of Malthus's Law of Population Growth: Darwin Eschews the Statistical Techniques of Adolphe Quetelet. Studies in History and Philosophy of Science Part C 38 (1):1-19.
    In the epigraph, Fisher is blaming two generations of theoretical biologists, from Darwin on, for ignoring Quetelet's statistical techniques and hence harboring confusions about evolution and natural selection. He is right to imply that Darwin and his contemporaries were aware of the core of Quetelet's work. Quetelet's seminal monograph, Sur L'homme, was widely discussed in Darwin's academic circles. We know that Darwin owned a copy (Schweber 1977). More importantly, we have in Darwin's notebooks two entries referring to Quetelet's work on (...)
  4. David Atkinson & Jeanne Peijnenburg (2006). Probability All the Way Up. Synthese 153 (2):187-197.
    Richard Jeffrey’s radical probabilism (‘probability all the way down’) is augmented by the claim that probability cannot be turned into certainty, except by data that logically exclude all alternatives. Once we start being uncertain, no amount of updating will free us from the treadmill of uncertainty. This claim is cast first in objectivist and then in subjectivist terms.
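    A minimal Python sketch of the claim above, with invented likelihoods: as long as the evidence does not logically exclude the alternative (its likelihood under not-H stays positive), no run of Bayesian updates turns high probability into certainty.

```python
# A minimal sketch (invented likelihoods, not the paper's model): repeated
# Bayesian updating on evidence that favours H but never logically excludes
# the alternative. Exact rational arithmetic shows the posterior approaches 1
# without ever reaching it.
from fractions import Fraction

posterior = Fraction(1, 2)      # prior probability of hypothesis H
lik_h = Fraction(9, 10)         # P(e | H), assumed for illustration
lik_not_h = Fraction(3, 10)     # P(e | not-H): positive, so not-H is never excluded

for _ in range(50):             # fifty successive confirmations
    num = lik_h * posterior
    posterior = num / (num + lik_not_h * (1 - posterior))

print(posterior < 1)            # True: certainty is never attained
print(float(1 - posterior))     # tiny but strictly positive residual uncertainty
```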
  5. Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) (forthcoming). Handbook of the Philosophy of Statistics. Elsevier.
  6. Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) (forthcoming). Philosophy of Statistics, Handbook of the Philosophy of Science, Volume 7. Elsevier.
  7. D. Bar (2004). Internet Websites Statistics Expressed in the Framework of the Ursell–Mayer Cluster Formalism. Foundations of Physics 34 (8):1203-1223.
    We show that it is possible to generalize the Ursell–Mayer cluster formalism so that it may also cover the statistics of Internet websites. Our starting point is the introduction of an extra variable that is assumed to take account, as will be explained, of the nature of the Internet statistics. We then show, following the arguments in Mayer, that one may obtain a phase transition-like phenomenon.
  8. Marcel J. Boumans, When Evidence is Not in the Mean.
    When observing or measuring phenomena, errors are inevitable; one can only aspire to reduce these errors as much as possible. An obvious strategy to achieve this reduction is to use more precise instruments. Another strategy is to develop a theory of these errors that indicates how to take them into account. One of the greatest achievements of statistics at the beginning of the 19th century was such a theory of error. This theory told practitioners that the best thing (...)
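    A small illustration (invented readings) of the classical recipe, assuming, as the paper's title suggests, that the recipe under discussion is taking the arithmetic mean: the mean minimizes the sum of squared deviations from the repeated measurements.

```python
# Invented instrument readings; the arithmetic mean uniquely minimizes the
# sum of squared deviations, which is the classical error-theoretic rationale
# for taking it as the best single value.
measurements = [9.8, 10.1, 10.0, 9.9, 10.4]

mean = sum(measurements) / len(measurements)

def sse(candidate: float) -> float:
    """Sum of squared deviations of the readings from a candidate value."""
    return sum((m - candidate) ** 2 for m in measurements)

print(f"mean = {mean:.3f}, SSE at the mean = {sse(mean):.4f}")
print(f"SSE slightly off the mean:        {sse(mean + 0.05):.4f}")  # larger
```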
  9. Siu L. Chow (1998). The Null-Hypothesis Significance-Test Procedure is Still Warranted. Behavioral and Brain Sciences 21 (2):228-235.
    Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations. The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between substantive and statistical hypotheses, statistical alternative and conceptual alternative hypotheses, and making statistical decisions and drawing theoretical (...)
  10. Marc-Kevin Daoust (2014). Review of Desrosières, Alain (2014), Prouver et gouverner. Une analyse politique des statistiques publiques. [REVIEW] Science Ouverte 1:1-7.
    Prouver et gouverner examines the role of institutions, conventions, and normative issues in the construction of quantitative indicators. Desrosières holds that one cannot study the scientific development of statistics without taking into account institutional development, in particular the role of the state, in the constitution of the discipline.
  11. Christopher F. French (2015). Philosophy as Conceptual Engineering: Inductive Logic in Rudolf Carnap's Scientific Philosophy. Dissertation, University of British Columbia
  12. William M. Goodman (2010). The Undetectable Difference: An Experimental Look at the ‘Problem’ of P-Values. Statistical Literacy Website/Papers: www.statlit.org/pdf/2010GoodmanASA.pdf.
    In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter warnings are published periodically. But the core problem is not with p-values, per se. A finding that “p-value is less than α” could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ = x) we (...)
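    A sketch with hypothetical numbers of the contrast raised in the abstract: for the same data, estimation reports a whole range (a confidence interval) while testing reports only whether a critical value was exceeded.

```python
# All values below are invented for illustration: the same data yield both a
# one-bit test verdict ('was the critical value exceeded?') and a full
# interval estimate.
import math
from statistics import NormalDist

sample_mean, mu0, sigma, n = 103.2, 100.0, 15.0, 36
se = sigma / math.sqrt(n)
z = (sample_mean - mu0) / se

p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided test
half_width = NormalDist().inv_cdf(0.975) * se
ci = (sample_mean - half_width, sample_mean + half_width)

print(f"z = {z:.3f}, p = {p_value:.4f}")              # test: one bit of news
print(f"95% CI for mu: ({ci[0]:.2f}, {ci[1]:.2f})")   # estimation: a range
```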
  13. Amato Herzel, Giampiero Landenna, Società Italiana di Statistica & Ian Hacking (1982). Critical Analysis of Ian Hacking's Book "Logic of Statistical Inference". Printed with the support of the Consiglio Nazionale delle Ricerche.
  14. J. V. Howard (2009). Significance Testing with No Alternative Hypothesis: A Measure of Surprise. Erkenntnis 70 (2):253-270.
    A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed. It is shown that the relation between this measure of surprise and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a (...)
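    For orientation, here is a toy computation of Weaver's surprise index, one of the indices the paper relates to its modified p-value; the distribution is invented. The index is the ratio of the expected probability of an outcome to the probability of the outcome actually observed.

```python
# Weaver's surprise index on an invented three-outcome distribution:
# E[p(X)] = sum of p_i^2, divided by the probability of the observed outcome.
probs = {"common": 0.70, "uncommon": 0.25, "rare": 0.05}

expected_prob = sum(p * p for p in probs.values())

for outcome, p in probs.items():
    print(f"{outcome}: p = {p:.2f}, surprise index = {expected_prob / p:.2f}")
# Only the rare outcome scores well above 1, flagging a surprising observation.
```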
  15. Andreas Hüttemann & Alexander Reutlinger (2013). Against the Statistical Account of Special Science Laws. In Vassilios Karakostas & Dennis Dieks (eds.), Recent Progress in Philosophy of Science: Perspectives and Foundational Problems. The Third European Philosophy of Science Association Proceedings. Springer 181-192.
    John Earman and John T. Roberts advocate a challenging and radical claim regarding the semantics of laws in the special sciences: the statistical account. According to this account, a typical special science law “asserts a certain precisely defined statistical relation among well-defined variables” and this statistical relation does not require being hedged by ceteris paribus conditions. In this paper, we raise two objections against the attempt to cash out the content of special science generalizations in statistical terms.
  16. Adam P. Kubiak (2014). A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm. Zagadnienia Naukoznawstwa 2:135-145.
    In this paper I provide a frequentist philosophical-methodological solution to the stopping rule problem presented by Lindley & Phillips in 1976, which is set in the ecological context of testing koalas' sex ratio. I deliver criteria for discerning a stopping rule, evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to the physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models, and meaning (...)
  17. A. la Caze (2009). Evidence-Based Medicine Must Be… Journal of Medicine and Philosophy 34 (5):509-527.
    Proponents of evidence-based medicine (EBM) provide the “hierarchy of evidence” as a criterion for judging the reliability of therapeutic decisions. EBM's hierarchy places randomized interventional studies (and systematic reviews of such studies) higher in the hierarchy than observational studies, unsystematic clinical experience, and basic science. Recent philosophical work has questioned whether EBM's special emphasis on evidence from randomized interventional studies can be justified. Following the critical literature, and in particular the work of John Worrall, I agree that many of the (...)
  18. Michael LaPorte (2013). Philosophy Paper.
  19. Johannes Lenhard (2006). Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson. British Journal for the Philosophy of Science 57 (1):69-91.
    The main thesis of the paper is that in the case of modern statistics, the differences between the various concepts of models were the key to its formative controversies. The mathematical theory of statistical inference was mainly developed by Ronald A. Fisher, Jerzy Neyman, and Egon S. Pearson. Fisher on the one side and Neyman and Pearson on the other were often embroiled in polemical controversy. The common view is that Neyman and Pearson made Fisher's account mathematically more stringent. It is (...)
  20. Bert Leuridan (2007). Galton's Blinding Glasses. Modern Statistics Hiding Causal Structure in Early Theories of Inheritance. In Federica Russo & Jon Williamson (eds.), Causality and Probability in the Sciences. 243-262.
    Probability and statistics play an important role in contemporary philosophy of causality. They are viewed as glasses through which we can see or detect causal relations. However, they may sometimes act as blinding glasses, as I will argue in this paper. In the 19th century, Francis Galton tried to statistically analyze hereditary phenomena. Although he was a far better statistician than Gregor Mendel, his biological theory turned out to be less fruitful. This was no sheer accident. His knowledge of (...)
  21. Daniel Malinsky (2015). Hypothesis Testing, “Dutch Book” Arguments, and Risk. Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
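    A textbook Dutch Book, not the article's own case, illustrating the kind of gambling argument at issue: an agent whose credences in a proposition and its negation sum to more than 1 accepts a pair of bets that loses in every state.

```python
# Incoherent credences: cr(rain) + cr(no rain) = 1.3 > 1. The agent regards a
# bet paying `stake` if P obtains, priced at cr(P) * stake, as fair, so a
# bookie can sell both bets and guarantee the agent a loss.
cr_rain = 0.7
cr_no_rain = 0.6

stake = 1.0
price_paid = (cr_rain + cr_no_rain) * stake   # 1.30 paid for the two bets

for rain in (True, False):
    payoff = stake        # exactly one of the two bets pays off in each state
    print(f"rain={rain}: net = {payoff - price_paid:+.2f}")   # -0.30 either way
```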
  22. J. S. Markovitch, The Psychology of the Two Envelope Problem.
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
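    A hedged simulation of the two-envelope setup (my construction, not the article's variants): one envelope holds x and the other 2x, and always switching does no better on average than always keeping, contrary to the paradoxical 'expected 25% gain' argument.

```python
# One envelope holds x, the other 2x; we are handed one at random. Compare
# the long-run average of always keeping with that of always switching.
import random

random.seed(0)
N = 100_000
keep_total = switch_total = 0.0

for _ in range(N):
    x = random.uniform(1, 100)           # smaller amount, drawn arbitrarily
    envelopes = [x, 2 * x]
    random.shuffle(envelopes)
    kept, other = envelopes
    keep_total += kept
    switch_total += other

print(f"average if keeping:   {keep_total / N:.2f}")
print(f"average if switching: {switch_total / N:.2f}")   # essentially equal
```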
  23. Deborah G. Mayo (1992). Did Pearson Reject the Neyman-Pearson Philosophy of Statistics? Synthese 90 (2):233-262.
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  24. J. T. M. Miller (2016). A Philosophical Guide to Chance. Philosophical Quarterly 66 (262):pqv037.
    A review of A Philosophical Guide to Chance by Toby Handfield (Cambridge: Cambridge University Press, 2012).
  25. Daniel Osherson, Notes on Statistical Tests.
    Let an unbiased coin be used to form an ω-sequence $S$ of independent tosses. Let $\mathbb{N}$ be the positive integers. The finite initial segment of length $n \in \mathbb{N}$ is denoted by $S_n$ (thus, $S_1$ holds exactly the first toss). For $n \in \mathbb{N}$, let $H_n$ be the proportion of heads that show up in $S_n$.
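    A minimal simulation of the note's setup: $S$ is a sequence of fair-coin tosses and $H_n$ the proportion of heads among the first $n$, which the law of large numbers drives toward 1/2.

```python
# Simulate the initial segments S_n of a fair-coin sequence and report the
# running proportion of heads H_n at a few checkpoints.
import random

random.seed(42)
heads = 0
for n in range(1, 100_001):
    heads += random.randint(0, 1)            # 1 = heads on toss n
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"H_{n} = {heads / n:.4f}")    # drifts toward 0.5
```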
  26. Massimo Pigliucci (2005). Bayes's Theorem. [REVIEW] Quarterly Review of Biology 80 (1):93-95.
    About a British Academy collection of papers on Bayes's famous theorem.
  27. Guillaume Rochefort-Maranda (2016). On the Correct Interpretation of P Values and the Importance of Random Variables. Synthese 193 (6):1777-1793.
    The p value is the probability under the null hypothesis of obtaining an experimental result that is at least as extreme as the one that we have actually obtained. That probability plays a crucial role in frequentist statistical inferences. But if we take the word ‘extreme’ to mean ‘improbable’, then we can show that this type of inference can be very problematic. In this paper, I argue that it is a mistake to make such an interpretation. Under minimal assumptions about (...)
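    A worked toy example of the definition quoted above (my own illustration, not the paper's): the p-value is the null probability of a result at least as extreme as the one observed, here for 16 heads in 20 tosses of a putatively fair coin.

```python
# Exact binomial p-value under the null of a fair coin, reading 'at least as
# extreme' two-sidedly as distance from the expected count of 10 heads.
from math import comb

n, observed, expected = 20, 16, 10
pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]

p_value = sum(p for k, p in enumerate(pmf)
              if abs(k - expected) >= abs(observed - expected))
print(f"two-sided p-value: {p_value:.4f}")   # ~0.0118
```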
  28. Felipe Romero (forthcoming). Can the Behavioral Sciences Self-Correct? A Social Epistemic Study. Studies in History and Philosophy of Science Part A.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon (...)
  29. David Wÿss Rudge (2001). Kettlewell From an Error Statistician's Point of View. Perspectives on Science 9 (1):59-77.
    Bayesians and error statisticians have relied heavily upon examples from physics in developing their accounts of scientific inference. The present essay demonstrates it is possible to analyze H.B.D. Kettlewell's classic study of natural selection from Deborah Mayo's error statistical point of view (Mayo 1996). A comparison with a previous analysis of this episode from a Bayesian perspective (Rudge 1998) reveals that the error statistical account makes better sense of investigations such as Kettlewell's because it clarifies how core elements in (...)
  30. Sebastiano Sonego (1991). Interpretation of the Hydrodynamical Formalism of Quantum Mechanics. Foundations of Physics 21 (10):1135-1181.
    The hydrodynamical formalism for the quantum theory of a nonrelativistic particle is considered, together with a reformulation of it which makes use of the methods of kinetic theory and is based on the existence of the Wigner phase-space distribution. It is argued that this reformulation provides strong evidence in favor of the statistical interpretation of quantum mechanics, and it is suggested that this latter could be better understood as an almost classical statistical theory. Moreover, it is shown how, within this (...)
  31. Kent Staley (2012). Strategies for Securing Evidence Through Model Criticism. European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security, (...)
  32. Roger Stanev (2012). The Epistemology and Ethics of Early Stopping Decisions in Randomized Controlled Trials. Dissertation, University of British Columbia
    Philosophers subscribing to particular principles of statistical inference and evidence need to be aware of the limitations and practical consequences of the statistical approach they endorse. The framework proposed (for statistical inference in the field of medicine) allows disparate statistical approaches to emerge in their appropriate context. My dissertation proposes a decision theoretic model, together with methodological guidelines, that provide important considerations for deciding on clinical trial conduct. These considerations do not amount to more stopping rules. Instead, they are principles (...)
  33. Roger Stanev (2012). Stopping Rules and Data Monitoring in Clinical Trials. In H. W. de Regt, S. Hartmann & S. Okasha (eds.), EPSA Philosophy of Science: Amsterdam 2009, The European Philosophy of Science Association Proceedings Vol. 1. Springer, 375-386.
    Stopping rules — rules dictating when to stop accumulating data and start analyzing it for the purposes of inferring from the experiment — divide Bayesians, Likelihoodists and classical statistical approaches to inference. Although the relationship between Bayesian philosophy of science and stopping rules can be complex (cf. Steel 2003), in general, Bayesians regard stopping rules as irrelevant to what inference should be drawn from the data. This position clashes with classical statistical accounts. For orthodox statistics, stopping rules do matter to (...)
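    A hedged simulation of why stopping rules matter to the frequentist (my own toy example, not from the paper): peeking at accumulating fair-coin data after every batch and stopping at the first nominally significant z-score inflates the type-I error rate well beyond the nominal 5%.

```python
# Test a fair coin at nominal alpha = 0.05, but peek after every batch of
# tosses and stop at the first 'significant' result. The realized false-
# positive rate over many trials far exceeds 5%.
import random
from statistics import NormalDist

random.seed(1)
z_crit = NormalDist().inv_cdf(0.975)     # two-sided 5% critical value

def peeking_trial(batches: int = 10, batch_size: int = 20) -> bool:
    """Return True if any interim look rejects the (true) null of fairness."""
    heads = tosses = 0
    for _ in range(batches):
        heads += sum(random.randint(0, 1) for _ in range(batch_size))
        tosses += batch_size
        z = (heads - tosses / 2) / (0.5 * tosses ** 0.5)
        if abs(z) > z_crit:
            return True                  # stop early, declare an effect
    return False

false_positives = sum(peeking_trial() for _ in range(2_000))
print(f"type-I error with optional stopping: {false_positives / 2_000:.3f}")
```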
  34. Roger Stanev (2011). Statistical Decisions and the Interim Analyses of Clinical Trials. Theoretical Medicine and Bioethics 32 (1):61-74.
    This paper analyzes statistical decisions during the interim analyses of clinical trials. After some general remarks about the ethical and scientific demands of clinical trials, I introduce the notion of a hard-case clinical trial, explain the basic idea behind it, and provide a real example involving the interim analyses of zidovudine in asymptomatic HIV-infected patients. The example leads me to propose a decision analytic framework for handling ethical conflicts that might arise during the monitoring of hard-case clinical trials. I use (...)
  35. Michael Strevens (2009). Objective Evidence and Absence: Comment on Sober. Philosophical Studies 143 (1):91-100.
    Elliott Sober argues that the statistical slogan “Absence of evidence is not evidence of absence” cannot be taken literally: it must be interpreted charitably as claiming that the absence of evidence is (typically) not very much evidence of absence. I offer an alternative interpretation, on which the slogan claims that absence of evidence is (typically) not objective evidence of absence. I sketch a definition of objective evidence, founded in the notion of an epistemically objective likelihood, and I show that in (...)
  36. Nassim N. Taleb, The Future Has Thicker Tails Than the Past: Model Error as Branching Counterfactuals.
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
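    A rough sketch of the recursion described above (my construction, not Taleb's code): each layer multiplies the scale of the error by its own random error, and the layered outcome shows markedly higher kurtosis, i.e., fatter tails, than a plain Gaussian.

```python
# Each layer perturbs the error scale by its own random error, so the error
# rate itself has an error rate; the resulting outcomes are fat-tailed.
import random
from statistics import fmean

random.seed(7)

def layered_draw(layers: int, sigma: float = 1.0, a: float = 0.3) -> float:
    """One outcome whose scale is perturbed `layers` times by its own error."""
    scale = sigma
    for _ in range(layers):
        scale *= 1 + a * random.gauss(0, 1)
    return random.gauss(0, abs(scale))

def kurtosis(xs: list[float]) -> float:
    m = fmean(xs)
    var = fmean([(x - m) ** 2 for x in xs])
    return fmean([(x - m) ** 4 for x in xs]) / var ** 2

plain = [random.gauss(0, 1) for _ in range(50_000)]
nested = [layered_draw(layers=3) for _ in range(50_000)]
print(f"kurtosis, plain Gaussian:  {kurtosis(plain):.1f}")   # ~3
print(f"kurtosis, 3 nested layers: {kurtosis(nested):.1f}")  # noticeably larger
```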
  37. Nassim N. Taleb & Avital Pilpel (2007). Epistemology and Risk Management. Risk and Regulation 13:6--7.
  38. Gregory Wheeler (2004). A Resource-Bounded Default Logic. In J. Delgrande & T. Schaub (eds.), Proceedings of NMR 2004. AAAI
    This paper presents statistical default logic, an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, including hypothesis testing and the estimation of a population's mean, variance and proportions. The logic replaces classical defaults with ordered pairs consisting of a Reiter default in the first coordinate and a real number within the unit interval in the second coordinate. This real number represents an upper-bound limit on the probability of accepting (...)
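    A schematic sketch of the construction described above; the types and names are my own, not Wheeler's notation. A statistical default pairs a Reiter-style default with an upper bound on the probability of accepting a falsehood, and chaining defaults accumulates those bounds.

```python
# Pair a Reiter-style default with an error bound; combine bounds along a
# chain of defaults with a crude union bound. Illustrative names only.
from dataclasses import dataclass

@dataclass
class StatisticalDefault:
    prerequisite: str
    justification: str
    consequent: str
    error_bound: float      # in [0, 1]: risk of accepting a falsehood

def chained_error_bound(chain: list[StatisticalDefault]) -> float:
    """Upper bound on the risk of a conclusion reached via the whole chain."""
    return min(1.0, sum(d.error_bound for d in chain))

d1 = StatisticalDefault("sample drawn", "test not rejected", "mu = 0", 0.05)
d2 = StatisticalDefault("mu = 0", "variance estimate ok", "model fits", 0.05)
print(chained_error_bound([d1, d2]))    # 0.1: bounds add along the chain
```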
  39. Gregory Wheeler & Carlos Damasio (2004). An Implementation of Statistical Default Logic. In Jose Alferes & Joao Leite (eds.), Logics in Artificial Intelligence (JELIA 2004). Springer
    Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory (...)
  40. Jon Williamson (2013). Why Frequentists and Bayesians Need Each Other. Erkenntnis 78 (2):293-318.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.