  1. Ernest W. Adams (1996). Four Probability-Preserving Properties of Inferences. Journal of Philosophical Logic 25 (1):1 - 24.
    Different inferences in probabilistic logics of conditionals 'preserve' the probabilities of their premisses to different degrees. Some preserve certainty, some high probability, some positive probability, and some minimum probability. In the first case conclusions must have probability 1 when premisses have probability 1, though they might have probability 0 when their premisses have any lower probability. In the second case, roughly speaking, if premisses are highly probable though not certain then conclusions must also be highly probable. In the third case (...)
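    Adams's preservation properties are easy to check numerically. The sketch below is my own illustration, not an example from the paper: for premises A and B jointly entailing A & B, the conclusion's probability obeys the lower bound P(A & B) >= P(A) + P(B) - 1, so probability 1 is preserved exactly while merely high probability degrades.

```python
# A minimal numeric sketch (my illustration, not Adams's own example) of how
# a valid inference preserves certainty but degrades high probability:
# premises A and B entail A & B, and P(A & B) >= P(A) + P(B) - 1.
import itertools

def prob(space, event):
    return sum(p for world, p in space.items() if event(world))

# Hypothetical distribution over the four truth-value assignments to (A, B).
space = {w: 0.0 for w in itertools.product([True, False], repeat=2)}
space[(True, True)] = 0.90
space[(True, False)] = 0.05
space[(False, True)] = 0.05

p_a = prob(space, lambda w: w[0])              # 0.95
p_b = prob(space, lambda w: w[1])              # 0.95
p_ab = prob(space, lambda w: w[0] and w[1])    # 0.90

assert p_ab >= p_a + p_b - 1                   # the lower bound holds
print(p_a, p_b, p_ab)  # highly probable premises, less probable conclusion
```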
  2. Andre Ariew (2007). Under the Influence of Malthus's Law of Population Growth: Darwin Eschews the Statistical Techniques of Adolphe Quetelet. Studies in History and Philosophy of Science Part C 38 (1):1-19.
    In the epigraph, Fisher is blaming two generations of theoretical biologists, from Darwin on, for ignoring Quetelet's statistical techniques and hence harboring confusions about evolution and natural selection. He is right to imply that Darwin and his contemporaries were aware of the core of Quetelet's work. Quetelet's seminal monograph, Sur L'homme, was widely discussed in Darwin's academic circles. We know that Darwin owned a copy (Schweber 1977). More importantly, we have in Darwin's notebooks two entries referring to Quetelet's work on (...)
  3. David Atkinson & Jeanne Peijnenburg (2006). Probability All the Way Up. Synthese 153 (2):187 - 197.
    Richard Jeffrey’s radical probabilism (‘probability all the way down’) is augmented by the claim that probability cannot be turned into certainty, except by data that logically exclude all alternatives. Once we start being uncertain, no amount of updating will free us from the treadmill of uncertainty. This claim is cast first in objectivist and then in subjectivist terms.
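    The claim that updating never escapes uncertainty can be seen in a toy Bayesian model. The sketch below is my own illustration (exact rational arithmetic, made-up likelihoods): repeated conditioning pushes the posterior toward 1 but reaches it only when the evidence has probability zero under every alternative.

```python
# A toy sketch (my illustration, not from the paper): Bayesian conditioning
# with evidence that does not logically exclude the alternatives leaves the
# posterior strictly below 1, no matter how many updates are performed.
from fractions import Fraction

def update(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

posterior = Fraction(1, 2)
for _ in range(50):
    # each piece of evidence favours H nine to one, but does not exclude not-H
    posterior = update(posterior, Fraction(9, 10), Fraction(1, 10))

print(posterior < 1)  # True: still on the treadmill of uncertainty
# Only evidence impossible under the alternatives yields certainty:
print(update(Fraction(1, 2), Fraction(9, 10), Fraction(0)))  # exactly 1
```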
  4. Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) (forthcoming). Handbook of the Philosophy of Statistics. Elsevier.
  5. Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) (forthcoming). Philosophy of Statistics, Handbook of the Philosophy of Science, Volume 7. Elsevier.
  6. D. Bar (2004). Internet Websites Statistics Expressed in the Framework of the Ursell–Mayer Cluster Formalism. Foundations of Physics 34 (8):1203-1223.
    We show that it is possible to generalize the Ursell–Mayer cluster formalism so that it may also cover the statistics of Internet websites. Our starting point is the introduction of an extra variable that is assumed to take account, as will be explained, of the nature of the Internet statistics. We then show, following the arguments in Mayer, that one may obtain a phase transition-like phenomenon.
  7. Marcel J. Boumans, When Evidence is Not in the Mean.
    When observing or measuring phenomena, errors are inevitable; one can only aspire to reduce these errors as much as possible. An obvious strategy to achieve this reduction is to use more precise instruments. Another strategy was to develop a theory of these errors that could indicate how to take them into account. One of the greatest achievements of statistics at the beginning of the 19th century was such a theory of error. This theory told the practitioners that the best thing (...)
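    The abstract breaks off before naming the theory's recommendation, but the classical least-squares answer is to take the arithmetic mean of repeated measurements. The sketch below (my own toy data, assuming that reading) shows why the mean is singled out: it minimizes the sum of squared residuals.

```python
# A toy illustration (made-up measurements, assuming the classical
# least-squares reading of the theory of errors): the arithmetic mean
# minimizes the sum of squared residuals over a grid of candidate estimates.
measurements = [9.8, 10.1, 10.0, 9.9, 10.2]

def sum_squared_error(candidate):
    return sum((x - candidate) ** 2 for x in measurements)

mean = sum(measurements) / len(measurements)
candidates = [mean + delta / 100 for delta in range(-50, 51)]
best = min(candidates, key=sum_squared_error)
print(mean, best)  # the grid minimizer coincides with the mean
```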
  8. Siu L. Chow (1998). The Null-Hypothesis Significance-Test Procedure is Still Warranted. Behavioral and Brain Sciences 21 (2):228-235.
    Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure (NHSTP) is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations (1968b). The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between (1) substantive and statistical hypotheses, (2) statistical alternative and conceptual alternative hypotheses, and (3) making (...)
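    For readers unfamiliar with the procedure being defended, the sketch below is a bare-bones NHSTP instance (a two-sided one-sample z-test on made-up numbers); it is generic, not anything specific to Chow's framework.

```python
# A bare-bones null-hypothesis significance test (two-sided one-sample
# z-test). All numbers are made up for illustration.
import math

def z_test(sample_mean, mu0, sigma, n):
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal model
    return z, p

z, p = z_test(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)
print(z, p)  # z = 2.0, p ~ 0.046: reject H0 at the conventional 0.05 level
```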
  9. J. V. Howard (2009). Significance Testing with No Alternative Hypothesis: A Measure of Surprise. [REVIEW] Erkenntnis 70 (2):253 - 270.
    A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed (relative to what else might have been observed). It is shown that the relation between this measure of surprise (the s-value) and the surprise indices of Weaver and Good is similar (...)
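    Of the two surprise indices the paper relates its s-value to, Weaver's is the easier to state: the expected probability of an outcome divided by the probability of the outcome actually observed. The sketch below is my own illustration of that index as I recall its standard definition; the s-value itself is defined in the paper.

```python
# Weaver's surprise index (standard definition, as I recall it; the paper's
# s-value is a different, related quantity): expected outcome probability
# divided by the probability of the observed outcome. Large values mark
# surprising observations.
def weaver_surprise(probs, observed):
    expected_p = sum(p * p for p in probs)  # E[p] under the model
    return expected_p / probs[observed]

fair_die = [1 / 6] * 6
print(weaver_surprise(fair_die, 0))  # 1.0: nothing a fair die does surprises

skewed = [0.90, 0.02, 0.02, 0.02, 0.02, 0.02]
print(weaver_surprise(skewed, 1))    # ~40.6: a rare outcome is surprising
```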
  10. Andreas Hüttemann & Alexander Reutlinger (forthcoming). Against the Statistical Account of Special Science Laws. In Vassilios Karakostas & Dennis Dieks (eds.), Recent Progress in Philosophy of Science: Perspectives and Foundational Problems. The Third European Philosophy of Science Association Proceedings. Springer.
    John Earman and John T. Roberts advocate a challenging and radical claim regarding the semantics of laws in the special sciences: the statistical account. According to this account, a typical special science law “asserts a certain precisely defined statistical relation among well-defined variables” (Earman and Roberts 1999) and this statistical relation does not require being hedged by ceteris paribus conditions. In this paper, we raise two objections against the attempt to cash out the content of special science generalizations in statistical (...)
  11. A. la Caze (2009). Evidence-Based Medicine Must Be … Journal of Medicine and Philosophy 34 (5):509-527.
    Proponents of evidence-based medicine (EBM) provide the “hierarchy of evidence” as a criterion for judging the reliability of therapeutic decisions. EBM's hierarchy places randomized interventional studies (and systematic reviews of such studies) higher in the hierarchy than observational studies, unsystematic clinical experience, and basic science. Recent philosophical work has questioned whether EBM's special emphasis on evidence from randomized interventional studies can be justified. Following the critical literature, and in particular the work of John Worrall, I agree that many of the (...)
  12. Bert Leuridan (2007). Galton's Blinding Glasses. Modern Statistics Hiding Causal Structure in Early Theories of Inheritance. In Federica Russo & Jon Williamson (eds.), Causality and Probability in the Sciences. 243--262.
  13. Deborah G. Mayo (1992). Did Pearson Reject the Neyman-Pearson Philosophy of Statistics? Synthese 90 (2):233 - 262.
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  14. Massimo Pigliucci (2005). Bayes's Theorem. [REVIEW] Quarterly Review of Biology 80 (1):93-95.
    About a British Academy collection of papers on Bayes' famous theorem.
  15. Kent Staley (2012). Strategies for Securing Evidence Through Model Criticism. European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security, (...)
  16. Roger Stanev (2012). Stopping Rules and Data Monitoring in Clinical Trials. In H. W. de Regt, S. Hartmann & S. Okasha (eds.), EPSA Philosophy of Science: Amsterdam 2009, The European Philosophy of Science Association Proceedings Vol. 1. Springer. 375--386.
    Stopping rules — rules dictating when to stop accumulating data and start analyzing it for the purposes of inferring from the experiment — divide Bayesians, Likelihoodists and classical statistical approaches to inference. Although the relationship between Bayesian philosophy of science and stopping rules can be complex (cf. Steel 2003), in general, Bayesians regard stopping rules as irrelevant to what inference should be drawn from the data. This position clashes with classical statistical accounts. For orthodox statistics, stopping rules do matter to (...)
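    Why stopping rules matter to classical statistics can be shown with a short simulation. The one below is my own illustration (made-up design, normal approximation): testing after every observation and stopping at the first nominally significant result inflates the Type I error far beyond the nominal 5%, which is exactly the sensitivity Bayesians regard as irrelevant.

```python
# A simulation sketch (my illustration) of optional stopping: repeatedly
# testing H0: p = 0.5 as fair-coin data accumulate, and stopping at the
# first p-value below 0.05, rejects a true null far more than 5% of the time.
import math
import random

random.seed(0)

def two_sided_p(heads, n):
    z = (heads - n / 2) / math.sqrt(n / 4)  # normal approximation under H0
    return math.erfc(abs(z) / math.sqrt(2))

def run_with_peeking(max_n=200):
    heads = 0
    for n in range(1, max_n + 1):
        heads += random.random() < 0.5
        if n >= 10 and two_sided_p(heads, n) < 0.05:
            return True  # stopped early on a "significant" fluke
    return False

runs = 2000
false_rejections = sum(run_with_peeking() for _ in range(runs))
print(false_rejections / runs)  # well above the nominal 0.05
```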
  17. Roger Stanev (2012). The Epistemology and Ethics of Early Stopping Decisions in Randomized Controlled Trials. Dissertation, University of British Columbia
    Philosophers subscribing to particular principles of statistical inference and evidence need to be aware of the limitations and practical consequences of the statistical approach they endorse. The framework proposed (for statistical inference in the field of medicine) allows disparate statistical approaches to emerge in their appropriate context. My dissertation proposes a decision theoretic model, together with methodological guidelines, that provide important considerations for deciding on clinical trial conduct. These considerations do not amount to more stopping rules. Instead, they are principles (...)
  18. Roger Stanev (2011). Statistical Decisions and the Interim Analyses of Clinical Trials. Theoretical Medicine and Bioethics 32 (1):61-74.
    This paper analyzes statistical decisions during the interim analyses of clinical trials. After some general remarks about the ethical and scientific demands of clinical trials, I introduce the notion of a hard-case clinical trial, explain the basic idea behind it, and provide a real example involving the interim analyses of zidovudine in asymptomatic HIV-infected patients. The example leads me to propose a decision analytic framework for handling ethical conflicts that might arise during the monitoring of hard-case clinical trials. I use (...)
  19. Nassim N. Taleb, The Future Has Thicker Tails Than the Past: Model Error as Branching Counterfactuals.
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
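    The recursion the abstract describes can be simulated directly. The sketch below is my own construction, not Taleb's code: let the scale of a normal variable itself carry a proportional error at each of k levels; excess kurtosis, a standard fat-tail measure, grows with k.

```python
# A Monte Carlo sketch (my construction) of error rates having error rates:
# each recursion level perturbs the scale of the next, and the resulting
# distribution shows growing excess kurtosis, i.e. fatter tails.
import random
import statistics

random.seed(1)

def draw(levels, base_sigma=1.0, error_rate=0.3):
    sigma = base_sigma
    for _ in range(levels):
        sigma *= 1.0 + error_rate * random.gauss(0.0, 1.0)  # error on the error
    return random.gauss(0.0, abs(sigma))

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    s2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / (s2 * s2) - 3.0

for k in (0, 1, 3):
    sample = [draw(k) for _ in range(100_000)]
    print(k, round(excess_kurtosis(sample), 2))  # rises with recursion depth
```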
  20. Nassim N. Taleb & Avital Pilpel (2007). Epistemology and Risk Management. Risk and Regulation 13:6--7.
  21. Gregory Wheeler (2004). A Resource-Bounded Default Logic. In J. Delgrande & T. Schaub (eds.), Proceedings of NMR 2004. AAAI.
    This paper presents statistical default logic, an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, including hypothesis testing and the estimation of a population's mean, variance and proportions. The logic replaces classical defaults with ordered pairs consisting of a Reiter default in the first coordinate and a real number within the unit interval in the second coordinate. This real number represents an upper-bound limit on the probability of accepting (...)
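    The ordered pairs the abstract describes suggest a simple data structure. The sketch below is my own schematic reading, not Wheeler's formalism: each default carries an upper bound on the probability of a mistaken acceptance, and one natural (assumed) way to track risk across chained defaults is a union bound over those error bounds.

```python
# A schematic sketch (my reading, not Wheeler's formalism) of a statistical
# default: a Reiter-style default paired with a real number in [0, 1] that
# bounds the probability of wrongly accepting the conclusion.
from dataclasses import dataclass

@dataclass
class StatisticalDefault:
    prerequisite: str
    justification: str
    conclusion: str
    error_bound: float  # upper bound on the acceptance error

def chained_error_bound(defaults):
    # Union bound (my assumption, not the paper's composition rule): the
    # chance that at least one acceptance in the chain went wrong.
    return min(1.0, sum(d.error_bound for d in defaults))

t_test = StatisticalDefault("sample drawn", "no outliers", "mu < 0", 0.05)
follow_up = StatisticalDefault("mu < 0", "n is large", "effect is real", 0.01)
print(chained_error_bound([t_test, follow_up]))  # 0.06
```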
  22. Gregory Wheeler & Carlos Damasio (2004). An Implementation of Statistical Default Logic. In Jose Alferes & Joao Leite (eds.), Logics in Artificial Intelligence (JELIA 2004). Springer.
    Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory (...)
  23. Jon Williamson (2013). Why Frequentists and Bayesians Need Each Other. Erkenntnis 78 (2):293-318.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.