  1. Four Probability-Preserving Properties of Inferences.Ernest W. Adams - 1996 - Journal of Philosophical Logic 25 (1):1 - 24.
    Different inferences in probabilistic logics of conditionals 'preserve' the probabilities of their premisses to different degrees. Some preserve certainty, some high probability, some positive probability, and some minimum probability. In the first case conclusions must have probability 1 when premisses have probability 1, though they might have probability 0 when their premisses have any lower probability. In the second case, roughly speaking, if premisses are highly probable though not certain then conclusions must also be highly probable. In the third case (...)
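    For orientation, and not as a summary of the paper's own results: these grades of preservation sit alongside the familiar uncertainty-sum bound for probabilistically valid inference, stated below in generic notation (the symbols u, C and P_i are this note's, not the paper's).

      \[ u(C) \;\le\; \sum_{i=1}^{n} u(P_i), \qquad u(X) := 1 - \Pr(X). \]

    Premisses of probability 1 thus force a conclusion of probability 1 (certainty preservation), and premisses each of probability at least 1 - ε/n force a conclusion of probability at least 1 - ε (high-probability preservation).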
  2. Foundations of Rational Choice Under Risk.Paul Anand - 1993 - Oxford University Press.
  3. Under the Influence of Malthus's Law of Population Growth: Darwin Eschews the Statistical Techniques of Adolphe Quetelet.Andre Ariew - 2005 - Studies in History and Philosophy of Science Part C 38 (1):1-19.
    In the epigraph, Fisher is blaming two generations of theoretical biologists, from Darwin on, for ignoring Quetelet's statistical techniques and hence harboring confusions about evolution and natural selection. He is right to imply that Darwin and his contemporaries were aware of the core of Quetelet's work. Quetelet's seminal monograph, Sur L'homme, was widely discussed in Darwin's academic circles. We know that Darwin owned a copy (Schweber 1977). More importantly, we have in Darwin's notebooks two entries referring to Quetelet's work on (...)
  4. Probability All the Way Up.David Atkinson & Jeanne Peijnenburg - 2006 - Synthese 153 (2):187-197.
    Richard Jeffrey’s radical probabilism (‘probability all the way down’) is augmented by the claim that probability cannot be turned into certainty, except by data that logically exclude all alternatives. Once we start being uncertain, no amount of updating will free us from the treadmill of uncertainty. This claim is cast first in objectivist and then in subjectivist terms.
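    The clause 'except by data that logically exclude all alternatives' can be read directly off Bayes' theorem; the display below is a standard observation added only for orientation, with H, E and the priors as generic symbols rather than the paper's notation.

      \[ \Pr(H \mid E) = \frac{\Pr(E \mid H)\Pr(H)}{\Pr(E \mid H)\Pr(H) + \Pr(E \mid \neg H)\Pr(\neg H)} = 1 \iff \Pr(E \mid \neg H)\Pr(\neg H) = 0, \]

    provided Pr(E | H)Pr(H) > 0: updating yields certainty only when the evidence is impossible on every alternative that carries positive prior weight.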
  5. Handbook of the Philosophy of Statistics.Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) - forthcoming - Elsevier.
  6. Philosophy of Statistics, Handbook of the Philosophy of Science, Volume 7.Prasanta S. Bandyopadhyay & Malcolm Forster (eds.) - forthcoming - Elsevier.
  7. Internet Websites Statistics Expressed in the Framework of the Ursell–Mayer Cluster Formalism.D. Bar - 2004 - Foundations of Physics 34 (8):1203-1223.
    We show that it is possible to generalize the Ursell–Mayer cluster formalism so that it may also cover the statistics of Internet websites. Our starting point is the introduction of an extra variable that is assumed to take account, as will be explained, of the nature of the Internet statistics. We then show, following the arguments in Mayer, that one may obtain a phase transition-like phenomenon.
  8. A Critical Examination of the Analysis of Dichotomous Data.William H. Batchelder & Louis Narens - 1977 - Philosophy of Science 44 (1):113-135.
    This paper takes a critical look at theory-free, statistical methodologies for processing and interpreting data taken from respondents answering a set of dichotomous (yes-no) questions. The basic issue concerns to what extent theoretical conclusions based on such analyses are invariant under a class of "informationally equivalent" question transformations. First the notion of Boolean equivalence of two question sets is discussed. Then Lazarsfeld's latent structure analysis is considered in detail. It is discovered that the best fitting latent model depends on which (...)
  9. An Automatic Ockham’s Razor for Bayesians?Gordon Belot - 2018 - Erkenntnis:1-7.
    It is sometimes claimed that the Bayesian framework automatically implements Ockham's razor---that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn't in fact cut it for certain mundane curve-fitting problems.
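    A minimal sketch, under illustrative assumptions, of the marginal-likelihood reasoning usually offered as the 'automatic razor' that the abstract targets: on balanced data a point-null model can receive a higher marginal likelihood than a more flexible rival. The coin-toss setup and all numbers below are invented for illustration and are not Belot's curve-fitting case.

      # Two models for n coin tosses with k heads: M1 fixes the bias at 0.5,
      # M2 gives the bias a uniform prior. Compare marginal likelihoods.
      from math import comb

      def marginal_likelihood_fixed(k, n, theta=0.5):
          # M1: bias fixed at theta; the marginal likelihood is the ordinary likelihood
          return comb(n, k) * theta**k * (1 - theta)**(n - k)

      def marginal_likelihood_uniform(k, n):
          # M2: bias uniform on [0, 1]; integrating the binomial likelihood
          # against the uniform prior gives 1 / (n + 1) for every k
          return 1 / (n + 1)

      k, n = 6, 12
      print(marginal_likelihood_fixed(k, n))    # ~0.226: the simpler model is favoured
      print(marginal_likelihood_uniform(k, n))  # ~0.077

    The flexible model spreads its prior over many possible data sets and so pays an 'Occam factor'; the abstract's claim is that this automatic preference breaks down for certain mundane curve-fitting problems.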
  10. Evidence Amalgamation, Plausibility, and Cancer Research.Marta Bertolaso & Fabio Sterpetti - forthcoming - Synthese:1-39.
    Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis that confront each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
  11. When Evidence is Not in the Mean.Marcel J. Boumans - unknown
    When observing or measuring phenomena, errors are inevitable; one can only aspire to reduce these errors as much as possible. An obvious strategy to achieve this reduction is to use more precise instruments. Another strategy was to develop a theory of these errors that could indicate how to take them into account. One of the greatest achievements of statistics in the beginning of the 19th century was such a theory of error. This theory told the practitioners that the best thing (...)
  12. The Null-Hypothesis Significance-Test Procedure is Still Warranted.Siu L. Chow - 1998 - Behavioral and Brain Sciences 21 (2):228-235.
    Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical Significance. The null-hypothesis significance-test procedure is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and Refutations. The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between substantive and statistical hypotheses, statistical alternative and conceptual alternative hypotheses, and making statistical decisions and drawing theoretical (...)
  13. Philosophical Aspect of Statistical Theory.C. West Churchman - 1946 - Philosophical Review 55 (1):81-87.
  14. Review of Desrosières, Alain (2014), Prouver et gouverner. Une analyse politique des statistiques publiques. [REVIEW]Marc-Kevin Daoust - 2014 - Science Ouverte 1:1-7.
    Prouver et gouverner examines the role of institutions, conventions, and normative issues in the construction of quantitative indicators. Desrosières holds that the scientific development of statistics cannot be studied without taking into account its institutional development, in particular the role of the state, in the constitution of the discipline.
  15. Philosophy as Conceptual Engineering: Inductive Logic in Rudolf Carnap's Scientific Philosophy.Christopher F. French - 2015 - Dissertation, University of British Columbia
  16. Legal Burdens of Proof and Statistical Evidence.Georgi Gardiner - forthcoming - In James Chase & David Coady (eds.), Routledge Handbook of Applied Epistemology. Routledge.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests (...)
  17. The Undetectable Difference: An Experimental Look at the ‘Problem’ of P-Values.William M. Goodman - 2010 - Statistical Literacy Website/Papers: Www.Statlit.Org/Pdf/2010GoodmanASA.Pdf.
    In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter warnings are published periodically. But the core problem is not with p-values, per se. A finding that “p-value is less than α” could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ = x) we (...)
  18. David Van Dantzig's Statistical Work.J. Hemelrijk - 1959 - Synthese 11 (4):335 - 351.
  19. Critical Analysis of Ian Hacking's Book "Logic of Statistical Inference".Amato Herzel, Giampiero Landenna, Ian Hacking & Società Italiana di Statistica - 1982 - Printed with the support of the Consiglio Nazionale delle Ricerche.
  20. Significance Testing with No Alternative Hypothesis: A Measure of Surprise.J. V. Howard - 2009 - Erkenntnis 70 (2):253-270.
    A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed. It is shown that the relation between this measure of surprise and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a (...)
  21. What is a Philosophical Effect? Models of Data in Experimental Philosophy.Bryce Huebner - 2015 - Philosophical Studies 172 (12):3273-3292.
    Papers in experimental philosophy rarely offer an account of what it would take to reveal a philosophically significant effect. In part, this is because experimental philosophers tend to pay insufficient attention to the hierarchy of models that would be required to justify interpretations of their data; as a result, some of their most exciting claims fail as explanations. But this does not impugn experimental philosophy. My aim is to show that experimental philosophy could be made more successful by developing, articulating, (...)
  22. Against the Statistical Account of Special Science Laws.Andreas Hüttemann & Alexander Reutlinger - 2013 - In Vassilios Karakostas & Dennis Dieks (eds.), Recent Progress in Philosophy of Science: Perspectives and Foundational Problems. The Third European Philosophy of Science Association Proceedings. Springer. pp. 181-192.
    John Earman and John T. Roberts advocate a challenging and radical claim regarding the semantics of laws in the special sciences: the statistical account. According to this account, a typical special science law “asserts a certain precisely defined statistical relation among well-defined variables” and this statistical relation does not require being hedged by ceteris paribus conditions. In this paper, we raise two objections against the attempt to cash out the content of special science generalizations in statistical terms.
  23. Causal Conclusions That Flip Repeatedly and Their Justification.Kevin T. Kelly & Conor Mayo-Wilson - 2010 - Proceedings of the Twenty Sixth Conference on Uncertainty in Artificial Intelligence 26:277-286.
    Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal as the sample size increases any finite number of times. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery (...)
  24. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm.Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution for the stopping rule problem presented by Lindley & Phillips in 1976, which is settled in the ecological realm of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to a physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and meaning (...)
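    The Lindley & Phillips case the abstract builds on has a standard arithmetic core, sketched below for the textbook data (9 successes in 12 Bernoulli trials under a fair null); the koala sex-ratio application in the paper has the same structure, and the numbers here are the usual ones rather than the paper's.

      # Same data, two stopping rules, two one-sided p-values.
      from math import comb

      def p_value_binomial(k=9, n=12):
          # Stopping rule: toss exactly n times. p = P(X >= k) under p = 0.5.
          return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

      def p_value_negative_binomial(k=9, r=3):
          # Stopping rule: toss until the r-th failure (here the 12th toss was
          # the 3rd failure). p = P(at most r-1 failures in the first k+r-1 tosses).
          n = k + r - 1
          return sum(comb(n, j) for j in range(r)) / 2**n

      print(p_value_binomial())           # ~0.073: not significant at 0.05
      print(p_value_negative_binomial())  # ~0.033: significant at 0.05

    The same observations thus license different frequentist verdicts depending on the stopping rule, which is the problem the paper's criteria are meant to adjudicate.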
  25. Evidence-Based Medicine Must Be….A. La Caze - 2009 - Journal of Medicine and Philosophy 34 (5):509-527.
    Proponents of evidence-based medicine (EBM) provide the “hierarchy of evidence” as a criterion for judging the reliability of therapeutic decisions. EBM's hierarchy places randomized interventional studies (and systematic reviews of such studies) higher in the hierarchy than observational studies, unsystematic clinical experience, and basic science. Recent philosophical work has questioned whether EBM's special emphasis on evidence from randomized interventional studies can be justified. Following the critical literature, and in particular the work of John Worrall, I agree that many of the (...)
  26. Philosophy Paper.Michael LaPorte - 2013
  27. Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson.Johannes Lenhard - 2006 - British Journal for the Philosophy of Science 57 (1):69-91.
    The main thesis of the paper is that in the case of modern statistics, the differences between the various concepts of models were the key to its formative controversies. The mathematical theory of statistical inference was mainly developed by Ronald A. Fisher, Jerzy Neyman, and Egon S. Pearson. Fisher on the one side and Neyman–Pearson on the other were often engaged in polemical controversy. The common view is that Neyman and Pearson made Fisher's account more stringent mathematically. It is (...)
  28. Galton's Blinding Glasses. Modern Statistics Hiding Causal Structure in Early Theories of Inheritance.Bert Leuridan - 2007 - In Federica Russo & Jon Williamson (eds.), Causality and Probability in the Sciences. pp. 243--262.
    Probability and statistics play an important role in contemporary philosophy of causality. They are viewed as glasses through which we can see or detect causal relations. However, they may sometimes act as blinding glasses, as I will argue in this paper. In the 19th century, Francis Galton tried to statistically analyze hereditary phenomena. Although he was a far better statistician than Gregor Mendel, his biological theory turned out to be less fruitful. This was no sheer accident. His knowledge of (...)
  29. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
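    For readers who have not seen the gambling construction behind these arguments, the smallest Dutch Book runs as follows; the numbers are illustrative and are not taken from the paper. An agent whose credences violate additivity, say Pr(A) = Pr(¬A) = 0.6, regards 60 cents as a fair price for a ticket paying $1 if A, and likewise for a ticket paying $1 if ¬A. Buying both costs $1.20 while exactly one ticket pays out:

      \[ \text{cost} = 0.6 + 0.6 = 1.2, \qquad \text{guaranteed payoff} = 1, \qquad \text{net} = -0.2, \]

    a sure loss however the world turns out. The article's question is whether such results carry any normative force for scientists who, as it says, "don't bet."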
  30. The Psychology of The Two Envelope Problem.J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
  31. Did Pearson Reject the Neyman-Pearson Philosophy of Statistics?Deborah G. Mayo - 1992 - Synthese 90 (2):233 - 262.
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  32. A Philosophical Guide to Chance.J. T. M. Miller - 2016 - Philosophical Quarterly 66 (262):pqv037.
    A review of A Philosophical Guide to Chance by Toby Handfield (Cambridge: Cambridge University Press, 2012).
  33. Multiple Regression Is Not Multiple Regressions: The Meaning of Multiple Regression and the Non-Problem of Collinearity.Michael B. Morrissey & Graeme D. Ruxton - 2018 - Philosophy, Theory, and Practice in Biology 10 (3).
    Simple regression (regression analysis with a single explanatory variable), and multiple regression (regression models with multiple explanatory variables), typically correspond to very different biological questions. The former use regression lines to describe univariate associations. The latter describe the partial, or direct, effects of multiple variables, conditioned on one another. We suspect that the superficial similarity of simple and multiple regression leads to confusion in their interpretation. A clear understanding of these methods is essential, as they underlie a large range of (...)
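    A minimal simulation of the abstract's contrast, with variable names and coefficients invented for illustration: the simple-regression slope on x1 estimates the total association (the direct effect of x1 plus the part routed through the correlated x2), while the multiple-regression coefficients estimate the partial, direct effects.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      x1 = rng.normal(size=n)
      x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 correlated with x1
      y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)    # direct effects: 1.0 and 2.0

      # Simple regression of y on x1 alone: slope ~ 1.0 + 2.0 * 0.8 = 2.6,
      # the total (marginal) association, not the direct effect of x1.
      slope_simple = np.polyfit(x1, y, 1)[0]

      # Multiple regression on x1 and x2: coefficients ~ (1.0, 2.0),
      # the partial effects of each predictor conditioned on the other.
      X = np.column_stack([np.ones(n), x1, x2])
      coef_multiple = np.linalg.lstsq(X, y, rcond=None)[0]

      print(round(slope_simple, 2), np.round(coef_multiple[1:], 2))

    Neither estimate is wrong; they answer different questions, which is the point of the title.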
  34. Notes on Statistical Tests.Daniel Osherson - manuscript
    Let an unbiased coin be used to form an ω-sequence S of independent tosses. Let N be the positive integers. The finite initial segment of length n ∈ N is denoted by Sn (thus, S1 holds exactly the first toss). For n ∈ N , let Hn be the proportion of heads that show up in Sn.
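    The notation in the abstract, restated in display form together with the strong law it sets up; the restatement is added for orientation and is not a claim about the contents of the notes.

      \[ S_n = (X_1, \dots, X_n), \qquad H_n = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad \Pr\Big( \lim_{n \to \infty} H_n = \tfrac{1}{2} \Big) = 1, \]

    where each X_i ∈ {0, 1} records the outcome of the i-th toss of the unbiased coin.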
  35. Bayes's Theorem. [REVIEW]Massimo Pigliucci - 2005 - Quarterly Review of Biology 80 (1):93-95.
    About a British Academy collection of papers on Bayes' famous theorem.
  36. Derivation of the Cramer-Rao Bound.Ryan Reece - manuscript
    I give a pedagogical derivation of the Cramer-Rao Bound, which gives a lower bound on the variance of estimators used in statistical point estimation, commonly used to give numerical estimates of the systematic uncertainties in a measurement.
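    For reference, the bound in question in its standard one-parameter form for an unbiased estimator, written in generic notation:

      \[ \operatorname{Var}_\theta\big(\hat{\theta}\big) \;\ge\; \frac{1}{I(\theta)}, \qquad I(\theta) = \mathbb{E}_\theta\!\left[ \Big( \frac{\partial}{\partial \theta} \ln f(X; \theta) \Big)^{2} \right], \]

    where f(X; θ) is the likelihood of the data and I(θ) is its Fisher information.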
  37. On the Correct Interpretation of P Values and the Importance of Random Variables.Guillaume Rochefort-Maranda - 2016 - Synthese 193 (6):1777-1793.
    The p value is the probability under the null hypothesis of obtaining an experimental result that is at least as extreme as the one that we have actually obtained. That probability plays a crucial role in frequentist statistical inferences. But if we take the word ‘extreme’ to mean ‘improbable’, then we can show that this type of inference can be very problematic. In this paper, I argue that it is a mistake to make such an interpretation. Under minimal assumptions about (...)
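    The definition in the abstract's first sentence corresponds to the standard formula below, written out here (with generic symbols T and x_obs) because it shows exactly where the ordering by 'extremeness' enters:

      \[ p(x_{\mathrm{obs}}) = \Pr_{H_0}\big( T(X) \ge T(x_{\mathrm{obs}}) \big), \]

    with T the test statistic that ranks outcomes from least to most extreme. The abstract's point is that this ranking should not be read as ranking outcomes by their improbability under H_0.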
  38. Statistics as Inductive Inference.Jan-Willem Romeijn - unknown
    An inductive logic is a system of inference that describes the relation between propositions on data, and propositions that extend beyond the data, such as predictions over future data, and general conclusions on all possible data. Statistics, on the other hand, is a mathematical discipline that describes procedures for deriving results about a population from sample data. These results include predictions on future samples, decisions on rejecting or accepting a hypothesis about the population, the determination of probability assignments over such (...)
  39. Can the Behavioral Sciences Self-Correct? A Social Epistemic Study.Felipe Romero - 2016 - Studies in History and Philosophy of Science Part A 60:55-69.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon (...)
  40. Kettlewell From an Error Statistician's Point of View.David Wÿss Rudge - 2001 - Perspectives on Science 9 (1):59-77.
    Bayesians and error statisticians have relied heavily upon examples from physics in developing their accounts of scientific inference. The present essay demonstrates it is possible to analyze H.B.D. Kettlewell's classic study of natural selection from Deborah Mayo's error statistical point of view (Mayo 1996). A comparison with a previous analysis of this episode from a Bayesian perspective (Rudge 1998) reveals that the error statistical account makes better sense of investigations such as Kettlewell's because it clarifies how core elements in (...)
  41. Interpretation of the Hydrodynamical Formalism of Quantum Mechanics.Sebastiano Sonego - 1991 - Foundations of Physics 21 (10):1135-1181.
    The hydrodynamical formalism for the quantum theory of a nonrelativistic particle is considered, together with a reformulation of it which makes use of the methods of kinetic theory and is based on the existence of the Wigner phase-space distribution. It is argued that this reformulation provides strong evidence in favor of the statistical interpretation of quantum mechanics, and it is suggested that this latter could be better understood as an almost classical statistical theory. Moreover, it is shown how, within this (...)
  42. Pragmatic Warrant for Frequentist Statistical Practice: The Case of High Energy Physics.Kent Staley - 2017 - Synthese 194 (2).
    Amidst long-running debates within the field, high energy physics has adopted a statistical methodology that primarily employs standard frequentist techniques such as significance testing and confidence interval estimation, but incorporates Bayesian methods for limited purposes. The discovery of the Higgs boson has drawn increased attention to the statistical methods employed within HEP. Here I argue that the warrant for the practice in HEP of relying primarily on frequentist methods can best be understood as pragmatic, in the sense that statistical methods (...)
  43. Strategies for Securing Evidence Through Model Criticism.Kent W. Staley - 2012 - European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security, (...)
  44. Stopping Rules and Data Monitoring in Clinical Trials.Roger Stanev - 2012 - In H. W. de Regt, S. Hartmann & S. Okasha (eds.), EPSA Philosophy of Science: Amsterdam 2009 (The European Philosophy of Science Association Proceedings, Vol. 1). Springer. pp. 375-386.
    Stopping rules — rules dictating when to stop accumulating data and start analyzing it for the purposes of inferring from the experiment — divide Bayesians, Likelihoodists and classical statistical approaches to inference. Although the relationship between Bayesian philosophy of science and stopping rules can be complex (cf. Steel 2003), in general, Bayesians regard stopping rules as irrelevant to what inference should be drawn from the data. This position clashes with classical statistical accounts. For orthodox statistics, stopping rules do matter to (...)
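    A small simulation of why stopping rules matter on the orthodox view described in the abstract: testing the accumulating data at every interim look and stopping at the first nominally significant result inflates the Type I error rate well above the nominal 0.05. The number of looks, the batch size and the test are arbitrary illustrative choices, not the paper's.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_sims, looks, batch = 10_000, 10, 20     # 10 interim looks, 20 observations each
      false_positives = 0

      for _ in range(n_sims):
          data = np.empty(0)
          for _ in range(looks):
              data = np.append(data, rng.normal(0.0, 1.0, batch))  # H0 is true
              z = data.mean() * np.sqrt(len(data))   # z-test with known sd = 1
              if 2 * stats.norm.sf(abs(z)) < 0.05:   # stop at first "significant" look
                  false_positives += 1
                  break

      print(false_positives / n_sims)   # roughly 0.2, not the nominal 0.05

    Bayesian and likelihoodist analyses of the same sequential data are unchanged by the stopping rule, which is precisely the divide the paper examines.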
  45. The Epistemology and Ethics of Early Stopping Decisions in Randomized Controlled Trials.Roger Stanev - 2012 - Dissertation, University of British Columbia
    Philosophers subscribing to particular principles of statistical inference and evidence need to be aware of the limitations and practical consequences of the statistical approach they endorse. The framework proposed (for statistical inference in the field of medicine) allows disparate statistical approaches to emerge in their appropriate context. My dissertation proposes a decision theoretic model, together with methodological guidelines, that provide important considerations for deciding on clinical trial conduct. These considerations do not amount to more stopping rules. Instead, they are principles (...)
  46. Statistical Decisions and the Interim Analyses of Clinical Trials.Roger Stanev - 2011 - Theoretical Medicine and Bioethics 32 (1):61-74.
    This paper analyzes statistical decisions during the interim analyses of clinical trials. After some general remarks about the ethical and scientific demands of clinical trials, I introduce the notion of a hard-case clinical trial, explain the basic idea behind it, and provide a real example involving the interim analyses of zidovudine in asymptomatic HIV-infected patients. The example leads me to propose a decision analytic framework for handling ethical conflicts that might arise during the monitoring of hard-case clinical trials. I use (...)
  47. Probabilistic Opinion Pooling with Imprecise Probabilities.Rush T. Stewart & Ignacio Ojea Quintana - 2018 - Journal of Philosophical Logic 47 (1):17-45.
    The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (…, 410–414, [45]; Bordley, Management Science, 28, 1137–1148, [5]; Genest et al., The Annals of Statistics, 487–501, [21]; Genest and Zidek, Statistical Science, 114–135, [23]; Mongin, Journal of Economic Theory, 66, 313–351, [46]; Clemen and (...)
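    The 'common ground' assumption is usually cashed out as linear pooling into a single distribution; an imprecise alternative keeps a whole set of pooled distributions instead. The display is a generic statement of the two options, added for orientation, and is not the paper's own formalism.

      \[ P_{\mathrm{group}}(A) = \sum_{i=1}^{n} w_i \, P_i(A), \quad w_i \ge 0, \ \sum_i w_i = 1, \qquad \text{vs.} \qquad \mathbb{P}_{\mathrm{group}} = \Big\{ \sum_i w_i P_i : w \in \Delta^{n-1} \Big\}. \]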
  48. Tychomancy: Inferring Probability From Causal Structure.Michael Strevens - 2013 - Cambridge, MA: Harvard University Press.
    Maxwell's deduction of the probability distribution over the velocity of gas molecules—one of the most important passages in physics (Truesdell)—presents a riddle: a physical discovery of the first importance was made in a single inferential leap without any apparent recourse to empirical evidence. Tychomancy proposes that Maxwell's derivation was not made a priori; rather, he inferred his distribution from non-probabilistic facts about the dynamics of intermolecular collisions. Further, the inference is of the same sort as everyday reasoning about the (...)
  49. Objective Evidence and Absence: Comment on Sober.Michael Strevens - 2009 - Philosophical Studies 143 (1):91 - 100.
    Elliott Sober argues that the statistical slogan “Absence of evidence is not evidence of absence” cannot be taken literally: it must be interpreted charitably as claiming that the absence of evidence is (typically) not very much evidence of absence. I offer an alternative interpretation, on which the slogan claims that absence of evidence is (typically) not objective evidence of absence. I sketch a definition of objective evidence, founded in the notion of an epistemically objective likelihood, and I show that in (...)
  50. The Future Has Thicker Tails Than the Past: Model Error as Branching Counterfactuals.Nassim N. Taleb - manuscript
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
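    A quick numerical illustration of the mechanism the abstract describes, with the recursion depth and perturbation size chosen arbitrarily for the sketch: layering uncertainty on the scale of a Gaussian, then on the scale of that uncertainty, and so on, yields a mixture with clearly positive excess kurtosis, i.e., fatter tails than any fixed-scale Gaussian.

      import numpy as np

      rng = np.random.default_rng(2)
      n, levels, a = 1_000_000, 5, 0.3          # illustrative choices only

      # Layer multiplicative uncertainty on the standard deviation itself:
      # sigma -> sigma * |1 + a*e|, and the scale of e is perturbed in turn.
      sigma = np.ones(n)
      for _ in range(levels):
          sigma *= np.abs(1.0 + a * rng.normal(size=n))
      x = sigma * rng.normal(size=n)            # "error rates have error rates"
      z = rng.normal(size=n)                    # fixed-scale Gaussian baseline

      def excess_kurtosis(v):
          v = v - v.mean()
          return (v ** 4).mean() / (v ** 2).mean() ** 2 - 3.0

      print(excess_kurtosis(x))   # clearly positive: fatter tails than the Gaussian
      print(excess_kurtosis(z))   # close to 0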