Contents
259 found
1 — 50 / 259
  1. Cohen’s convention and the body of knowledge in behavioral science.Aran Arslan & Frank Zenker - manuscript
    In the context of discovery-oriented hypothesis testing research, behavioral scientists widely accept a convention for the false positive (α) and false negative (β) error rates proposed by Jacob Cohen, who deemed the general relative seriousness of the antecedently accepted α = 0.05 to be matched by β = 0.20. Cohen’s convention not only ignores contexts of hypothesis testing where the more serious error is the β-error; it also implies for discovery-oriented hypothesis testing research that a statistically significant observed effect is (...)
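    A quick numerical gloss on the convention (a minimal sketch, assuming a two-sided two-sample comparison, a normal approximation, and a "medium" standardized effect size of 0.5; the figures are illustrative, not drawn from the paper):
    ```python
    # Minimal sketch (not from the paper): what alpha = 0.05 with beta = 0.20
    # (i.e., power = 0.80) implies for sample size under a normal approximation.
    from scipy.stats import norm

    alpha, beta = 0.05, 0.20            # Cohen's conventional error rates
    d = 0.5                             # assumed "medium" standardized effect

    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(1 - beta)         # power = 1 - beta

    n_per_group = 2 * (z_alpha + z_beta) ** 2 / d ** 2
    print(round(n_per_group))           # about 63 participants per group
    ```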
  2. Conditional Probability Is Not Countably Additive.Dmitri Gallow - manuscript
    I argue for a connection between two debates in the philosophy of probability. On the one hand, there is disagreement about conditional probability. Is it to be defined in terms of unconditional probability, or should we instead take conditional probability as the primitive notion? On the other hand, there is disagreement about how additive probability is. Is it merely finitely additive, or is it additionally countably additive? My thesis is that, if conditional probability is primitive, then it is not countably (...)
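    For reference, the two additivity conditions at issue, in their standard form (textbook definitions, not quoted from the paper): $P$ is finitely additive iff for any pairwise disjoint events $A_1, \dots, A_n$,
    $$P\Big(\bigcup_{i=1}^{n} A_i\Big) \;=\; \sum_{i=1}^{n} P(A_i),$$
    and countably additive iff the same holds for every countably infinite sequence of pairwise disjoint events,
    $$P\Big(\bigcup_{i=1}^{\infty} A_i\Big) \;=\; \sum_{i=1}^{\infty} P(A_i).$$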
  3. Linguistic Copenhagen interpretation of quantum mechanics: Quantum Language [Ver. 6] (6th edition).Shiro Ishikawa - manuscript
    Recently I proposed “quantum language” (or,“the linguistic Copenhagen interpretation of quantum mechanics”), which was not only characterized as the metaphysical and linguistic turn of quantum mechanics but also the linguistic turn of Descartes=Kant epistemology. Namely, quantum language is the scientific final goal of dualistic idealism. It has a great power to describe classical systems as well as quantum systems. In this research report, quantum language is seen as a fundamental theory of statistics and reveals the true nature of statistics.
  4. The Psychology of The Two Envelope Problem.J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards, it presents the reader with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
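    The paradoxical calculation at the core of the parent problem, in its standard rendering (not quoted from the article): if your envelope holds $x$ and the other holds $2x$ or $x/2$ with equal probability, switching appears to pay
    $$E[\text{switch}] \;=\; \tfrac{1}{2}\,(2x) + \tfrac{1}{2}\,\tfrac{x}{2} \;=\; \tfrac{5}{4}\,x \;>\; x,$$
    and the same reasoning then recommends switching back, which is absurd. A common diagnosis, and the kind of misreading the article investigates, is that $x$ silently denotes different quantities in the two branches.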
  5. Type I error rates are not usually inflated.Mark Rubin - manuscript
    The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin (...)
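    A toy illustration of the distinction the paper turns on (my construction, not the author's analysis): multiple testing inflates the familywise error rate, while each individual test's Type I error rate stays at the nominal level.
    ```python
    # Sketch (mine): familywise vs individual Type I error under k tests
    # of true null hypotheses, whose p-values are Uniform(0, 1).
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, k, reps = 0.05, 10, 100_000

    p = rng.uniform(size=(reps, k))              # k independent tests per study

    per_test = (p[:, 0] < alpha).mean()          # individual error rate
    familywise = (p < alpha).any(axis=1).mean()  # at least one false positive

    print(per_test)     # about 0.05
    print(familywise)   # about 1 - (1 - 0.05)**10 = 0.40
    ```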
  6. The Future Has Thicker Tails than the Past: Model Error as Branching Counterfactuals.Nassim N. Taleb - manuscript
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
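    A minimal sketch of the abstract's recursion idea, under assumptions of my own (multiplicative normal perturbations of the scale parameter; the layer count and perturbation size are arbitrary):
    ```python
    # Sketch (my construction, following the idea that "error rates have
    # error rates"): layering uncertainty on the scale parameter of a
    # normal produces fatter tails than the base distribution.
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(1)
    n, layers, a = 1_000_000, 5, 0.3

    sigma = np.ones(n)
    for _ in range(layers):                       # each layer: the previous
        sigma *= 1 + a * rng.standard_normal(n)   # error rate is itself uncertain
    x = np.abs(sigma) * rng.standard_normal(n)

    print(kurtosis(rng.standard_normal(n)))  # ~ 0: normal baseline
    print(kurtosis(x))                       # markedly positive: fatter tails
    ```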
  7. Legal Burdens of Proof and Statistical Evidence.Georgi Gardiner - forthcoming - In James Chase & David Coady (eds.), The Routledge Handbook of Applied Epistemology. Routledge.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests (...)
  8. Just Probabilities.Chad Lee-Stronach - forthcoming - Noûs.
    I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many have rejected this thesis because it seems to entail that defendants can be found liable solely on the basis of statistical evidence. I argue that this inference is invalid. I do so by developing a view, called Legal Causalism, that combines Thomson's (1986) causal analysis of evidence with recent work in formal theories of causal inference. On this view, legal standards of proof can be (...)
  9. Scientific Metaphysics and Information.Bruce Long - forthcoming - Springer.
    This book investigates the interplay between two new and influential subdisciplines in the philosophy of science and philosophy: contemporary scientific metaphysics and the philosophy of information. Scientific metaphysics embodies various scientific realisms and has a partial intellectual heritage in some forms of neo-positivism, but is far more attuned than the latter to statistical science, theory defeasibility, scale variability, and pluralist ontological and explanatory commitments, and is averse to a-priori conceptual analysis. The philosophy of information is the combination of what has (...)
  10. Making decisions with evidential probability and objective Bayesian calibration inductive logics.Mantas Radzvilas, William Peden & Francesco De Pretis - forthcoming - International Journal of Approximate Reasoning:1-37.
    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short-run? Under what circumstances do they do better or worse? What is their (...)
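    A schematic of the "accept a frequency estimate, get an imprecise probability" step (a simplification of my own; neither Kyburg's nor Williamson's actual machinery, and the sample numbers are invented for illustration):
    ```python
    # Sketch (mine): accept an interval estimate of a relative frequency,
    # then treat the interval as an imprecise probability when pricing a bet.
    import math

    successes, n = 62, 100
    p_hat = successes / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # ~95% interval
    lo, hi = p_hat - half_width, p_hat + half_width

    # A bet paying +1 if the event occurs, -1 otherwise:
    worst_case = lo * 1 + (1 - lo) * -1   # least favorable point of the interval
    best_case = hi * 1 + (1 - hi) * -1
    print((lo, hi), (worst_case, best_case))
    ```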
  11. Edgar Zilsel: Philosopher, Historian, Sociologist. (Vienna Circle Institute Yearbook, vol. 27).Donata Romizi, Monika Wulz & Elisabeth Nemeth (eds.) - forthcoming - Cham: Springer Nature.
    This book provides a new all-round perspective on the life and work of Edgar Zilsel (1891-1944) as a philosopher, historian, and sociologist. He was close to the Vienna Circle and has been hitherto almost exclusively referred to in terms of the so-called “Zilsel thesis” on the origins of modern science. Much beyond this “thesis”, Zilsel’s brilliant work provides original insights on a broad number of topics, ranging from the philosophy of probability and statistics to the concept of “genius”, from the (...)
  12. Bayesian merging of opinions and algorithmic randomness.Francesca Zaffora Blando - forthcoming - British Journal for the Philosophy of Science.
    We study the phenomenon of merging of opinions for computationally limited Bayesian agents from the perspective of algorithmic randomness. When they agree on which data streams are algorithmically random, two Bayesian agents beginning the learning process with different priors may be seen as having compatible beliefs about the global uniformity of nature. This is because the algorithmically random data streams are of necessity globally regular: they are precisely the sequences that satisfy certain important statistical laws. By virtue of agreeing on (...)
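    Classical merging in its simplest form (a sketch of the background phenomenon only, not of the paper's algorithmic-randomness analysis; the priors and data-generating parameter are arbitrary choices of mine):
    ```python
    # Sketch (mine): two agents with different Beta priors update on the
    # same Bernoulli data stream; their posterior means converge.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.random(2000) < 0.7        # Bernoulli(0.7) stream

    a1, b1 = 1.0, 1.0                    # agent 1: uniform prior
    a2, b2 = 20.0, 2.0                   # agent 2: strongly opinionated prior
    for t, x in enumerate(data, 1):
        a1, b1 = a1 + x, b1 + (1 - x)
        a2, b2 = a2 + x, b2 + (1 - x)
        if t in (10, 100, 2000):
            print(t, a1 / (a1 + b1), a2 / (a2 + b2))  # both approach 0.7
    ```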
  13. The deep neural network approach to the reference class problem.Oliver Buchholz - 2023 - Synthese 201 (3):1-24.
    Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists (...)
  14. Accuracy and infinity: a dilemma for subjective Bayesians.Mikayla Kelley & Sven Neth - 2023 - Synthese 201 (12):1-14.
    We argue that subjective Bayesians face a dilemma: they must offend against the spirit of their permissivism about rational credence or reject the principle that one should avoid accuracy dominance.
  15. The rise of mathematics in biology was not a matter of luck: Charles H. Pence: The rise of chance in evolutionary theory: a pompous parade of arithmetic. London: Academic Press, 2021, 190 pp, $125 PB. [REVIEW]Ehud Lamm - 2023 - Metascience 32 (3):359-362.
  16. Quantum Indeterminism, Free Will, and Self-Causation.Marco Masi - 2023 - Journal of Consciousness Studies 30 (5-6):32–56.
    A view that emancipates free will by means of quantum indeterminism is frequently rejected based on arguments pointing out its incompatibility with what we know about quantum physics. However, if one carefully examines what classical physical causal determinism and quantum indeterminism are according to physics, it becomes clear what they really imply, and especially what they do not imply, for agent-causation theories. Here, we will make necessary conceptual clarifications on some aspects of physical determinism and indeterminism, review some of the major objections (...)
  17. A Dilemma for Solomonoff Prediction.Sven Neth - 2023 - Philosophy of Science 90 (2):288-306.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
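    The prior in question, in its standard formulation (a textbook statement, not quoted from the paper): relative to a universal prefix machine $U$, the algorithmic probability of a string $x$ is
    $$M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)},$$
    where $\ell(p)$ is the length of program $p$. By the coding theorem, $M(x)$ is within a multiplicative constant of $2^{-K(x)}$, so hypotheses of low Kolmogorov complexity receive exponentially greater prior weight. Both problems the paper cites are visible here: $M$ depends on the choice of $U$, and $M$ is not computable.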
  18. Merely statistical evidence: when and why it justifies belief.Paul Silva - 2023 - Philosophical Studies 180 (9):2639-2664.
    It is one thing to hold that merely statistical evidence is _sometimes_ insufficient for rational belief, as in typical lottery and profiling cases. It is another thing to hold that merely statistical evidence is _always_ insufficient for rational belief. Indeed, there are cases where statistical evidence plainly does justify belief. This project develops a dispositional account of the normativity of statistical evidence, where the dispositions that ground justifying statistical evidence are connected to the goals (= proper function) of objects. There (...)
  19. Essential materials for Bayesian Mindsponge Framework analytics.Aisdl Team - 2023 - Sm3D Science Portal.
    Acknowledging that many members of the SM3D Portal need reference documents related to Bayesian Mindsponge Framework (BMF) analytics to conduct research projects effectively, we present the essential materials and most up-to-date studies employing the method in this post. By summarizing all the publications and preprints associated with BMF analytics, we also aim to help researchers reduce the time and effort for information seeking, enhance proactive self-learning, and facilitate knowledge exchange and community dialogue through transparency.
  20. Pharmacovigilance as Personalized Evidence.Francesco De Pretis, William Peden, Jürgen Landes & Barbara Osimani - 2022 - In Chiara Beneduce & Marta Bertolaso (eds.), Personalized Medicine in the Making. Springer. pp. 147-171.
    Personalized medicine relies on two points: 1) causal knowledge about the possible effects of X in a given statistical population; 2) assignment of the given individual to a suitable reference class. Regarding point 1, standard approaches to causal inference are generally considered to be characterized by a trade-off between how confidently one can establish causality in any given study (internal validity) and extrapolating such knowledge to specific target groups (external validity). Regarding point 2, it is uncertain which reference class leads (...)
  21. Calibrating statistical tools: Improving the measure of Humanity's influence on the climate.Corey Dethier - 2022 - Studies in History and Philosophy of Science Part A 94 (C):158-166.
    Over the last twenty-five years, climate scientists working on the attribution of climate change to humans have developed increasingly sophisticated statistical models in a process that can be understood as a kind of calibration: the gradual changes to the statistical models employed in attribution studies served as iterative revisions to a measurement(-like) procedure motivated primarily by the aim of neutralizing particularly troublesome sources of error or uncertainty. This practice is in keeping with recent work on the evaluation of models more (...)
  22. When is an Ensemble like a Sample?Corey Dethier - 2022 - Synthese 200 (52):1-22.
    Climate scientists often apply statistical tools to a set of different estimates generated by an “ensemble” of models. In this paper, I argue that the resulting inferences are justified in the same way as any other statistical inference: what must be demonstrated is that the statistical model that licenses the inferences accurately represents the probabilistic relationship between data and target. This view of statistical practice is appropriately termed “model-based,” and I examine the use of statistics in climate fingerprinting to show (...)
  23. The safe, the sensitive, and the severely tested: a unified account.Georgi Gardiner & Brian Zaharatos - 2022 - Synthese 200 (5):1-33.
    This essay presents a unified account of safety, sensitivity, and severe testing. S’s belief is safe iff, roughly, S could not easily have falsely believed p, and S’s belief is sensitive iff, were p false, S would not believe p. These two conditions are typically viewed as rivals but, we argue, they instead play symbiotic roles. Safety and sensitivity are both valuable epistemic conditions, and the relevant alternatives framework provides the scaffolding for their mutually supportive roles. The relevant alternatives condition (...)
  24. There is Cause to Randomize.Cristian Larroulet Philippi - 2022 - Philosophy of Science 89 (1):152 - 170.
    While practitioners think highly of randomized studies, some philosophers argue that there is no epistemic reason to randomize. Here I show that their arguments do not entail their conclusion. Moreover, I provide novel reasons for randomizing in the context of interventional studies. The overall discussion provides a unified framework for assessing baseline balance, one that holds for interventional and observational studies alike. The upshot: practitioners’ strong preference for randomized studies can be defended in some cases, while still offering a nuanced (...)
  25. The Material Theory of Induction at the Frontiers of Science.William Peden - 2022 - Episteme 19 (2):247-263.
    According to John D. Norton's Material Theory of Induction, all reasonable inductive inferences are justified in virtue of background knowledge about local uniformities in nature. These local uniformities indicate that our samples are likely to be representative of our target population in our inductions. However, a variety of critics have noted that there are many circumstances in which induction seems to be reasonable, yet such background knowledge is apparently absent. I call such absences ‘the frontiers of science', where background scientific (...)
  26. Probability and Statistics in the Tinbergen-Keynes Debates.William Peden - 2022 - Erasmus Journal for Philosophy and Economics 15 (2):aa–aa.
    As part of a book symposium on Erwin Dekker's Jan Tinbergen (1903–1994) and the Rise of Economic Expertise (2021), William Peden reflects on shared views on the objectivity and nature of statistics between Tinbergen and Keynes underlying the Tinbergen-Keynes debates.
  27. Explanatory reasoning in the material theory of induction.William Peden - 2022 - Metascience 31 (3):303-309.
    In his recent book, John Norton has created a theory of inference to the best explanation, within the context of his "material theory of induction". I apply it to the problem of scientific explanations that are false: if we want the theories in our explanations to be true, then why do historians and scientists often say that false theories explained phenomena? I also defend Norton's theory against some possible objections.
  28. Distention for Sets of Probabilities.Rush T. Stewart & Michael Nielsen - 2022 - Philosophy of Science 89 (3):604-620.
    Bayesians often appeal to “merging of opinions” to rebut charges of excessive subjectivity. But what happens in the short run is often of greater interest than what happens in the limit. Seidenfeld and coauthors use this observation as motivation for investigating the counterintuitive short run phenomenon of dilation, since, they allege, dilation is “the opposite” of asymptotic merging of opinions. The measure of uncertainty relevant for dilation, however, is not the one relevant for merging of opinions. We explicitly investigate the (...)
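    A standard toy example of dilation (a textbook case, not the paper's own): let $H_1$ be a fair coin toss and let $H_2 = H_1 \oplus Z$, where $Z$ is independent of $H_1$ but $P(Z = 1)$ is completely unknown, so the credal set contains a prior for every value of $P(Z = 1) \in [0, 1]$. Every member of the set assigns $P(H_2) = 1/2$, yet upon learning the outcome of $H_1$, $P(H_2 \mid H_1)$ ranges over all of $[0, 1]$: conditioning dilates a precise probability into the maximally imprecise interval.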
  29. A preamble about doing research that sells.Quan-Hoang Vuong - 2022 - In Quan-Hoang Vuong, Minh-Hoang Nguyen & Viet-Phuong La (eds.), The mindsponge and BMF analytics for innovative thinking in social sciences and humanities. Berlin, Germany: De Gruyter.
    Being a researcher is challenging, especially in the beginning. Early Career Researchers (ECRs) need achievements to secure and expand their careers. In today’s academic landscape, researchers are under many pressures: data collection costs, the expectation of novelty, analytical skill requirements, lengthy publishing process, and the overall competitiveness of the career. Innovative thinking and the ability to turn good ideas into good papers are the keys to success.
  30. Causal Inference from Noise.Nevin Climenhaga, Lane DesAutels & Grant Ramsey - 2021 - Noûs 55 (1):152-170.
    "Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational (...)
  31. Contested Numbers: The failed negotiation of objective statistics in a methodological review of Kinsey et al.’s sex research.Tabea Cornel - 2021 - History and Philosophy of the Life Sciences 43 (1):1-32.
    From 1950 to 1952, statisticians W.G. Cochran, C.F. Mosteller, and J.W. Tukey reviewed A.C. Kinsey and colleagues’ methodology. Neither the history-and-philosophy of science literature nor contemporary theories of interdisciplinarity seem to offer a conceptual model that fits this forced interaction, which was characterized by significant power asymmetries and disagreements on multiple levels. The statisticians initially attempted to exclude all non-technical matters from their evaluation, but their political and personal investments interfered with this agenda. In the face of McCarthy’s witch hunts, (...)
  32. Why Simpler Computer Simulation Models Can Be Epistemically Better for Informing Decisions.Casey Helgeson, Vivek Srikrishnan, Klaus Keller & Nancy Tuana - 2021 - Philosophy of Science 88 (2):213-233.
    For computer simulation models to usefully inform climate risk management, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many ti...
  33. Francis Galton’s regression towards mediocrity and the stability of types.Adam Krashniak & Ehud Lamm - 2021 - Studies in History and Philosophy of Science Part A 81 (C):6-19.
    A prevalent narrative locates the discovery of the statistical phenomenon of regression to the mean in the work of Francis Galton. It is claimed that after 1885, Galton came to explain the fact that offspring deviated less from the mean value of the population than their parents did as a population-level statistical phenomenon and not as the result of the processes of inheritance. Arguing against this claim, we show that Galton did not explain regression towards mediocrity statistically, and did not (...)
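    The phenomenon at issue, in modern notation (mine, not Galton's): for parent and offspring traits with common mean $\mu$, equal variances, and correlation $\rho$ with $|\rho| < 1$,
    $$E[Y \mid X = x] \;=\; \mu + \rho\,(x - \mu),$$
    so the offspring's expected deviation from the mean is only the fraction $\rho$ of the parental deviation. The historical question is whether Galton read this as a population-level statistical fact or as a product of the mechanisms of inheritance.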
  34. The epistemic consequences of pragmatic value-laden scientific inference.Adam P. Kubiak & Paweł Kawalec - 2021 - European Journal for Philosophy of Science 11 (2):1-26.
    In this work, we explore the epistemic import of the value-ladenness of Neyman-Pearson’s Theory of Testing Hypotheses by reconstructing and extending Daniel Steel’s argument for the legitimate influence of pragmatic values on scientific inference. We focus on how to properly understand N-P’s pragmatic value-ladenness and the epistemic reliability of N-P. We develop an account of the twofold influence of pragmatic values on N-P’s epistemic reliability and replicability. We refer to these two distinguished aspects as “direct” and “indirect”. We discuss the (...)
  35. Christianity & Science in Harmony?Robert W. P. Luk - 2021 - Science and Philosophy 9 (2):61-82.
    A worldview that does not involve religion or science seems to be incomplete. However, a worldview that includes both religion and science may raise concerns about incompatibility. This paper looks at the particular religion, Christianity, and proceeds to develop a worldview in which Christianity and Science are compatible with each other. The worldview may make use of some ideas of Christianity and may involve some of the author’s own ideas on Christianity. It is thought that Christianity and Science are in harmony in (...)
  36. Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem.Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper (...)
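    The stopping-rule problem admits a compact worked example (a standard one, chosen by me rather than taken from the paper): the same data, 9 successes and 3 failures, yields different p-values depending on whether the experimenter fixed the number of trials in advance or sampled until the third failure.
    ```python
    # Standard stopping-rule example (my choice, not from the paper):
    # identical data, different sampling plans, different p-values.
    from scipy.stats import binom, nbinom

    # Test H0: p = 0.5 against p > 0.5, having seen 9 successes, 3 failures.

    # Plan A: n = 12 trials was fixed in advance.
    p_binomial = binom.sf(8, 12, 0.5)   # P(X >= 9) ~ 0.073: not significant

    # Plan B: sample until the 3rd failure (9 successes happened to occur).
    # scipy's nbinom counts failures before the 3rd success; with p = 0.5
    # this is symmetric to counting successes before the 3rd failure.
    p_negbinom = nbinom.sf(8, 3, 0.5)   # P(Y >= 9) ~ 0.033: significant

    print(p_binomial, p_negbinom)
    ```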
  37. Statistical Significance Testing in Economics.William Peden & Jan Sprenger - 2021 - In Conrad Heilmann & Julian Reiss (eds.), The Routledge Handbook of the Philosophy of Economics.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  38. A Battle in the Statistics Wars: a simulation-based comparison of Bayesian, Frequentist and Williamsonian methodologies.Mantas Radzvilas, William Peden & Francesco De Pretis - 2021 - Synthese 199 (5-6):13689-13748.
    The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We conducted a large number of simulations using a straightforward decision problem based (...)
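    A toy sketch in the spirit of the simulation approach (my setup, vastly simpler than the paper's decision problem; the true parameter, sample size, and prior are arbitrary):
    ```python
    # Sketch (mine): short-run squared-error loss of a frequentist MLE
    # versus a Bayesian posterior mean on small binomial samples.
    import numpy as np

    rng = np.random.default_rng(3)
    true_p, n, reps = 0.3, 10, 50_000
    x = rng.binomial(n, true_p, size=reps)

    mle = x / n                    # frequentist estimate
    bayes = (x + 1) / (n + 2)      # posterior mean under a uniform prior

    print(((mle - true_p) ** 2).mean())    # MSE of the MLE
    print(((bayes - true_p) ** 2).mean())  # MSE of the Bayes estimator
    ```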
  39. Hacking, Ian (1936–).Samuli Reijula - 2021 - Routledge Encyclopedia of Philosophy.
    Ian Hacking (born in 1936, Vancouver, British Columbia) is most well-known for his work in the philosophy of the natural and social sciences, but his contributions to philosophy are broad, spanning many areas and traditions. In his detailed case studies of the development of probabilistic and statistical reasoning, Hacking pioneered the naturalistic approach in the philosophy of science. Hacking’s research on social constructionism, transient mental illnesses, and the looping effect of human kinds makes use of historical materials to shed (...)
  40. Inflated effect sizes and underpowered tests: how the severity measure of evidence is affected by the winner’s curse.Guillaume Rochefort-Maranda - 2021 - Philosophical Studies 178 (1):133-145.
    My aim in this paper is to show how the problem of inflated effect sizes corrupts the severity measure of evidence. This has never been done. In fact, the Winner’s Curse is barely mentioned in the philosophical literature. Since the severity score is the predominant measure of evidence for frequentist tests in the philosophical literature, it is important to underscore its flaws. It is also crucial to bring the philosophical literature up to speed with the limits of classical testing. The (...)
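    The winner's curse is easy to exhibit by simulation (a sketch under assumptions of my own: a one-sided z test, a small true effect, and small samples):
    ```python
    # Sketch (mine): under low power, effect estimates that pass the
    # significance filter systematically overestimate the true effect.
    import numpy as np

    rng = np.random.default_rng(4)
    true_d, n, reps = 0.2, 25, 100_000           # small effect, small samples

    se = np.sqrt(2 / n)                          # SE of a two-group d-hat
    est = true_d + rng.standard_normal(reps) * se
    significant = est / se > 1.645               # one-sided z test, alpha = 0.05

    print(significant.mean())      # power is low, about 0.17
    print(est[significant].mean()) # ~0.6: roughly three times the true 0.2
    ```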
  41. When to adjust alpha during multiple testing: a consideration of disjunction, conjunction, and individual testing.Mark Rubin - 2021 - Synthese 199 (3-4):10969-11000.
    Scientists often adjust their significance threshold during null hypothesis significance testing in order to take into account multiple testing and multiple comparisons. This alpha adjustment has become particularly relevant in the context of the replication crisis in science. The present article considers the conditions in which this alpha adjustment is appropriate and the conditions in which it is inappropriate. A distinction is drawn between three types of multiple testing: disjunction testing, conjunction testing, and individual testing. It is argued that alpha (...)
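    A small worked example of the adjustments at stake in the disjunction-testing case, where any one of k significant results would support the claim (standard corrections; the numbers are mine):
    ```python
    # Sketch (mine): per-test alphas under Bonferroni and Sidak corrections,
    # and the unadjusted familywise error rate they are meant to control.
    k, alpha = 3, 0.05

    bonferroni = alpha / k               # 0.0167 per test
    sidak = 1 - (1 - alpha) ** (1 / k)   # 0.0170 per test

    # Unadjusted, the chance of at least one false positive across k
    # independent true-null tests would be:
    familywise = 1 - (1 - alpha) ** k    # 0.1426
    print(bonferroni, sidak, familywise)
    ```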
  42. Phrenology and the average person, 1840–1940.Fenneke Sysling - 2021 - History of the Human Sciences 34 (2):27-45.
    The popular science of phrenology is known for its preoccupation with geniuses and criminals, but this article shows that phrenologists also introduced ideas about the ‘average’ person. Popular phrenologists in the US and the UK examined the heads of their clients to give an indication of their character. Based on the publications of phrenologists and on a large collection of standardized charts with clients’ scores, this article analyses their definition of what they considered to be the ‘average’. It can be (...)
  43. Sample representation in the social sciences.Kino Zhao - 2021 - Synthese (10):9097-9115.
    The social sciences face a problem of sample non-representation, where the majority of samples consist of undergraduate students from Euro-American institutions. The problem has been identified for decades with little trend of improvement. In this paper, I trace the history of sampling theory. The dominant framework, called the design-based approach, takes random sampling as the gold standard. The idea is that a sampling procedure that is maximally uninformative prevents samplers from introducing arbitrary bias, thus preserving sample representation. I show how (...)
  44. Are Scientific Models of life Testable? A lesson from Simpson's Paradox.Prasanta S. Bandyopadhyay, Don Dcruz, Nolan Grunska & Mark Greenwood - 2020 - Sci 1 (3).
    We address the need for a model by considering two competing theories regarding the origin of life: (i) the Metabolism First theory, and (ii) the RNA World theory. We discuss two interrelated points, namely: (i) Models are valuable tools for understanding both the processes and intricacies of origin-of-life issues, and (ii) Insights from models also help us to evaluate the core objection to origin-of-life theories, called “the inefficiency objection”, which is commonly raised by proponents of both the Metabolism First theory (...)
  45. Reliability: an introduction.Stefano Bonzio, Jürgen Landes & Barbara Osimani - 2020 - Synthese (Suppl 23):1-10.
    How we can reliably draw inferences from data, evidence and/or experience has been and continues to be a pressing question in everyday life, the sciences, politics and a number of branches in philosophy (traditional epistemology, social epistemology, formal epistemology, logic and philosophy of the sciences). In a world in which we can no longer fully rely on our experiences, interlocutors, measurement instruments, data collection and storage systems and even news outlets to draw reliable inferences, the issue becomes even more pressing. (...)
  46. Statistical significance under low power: A Gettier case?Daniel Dunleavy - 2020 - Journal of Brief Ideas.
    A brief idea on statistics and epistemology.
  47. Classical versus Bayesian Statistics.Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with (...)
  48. Reflexões acerca de Big Data e Cognição [Reflections on Big Data and Cognition].Joao Kogler - 2020 - In Mariana C. Broens & Edna A. De Souza (eds.), Big Data: Implicações Epistemológicas e Éticas. São Paulo, State of São Paulo, Brazil: Editora Filoczar. pp. 145-157.
    In this essay we examine the relationships between Big Data and cognition, in particular human cognition. The reason for exploring such relationships lies in two aspects. First, because in the domain of cognitive science, many speculate about the benefits that the uses of Big Data analysis techniques can provide to the characterization and understanding of cognition. Secondly, because the scientific and technological sectors that promote data analysis activities, particularly statistics, computer science and data science, naturally accustomed to working with Big (...)
  49. Heinrich Hartmann. The Body Populace: Military Statistics and Demography in Europe before the First World War. Translated by Ellen Yutzy Glebe. (Transformations: Studies in the History of Science and Technology.) xxiii + 256 pp., notes, bibl., index. Cambridge, Mass./London: MIT Press, 2018. $40 (paper). ISBN 9780262536325. [REVIEW]Morgane Labbé - 2020 - Isis 111 (2):406-407.
  50. Scientific self-correction: the Bayesian way.Felipe Romero & Jan Sprenger - 2020 - Synthese (Suppl 23):1-21.
    The enduring replication crisis in many scientific disciplines casts doubt on the ability of science to estimate effect sizes accurately, and in a wider sense, to self-correct its findings and to produce reliable knowledge. We investigate the merits of a particular countermeasure—replacing null hypothesis significance testing with Bayesian inference—in the context of the meta-analytic aggregation of effect sizes. In particular, we elaborate on the advantages of this Bayesian reform proposal under conditions of publication bias and other methodological imperfections that are (...)
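    A rough sketch of the publication-bias setting (my toy setup, not the paper's simulations): when only significant results are published, naive aggregation of published effect sizes overestimates the true effect.
    ```python
    # Sketch (mine): the significance filter distorts the aggregate of
    # published effect estimates relative to the unfiltered collection.
    import numpy as np

    rng = np.random.default_rng(5)
    true_d, n, studies = 0.1, 30, 20_000
    se = np.sqrt(2 / n)
    est = true_d + rng.standard_normal(studies) * se

    published = est[np.abs(est / se) > 1.96]  # only "significant" studies

    print(est.mean())        # ~0.10: unbiased without the filter
    print(published.mean())  # much larger: the published record is distorted
    ```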