  1. The Psychology of The Two Envelope Problem.J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
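    For readers new to the parent paradox, the naive switching argument that these variants are built to defuse can be stated in one line (a standard textbook formulation, not taken from Markovitch's article): if your envelope contains x, the other is presumed to hold 2x or x/2 with equal probability, giving it the apparent expected value E[other] = (1/2)(2x) + (1/2)(x/2) = 5x/4 > x. The calculation seems to favour switching whatever x is, and by symmetry the same reasoning favours switching back, which is the paradox.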
  2. Distention for Sets of Probabilities.Rush T. Stewart & Michael Nielsen - manuscript
    A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with (...)
  3. The Future Has Thicker Tails Than the Past: Model Error as Branching Counterfactuals.Nassim N. Taleb
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
  4. Legal Burdens of Proof and Statistical Evidence.Georgi Gardiner - forthcoming - In James Chase & David Coady (eds.), The Routledge Handbook of Applied Epistemology. Routledge.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. A family of influential cases suggests (...)
  5. Sample Representation in the Social Sciences.Kino Zhao - forthcoming - Synthese:1-19.
    The social sciences face a problem of sample non-representation, where the majority of samples consist of undergraduate students from Euro-American institutions. The problem has been identified for decades with little trend of improvement. In this paper, I trace the history of sampling theory. The dominant framework, called the design-based approach, takes random sampling as the gold standard. The idea is that a sampling procedure that is maximally uninformative prevents samplers from introducing arbitrary bias, thus preserving sample representation. I show how (...)
  6. Causal Inference From Noise.Nevin Climenhaga, Lane DesAutels & Grant Ramsey - 2021 - Noûs 55 (1):152-170.
    "Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational (...)
  7. Statistical Inference and the Replication Crisis.Lincoln J. Colling & Dénes Szűcs - 2021 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. (...)
  8. Contested Numbers: The failed negotiation of objective statistics in a methodological review of Kinsey et al.’s sex research.Tabea Cornel - 2021 - History and Philosophy of the Life Sciences 43 (1):1-32.
    From 1950 to 1952, statisticians W.G. Cochran, C.F. Mosteller, and J.W. Tukey reviewed A.C. Kinsey and colleagues’ methodology. Neither the history-and-philosophy of science literature nor contemporary theories of interdisciplinarity seem to offer a conceptual model that fits this forced interaction, which was characterized by significant power asymmetries and disagreements on multiple levels. The statisticians initially attempted to exclude all non-technical matters from their evaluation, but their political and personal investments interfered with this agenda. In the face of McCarthy’s witch hunts, (...)
  9. Francis Galton’s Regression Towards Mediocrity and the Stability of Types.Adam Krashniak & Ehud Lamm - 2021 - Studies in History and Philosophy of Science Part A 81:6-19.
    A prevalent narrative locates the discovery of the statistical phenomenon of regression to the mean in the work of Francis Galton. It is claimed that after 1885, Galton came to explain the fact that offspring deviated less from the mean value of the population than their parents did as a population-level statistical phenomenon and not as the result of the processes of inheritance. Arguing against this claim, we show that Galton did not explain regression towards mediocrity statistically, and did not (...)
  10. Misalignment Between Research Hypotheses and Statistical Hypotheses: A Threat to Evidence-Based Medicine?Insa Lawler & Georg Zimmermann - 2021 - Topoi 40 (2):307-318.
    Evidence-based medicine frequently uses statistical hypothesis testing. In this paradigm, data can only disconfirm a research hypothesis’ competitors: One tests the negation of a statistical hypothesis that is supposed to correspond to the research hypothesis. In practice, these hypotheses are often misaligned. For instance, directional research hypotheses are often paired with non-directional statistical hypotheses. Prima facie, one cannot gain proper evidence for one’s research hypothesis employing a misaligned statistical hypothesis. This paper sheds light on the nature of and the reasons (...)
  11. Inflated effect sizes and underpowered tests: how the severity measure of evidence is affected by the winner’s curse.Guillaume Rochefort-Maranda - 2021 - Philosophical Studies 178 (1):133-145.
    My aim in this paper is to show how the problem of inflated effect sizes corrupts the severity measure of evidence. This has never been done. In fact, the Winner’s Curse is barely mentioned in the philosophical literature. Since the severity score is the predominant measure of evidence for frequentist tests in the philosophical literature, it is important to underscore its flaws. It is also crucial to bring the philosophical literature up to speed with the limits of classical testing. The (...)
  12. Phrenology and the Average Person, 1840–1940.Fenneke Sysling - 2021 - History of the Human Sciences 34 (2):27-45.
    The popular science of phrenology is known for its preoccupation with geniuses and criminals, but this article shows that phrenologists also introduced ideas about the ‘average’ person. Popular phrenologists in the US and the UK examined the heads of their clients to give an indication of their character. Based on the publications of phrenologists and on a large collection of standardized charts with clients’ scores, this article analyses their definition of what they considered to be the ‘average’. It can be (...)
  13. Reliability: An Introduction.Stefano Bonzio, Jürgen Landes & Barbara Osimani - 2020 - Synthese:1-10.
  14. Statistical Significance Under Low Power: A Gettier Case?Daniel Dunleavy - 2020 - Journal of Brief Ideas.
  15. Scientific Self-Correction: The Bayesian Way.Felipe Romero & Jan Sprenger - 2020 - Synthese:1-21.
    The enduring replication crisis in many scientific disciplines casts doubt on the ability of science to estimate effect sizes accurately, and in a wider sense, to self-correct its findings and to produce reliable knowledge. We investigate the merits of a particular countermeasure—replacing null hypothesis significance testing with Bayesian inference—in the context of the meta-analytic aggregation of effect sizes. In particular, we elaborate on the advantages of this Bayesian reform proposal under conditions of publication bias and other methodological imperfections that are (...)
  16. “Repeated Sampling From the Same Population?” A Critique of Neyman and Pearson’s Responses to Fisher.Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
  17. Conditional Degree of Belief and Bayesian Inference.Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
  18. ISR Center Publishes an Article Celebrating the 130th Anniversary of the Birth of President Hồ Chí Minh.Hồ Mạnh Toàn - 2020 - ISR Phenikaa 2020 (5):1-3.
    The new article, published on 19 May 2020 with PhD candidate Nguyễn Minh Hoàng, a researcher at the ISR Center, as corresponding author, presents a Bayesian statistical approach to the study of social science data. It is an outcome of the research direction of the SDAG research group, set out as early as 18 May 2019.
  19. Mereological Dominance and Simpson’s Paradox.Tung-Ying Wu - 2020 - Philosophia: Philosophical Quarterly of Israel 48 (1):391–404.
    Numerous papers have investigated the transitivity principle of ‘better-than.’ A recent argument appeals to the principle of mereological dominance for transitivity. However, writers have not treated mereological dominance in much detail. This paper sets out to evaluate the generality of mereological dominance and its effectiveness in supporting the transitivity principle. I found that the mereological dominance principle is vulnerable to a counterexample based on Simpson’s Paradox. The thesis concludes that the mereological dominance principle should be revised in certain ways.
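    As a concrete anchor for the counterexample strategy sketched above, here is a standard toy instance of Simpson's Paradox (an illustration of the general phenomenon, not an example drawn from Wu's paper): treatment A succeeds in 9/10 easy cases (90%) against 80/100 (80%) for treatment B, and in 30/100 hard cases (30%) against 2/10 (20%) for B, so A dominates B within each subgroup; aggregated, however, A succeeds in only 39/110 cases (about 35%) against B's 82/110 (about 75%), because A was mostly given the hard cases. Dominance part-by-part is thus compatible with reversal in the whole, which is the kind of structure a dominance-based argument for transitivity has to survive.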
  20. Thinking in Multitudes: Questionnaires and Composite Cases in Early American Psychology.Jacy L. Young - 2020 - History of the Human Sciences 33 (3-4):160-174.
    In the late 19th century, the questionnaire was one means of taking the case study into the multitudes. This article engages with Forrester’s idea of thinking in cases as a means of interrogating questionnaire-based research in early American psychology. Questionnaire research was explicitly framed by psychologists as a practice involving both natural historical and statistical forms of scientific reasoning. At the same time, questionnaire projects failed to successfully enact the latter aspiration in terms of synthesizing masses of collected data into (...)
  21. An Automatic Ockham’s Razor for Bayesians?Gordon Belot - 2019 - Erkenntnis 84 (6):1361-1367.
    It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
  22. Evidence Amalgamation, Plausibility, and Cancer Research.Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since there are two rival theories of carcinogenesis which confront each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
  23. Clinical Equipoise and Adaptive Clinical Trials.Nicolas Fillion - 2019 - Topoi 38 (2):457-467.
    Ethically permissible clinical trials must not expose subjects to risks that are unreasonable in relation to anticipated benefits. In the research ethics literature, this moral requirement is typically understood in one of two different ways: as requiring the existence of a state of clinical equipoise, meaning a state of honest, professional disagreement among the community of experts about the preferred treatment; or as requiring an equilibrium between individual and collective ethics. It has been maintained that this second interpretation makes it (...)
  24. To Read More Papers, or to Read Papers Better? A Crucial Point for the Reproducibility Crisis.Thiago F. A. França & José M. Monserrat - 2019 - Bioessays 41 (1):1800206.
    The overflow of scientific literature stimulates poor reading habits which can aggravate science's reproducibility crisis. Thus, solving the reproducibility crisis demands not only methodological changes, but also changes in our relationship with the scientific literature, especially our reading habits. Importantly, this does not mean reading more, it means reading better.
  25. A Moral Framework for Understanding Fair ML Through Economic Models of Equality of Opportunity.Hoda Heidari - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 1.
    We map the recently proposed notions of algorithmic fairness to economic models of Equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to (...)
  26. Why is Bayesian Confirmation Theory Rarely Practiced?Robert W. P. Luk - 2019 - Science and Philosophy 7 (1):3-20.
    Bayesian confirmation theory is a leading theory to decide the confirmation/refutation of a hypothesis based on probability calculus. While it may be much discussed in philosophy of science, is it actually practiced in terms of hypothesis testing by scientists? Since the assignment of some of the probabilities in the theory is open to debate and the risk of making the wrong decision is unknown, many scientists do not use the theory in hypothesis testing. Instead, they use alternative statistical tests that (...)
  27. Direct Inference in the Material Theory of Induction.William Peden - 2019 - Philosophy of Science 86 (4):672-695.
    John D. Norton’s “Material Theory of Induction” has been one of the most intriguing recent additions to the philosophy of induction. Norton’s account appears to be a notably natural account of actual inductive practices, although his theory has attracted considerable criticism. I detail several novel issues for his theory but argue that supplementing the Material Theory with a theory of direct inference could address these problems. I argue that if this combination is possible, a stronger theory of inductive reasoning emerges, (...)
  28. We Are Less Free Than How We Think: Regular Patterns in Nonverbal Communication.Alessandro Vinciarelli, Anna Esposito, Mohammad Tayarani, Giorgio Roffo, Filomena Scibelli, Perrone Francesco & Dong Bach Vo - 2019 - In Multimodal Behavior Analysis in the Wild: Advances and Challenges (Computer Vision and Pattern Recognition). pp. 269-288.
    The goal of this chapter is to show that human behavior is not random but follows principles and laws that result in regular patterns that can be not only observed, but also automatically detected and analyzed. The word “behavior” here refers to nonverbal behavioral cues (e.g., facial expressions, laughter, gestures, etc.) that people display, typically outside conscious awareness, during social interactions. In particular, the chapter shows that observable behavioral patterns typically account for social and psychological differences that cannot be observed (...)
  29. Regression Explanation and Statistical Autonomy.Joeri Witteveen - 2019 - Biology and Philosophy 34 (5):1-20.
    The phenomenon of regression toward the mean is notoriously liable to be overlooked or misunderstood; regression fallacies are easy to commit. But even when regression phenomena are duly recognized, it remains perplexing how they can feature in explanations. This article develops a philosophical account of regression explanations as “statistically autonomous” explanations that cannot be deepened by adducing details about causal histories, even if the explananda as such are embedded in the causal structure of the world. That regression explanations have statistical (...)
  30. Multiple Regression Is Not Multiple Regressions: The Meaning of Multiple Regression and the Non-Problem of Collinearity.Michael B. Morrissey & Graeme D. Ruxton - 2018 - Philosophy, Theory, and Practice in Biology 10 (3).
    Simple regression (regression analysis with a single explanatory variable), and multiple regression (regression models with multiple explanatory variables), typically correspond to very different biological questions. The former use regression lines to describe univariate associations. The latter describe the partial, or direct, effects of multiple variables, conditioned on one another. We suspect that the superficial similarity of simple and multiple regression leads to confusion in their interpretation. A clear understanding of these methods is essential, as they underlie a large range of (...)
  31. Imprecise Probability and the Measurement of Keynes's "Weight of Arguments".William Peden - 2018 - IfCoLog Journal of Logics and Their Applications 5 (4):677-708.
    Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness (...)
  32. Two Impossibility Results for Measures of Corroboration.Jan Sprenger - 2018 - British Journal for the Philosophy of Science 69 (1):139--159.
    According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought for measures of corroboration that could (...)
  33. Probabilistic Opinion Pooling with Imprecise Probabilities.Rush T. Stewart & Ignacio Ojea Quintana - 2018 - Journal of Philosophical Logic 47 (1):17-45.
    The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (Bordley, Management Science 28: 1137–1148; Genest et al., The Annals of Statistics: 487–501; Genest and Zidek, Statistical Science: 114–135; Mongin, Journal of Economic Theory 66: 313–351; Clemen and (...)
  34. Vietnamese Scientist Is Sole Author in Nature Research's Leading Data Science Journal.Thùy Dương - 2017 - Dân Trí Online 2017 (10).
    Dân Trí (25/10/2017): For the first time, a Vietnamese scientist, whose research was carried out entirely in Vietnam, has had sole-author work published in Scientific Data, a leading data science journal in the publishing portfolio of the renowned Nature Research.
  35. Pragmatic Warrant for Frequentist Statistical Practice: The Case of High Energy Physics.Kent Staley - 2017 - Synthese 194 (2).
    Amidst long-running debates within the field, high energy physics has adopted a statistical methodology that primarily employs standard frequentist techniques such as significance testing and confidence interval estimation, but incorporates Bayesian methods for limited purposes. The discovery of the Higgs boson has drawn increased attention to the statistical methods employed within HEP. Here I argue that the warrant for the practice in HEP of relying primarily on frequentist methods can best be understood as pragmatic, in the sense that statistical methods (...)
  36. Expectational V. Instrumental Reasoning: What Statistics Contributes to Practical Reasoning.Mariam Thalos - 2017 - Diametros 53:125-149.
    Utility theories—both Expected Utility and non-Expected Utility theories—offer numericalized representations of classical principles meant for the regulation of choice under conditions of risk—a type of formal representation that reduces the representation of risk to a single number. I shall refer to these as risk-numericalizing theories of decision. I shall argue that risk-numericalizing theories are not satisfactory answers to the question: “How do I take the means to my ends?” In other words, they are inadequate or incomplete as instrumental theories. They (...)
  37. Significance Testing, P-Values and the Principle of Total Evidence.Bengt Autzen - 2016 - European Journal for Philosophy of Science 6 (2):281-295.
    The paper examines the claim that significance testing violates the Principle of Total Evidence. I argue that p-values violate PTE for two-sided tests but satisfy PTE for one-sided tests invoking a sufficient test statistic independent of the preferred theory of evidence. While the focus of the paper is to evaluate a particular claim about the relationship of significance testing and PTE, I clarify the reading of this methodological principle along the way.
  38. La valeur de l'incertitude : l'évaluation de la précision des mesures physiques et les limites de la connaissance expérimentale.Fabien Grégis - 2016 - Dissertation, Université Sorbonne Paris Cité / Université Paris Diderot (Paris 7)
    Abstract: A measurement result is never absolutely accurate: it is affected by an unknown “measurement error” which characterizes the discrepancy between the obtained value and the “true value” of the quantity intended to be measured. As a consequence, to be acceptable, a measurement result cannot take the form of a unique numerical value, but has to be accompanied by an indication of its “measurement uncertainty”, which enunciates a state of doubt. What, though, is the value of measurement uncertainty? What (...)
  39. A Philosophical Guide to Chance. [REVIEW]J. T. M. Miller - 2016 - Philosophical Quarterly 66 (262):pqv037.
    A review of A Philosophical Guide to Chance by Toby Handfield (Cambridge: Cambridge University Press, 2012).
  40. On the Correct Interpretation of P Values and the Importance of Random Variables.Guillaume Rochefort-Maranda - 2016 - Synthese 193 (6):1777-1793.
    The p value is the probability under the null hypothesis of obtaining an experimental result that is at least as extreme as the one that we have actually obtained. That probability plays a crucial role in frequentist statistical inferences. But if we take the word ‘extreme’ to mean ‘improbable’, then we can show that this type of inference can be very problematic. In this paper, I argue that it is a mistake to make such an interpretation. Under minimal assumptions about (...)
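    For orientation, the definition quoted at the start of the abstract has a standard formalization (the notation here is generic, not Rochefort-Maranda's): for a test statistic T with observed value t_obs, and a null hypothesis H0 under which large values of T count as extreme, p = Pr(T >= t_obs | H0); a two-sided test measures extremeness in both tails. The paper's target is the temptation to read ‘extreme’ as ‘improbable’ rather than as this tail-area ordering.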
  41. Can the Behavioral Sciences Self-Correct? A Social Epistemic Study.Felipe Romero - 2016 - Studies in History and Philosophy of Science Part A 60:55-69.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon (...)
  42. Philosophy as Conceptual Engineering: Inductive Logic in Rudolf Carnap's Scientific Philosophy.Christopher F. French - 2015 - Dissertation, University of British Columbia
    My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap’s mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I, first, specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design (...)
  43. What is a Philosophical Effect? Models of Data in Experimental Philosophy.Bryce Huebner - 2015 - Philosophical Studies 172 (12):3273-3292.
    Papers in experimental philosophy rarely offer an account of what it would take to reveal a philosophically significant effect. In part, this is because experimental philosophers tend to pay insufficient attention to the hierarchy of models that would be required to justify interpretations of their data; as a result, some of their most exciting claims fail as explanations. But this does not impugn experimental philosophy. My aim is to show that experimental philosophy could be made more successful by developing, articulating, (...)
  44. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually endorsed (...)
  45. A Merton Model of Credit Risk with Jumps.Hoang Thi Phuong Thao & Quan-Hoang Vuong - 2015 - Journal of Statistics Applications and Probability Letters 2 (2):97-103.
    In this note, we consider a Merton model for default risk, where the firm’s value is driven by a Brownian motion and a compound Poisson process.
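    For context, a generic jump-diffusion of the kind the note describes (a sketch of the standard setup, not necessarily the authors' exact parametrization) takes the firm value V to follow dV_t = V_{t-} (mu dt + sigma dW_t + dJ_t), where W is a Brownian motion and J_t = sum_{i=1}^{N_t} (Y_i - 1) is a compound Poisson process with jump multipliers Y_i; as in Merton's original structural model, default occurs when the firm value lies below the face value of its debt at maturity.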
  46. Error and Uncertainty in Scientific Practice.Marcel Boumans, Giora Hon & Arthur Petersen (eds.) - 2014 - Pickering & Chatto.
  47. Review of Desrosières, Alain (2014), Prouver et gouverner. Une analyse politique des statistiques publiques. [REVIEW]Marc-Kevin Daoust - 2014 - Science Ouverte 1:1-7.
    Prouver et gouverner studies the role of institutions, conventions, and normative issues in the construction of quantitative indicators. Desrosières holds that the scientific development of statistics cannot be studied without taking into account institutional developments, in particular the role of the state, in the constitution of the discipline.
  48. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm.Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution for the stopping rule problem presented by Lindley & Phillips in 1976, which is set in the ecological context of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, evidence, and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to the physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and meaning (...)
  49. OBCS: The Ontology of Biological and Clinical Statistics.Jie Zheng, Marcelline R. Harris, Anna Maria Masci, Yu Lin, Alfred Hero, Barry Smith & Yongqun He - 2014 - Proceedings of the Fifth International Conference on Biomedical Ontology 1327:65.
    Statistics play a critical role in biological and clinical research. To promote logically consistent representation and classification of statistical entities, we have developed the Ontology of Biological and Clinical Statistics (OBCS). OBCS extends the Ontology of Biomedical Investigations (OBI), an OBO Foundry ontology supported by some 20 communities. Currently, OBCS contains 686 terms, including 381 classes imported from OBI and 147 classes specific to OBCS. The goal of this paper is to present OBCS for community critique and to describe a (...)
  50. Against the Statistical Account of Special Science Laws.Andreas Hüttemann & Alexander Reutlinger - 2013 - In Vassilios Karakostas & Dennis Dieks (eds.), Recent Progress in Philosophy of Science: Perspectives and Foundational Problems. The Third European Philosophy of Science Association Proceedings. Springer. pp. 181-192.
    John Earman and John T. Roberts advocate a challenging and radical claim regarding the semantics of laws in the special sciences: the statistical account. According to this account, a typical special science law “asserts a certain precisely defined statistical relation among well-defined variables” and this statistical relation does not require being hedged by ceteris paribus conditions. In this paper, we raise two objections against the attempt to cash out the content of special science generalizations in statistical terms.