Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction

Abstract
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.

Contents
1 Introduction and overview
1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
1.2 Severity rationale: induction as severe testing
1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
2 Error statistical tests from the severity perspective
2.1 N–P test T(α): type I, II error probabilities and power
2.2 Specifying test T(α) using p-values
3 Neyman's post-data use of power
3.1 Neyman: does failure to reject H warrant confirming H?
4 Severe testing as a basic concept for an adequate post-data inference
4.1 The severity interpretation of acceptance (SIA) for test T(α)
4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
4.3 Severity and power
5 Fallacy of rejection: statistical vs. substantive significance
5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
5.3 Principle for the severity interpretation of a rejection (SIR)
5.4 Comparing significant results with different sample sizes in T(α): large n problem
5.5 General testing rules for T(α), using the severe testing concept
6 The severe testing concept and confidence intervals
6.1 Dualities between one- and two-sided intervals and tests
6.2 Avoiding shortcomings of confidence intervals
7 Beyond the N–P paradigm: pure significance, and misspecification tests
8 Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
DOI 10.1093/bjps/axl003
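
To make the severity idea concrete: the following is a minimal sketch, not taken from the paper, assuming the textbook one-sided Normal test of H0: mu <= mu0 vs H1: mu > mu0 with known sigma (the kind of test T(α) discussed in section 2.1). The function names (power, severity_reject, severity_accept) and the numerical example are illustrative, not the authors' notation. The point it shows is the post-data use of error probabilities: after a rejection, SEV(mu > mu1) asks how probable a less discordant result would have been were the inferred discrepancy false.

# Minimal illustrative sketch (not from the paper): post-data severity for a
# one-sided Normal test of H0: mu <= mu0 vs H1: mu > mu0, sigma known.
# Function and variable names are ours, not the authors'.
from math import sqrt
from scipy.stats import norm

def power(mu1, mu0, sigma, n, alpha=0.025):
    """Pre-data power of the test against the point alternative mu = mu1."""
    se = sigma / sqrt(n)
    cutoff = mu0 + norm.ppf(1 - alpha) * se   # reject H0 when xbar > cutoff
    return 1 - norm.cdf((cutoff - mu1) / se)

def severity_reject(xbar, sigma, n, mu1):
    """After rejecting H0 with observed mean xbar, SEV(mu > mu1): the
    probability of a result less discordant with H0 than xbar, were the
    inferred discrepancy false (i.e. were mu = mu1)."""
    return norm.cdf((xbar - mu1) / (sigma / sqrt(n)))

def severity_accept(xbar, sigma, n, mu1):
    """After a non-rejection with observed mean xbar, SEV(mu <= mu1): the
    probability of a result more discordant with H0 than xbar, were mu = mu1."""
    return 1 - norm.cdf((xbar - mu1) / (sigma / sqrt(n)))

# Example: mu0 = 0, sigma = 2, n = 100, alpha = 0.025 (cutoff ~ 0.392).
# An observed mean of 0.4 is just significant; severity indicates which
# discrepancies from 0 the rejection does and does not warrant.
print(power(0.4, 0.0, 2.0, 100))                 # ~0.52
print(severity_reject(0.4, 2.0, 100, mu1=0.2))   # ~0.84: fair grounds for mu > 0.2
print(severity_reject(0.4, 2.0, 100, mu1=0.4))   # 0.50: poor grounds for mu > 0.4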
Citations of this work
Some Surprising Facts About Surprising Facts.D. Mayo - 2014 - Studies in History and Philosophy of Science Part A 45 (1):79-86.


Similar books and articles
Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson.Johannes Lenhard - 2006 - British Journal for the Philosophy of Science 57 (1):69-91.
Of Nulls and Norms.Peter Godfrey-Smith - 1994 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:280 - 290.
The Logic of Tests of Significance.Stephen Spielman - 1974 - Philosophy of Science 41 (3):211-226.
Die Falsifikation Statistischer Hypothesen [The Falsification of Statistical Hypotheses].Max Albert - 1992 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 23 (1):1-32.
How to Discount Double-Counting When It Counts: Some Clarifications.Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
Novel Evidence and Severe Tests.Deborah G. Mayo - 1991 - Philosophy of Science 58 (4):523-552.