Behavioristic, evidentialist, and learning models of statistical testing

Deborah G. Mayo
Philosophy of Science 52 (4):493-516 (1985)
While orthodox (Neyman-Pearson) statistical tests enjoy widespread use in science, the philosophical controversy over their appropriateness for obtaining scientific knowledge remains unresolved. I shall suggest an explanation and a resolution of this controversy. The source of the controversy, I argue, is that orthodox tests are typically interpreted as rules for making optimal decisions as to how to behave, where optimality is measured by the frequency of errors the test would commit in a long series of trials. Most philosophers of statistics, however, view the task of statistical methods as providing appropriate measures of the evidential strength that data afford hypotheses. Since tests appropriate for the behavioral-decision task fail to provide measures of evidential strength, philosophers of statistics claim that the use of orthodox tests in science is misleading and unjustified. What critics of orthodox tests overlook, I argue, is that the primary function of statistical tests in science is neither to decide how to behave nor to assign measures of evidential strength to hypotheses. Rather, tests provide a tool for using incomplete data to learn about the process that generated them. This they do, I show, by providing a standard for distinguishing differences (between observed and hypothesized results) due to accidental or trivial errors from those due to systematic or substantively important discrepancies. I propose a reinterpretation of a commonly used orthodox test to make this learning model of tests explicit.
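The contrast the abstract draws between the behavioristic and learning readings can be made concrete with a small simulation. The following is a minimal sketch, not drawn from the paper itself: it applies a one-sided z-test (a standard orthodox test) many times under the null hypothesis to exhibit the long-run error frequency that the behavioristic reading takes as the test's whole point, and then applies it once to data with a genuine discrepancy to show the test serving as a standard for distinguishing accidental from systematic differences. All parameter choices (sample size, alpha, effect size) are illustrative assumptions.

```python
# Minimal illustrative sketch (not from Mayo's paper).
# One-sided z-test of H0: mu = mu0 against H1: mu > mu0, with known sigma.
import math
import random
from statistics import NormalDist

def z_test_rejects(sample, mu0, sigma, alpha=0.05):
    """Reject H0 when the standardized observed-vs-hypothesized
    difference exceeds the alpha-level cutoff."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return z > NormalDist().inv_cdf(1 - alpha)

random.seed(0)

# Behavioristic reading: over a long series of trials in which H0 is
# true, the rule errs (rejects a true H0) with frequency close to alpha.
trials = 10_000
errors = sum(
    z_test_rejects([random.gauss(0.0, 1.0) for _ in range(25)],
                   mu0=0.0, sigma=1.0)
    for _ in range(trials)
)
print(f"long-run Type I error frequency: {errors / trials:.3f}")  # ~ 0.05

# Learning reading: the same cutoff acts as a standard for telling
# accidental differences from systematic ones. A sample generated with a
# real discrepancy (mu = 0.7) typically exceeds the standard; a sample
# from the hypothesized process typically does not.
systematic = [random.gauss(0.7, 1.0) for _ in range(25)]
accidental = [random.gauss(0.0, 1.0) for _ in range(25)]
print("systematic discrepancy flagged:", z_test_rejects(systematic, 0.0, 1.0))
print("accidental variation flagged:  ", z_test_rejects(accidental, 0.0, 1.0))
```

The point of the sketch is only the contrast: the first loop evaluates the test as a long-run decision rule, while the final two calls use the very same cutoff, once, as a standard for what a given data set indicates about the process that generated it.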
DOI 10.1086/289272

Citations of this work
Greg Gandenberger (2015). A New Proof of the Likelihood Principle. British Journal for the Philosophy of Science 66 (3):475-503.

Similar books and articles
J. D. Trout (1994). Austere Realism and the Worldly Assumptions of Inferential Statistics. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:190-199.
Deborah G. Mayo (1991). Novel Evidence and Severe Tests. Philosophy of Science 58 (4):523-552.
Max Albert (1992). Die Falsifikation Statistischer Hypothesen [The Falsification of Statistical Hypotheses]. Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 23 (1):1-32.
Peter Godfrey-Smith (1994). Of Nulls and Norms. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:280-290.

