Theoretical Medicine and Bioethics 7 (3) (1986)
In the past, hypothesis testing in medicine has employed the paradigm of the repeatable experiment. In statistical hypothesis testing, an unbiased sample is drawn from a larger source population, and a calculated statistic is compared to a preassigned critical region, on the assumption that the comparison could be repeated an indefinite number of times. However, repeated experiments often cannot be performed on human beings, owing to ethical or economic constraints. We describe a new paradigm for hypothesis testing that uses only rearrangements of the observed data set. The token swap test, based on this new paradigm, is applied to three data sets from cardiovascular pathology, and computational experiments suggest that the token swap test satisfies the Neyman–Pearson condition.
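The abstract does not spell out the token swap test itself, but the rearrangement paradigm it describes is the one underlying permutation tests: the null distribution is built entirely from re-labelings ("swaps") of the observed data, with no appeal to repeated sampling from a source population. The following is a minimal sketch of a generic two-sample permutation test in that spirit; the function name and choice of test statistic (difference in means) are illustrative assumptions, not the authors' procedure.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sample permutation test of the absolute difference in means.

    The p-value is the fraction of random re-labelings of the pooled
    data whose statistic is at least as extreme as the observed one.
    Only rearrangements of the data actually in hand are used.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)          # one "swap" of labels across groups
        a, b = pooled[:n_a], pooled[n_a:]
        stat = abs(sum(a) / n_a - sum(b) / len(b))
        if stat >= observed:
            extreme += 1
    return extreme / n_permutations
```

For example, two well-separated samples such as `[1, 2, 3, 4, 5]` and `[10, 11, 12, 13, 14]` yield a small p-value, because only a handful of the 252 possible re-labelings reproduce so extreme a separation.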
Similar books and articles
Darrell P. Rowbottom & R. McNeill Alexander (2012). The Role of Hypotheses in Biomechanical Research. Science in Context 25 (2):247-262.
Deborah G. Mayo (1983). An Objective Theory of Statistical Testing. Synthese 57 (3):297-340.
Stephen Spielman (1973). A Refutation of the Neyman-Pearson Theory of Testing. British Journal for the Philosophy of Science 24 (3):201-222.
Spencer Graves (1978). On the Neyman-Pearson Theory of Testing. British Journal for the Philosophy of Science 29 (1):1-23.
Robert W. Frick (1998). Chow's Defense of Null-Hypothesis Testing: Too Traditional? Behavioral and Brain Sciences 21 (2):199-199.
Max Albert (1992). Die Falsifikation Statistischer Hypothesen [The Falsification of Statistical Hypotheses]. Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 23 (1):1-32.
Johannes Lenhard (2006). Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson. British Journal for the Philosophy of Science 57 (1):69-91.
Peter Godfrey-Smith (1994). Of Nulls and Norms. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:280-290.
Andrés Rivadulla (1991). Mathematical Statistics and Metastatistical Analysis. Erkenntnis 34 (2):211-236.
Deborah G. Mayo & Aris Spanos (2006). Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction. British Journal for the Philosophy of Science 57 (2):323-357.
Added to index: 2009-01-28