What experiment did we just do? Counterfactual error statistics and uncertainties about the reference class
Philosophy of Science 69 (2):279-299 (2002)
Abstract: Experimenters sometimes insist that it is unwise to examine data before determining how to analyze them, as it creates the potential for biased results. I explore the rationale behind this methodological guideline from the standpoint of an error statistical theory of evidence, and I discuss a method of evaluating evidence in some contexts when this predesignation rule has been violated. I illustrate the problem of potential bias, and the method by which it may be addressed, with an example from the search for the top quark. A point in favor of the error statistical theory is its ability, demonstrated here, to explicate such methodological problems and suggest solutions, within the framework of an objective theory of evidence.
Similar books and articles
Deborah G. Mayo (1997). Error Statistics and Learning From Error: Making a Virtue of Necessity. Philosophy of Science 64 (4):212.
Deborah G. Mayo (1997). Duhem's Problem, the Bayesian Way, and Error Statistics, or "What's Belief Got to Do with It?". Philosophy of Science 64 (2):222-244.
Kent Staley (2012). Strategies for Securing Evidence Through Model Criticism. European Journal for Philosophy of Science 2 (1):21-43.
Kent Staley (2008). Error-Statistical Elimination of Alternative Hypotheses. Synthese 163 (3):397 - 408.
Kent W. Staley, Strategies for Securing Evidence Through Model Criticism: An Error-Statistical Perspective.
Kent W. Staley (2002). What Experiment Did We Just Do? Philosophy of Science 69 (2):279-99.
Added to index: 2009-01-28