Robustness and integrative survival in significance testing: The world's contribution to rationality
J. D. Trout
British Journal for the Philosophy of Science 44 (1):1-15 (1993)
Significance testing is the primary method for establishing causal relationships in psychology. Meehl [1978, 1990a, 1990b] and Faust argue that significance tests and their interpretation are subject to actuarial and psychological biases, making continued adherence to these practices irrational and even partially responsible for the slow progress of the 'soft' areas of psychology. I contend that familiar standards of testing and literature review, along with recently developed meta-analytic techniques, can correct the proposed actuarial and psychological biases. In particular, psychologists embrace a principle of robustness, which states that real psychological effects are (1) reproducible by similar methods, (2) detectable by diverse means, and (3) able to survive theoretical integration. By contrast, spurious significant findings perish under the strain of persistent tests of their robustness. The resulting vindication of significance testing confers on the world a role in determining the rationality of a method, and also affords us an explanation for the fast progress of 'hard' areas of psychology.

*I would like to thank Dick Boyd and Phil Gasper for helpful comments on the ideas presented here.
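The robustness principle in the abstract has a simple statistical core: a spurious "significant" finding reaches p < .05 only about as often as chance allows, so it fails when the test is repeated, while a real effect keeps replicating. The sketch below is not from the paper; it is a minimal illustrative simulation (hypothetical function names, a normal-approximation z-test rather than any particular test used in the literature) of that asymmetry.

```python
import math
import random

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test,
    normal approximation; illustrative only)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    z = (ma - mb) / se
    return math.erfc(abs(z) / math.sqrt(2))

def replication_rate(effect, n=30, studies=200, alpha=0.05, seed=0):
    """Fraction of independent replications that reach p < alpha when the
    true standardized effect size is `effect`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(studies):
        treatment = [rng.gauss(effect, 1) for _ in range(n)]
        control = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_p(treatment, control) < alpha:
            hits += 1
    return hits / studies

# A null (spurious) effect "succeeds" only at roughly the alpha rate and so
# perishes under persistent tests; a real effect survives repeated testing.
null_rate = replication_rate(0.0)
real_rate = replication_rate(0.8)
```

Under repeated independent tests, `null_rate` hovers near the alpha level while `real_rate` is high, which is the actuarial point behind clauses (1) and (2) of the robustness principle.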
Citations of this work
Jonathan Y. Tsou (2012). Intervention, Causal Reasoning, and the Neurobiology of Mental Disorders: Pharmacological Drugs as Experimental Instruments. Studies in History and Philosophy of Science Part C 43 (2):542-551.
J. D. Trout (1995). Diverse Tests on an Independent World. Studies in History and Philosophy of Science Part A 26 (3):407-429.
Similar books and articles
Richard J. Harris (1998). “With Friends Like This . . .”: Three Flaws in Chow's Defense of Significance Testing. Behavioral and Brain Sciences 21 (2):202-203.
Fred L. Bookstein (1998). Statistical Significance Testing Was Not Meant for Weak Corroborations of Weaker Theories. Behavioral and Brain Sciences 21 (2):195-196.
John F. Kihlstrom (1998). If You've Got an Effect, Test its Significance; If You've Got a Weak Effect, Do a Meta-Analysis. Behavioral and Brain Sciences 21 (2):205-206.
Jay Odenbaugh & Anna Alexandrova (2011). Buyer Beware: Robustness Analyses in Economics and Biology. Biology and Philosophy 26 (5):757-771.
Alfons Schuster & Yoko Yamaguchi (2009). The Survival of the Fittest and the Reign of the Most Robust: In Biology and Elsewhere. Minds and Machines 19 (3):361-389.
Günther Palm (1998). Significance Testing – Does It Need This Defence? Behavioral and Brain Sciences 21 (2):214-215.
Zeno G. Swijtink (1998). A Plea for Popperian Significance Testing. Behavioral and Brain Sciences 21 (2):220-221.
Henry Rouanet (1998). Significance Testing in a Bayesian Framework: Assessing Direction of Effects. Behavioral and Brain Sciences 21 (2):217-218.
Brian D. Haig (2000). Statistical Significance Testing, Hypothetico-Deductive Method, and Theory Evaluation. Behavioral and Brain Sciences 23 (2):292-293.
J. D. Trout (1999). Measured Realism and Statistical Inference: An Explanation for the Fast Progress of "Hard" Psychology. Philosophy of Science 66 (3):272.