How experimental algorithmics can benefit from Mayo's extensions to Neyman–Pearson theory of testing
Synthese 163 (3):385 - 396 (2008)
Abstract: Although theoretical results for many algorithms in many application domains have been presented in recent decades, not all algorithms can be analyzed fully theoretically; experimentation is necessary. The analysis of algorithms should follow the same principles and standards as other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bioinformatics, where classical methods from mathematical optimization fail. Statistical tools that can cope with problems such as small sample sizes, non-normal distributions, and noisy results are now being developed for the analysis of algorithms. Although there are adequate tools for discussing the statistical significance of experimental data, statistical significance is not scientifically meaningful per se. It is necessary to bridge the gap between the statistical significance of an experimental result and its scientific meaning. We propose some ideas on how to accomplish this task based on Mayo's learning model (NPT*).
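To make the abstract's methodological point concrete, here is a minimal sketch of the kind of statistical tool it alludes to: a nonparametric permutation test comparing two stochastic optimizers over repeated runs. The sample values and algorithm names are purely illustrative (they do not come from the article), and the test avoids any normality assumption, which suits the small, noisy, non-normal samples typical of experimental algorithmics.

```python
import random

random.seed(42)

# Hypothetical best-objective values from 10 independent runs of two
# stochastic search algorithms (illustrative numbers, not article data).
runs_a = [0.91, 0.87, 0.93, 0.85, 0.90, 0.88, 0.92, 0.86, 0.89, 0.94]
runs_b = [0.84, 0.82, 0.88, 0.80, 0.86, 0.83, 0.85, 0.81, 0.87, 0.84]

def permutation_test(x, y, n_perm=10_000):
    """Two-sided permutation test on the difference of sample means.

    Repeatedly shuffles the pooled results and counts how often a random
    relabeling produces a mean difference at least as large as the one
    observed; the fraction is the p-value.
    """
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        xp, yp = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xp) / len(xp) - sum(yp) / len(yp)) >= observed:
            count += 1
    return count / n_perm

p = permutation_test(runs_a, runs_b)
print(f"p-value: {p:.4f}")
```

A small p-value here establishes statistical significance only. Whether the observed difference matters scientifically, e.g., whether the improvement is large enough to be practically relevant for the optimization problem at hand, is exactly the further question the article argues must be addressed, via Mayo's severity reasoning, beyond the test itself.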
Similar books and articles
Max Albert (1992). Die Falsifikation Statistischer Hypothesen. Journal for General Philosophy of Science 23 (1):1 - 32.
Joseph Ramsey & Clark Glymour, Experiments on the Accuracy of Algorithms for Inferring the Structure of Genetic Regulatory Networks From Microarray Expression Levels.
Charles E. Boklage (1998). On the Position of Statistical Significance in the Epistemology of Experimental Science. Behavioral and Brain Sciences 21 (2):195-195.
Rolf Niedermeier (2006). Invitation to Fixed-Parameter Algorithms. Oxford University Press.
Johannes Lenhard (2006). Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson. British Journal for the Philosophy of Science 57 (1):69-91.
Jeffrey A. Witmer & Murray K. Clayton (1986). On Objectivity and Subjectivity in Statistical Inference: A Response to Mayo. Synthese 67 (2):369 - 379.
Deborah G. Mayo & Aris Spanos (2006). Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction. British Journal for the Philosophy of Science 57 (2):323-357.
Deborah G. Mayo (1992). Did Pearson Reject the Neyman-Pearson Philosophy of Statistics? Synthese 90 (2):233 - 262.
Deborah G. Mayo (1983). An Objective Theory of Statistical Testing. Synthese 57 (3):297 - 340.
Andrés Rivadulla (1991). Mathematical Statistics and Metastatistical Analysis. Erkenntnis 34 (2):211 - 236.
Added to index: 2009-01-28