
AIC and Large Samples

Published online by Cambridge University Press:  01 January 2022

Abstract

I discuss the behavior of the Akaike Information Criterion in the limit as the sample size grows. I show the falsity of the claim, made recently by Stanley Mulaik in Philosophy of Science, that AIC would not distinguish between saturated and other correct factor analytic models in this limit. I explain the meaning, and demonstrate the validity, of the familiar and more moderate criticism that AIC is not a consistent estimator of the number of parameters of the smallest correct model. I also give a short explanation of why this feature of AIC is compatible with the motives for using it.
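As a concrete illustration of what the inconsistency criticism amounts to, here is a minimal simulation sketch (not from the paper; the known-variance Gaussian setup is a hypothetical example). AIC compares a correct zero-mean model against a nested model with one extra free mean parameter; the probability that it selects the unnecessarily large model stays near P(χ²₁ > 2) ≈ 0.157 no matter how large the sample gets, so the selected parameter count does not converge to that of the smallest correct model.

```python
# Hypothetical illustration, not taken from the paper: i.i.d. N(0, 1) data with
# known variance, so the zero-mean model is correct and the extra mean
# parameter of the larger nested model is superfluous.
import numpy as np

rng = np.random.default_rng(0)

def aic_prefers_larger(n):
    x = rng.normal(0.0, 1.0, size=n)
    # Log-likelihood gain of the larger model is n * xbar^2 / 2; AIC prefers it
    # when twice the gain exceeds the penalty 2 * (extra parameters) = 2.
    return n * x.mean() ** 2 > 2.0

for n in (100, 10_000, 1_000_000):
    rate = np.mean([aic_prefers_larger(n) for _ in range(2000)])
    print(f"n = {n:>9}: overfit rate ~ {rate:.3f}")  # stays near 0.157 for every n
```

This non-vanishing overfitting probability is the sense in which AIC fails to be a consistent estimator of the smallest correct model's dimension, even though the criterion is motivated by predictive accuracy rather than by identifying that dimension.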

Type
Confirmation and Statistical Inference
Copyright
Copyright © The Philosophy of Science Association


Footnotes

I would like to express my gratitude to Stanley Mulaik and Malcolm Forster for our discussions on the topics addressed in this paper.

References

Akaike, Hirotugu (1987), “Factor Analysis and AIC”, Psychometrika 52: 317–332.
Bollen, Kenneth A. (1989), Structural Equations with Latent Variables. New York: John Wiley & Sons.
Bozdogan, Hamparsum (1987), “Model Selection and Akaike's Information Criterion (AIC): The General Theory and its Analytic Extensions”, Psychometrika 52: 345–370.
Burnham, Kenneth P., and Anderson, David R. (1998), Model Selection and Inference: A Practical Information-Theoretic Approach. New York: Springer.
Forster, Malcolm, and Sober, Elliott (1994), “How to Tell when Simpler, More Unified, or Less Ad Hoc Theories will Provide More Accurate Predictions”, British Journal for the Philosophy of Science 45: 1–35.
Hogg, Robert V., and Craig, Allen T. (1965), Introduction to Mathematical Statistics. 2nd ed. New York: Macmillan.
Kieseppä, I. A. (2001), “Statistical Model Selection Criteria and the Philosophical Problem of Underdetermination”, British Journal for the Philosophy of Science 52: 761–794.
McDonald, Roderick P. (1989), “An Index of Goodness-of-Fit Based on Noncentrality”, Journal of Classification 6: 97–103.
McDonald, Roderick P., and Marsh, Herbert W. (1990), “Choosing a Multivariate Model: Noncentrality and Goodness of Fit”, Psychological Bulletin 107: 247–255.
Mulaik, Stanley A. (2001), “The Curve-Fitting Problem: An Objectivist View”, Philosophy of Science 68: 218–241.
Sakamoto, Yosiyuki, Ishiguro, M., and Kitagawa, G. (1986), Akaike Information Criterion Statistics. Tokyo: KTK Scientific Publishers.
Woodroofe, Michael (1982), “On Model Selection and the Arc Sine Laws”, Annals of Statistics 10: 1182–1194.