The difficulty of conducting relevant experiments has long been regarded as the central challenge to learning about the economy from data. The standard solution, going back to Haavelmo's famous “The Probability Approach in Econometrics” (1944), involved two elements: first, it placed substantial weight on a priori theory as a source of structural information, reducing econometric estimates to measurements of causally articulated systems; second, it emphasized the need for an appropriate statistical model of the data. These elements are usually seen as tightly linked. I argue that they are, to a large extent, separable. Careful attention to the role of an empirically justified statistical model in underwriting probability explains puzzles not only in economics but also, more generally, with respect to recent criticisms of Reichenbach's principle of the common cause, which lies behind graph-theoretic causal search algorithms. And it provides an antidote to the pessimism of Nancy Cartwright and others about the possibilities for learning causal structure from passive observation in econometrics and related areas.
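For reference (this is the textbook formulation, not a quotation from the paper): Reichenbach's principle of the common cause says that if events A and B are correlated, \(P(A \wedge B) > P(A)\,P(B)\), and neither causes the other, then there exists a common cause C that screens off the correlation:

\[
P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad
P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C),
\]
\[
P(A \mid C) > P(A \mid \neg C), \qquad P(B \mid C) > P(B \mid \neg C).
\]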
(2013). Beyond Mechanical Markets – Asset Price Swings, Risk and the Role of the State. Journal of Economic Methodology, Vol. 20 (Methodology, Systemic Risk, and the Economics Profession), pp. 69–75. doi: 10.1080/1350178X.2013.774856.
The dominant view among macroeconomists is that macroeconomics reduces to microeconomics, both in the sense that all macroeconomic phenomena arise out of microeconomic phenomena and in the sense that macroeconomic theory—to the extent that it is correct—can be derived from microeconomic theory. More than that, adherents of the dominant view hold that macroeconomics should in practice use the microeconomic theory to which it reduces: this is the program of microfoundations for macroeconomics to which the vast majority of macroeconomists adhere. The "microfoundational" models that they actually employ are, however, characterized by another feature: they are highly idealized, even when they are applied as direct characterizations of actual data, which itself consists of macroeconomic aggregates. This paper explores the interrelationship between reductionism and idealization in the microfoundational program and the role of idealization in empirical modeling.
For more than 20 years, Deirdre McCloskey has campaigned to convince the economics profession that it is hopelessly confused about statistical significance. She argues that many practices associated with significance testing are bad science and that most economists routinely employ these bad practices: “Though to a child they look like science, with all that really hard math, no science is being done in these and 96 percent of the best empirical economics …” (McCloskey 1999). McCloskey's charges are analyzed and rejected. That statistical significance is not economic significance is a jejune and uncontroversial claim, and there is no convincing evidence that economists systematically mistake the two. Other elements of McCloskey's analysis of statistical significance are shown to be ill-founded, and her criticisms of the practices of economists are found to be based in inaccurate readings and tendentious interpretations of those economists' work. Properly used, significance tests are a valuable tool for assessing signal strength, for assisting in model specification, and for determining causal structure.
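A hypothetical numerical sketch (not drawn from the paper; it assumes numpy and statsmodels) of how statistical and economic significance can come apart: in a large sample, an economically negligible coefficient can still be estimated very precisely and so be sharply statistically significant.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)   # true effect: 1% of a standard deviation of the noise

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params[1], fit.pvalues[1])   # coefficient near 0.01, yet p-value far below 0.05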
Elliott Sober () forcefully restates his well-known counterexample to Reichenbach's principle of the common cause: bread prices in Britain and sea levels in Venice both rise over time and are, therefore, correlated; yet they are ex hypothesi not causally connected, which violates the principle of the common cause. The counterexample employs nonstationary data—i.e., data with time-dependent population moments. Common measures of statistical association do not generally reflect probabilistic dependence among nonstationary data. I demonstrate the inadequacy of the counterexample and of some previous responses to it, as well as illustrating more appropriate measures of probabilistic dependence in the nonstationary case. Section headings: A challenge to the principle of the common cause; Sober's argument and the attempts to rescue the principle; Probabilistic dependence; Nonstationary time series; Probabilistic dependence in nonstationary time series; Do Venetian sea levels and British bread prices violate the principle of the common cause?
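A minimal simulation sketch (assuming numpy; not from the paper) of why ordinary sample correlation is a poor measure of dependence for nonstationary series: two independent random walks typically show a large sample correlation, which disappears once the series are differenced to stationarity.

import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))   # independent random walk (think "bread prices")
y = np.cumsum(rng.normal(size=n))   # independent random walk (think "sea levels")

print(np.corrcoef(x, y)[0, 1])                      # often far from zero despite independence
print(np.corrcoef(np.diff(x), np.diff(y))[0, 1])    # near zero: the increments are independent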
Through most of the history of economics, the most influential commentators on methodology were also eminent practitioners of economics. And even in the fairly recent past this was so: Milton Friedman, Paul Samuelson, Trygve Haavelmo, and Tjalling Koopmans were awarded Nobel prizes for their substantive contributions to economics, and each was an important contributor to methodological thought. But the fashion has changed. Specialization has increased. Not only has methodology become its own field, but many practitioners have come to agree with Frank Hahn's (1992) view that methodology is a distraction to the practitioner, best left to professional methodologists and philosophers, and of little practical import even when delivered from their pens. John Sutton's lectures, published as Marshall's Tendencies: What Can Economists Know?, are a welcome return to the older fashion, for Sutton is an eminent practitioner of game theory and industrial organization. One of the main themes of these rich and nuanced lectures is the relationship of economic theory to econometric evidence. Sutton's reflections on econometrics appear to arise from the darker recesses of his practitioner's soul. While he affects a sunny disposition and ends on a hopeful note, his analysis articulates the lurking fear that econometrics is a hopeless project and that economics has little to learn from the interaction of theory and econometrics. Sutton's book is like a play in which virtue triumphs, but the villain gets all the good lines.
'Data mining' refers to a broad class of activities that have in common a search over different ways to process or package data statistically or econometrically, with the purpose of making the final presentation meet certain design criteria. We characterize three attitudes toward data mining: first, that it is to be avoided and, if it is engaged in, that statistical inferences must be adjusted to account for it; second, that it is inevitable and that the only results of any interest are those that transcend the variety of alternative data-mined specifications (a view associated with Leamer's extreme-bounds analysis); and third, that it is essential and that the only hope we have of using econometrics to uncover true economic relationships is to be found in the intelligent mining of data. The first approach confuses considerations of sampling distribution with considerations of epistemic warrant and reaches an unnecessarily hostile attitude toward data mining. The second approach relies on a notion of robustness that has little relationship to truth: there is no good reason to expect a true specification to be robust to alternative specifications. Robustness is not, in general, a carrier of epistemic warrant. The third approach is operationalized in the general-to-specific search methodology of the LSE school of econometrics. Its success demonstrates that intelligent data mining is an important element of empirical investigation in economics.
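A deliberately simplified sketch (assuming a pandas DataFrame of regressors and statsmodels; not the LSE school's actual algorithm) of the general-to-specific idea: begin with a general regression containing all candidate regressors and repeatedly drop the least significant one until every remaining regressor passes a t-test threshold. Real implementations, such as Hendry and Krolzig's PcGets or Doornik's Autometrics, also follow multiple search paths and apply batteries of diagnostic tests.

import numpy as np
import statsmodels.api as sm

def general_to_specific(y, X, t_crit=2.0):
    """Backward-elimination caricature of a general-to-specific search.
    y: response (1-d array or Series); X: pandas DataFrame of candidate regressors."""
    cols = list(X.columns)
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        tvals = fit.tvalues.drop("const").abs()
        weakest = tvals.idxmin()
        if tvals[weakest] >= t_crit:       # every remaining regressor is significant
            return fit
        cols.remove(weakest)               # prune the weakest regressor and re-estimate
    return sm.OLS(y, np.ones(len(y))).fit()   # constant-only model if nothing survives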