Ontology is one of today’s buzzwords. It is back in fashion in analytical philosophy and Artificial Intelligence, and major projects and research centres get funding around the world (cf. e.g. the Buffalo Centre for Ontological Research, the Laboratory for Ontology in Turin, the Institute for Formal Ontology and Medical Information Science in Saarland). In the philosophy of science ontology has arguably always been a key area of research, under the guise of ‘The foundations of __’ (physics, biology, chemistry, etc.). Economics, however, is an exception. Because of economics’ hybrid status, philosophers’ interest in its foundations has traditionally focused on normative issues – typically the theory of rationality that lies at the core of the neoclassical paradigm. What is relatively new, then, is the current growth of interest in the metaphysics of economics as a descriptive scientific discipline (see e.g. Mäki, ed. 2001).
David Lewis famously proposed to model conventions as solutions to coordination games, where equilibrium selection is driven by precedence, or the history of play. A characteristic feature of Lewis Conventions is that they are intrinsically nonnormative. Some philosophers have argued that for this reason they miss a crucial aspect of our folk notion of convention. It is doubtful however that Lewis was merely analysing a folk concept. I illustrate how his theory can (and must) be assessed using empirical data, and argue that it does indeed miss some important aspects of real-world conventions. I conclude that whether Lewis Conventions exist or not depends on how closely they approximate real-world behaviour, and whether we have any alternative theory that does a better job at explaining the phenomena.
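The coordination-game model behind this analysis can be made concrete with a minimal sketch (the payoff numbers and action labels below are hypothetical, not taken from Lewis): a two-player driving-side game has two strict Nash equilibria, and a Lewis Convention corresponds to one of them being selected by precedent rather than by the payoffs themselves.

```python
# A 2x2 driving-side coordination game, sketched for illustration.
# payoffs[(row_action, col_action)] = (row payoff, column payoff)
payoffs = {
    ("left", "left"): (1, 1),
    ("left", "right"): (0, 0),
    ("right", "left"): (0, 0),
    ("right", "right"): (1, 1),
}

actions = ["left", "right"]

def is_nash(a_row, a_col):
    """A profile is a Nash equilibrium if neither player gains
    by unilaterally deviating from it."""
    u_row, u_col = payoffs[(a_row, a_col)]
    row_ok = all(payoffs[(d, a_col)][0] <= u_row for d in actions)
    col_ok = all(payoffs[(a_row, d)][1] <= u_col for d in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # → [('left', 'left'), ('right', 'right')]
```

The point of the sketch is that the game alone cannot single out one of the two coordinated profiles; on Lewis's account, the history of play does that work.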
This is a slightly longer version of an entry prepared for the 2nd edition of The New Palgrave Dictionary of Economics, edited by Steven Durlauf and Lawrence Blume (Palgrave-Macmillan, forthcoming). Since the New Palgrave does not include acknowledgments, I take this opportunity to thank Roger Backhouse, Philippe Fontaine, Daniel Kahneman, Kyu Sang Lee, Ivan Moscati, and Vernon Smith for their help and suggestions in preparing this paper.
This chapter is organised around two topics: the first is the methodology of experimental economics, a research programme that is becoming increasingly influential in contemporary economic science; the second is normative methodology, an issue that has been widely debated by philosophers of economics over the last two decades.
The disagreement between Binmore and the “behaviouralists” concerns mainly the kind of reciprocity mechanisms that sustain cooperation in and out of the experimental laboratory. Although Binmore’s scepticism concerning Strong Reciprocity is justified, his case for Weak Reciprocity and the long-run convergence to Nash equilibria is unsupported by laboratory evidence. Part of the reason is that laboratory evidence alone cannot solve the reciprocity controversy, and researchers should pay more attention to field data. As an example, I briefly illustrate a historical case suggesting that the institutions that foster cooperation in the real world rely on Weak Reciprocity mechanisms such as those that feature prominently in Binmore’s story.
Analytical philosophy has been challenged by experimental approaches that make use of, among other things, cognitive science methods. In this paper we illustrate the benefits of merging philosophy with neuroscience, using an example of research in the foundations of social science. We argue that designing novel experiments to answer specific philosophical questions has several advantages compared to relying passively on neuroscientists' data. In this particular case, the data redirect attention towards topics – such as inductive reasoning – that are relatively overlooked by mainstream social neuroscience.
The Ultimatum Game is one of the most successful experimental designs in the history of the social sciences. In this article I try to explain this success—what makes it a “paradigmatic experiment”—stressing in particular its versatility. Despite the intentions of its inventors, the Ultimatum Game was never a good design to test economic theory, and it is now mostly used as a heuristic tool for the observation of nonstandard preferences or as a “social thermometer” for the observation of culture‐specific norms.
The answer in a nutshell is: yes, five years ago, but nobody has noticed. Nobody noticed because the majority of social scientists subscribe to one of the following views: (1) the ‘anomalous’ behaviour observed in standard prisoner’s dilemma or ultimatum game experiments refuted standard game theory a long time ago; (2) game theory is flexible enough to accommodate any observed choices by ‘refining’ players’ preferences; or (3) it is just a piece of pure mathematics (a tautology). None of these views is correct. This paper defends the view that game theory as commonly understood is not a tautology, that it suffers from important (albeit very recently discovered) empirical anomalies, and that it is not flexible enough to accommodate all the anomalies within its theoretical framework. It also discusses the experiments that finally refuted game theory, and concludes by trying to explain why it took so long for experimental game theorists to design experiments that could adequately test the theory.
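The ‘anomalous’ behaviour mentioned under (1) can be made concrete with a small sketch (the payoff values below are hypothetical, chosen only to satisfy the standard prisoner’s dilemma ordering): game theory predicts mutual defection because defection is a dominant strategy, whereas laboratory subjects frequently cooperate.

```python
# One-shot prisoner's dilemma with illustrative payoffs.
# payoffs[(row_action, col_action)] = (row payoff, column payoff)
# "C" = cooperate, "D" = defect
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def best_responses(opponent_action):
    """Row player's best replies against a fixed column action."""
    utilities = {a: payoffs[(a, opponent_action)][0] for a in actions}
    best = max(utilities.values())
    return [a for a, u in utilities.items() if u == best]

# Defection is the unique best response to either action,
# i.e. a strictly dominant strategy.
print({opp: best_responses(opp) for opp in actions})
# → {'C': ['D'], 'D': ['D']}
```

The observed cooperation rates are ‘anomalous’ only relative to this dominance prediction, which is why view (2) tries to rescue the theory by redefining the payoffs (preferences) rather than the solution concept.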
Two important arguments in the methodological literature on experimental economics rely on the specification of a domain for economic theory. The first one is used by some experimenters in their skirmishes with economic theorists, and moves from the assumption that theories have (or ought to have) their domain of application written in their assumptions. The other one is used to play down the relevance of certain unwelcome experimental results, and moves from the symmetric assumption that the domain of economic theory is more limited than a literal reading of its assumptions would suggest. Of course, only one of them can be right. In this paper I criticise the former, and outline some well-known arguments that strongly point in the direction of the incompleteness of economic theory. Some remarks on the role of methodological arguments conclude the paper.
External validity is the problem of generalizing results from laboratory to non-laboratory conditions. In this paper we review various ways in which the problem can be tackled, depending on the kind of experiment one is doing. Using a concrete example, we highlight in particular the distinction between external validity and robustness, and point out that many experiments are not aimed at a well-specified real-world target but rather contribute to a ‘library of robust phenomena’, a body of experimental knowledge to be applied case by case.
Experimental “localism” stresses the importance of context‐specific knowledge, and the limitations of universal theories in science. I illustrate Latour's radical approach to localism and show that it has some unpalatable consequences, in particular the suggestion that problems of external validity (or how to generalize experimental results to nonlaboratory circumstances) cannot be solved. In the last part of the paper I try to sketch a solution to the problem of external validity by extending Mayo's error‐probabilistic approach.
Clear-cut designs have a number of methodological virtues, with respect to internal and external validity, which I illustrate by means of informal causal analysis. In contrast, a more uniform experimental practice across disciplines may not lead to progress if causal relations in the human sciences are highly dependent on the details of the context.
Controversies in economics often fizzle out unresolved. One reason is that, despite their professed empiricism, economists find it hard to agree on the interpretation of the relevant empirical evidence. In this paper I will present an example of a controversial issue first raised and then solved by recourse to laboratory experimentation. A major theme of this paper, then, concerns the methodological advantages of controlled experiments. The second theme is the nature of experimental artefacts and of the methods devised to detect them. Recent studies of experimental science have stressed that experimenters are often concerned merely with determining whether a certain phenomenon exists or not, or whether, when, and where it can be produced, without necessarily engaging in proving or disproving any theoretical explanation of the phenomenon itself. In this paper I shall be concerned mainly with such a case, and focus on the example of preference reversals, a phenomenon whose existence was until quite recently denied by the majority of economists. Their favourite strategy consisted in trying to explain the phenomenon away as an artefact of the experimental techniques used to observe it. By controlled experimentation, as we shall see, such an interpretation has been discredited, and now preference reversals are generally accepted as real. The problem of distinguishing an artefact from a real phenomenon is related to methodological issues traditionally discussed by philosophers of science, such as the theory-ladenness of observation and Duhem's problem. Part of this paper is devoted to clarifying these two philosophical problems, and to arguing that only the latter is relevant to the case in hand. The solutions to Duhem's problem devised by economic experimentalists will be presented and discussed.
I shall show that they fall into two broad categories: independent tests of new predictions derived from the competing hypotheses at stake, and ‘no-miracle arguments’ from different experimental techniques delivering converging results despite their being theoretically independent.
The paper investigates how normative considerations influenced the development of the theory of individual decision-making under risk. In the first part, the debate between Maurice Allais and the 'Neo-Bernoullians' (supporting the Expected Utility model) is reconstructed, in order to show that a controversy on the definition of rational decision and on the methodology of normative justification played a crucial role in legitimizing the Allais-paradox as genuinely refuting evidence. In the second part, it is shown how informal notions of rationality were among the tacit heuristic principles that led to the discovery of generalized models of decision put forward in the early eighties to replace the received model.