The experimental approach in economics is a driving force behind some of the most exciting developments in the field. The 'experimental revolution' was based on a series of bold philosophical premises which have remained until now mostly unexplored. This book provides the first comprehensive analysis and critical discussion of the methodology of experimental economics, written by a philosopher of science with expertise in the field. It outlines the fundamental principles of experimental inference in order to investigate their power, scope and limitations. The author demonstrates that experimental economists have much to gain by openly discussing the philosophical principles that guide their work, and that philosophers of science have much to learn from the ingenious techniques devised by experimenters to tackle difficult scientific problems.
Understanding Institutions proposes a new unified theory of social institutions that combines the best insights of philosophers and social scientists who have written on this topic. Francesco Guala presents a theory that integrates the features of three influential views of institutions: as equilibria of strategic games, as regulative rules, and as constitutive rules. Guala explains key institutions like money, private property, and marriage, and develops a much-needed unification of equilibrium- and rules-based approaches. Although he uses game theory concepts, the theory is presented in a simple, clear style that is accessible to a wide audience of scholars working in different fields. Outlining and discussing various implications of the unified theory, Guala addresses venerable issues such as reflexivity, realism, Verstehen, and fallibilism in the social sciences. He also critically analyses the theory of “looping effects” and “interactive kinds” defended by Ian Hacking, and asks whether it is possible to draw a demarcation between social and natural science using the criteria of causal and ontological dependence. Focusing on current debates about the definition of marriage, Guala shows how these abstract philosophical issues have important practical and political consequences. Moving beyond specific cases to general models and principles, Understanding Institutions offers new perspectives on what institutions are, how they work, and what they can do for us.
Recent debates on the nature of preferences in economics have typically assumed that they are to be interpreted either as behavioural regularities or as mental states. In this paper I challenge this dichotomy and argue that neither interpretation is consistent with scientific practice in choice theory and behavioural economics. Preferences are belief-dependent dispositions with a multiply realizable causal basis, which explains why economists are reluctant to make a commitment about their interpretation.
Strong Reciprocity theorists claim that cooperation in social dilemma games can be sustained by costly punishment mechanisms that eliminate incentives to free ride, even in one-shot and finitely repeated games. There is little doubt that costly punishment raises cooperation in laboratory conditions. Its efficacy in the field, however, is controversial. I distinguish two interpretations of experimental results, and show that the wide interpretation endorsed by Strong Reciprocity theorists is unsupported by ethnographic evidence on decentralised punishment and by historical evidence on common pool institutions. The institutions that spontaneously evolve to solve dilemmas of cooperation typically exploit low-cost mechanisms, turning finite games into indefinitely repeated ones and eliminating the cost of sanctioning.
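The logic of the costly-punishment claim can be made concrete with a toy payoff calculation. The group size, endowment, multiplier, and fine ratio below are hypothetical illustration values, not figures from the paper:

```python
N = 4             # group size (illustrative)
ENDOWMENT = 20    # tokens per player (illustrative)
MULTIPLIER = 2.0  # contributions are doubled and shared equally

def payoffs(contributions):
    """Material payoff of each player, before any punishment."""
    share = sum(contributions) * MULTIPLIER / N
    return [ENDOWMENT - c + share for c in contributions]

coop = payoffs([ENDOWMENT] * N)                # everyone contributes
defect = payoffs([0] + [ENDOWMENT] * (N - 1))  # player 0 free rides

print(coop[0])    # 40.0
print(defect[0])  # 50.0 — free riding pays in the absence of punishment

# Costly punishment: each of the three cooperators spends `cost` tokens,
# and every token spent inflicts FINE_RATIO tokens on the defector.
FINE_RATIO = 3
cost = 3
defector_punished = defect[0] - FINE_RATIO * cost * (N - 1)
punisher = defect[1] - cost

print(defector_punished)  # 23.0 — the incentive to free ride is gone
print(punisher)           # 27.0 — but sanctioning itself is costly
```

The last two numbers illustrate both sides of the argument: punishment removes the incentive to free ride, yet punishers earn less than cooperators who do not sanction, which is why the cost of sanctioning matters for institutions in the field.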
Current debates in social ontology are dominated by approaches that view institutions either as rules or as equilibria of strategic games. We argue that these two approaches can be unified within an encompassing theory based on the notion of correlated equilibrium. We show that in a correlated equilibrium each player follows a regulative rule of the form ‘if X then do Y’. We then criticize Searle's claim that constitutive rules of the form ‘X counts as Y in C’ are fundamental building blocks for institutions, showing that such rules can be derived from regulative rules by introducing new institutional terms. Institutional terms are introduced for economy of thought, but are not necessary for the creation of social reality.
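The idea that a correlated equilibrium embeds regulative rules can be illustrated with Aumann's textbook “chicken” example. The payoffs and the signal distribution below are the standard values from that example, not taken from the paper:

```python
# Aumann's "chicken" example: a signal recommends an action to each
# player; following the recommendation is a rule 'if signal X, do Y'.
ACTIONS = ("C", "D")  # C = yield, D = dare
PAYOFF = {            # (row action, column action) -> (row, column) payoffs
    ("C", "C"): (6, 6), ("C", "D"): (2, 7),
    ("D", "C"): (7, 2), ("D", "D"): (0, 0),
}
# Correlation device: one of three action profiles is drawn and each
# player is told only their own recommended action.
DIST = {("C", "C"): 1/3, ("C", "D"): 1/3, ("D", "C"): 1/3}

def obedient(player):
    """True if the player never gains by deviating from a recommendation."""
    for rec in ACTIONS:
        states = {s: p for s, p in DIST.items() if s[player] == rec}
        total = sum(states.values())
        if total == 0:
            continue  # this action is never recommended

        def value(action):
            # Expected payoff of `action` under the beliefs induced by `rec`.
            ev = 0.0
            for s, p in states.items():
                profile = list(s)
                profile[player] = action
                ev += (p / total) * PAYOFF[tuple(profile)][player]
            return ev

        if any(value(a) > value(rec) + 1e-9 for a in ACTIONS):
            return False
    return True

assert obedient(0) and obedient(1)  # the conditional rules are self-enforcing
expected = sum(p * PAYOFF[s][0] for s, p in DIST.items())
print(round(expected, 9))  # 5.0
```

Each recommendation works exactly like a regulative rule: following it is a best response so long as everyone else follows theirs, which is the sense in which rules and equilibria coincide.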
Institutions generate cooperative benefits that explain why they exist and persist. Therefore, their etiological function is to promote cooperation. The function of a particular institution, such as money or traffic regulations, is to solve one or more cooperation problems. We go on to argue that the teleological function of institutions is to secure values by means of norms. Values can also be used to redesign an institution and to promote social change. We argue, however, that an adequate theory of institutions should not be ‘moralized’: institutions should not be defined in terms of the values they are supposed to promote.
Experimental “localism” stresses the importance of context‐specific knowledge, and the limitations of universal theories in science. I illustrate Latour's radical approach to localism and show that it has some unpalatable consequences, in particular the suggestion that problems of external validity (or how to generalize experimental results to nonlaboratory circumstances) cannot be solved. In the last part of the paper I try to sketch a solution to the problem of external validity by extending Mayo's error‐probabilistic approach.
Thaler and Sunstein justify nudge policies from welfaristic premises: nudges are acceptable because they benefit the individuals who are nudged. A tacit assumption behind this strategy is that we can identify the true preferences of decision-makers. We argue that this assumption is often unwarranted, and that as a consequence nudge policies must be justified in a different way. A possible strategy is to abandon welfarism and endorse genuine paternalism. Another is to argue that the decision biases that choice architects attempt to eliminate create externalities. For example, in the case of intertemporal discounting, the costs of preference reversals are not always paid by the discounters, because they are transferred onto other individuals. But if this is the case, then nudges are best justified from a political rather than welfaristic standpoint.
Comparative process tracing is the best analysis of extrapolation inferences in the philosophical and scientific literature so far. In this essay I examine some similarities and differences between comparative process tracing and former attempts to capture the logic of extrapolation, such as the analogical approach. I show that these accounts are not different in spirit, although comparative process tracing supersedes previous proposals in terms of analytical detail. I also examine some qualms about the possibility of drawing extrapolation inferences in the social sciences and conclude by suggesting that there may be cases of extrapolation without process tracing.
The answer in a nutshell is: yes, five years ago, but nobody has noticed. Nobody noticed because the majority of social scientists subscribe to one of the following views: (1) the ‘anomalous’ behaviour observed in standard prisoner’s dilemma or ultimatum game experiments refuted standard game theory a long time ago; (2) game theory is flexible enough to accommodate any observed choices by ‘refining’ players’ preferences; or (3) it is just a piece of pure mathematics (a tautology). None of these views is correct. This paper defends the view that game theory as commonly understood is not a tautology, that it suffers from important (albeit very recently discovered) empirical anomalies, and that it is not flexible enough to accommodate all the anomalies in its theoretical framework. It also discusses the experiments that finally refuted game theory, and concludes by trying to explain why it took so long for experimental game theorists to design experiments that could adequately test the theory.
The auctions of the Federal Communications Commission, designed in 1994 to sell spectrum licences, are one of the few widely acclaimed and copied cases of economic engineering to date. This paper includes a detailed narrative of the process of designing, testing and implementing the FCC auctions, focusing in particular on the role played by game-theoretic modelling and laboratory experimentation. Some general remarks about the scope, interpretation and use of rational choice models open and conclude the paper.
The Ultimatum Game is one of the most successful experimental designs in the history of the social sciences. In this article I try to explain this success—what makes it a “paradigmatic experiment”—stressing in particular its versatility. Despite the intentions of its inventors, the Ultimatum Game was never a good design to test economic theory, and it is now mostly used as a heuristic tool for the observation of nonstandard preferences or as a “social thermometer” for the observation of culture-specific norms.
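For readers unfamiliar with the design, the standard-theory benchmark that the Ultimatum Game was originally meant to test can be sketched in a few lines; the stake and smallest unit below are illustrative assumptions:

```python
STAKE = 10  # units of money to divide (an illustrative stake)

def responder_accepts(offer):
    # A money-maximizing responder accepts any positive offer,
    # since rejection leaves both players with nothing.
    return offer > 0

def proposer_offer():
    # Backward induction: the proposer offers the smallest
    # amount that will still be accepted.
    return min(o for o in range(STAKE + 1) if responder_accepts(o))

print(proposer_offer())  # 1
```

Observed behaviour departs sharply from this benchmark: offers in most experiments cluster around 40–50% of the stake, and low offers are frequently rejected, which is what makes the game useful as a probe of nonstandard preferences and norms.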
This article has three parts: the first section uses a real case of laboratory experimentation in economics to illustrate what experimentalists do in order to test the external validity of their results. It is then shown that such a practice presupposes a specific conception of the causal relations economists are seeking. Some general remarks about the notions of external validity and parallelism are provided in conclusion.
The paper investigates how normative considerations influenced the development of the theory of individual decision-making under risk. In the first part, the debate between Maurice Allais and the 'Neo-Bernoullians' (supporting the Expected Utility model) is reconstructed, in order to show that a controversy on the definition of rational decision and on the methodology of normative justification played a crucial role in legitimizing the Allais paradox as genuinely refuting evidence. In the second part, it is shown how informal notions of rationality were among the tacit heuristic principles that led to the discovery of the generalized models of decision put forward in the early eighties to replace the received model.
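The structure of the Allais paradox can be checked mechanically. The four gambles below are the classic ones from Allais's questionnaire (outcomes in millions); the utility functions are arbitrary examples used only to show that the inconsistency does not depend on the shape of u:

```python
import math

def expected_utility(gamble, u):
    """Gamble = list of (probability, outcome in millions)."""
    return sum(p * u(x) for p, x in gamble)

A1 = [(1.00, 1)]                        # $1M for sure
B1 = [(0.10, 5), (0.89, 1), (0.01, 0)]
A2 = [(0.11, 1), (0.89, 0)]
B2 = [(0.10, 5), (0.90, 0)]

# For ANY utility function u, EU(A1) > EU(B1) reduces to
# 0.11*u(1) > 0.10*u(5) + 0.01*u(0), which is exactly the condition
# for EU(A2) > EU(B2). Hence the modal experimental pattern, choosing
# A1 and B2, is inconsistent with Expected Utility whatever u is.
for u in (math.sqrt, lambda x: x, lambda x: x ** 2):
    assert (expected_utility(A1, u) > expected_utility(B1, u)) == \
           (expected_utility(A2, u) > expected_utility(B2, u))
print("the two choices must agree under expected utility")
```

This is why the modal choice pattern could count as refuting evidence only once the normative status of the independence condition was itself settled, which is the controversy the paper reconstructs.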
External validity is the problem of generalizing results from laboratory to non-laboratory conditions. In this paper we review various ways in which the problem can be tackled, depending on the kind of experiment one is doing. Using a concrete example, we highlight in particular the distinction between external validity and robustness, and point out that many experiments are not aimed at a well-specified real-world target but rather contribute to a ‘library of robust phenomena’, a body of experimental knowledge to be applied case by case.
A series of recent debates in experimental economics have associated demand effects with the artificiality of the experimental setting and have linked them to the problem of external validity. In this paper, we argue that these associations can be misleading, partly because of the ambiguity with which “artificiality” has been defined, but also because demand effects and external validity are related in complex ways. We argue that artificiality may be directly as well as inversely correlated with demand effects. We also distinguish between the demand effects of experimentation and the reactions that they may trigger and that might endanger experimental validity. We conclude that economists should pay more attention to the way in which subjects construe the experimental task and learn to exploit subjects’ reactivity to expectations in their experiments.
Controversies in economics often fizzle out unresolved. One reason is that, despite their professed empiricism, economists find it hard to agree on the interpretation of the relevant empirical evidence. In this paper I will present an example of a controversial issue first raised and then solved by recourse to laboratory experimentation. A major theme of this paper, then, concerns the methodological advantages of controlled experiments. The second theme is the nature of experimental artefacts and of the methods devised to detect them. Recent studies of experimental science have stressed that experimenters are often merely concerned with determining whether a certain phenomenon exists or not, or whether, when, and where it can be produced, without necessarily engaging in proving or disproving any theoretical explanation of the phenomenon itself. In this paper I shall be concerned mainly with such a case, and focus on the example of preference reversals, a phenomenon whose existence was until quite recently denied by the majority of economists. Their favourite strategy consisted in trying to explain the phenomenon away as an artefact of the experimental techniques used to observe it. By controlled experimentation, as we shall see, such an interpretation has been discredited, and now preference reversals are generally accepted as real. The problem of distinguishing an artefact from a real phenomenon is related to methodological issues traditionally discussed by philosophers of science, such as the theory-ladenness of observation and Duhem's problem. Part of this paper is devoted to clarifying these two philosophical problems, and to arguing that only the latter is relevant to the case in hand. The solutions to Duhem's problem devised by economic experimentalists will be presented and discussed. I shall show that they belong in two broad categories: independent tests of new predictions derived from the competing hypotheses at stake, and ‘no-miracle arguments’ from different experimental techniques delivering converging results despite their being theoretically independent.
Hindriks’ paper raises two issues: one is formal and concerns the notion of ‘cost’ in rational choice accounts of norms; the other is substantial and concerns the role of expectations in the modification of payoffs. In this commentary I express some doubts and worries, especially about the latter: What is so special about shared expectations? Why do they induce compliance with norms, if transgression is not associated with sanctions?
The title of this book is rather misleading. “Birth of neoliberal governmentality,” or something like that, would have been more faithful to its contents. In Foucault's vocabulary, “biopolitics” is the “rationalisation” of “governmentality”: it is the theory, in other words, as opposed to the art of managing people. The mismatch between title and content is easily explained: the general theme of the courses at the Collège de France had to be announced at the beginning of each academic year. It is part of the mandate of every professor at the Collège, however, that his lectures should follow closely his current research. As a consequence it was not unusual for Foucault to take new directions while he was lecturing. In 1979, for the first and only time in his career, he took a diversion into contemporary political philosophy. His principal object of investigation became “neoliberal” political economy. More precisely, he became increasingly interested in those strands of contemporary liberalism that use economic science both as a principle of limitation and of inspiration for the management of people.
Understanding Institutions offers a theory that is able to unify the two dominant approaches in the scientific and philosophical literature on institutions. Moreover, using the ‘rules-in-equilibrium’ theory, it tackles several ancient puzzles in the philosophy of social science.
_The Philosophy of Social Science Reader_ is an outstanding, comprehensive and up-to-date collection of key readings in the philosophy of social science, covering the essential issues, problems and debates in this important interdisciplinary area. Each section is carefully introduced by the editors, and the readings placed in context. The anthology is organized into seven clear parts: Values and Social Science; Causal Inference and Explanation; Interpretation; Rationality and Choice; Individualism; Norms; and Cultural Evolution. Featuring the work of influential philosophers and social scientists such as Ernest Nagel, Ian Hacking, John Searle, Clifford Geertz, Daniel Kahneman, Steven Lukes and Richard Dawkins, _The Philosophy of Social Science Reader_ is the ideal text for philosophy of social science courses, and for students in related disciplines interested in the differences between the social and natural sciences.
Naturalism still faces strong opposition in the philosophy of social science from influential scholars who argue that philosophical analysis must be autonomous from scientific investigation. The opposition exploits philosophers’ traditional distrust of social science and fuels the ambition to provide new foundations for social research. A classic anti-naturalist strategy is to identify a feature of social reality that prevents scientific explanation and prediction. An all-time favorite is the dependence of social phenomena on human representation. This article examines two prominent versions of the dependence thesis and concludes that they both fail. Contemporary social science is capable of accounting for the causal dependence of social reality on representation, and there is no reason to believe that social entities are ontologically dependent on the collective acceptance of a constitutive rule.
Standard defences of ontological individualism are challenged by arguments that exploit the dependence of social facts on material facts – i.e. facts that are not about human individuals. In this paper I discuss Brian Epstein’s “materialism” in The Ant Trap: granting Epstein’s strict definition of individualism, I show that his arguments depend crucially on a generous conception of social properties and social facts. Individualists, however, are only committed to the claim that projectible properties are individualistically realized, and materialists have not undermined this claim.
Two important arguments in the methodological literature on experimental economics rely on the specification of a domain for economic theory. The first one is used by some experimenters in their skirmishes with economic theorists, and starts from the assumption that theories have (or ought to have) their domain of application written in their assumptions. The other one is used to play down the relevance of certain unwelcome experimental results, and starts from the symmetric assumption that the domain of economic theory is more limited than a literal reading of its assumptions would suggest. Of course, only one of them can be right. In this paper I criticise the former, and outline some well-known arguments that strongly point in the direction of the incompleteness of economic theory. Some remarks on the role of methodological arguments conclude the paper.
The folk conception of money as an object is not a promising starting point to develop general, explanatory metaphysical accounts of the social world. A theory of institutions as rules in equilibrium is more consistent with scientific theories of money, is able to shed light on the folk view, and side-steps some unnecessary puzzles.
Strong reciprocity theorists claim that punishment has evolved to promote the good of the group and to deter cheating. By contrast, weak reciprocity suggests that punishment aims to restore justice (i.e., reciprocity) between the criminal and his victim. Experimental evidence as well as field observations suggest that humans punish criminals to restore fairness rather than to support group cooperation.
While admirable, Guala's discussion of reciprocity suffers from a confusion between proximate causes (psychological mechanisms triggering behaviour) and ultimate causes (the evolved function of those psychological mechanisms). Because much work in this area commits this error, I clarify the difference between proximate and ultimate causes of cooperation and punishment. I also caution against hasty rejections of experimental evidence.
David Lewis famously proposed to model conventions as solutions to coordination games, where equilibrium selection is driven by precedence, or the history of play. A characteristic feature of Lewis Conventions is that they are intrinsically nonnormative. Some philosophers have argued that for this reason they miss a crucial aspect of our folk notion of convention. It is doubtful however that Lewis was merely analysing a folk concept. I illustrate how his theory can (and must) be assessed using empirical data, and argue that it does indeed miss some important aspects of real-world conventions. I conclude that whether Lewis Conventions exist or not depends on how closely they approximate real-world behaviour, and whether we have any alternative theory that does a better job at explaining the phenomena.
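A minimal sketch (my toy model, not Lewis's own formalism) shows how precedence can select among the equilibria of a coordination game. Both 'everyone drives left' and 'everyone drives right' are equilibria; which one emerges depends only on the history of play:

```python
def converge(initial_history, rounds=100):
    """Agents best-respond to the observed frequency of past play."""
    history = list(initial_history)
    for _ in range(rounds):
        left_share = history.count("L") / len(history)
        # Best response in a pure coordination game: match the majority.
        history.append("L" if left_share > 0.5 else "R")
    return history[-1]

# An early precedent locks in the corresponding convention:
print(converge(["L", "L", "R"]))  # L
print(converge(["R", "R", "L"]))  # R
```

Nothing in the model says that deviating is wrong, only that it is unprofitable, which is the sense in which Lewis Conventions are intrinsically nonnormative.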
In a unified theory of human reciprocity, the strong and weak forms are similar because neither is biologically altruistic and both require normative motivation to support cooperation. However, strong reciprocity is necessary to support cooperation in public goods games. It involves inflicting costs on defectors; and though the costs for punishers are recouped, recouping costs requires complex institutions that would not have emerged if weak reciprocity had been enough.
I argue in my target article that field evidence does not support the costly punishment hypothesis. Some commentators object to my reading of the evidence, while others agree that evidence in favour of costly punishment is scant. Most importantly, no rigorous measurement of cost-benefit ratios in the field has been attempted so far. This lack of evidence does not rule out costly punishment as a cause of human cooperation, but it does pre-empt some overconfident claims made in the past. Other commentators have interpreted my article as an anti-experimental pamphlet or as a flat denial of the existence of pro-social motives – which it was not intended to be. While we have enough data to establish the existence (and theoretical relevance) of strong reciprocity motives, I argue in this response that their efficacy (and policy relevance) has not been demonstrated.
Gerald Allen Cohen was one of the most influential political philosophers of the latter half of the twentieth century. When he died in 2009, Cohen left behind not only a short book and various unpublished papers but also an intellectual legacy that will remain alive for many years. Economics and Philosophy initially planned to organize a review symposium devoted to Cohen's posthumous publications. However, the reviews became articles and the original project turned into a larger symposium in memory of Cohen. The editors would like to thank Ian Carter, Paula Casal, Serena Olsaretti and Andrew Williams for working with us on that project as it gradually took shape. We all believe that this is a fitting way to honour a remarkable philosophical career inspired by an unrelenting political passion.
This is a slightly longer version of an entry prepared for the 2nd edition of The New Palgrave Dictionary of Economics, edited by Steven Durlauf and Lawrence Blume (Palgrave-Macmillan, forthcoming). Since the New Palgrave does not include acknowledgments, I will use this chance to thank Roger Backhouse, Philippe Fontaine, Daniel Kahneman, Kyu Sang Lee, Ivan Moscati, and Vernon Smith for their help and suggestions in preparing this paper.
Thomas Kuhn was not only the greatest historian of science but also one of the most influential philosophers of the twentieth century. Faced with such a significant character, Giuseppe Giordano has decided to focus on Kuhn “the philosopher,” touching on the historian only indirectly. The book is roughly divided into two parts. The first one is devoted to a reconstruction of the genesis of Kuhn's most important ideas, focusing in particular on the essay “The Essential Tension” and on The Structure of Scientific Revolutions. In the second part Giordano considers the reception of Kuhn's work within the philosophy of science community. Since the philosophical debate is almost entirely a post-Structure phenomenon, the narrative proceeds more or less in chronological order, covering Kuhn's career from the beginning to the end. The story unfolds almost entirely in the realm of ideas, ignoring institutional matters such as Kuhn's role in the creation of a community of professional historians of science or his time as president of the Philosophy of Science Association. The better part of the book is the first. Here Giordano tells us how Kuhn turned from physics to the history of science and, most important, how the concept of “paradigm” evolved from a pedagogic device to a pathbreaking philosophical idea. He also sketches the intellectual background to Kuhn's work, especially the received views on science and on the role of history embodied in the logical positivism of the 1950s. Unfortunately, the author does not make use of recent work on Kuhn, Carnap, and the neopositivists by Peter Galison, Michael Friedman, and others. As a result, the contraposition between the “old” and the “new” philosophy of science is a fairly conventional and dated one.
Developments in the historiography of science in the last two decades are also ignored: one would have liked to read something on Kuhn's influence on the new sociology of science, the birth of the microhistory of science as a reaction to Kuhn's macro approach, and so forth. But of course no author can discuss everything, and Giordano has explicitly decided to focus on the relationship between Kuhn and the philosophers of his time. Chapter 3 deals with the controversy between Kuhn and Karl Popper. Giordano argues that whereas Popper never really questioned his own theses, Kuhn benefited from the debate, which prompted some significant changes in his position. In this chapter and the following one, Kuhn's thought is presented in its dynamic evolution, as he adjusted and reacted to criticism. In both chapters Kuhn is set at center stage, with the other characters playing supporting roles. The last chapter discusses Kuhn's “mature” views on theory change and scientific progress. It is a pity that nowhere in the book are we provided with a rigorous formulation of crucial concepts such as “progress” and “scientific rationality.” This is the main defect of the book, the philosophical depth and rigor of which is sometimes less than satisfactory. Other examples are a confusion between scientific realism and the correspondence theory of truth, a sloppy formulation of the problem of induction, and the lack of a serious discussion of the Duhem-Quine problem. Because of these problems, Giordano's book is valuable chiefly as a concise summary and discussion of the overall significance of Kuhn's philosophical work. The commentary is frequently interrupted by long quotations from Kuhn's texts, and the footnotes also quote widely from the secondary literature. This makes Giordano's book a peculiar piece of work, rather like a conflation of a textbook and a Kuhnian anthology. It must be recognized that to write yet another book on Kuhn is a challenging task.
Kuhn's work has been dissected, criticized, and interpreted a number of times, and a truly novel analysis would require a truly novel approach. Other scholars have stretched the interpretation of known texts and facts until they have become “new” texts and facts. Giordano does not aim to be controversial and admirably abstains from such flamboyant exercises. However, he does not pursue the other route either: that of digging deeper into the past in order to discover something genuinely novel. A new book on Kuhn should be based, at the very least, on serious archival research in Kuhn's papers at MIT, and Giordano has not done that. Furthermore, he relies on a relatively small fraction of Kuhn's published writings, invariably the most widely known and celebrated ones. It is not surprising, then, that he ends up with a very familiar picture of Thomas Kuhn and his place in twentieth-century philosophy of science.
Guala does not go far enough in his critique of the assumption that human decisions about sharing made in the context of experimental game conditions accurately reflect decision-making under real conditions. The sharing of hunted animals is constrained by cultural rules and does not work as models of weak and strong reciprocity assume. Missing in these models is the cultural basis of sharing, which makes it a group property rather than an individual one.