The Pareto principle states that if the members of society express the same preference judgment between two options, this judgment is compelling for society. A building block of normative economics and social choice theory, and often borrowed by contemporary political philosophy, the principle has rarely been subjected to philosophical criticism. The paper objects to it on the ground that it applies indiscriminately to those cases in which the individuals agree on both their expressed preferences and their reasons for entertaining them, and those cases in which they agree on their expressed preferences, while differing on their reasons. The latter are cases of "spurious unanimity", and it is normatively inappropriate, or so the paper argues, to defend unanimity preservation at the social level for them, so the Pareto principle is formulated much too broadly. The objection seems especially powerful when the principle is applied in an ex ante context of uncertainty, in which individuals can disagree on both their probabilities and utilities, and nonetheless agree on their preferences over prospects.
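The spurious-unanimity case lends itself to a simple numerical sketch. The numbers below are hypothetical illustrations, not taken from the paper: two agents who disagree on both their probabilities and their utilities nonetheless rank the same prospect first, each for opposite reasons.

```python
# Hypothetical two-state example of spurious unanimity.
def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

# Agent 1 thinks state 1 likely and values act f's outcome there.
eu1_f = expected_utility([0.8, 0.2], [1.0, 0.0])
eu1_g = expected_utility([0.8, 0.2], [0.5, 0.5])
# Agent 2 thinks state 2 likely and values act f's outcome there.
eu2_f = expected_utility([0.2, 0.8], [0.0, 1.0])
eu2_g = expected_utility([0.2, 0.8], [0.5, 0.5])

# Both agents strictly prefer f to g -- but on opposite grounds,
# so the unanimity in favour of f is spurious.
assert eu1_f > eu1_g and eu2_f > eu2_g
```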
Whereas many others have scrutinized the Allais paradox from a theoretical angle, we study the paradox from an historical perspective and link our findings to a suggestion as to how decision theory could make use of it today. We emphasize that Allais proposed the paradox as a normative argument, concerned with ‘the rational man’ and not the ‘real man’, to use his words. Moreover, and more subtly, we argue that Allais had an unusual sense of the normative, being concerned not so much with the rationality of choices as with the rationality of the agent as a person. These two claims are buttressed by a detailed investigation – the first of its kind – of the 1952 Paris conference on risk, which set the context for the invention of the paradox, and a detailed reconstruction – also the first of its kind – of Allais’s specific normative argument from his numerous but allusive writings. The paper contrasts these interpretations of what the paradox historically represented with how it generally came to function within decision theory from the late 1970s onwards: that is, as an empirical refutation of the expected utility hypothesis, and more specifically of the condition of von Neumann–Morgenstern independence that underlies that hypothesis. While not denying that this use of the paradox was fruitful in many ways, we propose another use that turns out also to be compatible with an experimental perspective. Following Allais’s hints on ‘the experimental definition of rationality’, this new use consists in letting the experiment itself speak of the rationality or otherwise of the subjects. In the 1970s, a short sequence of papers inspired by Allais implemented original ways of eliciting the reasons guiding the subjects’ choices, and claimed to be able to draw relevant normative consequences from this information.
We end by reviewing this forgotten experimental avenue not simply historically, but with a view to recommending it for possible use by decision theorists today.
According to a theorem recently proved in the theory of logical aggregation, any nonconstant social judgment function that satisfies independence of irrelevant alternatives (IIA) is dictatorial. We show that the strong and not very plausible IIA condition can be replaced with a minimal independence assumption plus a Pareto-like condition. This new version of the impossibility theorem likens it to Arrow’s and arguably enhances its paradoxical value.
Popper's well-known demarcation criterion has often been understood to distinguish statements of empirical science according to their logical form. Implicit in this interpretation of Popper's philosophy is the belief that when the universe of discourse of the empirical scientist is infinite, empirical universal sentences are falsifiable but not verifiable, whereas the converse holds for existential sentences. A remarkable elaboration of this belief is to be found in Watkins's early work on the statements he calls “all-and-some” (AS), such as: “For every metal there is a melting point.” All-and-some statements are both universally and existentially quantified, in that order. Watkins argued that AS statements should be regarded as both nonfalsifiable and nonverifiable, for they partake in the logical fate of both universal and existential statements. This claim is subject to the proviso that the bound variables are “uncircumscribed”; i.e., that the universe of discourse is infinite.
In the framework of judgment aggregation, we assume that some formulas of the agenda are singled out as premisses, and that both Independence (formula-wise aggregation) and Unanimity Preservation hold for them. Whether premiss-based aggregation thus defined is compatible with conclusion-based aggregation, as defined by Unanimity Preservation on the non-premisses, depends on how the premisses are logically connected, both among themselves and with other formulas. We state necessary and sufficient conditions under which the combination of both approaches leads to dictatorship (resp. oligarchy), either just on the premisses or on the whole agenda. This framework is inspired by the doctrinal paradox of legal theory and arguably relevant to this field as well as political science and political economy. When the set of premisses coincides with the whole agenda, a limiting case of our assumptions, we obtain several existing results in judgment aggregation theory.
We investigate the conflict between the ex ante and ex post criteria of social welfare in a new framework of individual and social decisions, which distinguishes between two sources of uncertainty, here interpreted as an objective and a subjective source respectively. This framework makes it possible to endow the individuals and society not only with ex ante and ex post preferences, as is usually done, but also with interim preferences of two kinds, and correspondingly, to introduce interim forms of the Pareto principle. After characterizing the ex ante and ex post criteria, we present a first solution to their conflict that extends the former as much as possible in the direction of the latter. Then, we present a second solution, which goes in the opposite direction, and is also maximally assertive. Both solutions translate the assumed Pareto conditions into weighted additive utility representations, and both attribute to the individuals common probability values on the objective source of uncertainty, and different probability values on the subjective source. We discuss these solutions in terms of two conceptual arguments, i.e., the by now classic spurious unanimity argument and a novel informational argument labelled complementary ignorance. The paper complies with the standard economic methodology of basing probability and utility representations on preference axioms, but for the sake of completeness, also considers a construal of objective uncertainty based on the assumption of an exogenously given probability measure. JEL classification: D70; D81.
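A toy example can convey the conflict between the two criteria. The figures are hypothetical, and the egalitarian ex post criterion used below is just one familiar choice: with two individuals and two equiprobable states, an ex ante utilitarian sum is indifferent between an unequal lottery and an equal sure prospect, while the ex post egalitarian criterion ranks them strictly.

```python
# Hypothetical two-person, two-state example of the ex ante / ex post conflict.
p = [0.5, 0.5]                        # common probabilities over the two states
L = [(1.0, 0.0), (0.0, 1.0)]          # (u_person1, u_person2) in each state
M = [(0.5, 0.5), (0.5, 0.5)]          # equal sure prospect

def ex_ante(prospect):
    # each individual's expected utility, then the social (utilitarian) sum
    eus = [sum(pr * state[i] for pr, state in zip(p, prospect)) for i in range(2)]
    return sum(eus)

def ex_post_egalitarian(prospect):
    # expected value of the worst-off individual's utility in each state
    return sum(pr * min(state) for pr, state in zip(p, prospect))

assert ex_ante(L) == ex_ante(M)                         # ex ante: indifferent
assert ex_post_egalitarian(M) > ex_post_egalitarian(L)  # ex post: M preferred
```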
Judgment aggregation theory, or rather, as we conceive of it here, logical aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely preference judgments. It derives from Kornhauser and Sager’s doctrinal paradox and List and Pettit’s discursive dilemma, two problems that we distinguish emphatically here. The current theory has developed from the discursive dilemma, rather than the doctrinal paradox, and the final objective of the paper is to give the latter its own theoretical development along the lines of recent work by Dietrich and Mongin. However, the paper also aims at reviewing logical aggregation theory as such, and it covers impossibility theorems by Dietrich, Dietrich and List, Dokow and Holzman, List and Pettit, Mongin, Nehring and Puppe, Pauly and van Hees, providing a uniform logical framework in which they can be compared with each other. The review goes through three historical stages: the initial paradox and dilemma, the scattered early results on the independence axiom, and the so-called canonical theorem, a collective achievement that provided the theory with its specific method of analysis. The paper goes some way towards philosophical logic, first by briefly connecting the aggregative framework of judgment with the modern philosophy of judgment, and second by thoroughly discussing and axiomatizing the ‘general logic’ built in this framework.
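The doctrinal paradox mentioned above can be reproduced in a few lines; the three-judge profile is the standard textbook one, with premisses p, q and conclusion c = p AND q.

```python
# The standard three-judge profile of the doctrinal paradox.
judges = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": False, "q": True},
]

def majority(key):
    yes = sum(j[key] for j in judges)
    return yes > len(judges) / 2

# Premiss-wise majorities:
maj_p = majority("p")   # 2 of 3 accept p
maj_q = majority("q")   # 2 of 3 accept q
# Conclusion-wise majority on c = p AND q:
maj_c = sum(j["p"] and j["q"] for j in judges) > len(judges) / 2  # only 1 of 3

# The collective premisses entail c, yet the collective conclusion rejects it.
assert (maj_p and maj_q) != maj_c
```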
Nudge is a concept of policy intervention that originates in Thaler and Sunstein's (2008) popular eponymous book. Following their own hints, we distinguish three properties of nudge interventions: they redirect individual choices by only slightly altering choice conditions (here nudge 1), they use rationality failures instrumentally (here nudge 2), and they alleviate the unfavourable effects of these failures (here nudge 3). We explore each property in semantic detail and show that no entailment relation holds between them. This calls into question the theoretical unity of nudge, as intended by Thaler and Sunstein and most followers. We eventually recommend pursuing each property separately, both in policy research and at the foundational level. We particularly emphasize the need to reconsider the respective roles of decision theory and behavioural economics to delineate nudge 2 correctly. The paper differs from most of the literature in focusing on the definitional rather than the normative problems of nudge.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
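For readers unfamiliar with the property at stake, stochastic independence can be illustrated by a product measure on two binary sources; this is a minimal sketch with made-up numbers, not the article's axiomatic construction.

```python
# Two binary sources of uncertainty and a joint measure built as a product.
from itertools import product

p_s = {"s1": 0.3, "s2": 0.7}
p_t = {"t1": 0.6, "t2": 0.4}
joint = {(s, t): p_s[s] * p_t[t] for s, t in product(p_s, p_t)}

def prob(event):
    return sum(joint[w] for w in joint if event(w))

# Independence: P(s = s1 and t = t1) = P(s = s1) * P(t = t1)
lhs = prob(lambda w: w[0] == "s1" and w[1] == "t1")
rhs = prob(lambda w: w[0] == "s1") * prob(lambda w: w[1] == "t1")
assert abs(lhs - rhs) < 1e-12
```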
The paper analyses economic evaluations by distinguishing evaluative statements from actual value judgments. On this basis, it compares four solutions to the value neutrality problem in economics. After rebutting the strong theses about neutrality (normative economics is illegitimate) and non-neutrality (the social sciences are value-impregnated), the paper settles the case between the weak neutrality thesis (common in welfare economics) and a novel, weak non-neutrality thesis that extends the realm of normative economics more widely than the other weak thesis does.
According to a long-standing philosophical tradition, impartiality is a distinctive and determining feature of moral judgments, especially in matters of distributive justice. This broad ethical tradition was revived in welfare economics by Vickrey and, above all, Harsanyi, in the form of the so-called Impartial Observer Theorem. The paper offers an analytical reconstruction of this argument and a step-wise philosophical critique of its premisses. It eventually provides a new formal version of the theorem based on subjective probability.
We introduce a ranking of multidimensional alternatives, including uncertain prospects as a particular case, when these objects can be given a matrix form. This ranking is separable in terms of rows and columns, and continuous and monotonic in the basic quantities. Owing to the theory of additive separability developed here, we derive very precise numerical representations over a large class of domains (i.e., typically not of the Cartesian product form). We apply these representations to (1) streams of commodity baskets through time, (2) uncertain social prospects, (3) uncertain individual prospects. Concerning (1), we propose a finite horizon variant of Koopmans’s (1960) axiomatization of infinite discounted utility sums. The main results concern (2). We push the classic comparison between the ex ante and ex post social welfare criteria one step further by avoiding any expected utility assumptions, and as a consequence obtain what appears to be the strongest existing form of Harsanyi’s (1955) Aggregation Theorem. Concerning (3), we derive a subjective probability for Anscombe and Aumann’s (1963) finite case by merely assuming that there are two epistemically independent sources of uncertainty.
The paper surveys the currently available axiomatizations of common belief (CB) and common knowledge (CK) by means of modal propositional logics. (Throughout, knowledge, whether individual or common, is defined as true belief.) Section 1 introduces the formal method of axiomatization followed by epistemic logicians, especially the syntax-semantics distinction, and the notion of a soundness and completeness theorem. Section 2 explains the syntactical concepts, while briefly discussing their motivations. Two standard semantic constructions, Kripke structures and neighbourhood structures, are introduced in Sections 3 and 4, respectively. It is recalled that Aumann's partitional model of CK is a particular case of a definition in terms of Kripke structures. The paper also restates the well-known fact that Kripke structures can be regarded as particular cases of neighbourhood structures. Section 3 reviews the soundness and completeness theorems proved w.r.t. the former structures by Fagin, Halpern, Moses and Vardi, as well as related results by Lismont. Section 4 reviews the corresponding theorems derived w.r.t. the latter structures by Lismont and Mongin. A general conclusion of the paper is that the axiomatization of CB does not require as strong systems of individual belief as was originally thought; only monotonicity has thus far proved indispensable. Section 5 explains another consequence of general relevance: despite the "infinitary" nature of CB, the axiom systems of this paper admit of effective decision procedures, i.e., they are decidable in the logician's sense.
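Aumann's partitional model of CK, recalled in the abstract, admits a compact computational sketch: an event is common knowledge at a state iff it includes that state's cell in the meet (finest common coarsening) of the agents' partitions. The two partitions below are hypothetical.

```python
# Hypothetical two-agent information partitions over states 1..4.
P1 = [{1, 2}, {3, 4}]        # agent 1's information partition
P2 = [{1, 2, 3}, {4}]        # agent 2's information partition

def meet_cell(w, partitions):
    # Close the cell of w under every agent's partition until it stabilizes;
    # the result is w's cell in the meet of the partitions.
    cell = {w}
    changed = True
    while changed:
        changed = False
        for P in partitions:
            for block in P:
                if cell & block and not block <= cell:
                    cell |= block
                    changed = True
    return cell

# At state 1 the meet cell is the whole state space, so only the trivial
# event {1,2,3,4} is common knowledge there.
assert meet_cell(1, [P1, P2]) == {1, 2, 3, 4}
```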
The paper has a twofold aim. On the one hand, it provides what appears to be the first game-theoretic modeling of Napoleon’s last campaign, which ended dramatically on 18 June 1815 at Waterloo. It is specifically concerned with the decision Napoleon made on 17 June 1815 to detach part of his army against the Prussians he had defeated, though not destroyed, on 16 June at Ligny. Military historians agree that this decision was crucial but disagree about whether it was rational. Hypothesizing a zero-sum game between Napoleon and Blücher, and computing its solution, we show that it could have been a cautious strategy on the former's part to divide his army, a conclusion which runs counter to the charges of misjudgement commonly heard since Clausewitz. On the other hand, the paper addresses methodological issues. We defend the case study against the objections of irrelevance that have been raised elsewhere against “analytic narratives”, and conclude that military campaigns provide an opportunity for successful application of the formal theories of rational choice. Generalizing the argument, we finally investigate the conflict between narrative accounts – the historians' standard mode of expression – and mathematical modeling.
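As a purely illustrative sketch of the paper's method, not of its actual model, one can solve a toy zero-sum game in which dividing the army is the maximin (cautious) choice; the strategy labels and payoffs below are invented for the illustration.

```python
# Illustrative 2x2 zero-sum game with hypothetical payoffs.
# Rows: Napoleon's options; columns: the opponent's options.
# Entries: payoff to the row player (the column player gets the negative).
payoffs = {
    "divide army":   {"retreat east": 2, "join Wellington": 1},
    "keep together": {"retreat east": 3, "join Wellington": -2},
}

def maximin(payoffs):
    # The cautious (maximin) strategy maximizes the worst-case payoff.
    return max(payoffs, key=lambda row: min(payoffs[row].values()))

# Dividing guarantees at least 1; keeping the army together risks -2.
assert maximin(payoffs) == "divide army"
```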
The relations between rationality and optimization have been widely discussed in the wake of Herbert Simon's work, with the common conclusion that the rationality concept does not imply the optimization principle. The paper is partly concerned with adding evidence for this view, but its main, more challenging objective is to question the converse implication from optimization to rationality, which is accepted even by bounded rationality theorists. We discuss three topics in succession: (1) rationally defensible cyclical choices, (2) the revealed preference theory of optimization, and (3) the infinite regress of optimization. We conclude that (1) and (2) provide evidence only for the weak thesis that rationality does not imply optimization. But (3) is seen to deliver a significant argument for the strong thesis that optimization does not imply rationality.
The paper revisits the rationality principle from the particular perspective of the unity of social sciences. It has been argued that the principle was the unique law of the social sciences and that accordingly there are no deep differences between them (Popper). It has also been argued that the rationality principle was specific to economics as opposed to the other social sciences, especially sociology (Pareto). The paper rejects these opposite views on the grounds that the rationality principle is strictly metaphysical and does not have the logical force required to deliver interesting deductions. Explanation in the social sciences takes place at a level of specialization that is always higher than that of the principle itself. However, what is peculiar about economics is that it specializes the explanatory rational schemes to a degree unparalleled in history and sociology. As a consequence, there is a backward-and-forward move between specific and general formulations of rationality that takes place in economics and has no analogue in the other social sciences.
The article discusses Friedman's classic claim that economics can be based on irrealistic assumptions. It exploits Samuelson's distinction between two "F-twists" (that is, "it is an advantage for an economic theory to use irrealistic assumptions" vs "the more irrealistic the assumptions, the better the economic theory"), as well as Nagel's distinction between three philosophy-of-science construals of the basic claim. On examination, only one of Nagel's construals seems promising enough. It involves the neo-positivistic distinction between theoretical and non-theoretical ("observable") terms; so Friedman would in some sense argue for the major role of theoretical terms in economics. The paper uses a model-theoretic apparatus to refine the selected construal and check whether it can be made to support the claim. This inquiry leads to essentially negative results for both F-twists, and the final conclusion is that they are left unsupported.
In Richard Bradley’s book, Decision Theory with a Human Face, we have selected two themes for discussion. The first is the Bolker-Jeffrey theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley’s foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley’s conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
This article critically discusses the concept of economic rationality, arguing that it is too narrow and specific to encompass the full concept of practical rationality. Economic rationality is identified here with the use of the optimizing model of decision, as well as of the expected utility apparatus to deal with uncertainty. To argue that practical rationality is broader than economic rationality, the article claims that practical rationality includes bounded rationality as a particular case, and that bounded rationality cannot be reduced to economic rationality as defined here.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme - normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but this was not without major inefficiencies. Embedding explicit rationality considerations in experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
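The violation of expected utility at stake can be checked mechanically with the classic Allais lotteries (payoffs in millions): whatever the utility function u, EU(A) - EU(B) equals EU(C) - EU(D), so the commonly observed pattern "A over B but D over C" is inconsistent with EU. A minimal sketch:

```python
# The classic Allais choices. Under expected utility with any u,
# EU(A) - EU(B) = 0.11*u(1) - 0.10*u(5) - 0.01*u(0) = EU(C) - EU(D).
import math

def eu(lottery, u):
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1)]                        # 1M for sure
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

for u in (lambda x: x, math.sqrt, lambda x: math.log(1 + x)):
    # the two preference differences coincide, whatever u is
    assert abs((eu(A, u) - eu(B, u)) - (eu(C, u) - eu(D, u))) < 1e-12
```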
Stochastic independence has a complex status in probability theory. It is not part of the definition of a probability measure, but it is nonetheless an essential property for the mathematical development of this theory. Bayesian decision theorists such as Savage can be criticized for being silent about stochastic independence. From their current preference axioms, they can derive no more than the definitional properties of a probability measure. In a new framework of twofold uncertainty, we introduce preference axioms that entail not only these definitional properties, but also the stochastic independence of the two sources of uncertainty. This goes some way towards filling a curious lacuna in Bayesian decision theory.
Economists are accustomed to distinguishing between a positive and a normative component of their work, a distinction that is peculiar to their field, having no exact counterpart in the other social sciences. The distinction has substantially changed over time, and the different ways of understanding it today are reflective of its history. Our objective is to trace the origins and initial forms of the distinction, from the English classical political economy of the first half of the 19th century to the emergence of welfare economics in the first half of the 20th century. This sequential account will also serve to identify the main representative positions along with the arguments used to support them, and it thus prepares the ground for a discussion that will be less historical and more strictly conceptual.
We reexamine some of the classic problems connected with the use of cardinal utility functions in decision theory, and discuss Patrick Suppes's contributions to this field in light of a reinterpretation we propose for these problems. We analytically decompose the doctrine of ordinalism, which only accepts ordinal utility functions, and distinguish between several doctrines of cardinalism, depending on what components of ordinalism they specifically reject. We identify Suppes's doctrine with the major deviation from ordinalism that conceives of utility functions as representing preference differences, while being nonetheless empirically related to choices. We highlight the originality, promises and limits of this choice-based cardinalism.
The paper discusses the sense in which the changes undergone by normative economics in the twentieth century can be said to be progressive. A simple criterion is proposed to decide whether a sequence of normative theories is progressive. This criterion is put to use on the historical transition from the new welfare economics to social choice theory. The paper reconstructs this classic case, and eventually concludes that the latter theory was progressive compared with the former. It also briefly comments on the recent developments in normative economics and their connection with the previous two stages.
This paper is concerned with representations of belief by means of nonadditive probabilities of the Dempster-Shafer (DS) type. After surveying some foundational issues and results in the DS theory, including Suppes's related contributions, the paper proceeds to analyze the connection of the DS theory with some of the work currently pursued in epistemic logic. A preliminary investigation of the modal logic of belief functions à la Shafer is made. There it is shown that the Alchourrón-Gärdenfors-Makinson (AGM) logic of belief change is closely related to the DS theory. The final section compares the critique of Bayesianism which underlies the present paper with some important objections raised by Suppes against this doctrine.
This chapter briefly reviews the present state of judgment aggregation theory and tentatively suggests a future direction for that theory. In the review, we start by emphasizing the difference between the doctrinal paradox and the discursive dilemma, two idealized examples which classically serve to motivate the theory, and then proceed to reconstruct it as a brand of logical theory, unlike in some other interpretations, using a single impossibility theorem as a key to its technical development. In the prospective part, having mentioned existing applications to social choice theory and computer science, which we do not discuss here, we consider a potential application to law and economics. This would be based on a deeper exploration of the doctrinal paradox and its relevance to the functioning of collegiate courts. On this topic, legal theorists have provided empirical observations and theoretical hints that judgment aggregation theorists would be in a position to clarify and further elaborate. As a general message, the chapter means to suggest that the future of judgment aggregation theory lies with its applications rather than its internal theoretical development.
This is a chapter of a collective volume on Rawls's and Harsanyi's theories of distributive justice. It focuses on Harsanyi's important Social Aggregation Theorem and technically reconstructs it as a theorem in welfarist social choice.
This note aims at critically assessing a little-noticed proposal made by Popper in the second edition of "Objective Knowledge" to the effect that verisimilitude of scientific theories should be made relative to the problems they deal with. Using a simple propositional calculus formalism, it is shown that the "relativized" definition fails for the very same reason why Popper's original concept of verisimilitude collapsed -- only if one of two theories is true can they be compared in terms of the suggested definition of verisimilitude.
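The collapse result invoked in the note can be verified by brute force in a two-atom propositional language, identifying sentences with their sets of models. This is a sketch of the standard Tichý-Miller difficulty for Popper's original definition, not of the relativized definition itself.

```python
# Brute-force check: for two FALSE theories, gaining truth content
# also means gaining falsity content, so neither is closer to the truth
# on Popper's comparative definition.
from itertools import chain, combinations

worlds = [(p, q) for p in (True, False) for q in (True, False)]
actual = (True, True)

def all_sentences():
    ws = list(worlds)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(ws, r) for r in range(len(ws) + 1))]

def consequences(theory_models):
    # phi is a consequence of A iff models(A) is a subset of models(phi)
    return {s for s in all_sentences() if theory_models <= s}

def truth_content(theory_models):
    return {s for s in consequences(theory_models) if actual in s}

def falsity_content(theory_models):
    return {s for s in consequences(theory_models) if actual not in s}

# Two false theories (neither admits the actual world):
A = frozenset({(False, True)})                  # says: not-p and q
B = frozenset({(False, True), (False, False)})  # says: not-p

# A strictly extends B's truth content...
assert truth_content(A) >= truth_content(B)
# ...but its falsity content strictly grows too, blocking the comparison.
assert falsity_content(A) > falsity_content(B)
```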
Judgment aggregation theory generalizes social choice theory by having the aggregation rule bear on judgments of all kinds instead of merely judgments of preference. The theory derives from Kornhauser and Sager’s doctrinal paradox and Pettit’s discursive dilemma, which List and Pettit turned into an impossibility theorem – the first of a long list to come. After mentioning this formative stage, the paper restates what is now regarded as the “canonical theorem” of judgment aggregation theory. The last part of the paper discusses how judgment aggregation theory connects with social choice theory and can contribute to it; it singles out two representative applications, one to Arrow’s impossibility theorem and the other to the group identification problem.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counterexamples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counterexamples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed "principles of individuation"; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out of the paradoxes of decision theory.
The expression "analytic narratives" is used to refer to a range of quite recent studies that lie on the boundaries between history, political science, and economics. These studies purport to explain specific historical events by combining the usual narrative approach of historians with the analytic tools that economists and political scientists draw from formal rational choice theories. Game theory, especially of the extensive form version, is currently prominent among these tools, but there is nothing inevitable about such a technical choice. The chapter explains what analytic narratives are by reviewing the studies of the major book Analytic Narratives (1998), which are concerned with the workings of political institutions broadly speaking, as well as several cases drawn from military and security studies, which form an independent source of the analytic narratives literature. At the same time as it gradually develops a definition of analytic narratives, the chapter investigates how they fulfil one of their main purposes, which is to provide explanations of a better standing than those of traditional history. An important principle that will emerge in the course of the discussion is that narration is called upon not only to provide facts and problems, but also to contribute to the explanation itself. The chapter distinguishes between several expository schemes of analytic narratives according to the way they implement this principle. From all the arguments developed here, it seems clear that the current applications of analytic narratives do not exhaust their potential, and in particular that they deserve the attention of economic historians, if only because they are concerned with microeconomic interactions that are not currently their focus of attention.
Élie Halévy (1870-1937), philosopher and historian of ideas, was a professor at the École libre des sciences politiques, the forerunner of today's Sciences Po. Like his other major work, the Histoire du peuple anglais au XIXe siècle, published in six volumes from 1913 to 1932, the three volumes of La formation du radicalisme philosophique, the first two published in 1901 and the third in 1904, partly reflect his teaching on British history at the École libre. The first volume, La jeunesse de Bentham 1776-1789, studies the utilitarian doctrine not only in the man regarded as its principal founder, Jeremy Bentham, but also in the many authors who, in Great Britain and on the Continent, sketched its outlines before him. The second volume, L'évolution de la doctrine utilitaire de 1789 à 1815, shows how utilitarianism took the form not only of a school of thought but also of a movement for economic, social and political reform. This evolution was marked notably by the new coordinating role of James Mill, as well as by a convergence of views with the economists, who were pushing in the direction of reform. The third volume, Le radicalisme philosophique, continues the study of the school's transformation into a movement after the end of the Napoleonic Wars, when it began to score its first great reformist successes. Bentham, James Mill and the other utilitarian thinkers were then brought together under the name of philosophic radicals. The work's temporal endpoint is the Reform Act of 1832, the first step toward the modernization of the electoral system, which the group's propaganda contributed not a little to bringing about. Although Halévy's work is valuable first of all for the immense learning it displays, and for the number and excellence of the quotations it offers, it also contains original historical and philosophical theses.
Among the former one may cite the thesis, linking the three volumes, that British utilitarianism found its completed form in intervention upon society, when it turned into philosophical radicalism; and among the latter, the thesis, stated at the beginning of the first volume, that there exist three dominant models of the junction of individual interests (sympathetic fusion, natural identification, and artificial identification). Another major thesis, at once historical and philosophical, holds in substance that classical political economy is a specialized department of utilitarian thought. The question of how far Smith, Ricardo and Malthus may have adhered to Bentham's "principle of utility" is still debated. Together with the elucidations the work brings to this principle, that question helps explain the continuing interest that historians of economic thought take in it. The author took part in the 1995 re-edition of La formation du radicalisme philosophique by the Presses Universitaires de France (P.U.F.), following a collective project launched by Monique Canto-Sperber. In the present article, which predates that re-edition, the author attempted briefly to summarize a book that remains irreplaceable despite a somewhat dated conception and style.
A review of A. Hirsch and N. de Marchi's thorough historical study of Milton Friedman's lifelong work as an economist (more specifically, as a monetary economist) and as an economic methodologist (in his famous essay "The Methodology of Positive Economics").
A comment on Paul Schoemaker's target article in Behavioral and Brain Sciences, 14 (1991), p. 205-215, "The Quest for Optimality: A Positive Heuristic of Science?" (https://doi.org/10.1017/S0140525X00066140). This comment argues that the optimizing model of decision leads to an infinite regress, once internal costs of decision (i.e., information and computation costs) are duly taken into account.
The paper applies confirmation theory to a famous statement of economics, the law of demand, which says that, ceteris paribus, prices and quantities demanded change in opposite directions. Today's economists do not accept the law unless definite restrictions hold, and they have shown little interest in deciding whether these restrictions are satisfied empirically. However, Hildenbrand (1994) has provided a new derivation of the law of aggregate demand and used this theoretical advance to devise a test that may be the first rigorous one ever performed on the law. The paper accounts for Hildenbrand's and, in less detail, his predecessors' contributions within the philosophical framework of Hempel (1965) and Glymour (1980). Its salient result is that economists have accepted the "consequence condition" and rejected the "converse consequence condition", and have thus implicitly adhered to a Hempelian-Glymourian view of confirmation and testability.
This article attempts to assess Jon Elster's contribution to rational choice theory in Ulysses and the Sirens and Sour Grapes. After reviewing Elster's analysis of functional versus intentional explanations, the essay moves on to the crucial distinction between the thin and broad theories of rationality. The former elaborates on the traditional economist's preference/feasible-set apparatus; the latter is the more demanding theory, which inquires into the rationality of beliefs and preferences. Elster's approach to the broad theory normally consists in using the thin theory as a reference point and making purposefully limited departures from it. The essay illustrates this method while commenting on Elster's discussion of autonomous preferences in Sour Grapes. It goes on to stress some important analogies between Elster's use of the thin and broad theories, on the one hand, and Weber's ideal-typical method, on the other. The final assessment is phrased in terms of these analogies; it is suggested that Elster is at his best when his own method and the ideal-typical one come apart, that is, when he comes to grips with the broad theory in its own terms.
A rejoinder to the commentators on the paper by P. Mongin, "Le réalisme des hypothèses et la 'Partial Interpretation View'", Philosophy of the Social Sciences, 18, 1988, p. 281-325. (That paper is listed and made available separately on PhilPapers.)
The paper extends a result in the theory of constrained egalitarianism initiated by Dutta and Ray (1989), relying on the concept of proportionate rather than absolute equality. We apply this framework to redistributive systems in which what the individuals get depends on what they receive or pay qua members of generally overlapping groups. We solve the constrained equalization problem for this class of models. The paper ends by comparing our solution with the alternative solution based on the Shapley value, which has been recommended in some distributive applications.
An introduction to the special issue on epistemic logic and the foundations of game theory edited by Michael Bacharach and Philippe Mongin. Contributors are Michael Bacharach, Robert Stalnaker, Salvatore Modica and Aldo Rustichini, Luc Lismont and Philippe Mongin, and Hyun-Song Shin and Timothy Williamson.
From the comparison of the Grundrisse manuscripts (1857-58) with Marx's subsequent writings, it is clear that the so-called "deduction" of fundamental economic categories follows two distinct patterns, one of which is close to ordinary logical analysis, while the other is inspired by Hegel's dialectic of essence. This duality is reflected in the double meaning of the concept of "presupposition" (Voraussetzung) and, finally, in the Grundrisse's simultaneous endorsement of two labour-value theories, one Smithian-like, the other Ricardian. Marx's reinterpretation of economic value as an "immanent measure", i.e., his claim that commodities are measured by each other when exchange takes place, should help to bridge the gap between the two theories. However, this reinterpretation is shown to be inadequate; as a result, Marx's account of value should be seen as internally inconsistent.
Major risks are understood to be those attaching to events whose adverse consequences, for humanity or for the environment, are of exceptional gravity. One should add neither that such events are of extreme physical intensity nor that they occur rarely, for this is not always the case. Only major risks of a civil nature are considered in this book, and more restrictively still: natural risks, such as flooding and marine submersion, illustrated by storm Xynthia in 2010; industrial technological risks, such as a factory explosion, illustrated by the AZF disaster in 2001; and nuclear risks, insofar as they involve the dangers of radioactivity, illustrated outside France by the disasters of Three Mile Island (1979), Chernobyl (1986) and Fukushima (2011).

Conceived as a report for the Conseil d'analyse économique (CAE) of the French Prime Minister, the book opens with an introduction dealing with major risk in general (section 1). In its main part, it examines the three selected major risks through the successive prisms of geography and technology (section 2), institutional and legal history (section 3), and finally a normative assessment accompanied by recommendations for public action (section 4).

Nine specialized supplements by other authors complete the book. They deal with technological aspects (F. Ménage; A. Quantin and D. Moucoulon; P. Saint-Raymond), legal aspects (V. Sanseverino-Godfrin), general economic aspects (C. Grislain-Letrémy and B. Villeneuve; R. Lahidji; J. Percebois; A. Schmitt and S. Spaeter) and financial aspects (M. Pappalardo), and they mainly concern nuclear risk.
Within this wide-ranging whole, the hope is to have offered not only assessments and recommendations addressed to the public authorities, as befitted the original report project, but also a synthesis usable by all those (decision-makers, scientists, or simple observers) who are concerned with questions of major risk.
This doctoral thesis was prepared in 1975-77 at the Ecole des Hautes Etudes en Sciences Sociales, Paris, under the supervision of Prof. Raymond Aron. It was submitted in 1977 in fulfilment of the requirements for a Ph.D. degree in social sciences (Doctorat de 3e cycle en sciences sociales). The oral examination (soutenance de thèse) was held in January 1978, with an examination committee consisting of Profs. Aron, Bartoli, Boudon and Brochier. This 250-page unpublished dissertation was the first study ever written in French on Karl Marx's Grundrisse, a 1857-58 manuscript preparatory to Marx's main economic work, Capital. The dissertation reviews Marx's successive projects for his economic work since the 1844 Manuscripts and then proceeds to a presentation and critical discussion of the Grundrisse. The proposed interpretation explains the linkage that Marx effected in 1857-58 between Ricardo's economics and Hegel's dialectics, and it emphasizes that Marx was at that time primarily trying to reconstruct the dynamics of capitalism, without going so far as to develop a formal theory of value and exploitation, as he would eventually do in Capital.
This thought-provoking book discusses the concept of progress in economics and investigates whether any advance has been made in its different spheres of research. The authors look back at the history, successes and failures of their respective fields and thoroughly examine the notion of progress from an epistemological and methodological perspective. The idea of progress is particularly significant as the authors regard it as an essentially contested concept which can be defined in many ways - theoretically or empirically; locally or globally; or as encouraging or impeding the existence of other research traditions. The authors discuss the idea that for progress to make any sense there must be an accumulation of knowledge built up over time rather than the replacement of ideas by each successive generation. Accordingly, they are not concerned with estimating the price of progress, reminiscing in the past, or assessing what has been lost. Instead they apply the complex mechanisms and machinery of the discipline to sub-fields such as normative economics, monetary economics, trade and location theory, Austrian economics and classical economics to critically assess whether progress has been made in these areas of research.

Bringing together authoritative and wide-ranging contributions by leading scholars, this book will challenge and engage those interested in philosophy, economic methodology and the history of economic thought. It will also appeal to economists in general who are interested in the advancement of their profession.
This French article analyzes the Ricardian problem of an "invariable standard of value" in Ricardo's own terms. It is argued that Ricardo's commentators and modern followers have changed these terms significantly. The problem actually branches into two subproblems, that of "invariability" strictly speaking, and that of "neutrality with respect to distribution". These subproblems do not matter to Ricardo to the same extent: he regards the latter (in the various formulations recapitulated here) as a complication of the former, which is the crucial one in his search for a "good" standard. This exemplifies precisely how Ricardo could theoretically focus on the production side of the economy at the expense of the distribution side. With these conclusions at hand, the paper can be critical of Marx's and Sraffa's interpretations of the Ricardian problem of the standard: Marx's because it is simply incorrect, and Sraffa's because it solves a problem unrelated to the original one in Ricardo.