We formalise a notion of dynamic rationality in terms of a logic of conditional beliefs on (doxastic) plausibility models. Similarly to other epistemic statements (e.g. negations of Moore sentences and of Muddy Children announcements), dynamic rationality changes its meaning after every act of learning, and it may become true after players learn it is false. Applying this to extensive games, we “simulate” the play of a game as a succession of dynamic updates of the original plausibility model: the epistemic situation when a given node is reached can be thought of as the result of a joint act of learning (via public announcements) that the node is reached. We then use the notion of “stable belief”, i.e. belief that is preserved during the play of the game, in order to give an epistemic condition for backward induction: rationality and common knowledge of stable belief in rationality. This condition is weaker than Aumann’s and compatible with the implicit assumptions (the “epistemic openness of the future”) underlying Stalnaker’s criticism of Aumann’s proof. The “dynamic” nature of our concept of rationality explains why our condition avoids the apparent circularity of the “backward-induction paradox”: it is consistent to (continue to) believe in a player’s rationality after updating with his irrationality.
We conceive of a player in dynamic games as a set of agents, which are assigned the distinct tasks of reasoning and node-specific choices. The notion of agent connectedness, measuring the sequential stability of a player over time, is then modeled in an extended type-based epistemic framework. Moreover, we provide an epistemic foundation for backward induction in terms of agent connectedness. Finally, it is argued that the epistemic independence assumption underlying backward induction is stronger than usually presumed.
According to the so-called “Folk Theorem” for repeated games, stable cooperative relations can be sustained in a Prisoner’s Dilemma if the game is repeated an indefinite number of times. This result depends on the possibility of applying strategies that are based on reciprocity, i.e., strategies that reward cooperation with subsequent cooperation and punish defection with subsequent defection. If future interactions are sufficiently important, i.e., if the discount rate is relatively small, each agent may be motivated to cooperate by fear of retaliation in the future. For finite games, however, where the number of plays is known beforehand, there is a backward-induction argument showing that rational agents will not be able to achieve cooperation. On behalf of the Hobbesian “Foole”, who cannot see any advantage in cooperation, Gregory Kavka (1983, 1986) has presented an argument that significantly extends the range of the backward-induction argument. He shows that, for the backward-induction argument to be effective, it is not necessary that the precise number of future interactions be known. It is sufficient that there is a known definite upper bound on the number of interactions. A similar argument is developed by John W. Carroll (1987). We will here question the assumption of a known upper bound. When the assumption is made precise in the way needed for the argument to go through, its apparent plausibility evaporates. We then offer a reformulation of the argument, based on weaker, and more plausible, assumptions.
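The role of the discount rate in this argument can be made concrete with a short sketch. The payoff values and the grim-trigger (permanent-retaliation) strategy below are illustrative assumptions introduced here, not taken from the abstract:

```python
# When is grim-trigger cooperation stable in an indefinitely repeated
# Prisoner's Dilemma?  Illustrative payoffs with T > R > P > S:
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker

def cooperation_is_stable(delta):
    """Cooperating forever (R each round, discounted by delta) beats
    defecting once and being punished forever after (T now, then P)."""
    forever_cooperate = R / (1 - delta)
    defect_and_be_punished = T + delta * P / (1 - delta)
    return forever_cooperate >= defect_and_be_punished

# Rearranging gives the threshold delta >= (T - R) / (T - P).
threshold = (T - R) / (T - P)
print(threshold)                    # 0.5 for these payoffs
print(cooperation_is_stable(0.9))   # True: the future matters enough
print(cooperation_is_stable(0.3))   # False: retaliation is too cheap
```

The backward-induction argument discussed in the abstract takes hold precisely when this indefinite horizon is replaced by a known final round (or, per Kavka, a known upper bound): the punishment term disappears in the last round, and the cooperative condition unravels from the end.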
According to a standard objection to the use of backward induction in extensive-form games with perfect information, backward induction (BI) can only work if the players are confident that each player is resiliently rational - disposed to act rationally at each possible node that the game can reach, even at the nodes that will certainly never be reached in actual play - and also confident that these beliefs in the players’ future resilient rationality are robust, i.e. that they would be kept come what may, whatever evidence of irrationality would by then transpire concerning past performance of the players. Since both resiliency and robustness assumptions are extremely strong and their appropriateness as idealizations is quite problematic, it has been argued (by Binmore, Reny, Bicchieri, Pettit and Sugden, among others) that BI is an indefensible procedure. Therefore, we need not be worried that BI can be used to justify seemingly counter-intuitive game solutions. I show, however, that there is a restricted class of extensive-form games in which BI solutions can be defended without assuming resiliency or robustness. For these “BI-terminating games” (= games in which BI moves always terminate the play, at each choice node), to defend BI solutions it is enough to make confidence-in-rationality assumptions concerning actual play; stipulations about various counterfactual developments are unnecessary. For this class of games, then, the standard objection to BI is inapplicable. At the same time, however, it will transpire that the class in question contains some well-known games, such as the Centipede in its different versions, in which BI recommends a seemingly unreasonable behaviour.
The backward-induction argument purports to show that rational and suitably informed players will defect throughout a finite sequence of prisoner's dilemmas. It is supposed to be a useful argument for predicting how rational players will behave in a variety of interesting decision situations. Here, I lay out a set of assumptions defining a class of finite sequences of prisoner's dilemmas. Given these assumptions, I suggest how it might appear that backward induction succeeds and why it is actually fallacious. Then, I go on to consider the consequences of adopting a stronger set of assumptions. Focusing my attention on stronger sets that, like the original, obey the informedness condition, I show that any supplementation of the original set that preserves informedness does so at the expense of forcing rational participants in prisoner's dilemma situations to have unexpected beliefs, ones that threaten the usefulness of backward induction.
The problem of finding sufficient doxastic conditions for backward induction in games of perfect information is analyzed in a syntactic framework with subjunctive conditionals. This allows us to describe the structure of the game by a logical formula and consequently to treat beliefs about this structure in the same way as beliefs about rationality. A backward-induction result and a non-Nash-equilibrium result, based on higher-level belief in rationality and the structure of the game, are derived.
A cornerstone of game theory is backward induction, whereby players reason backward from the end of a game in extensive form to the beginning in order to determine what choices are rational at each stage of play. Truels, or three-person duels, are used to illustrate how the outcome can depend on (1) the evenness/oddness of the number of rounds (the parity problem) and (2) uncertainty about the endpoint of the game (the uncertainty problem). Since there is no known endpoint in the latter case, an extension of the idea of backward induction is used to determine the possible outcomes. The parity problem highlights the lack of robustness of backward induction, but it poses no conflict between foundational principles. On the other hand, two conflicting views of the future underlie the uncertainty problem, depending on whether the number of rounds is bounded (the players invariably shoot from the start) or unbounded (they may all cooperate and never shoot, despite the fact that the truel will end with certainty and therefore be effectively bounded). Some real-life examples, in which destructive behavior sometimes occurred and sometimes did not, are used to illustrate these differences, and some ethical implications of the analysis are discussed.
The traditional form of the backward-induction argument, which concludes that two initially rational agents would always defect, relies on the assumption that they believe they will be rational in later rounds. Philip Pettit and Robert Sugden have argued, however, that this assumption is unjustified. The purpose of this paper is to reconstruct the argument without using this assumption. The formulation offered concludes that two initially rational agents would decide to always defect, and relies only on the weaker assumption that they do not believe they will not be rational in later rounds. The argument employs the idea that decisions justify revocable presumptions about behaviour.
Two justifications of backward induction (BI) in generic perfect information games are formulated using Bonanno's (1992; Theory and Decision 33, 153) belief systems. The first justification concerns the BI strategy profile and is based on selecting a set of rational belief systems from which players have to choose their belief functions. The second justification concerns the BI path of play and is based on a sequential deletion of nodes that are inconsistent with the choice of rational belief functions.
The standard backward-induction reasoning in a game like the centipede assumes that the players maintain a common belief in rationality throughout the game. But that is a dubious assumption. Suppose the first player X didn't terminate the game in the first round; what would the second player Y think then? Since the backward-induction argument says X should terminate the game, and it is supposed to be a sound argument, Y might be entitled to doubt X's rationality. Alternatively, Y might doubt that X believes Y is rational, or that X believes Y believes X is rational, or Y might have some higher-order doubt. X's deviant first move might therefore cause a breakdown in common belief in rationality. Once that goes, the entire argument fails. The argument also assumes that the players act rationally at each stage of the game, even if this stage could not be reached by rational play. But it is also dubious to assume that past irrationality never exerts a corrupting influence on present play. However, the backward-induction argument can be reconstructed for the centipede game on a more secure basis. It may be implausible to assume a common belief in rationality throughout the game, however the game might go, but the argument requires less than this. The standard idealisations in game theory certainly allow us to assume a common belief in rationality at the beginning of the game. They also allow us to assume this common belief persists so long as no one makes an irrational move. That is enough for the argument to go through.
Robert Aumann argues that common knowledge of rationality implies backward induction in finite games of perfect information. I have argued that it does not. A literature now exists in which various formal arguments are offered in support of both positions. This paper argues that Aumann's claim can be justified if knowledge is suitably reinterpreted.
According to decision theory, the rational initial action in a sequential decision problem may be found by backward induction or folding back. But the reasoning which underwrites this claim appeals to the agent's beliefs about what she will later believe, about what she will later believe she will still later believe, and so forth. There are limits to the depth of people's beliefs. Do these limits pose a threat to the standard theory of rational sequential choice? It is argued, first, that the traditional solutions of certain games depend on knowledge which exceeds depth limits, and that these solutions therefore cannot be shown rational in the usual sense. Then, for a related reason, even folding-back solutions of one-person problems cannot be shown rational. A revision of our notion of rational choice is proposed, analogous to the reliabilist account of knowledge of Goldman and others, by which this paradox is resolved.
According to a familiar argument, iterated prisoner's dilemmas of known finite lengths resolve for ideally rational and well-informed players: They would defect in the last round, anticipate this in the next to last round and so defect in it, and so on. But would they anticipate defections even if they had been cooperating? Not necessarily, say recent critics. These critics "lose" the backward-induction paradox by imposing indicative interpretations on rationality and information conditions. To regain it I propose subjunctive interpretations. To solve it I stress that implications for ordinary imperfect players are limited.
This paper uses the Centipede Game to criticize formal arguments that have recently been offered for and against backward induction as a rationality principle. It is argued that the crucial issues concerning the interpretation of counterfactuals depend on contextual questions that are abstracted away in current formalisms. "I have a text, it always is the same, And always has been, Since I learnt the game." Chaucer, The Pardoner's Tale.
The logical foundations of game-theoretic solution concepts have so far been explored within the confines of epistemic logic. In this paper we turn to a different branch of modal logic, namely temporal logic, and propose to view the solution of a game as a complete prediction about future play. The branching time framework is extended by adding agents and by defining the notion of prediction. A syntactic characterization of backward induction in terms of the property of internal consistency of prediction is given.
A large class of games is that of non-cooperative, extensive form games of perfect information. When the length of these games is finite, the method used to reach a solution is that of backward induction. Working from the terminal nodes, dominated strategies are successively deleted and what remains is a unique equilibrium. Game theorists have generally assumed that the informational requirement needed to solve these games is that the players have common knowledge of rationality. This assumption, however, has given rise to several problems and paradoxes. Most notably, it has been shown that the common knowledge assumption makes the theory of the game inconsistent at some information set. The present paper shows that a) no common knowledge of rationality need be assumed for the backward-induction solution to hold. Rather, it is sufficient that the players have a number of levels of knowledge proportional to the length of the game, and b) it is also necessary that the number of levels of knowledge is finite and proportional to the length of the game. For a higher number of levels of knowledge, inconsistencies arise.
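The procedure described here, working back from the terminal nodes and deleting dominated moves, can be sketched in a few lines. The tree encoding and the example payoffs are illustrative assumptions, not drawn from any of the papers above:

```python
# Backward induction on a finite perfect-information game tree.
# A node is either a terminal payoff tuple, e.g. (1, 0), or a pair
# (player, children) where children is a list of successor nodes.

def backward_induce(node):
    """Return (payoffs, path) of the backward-induction solution,
    where path is the list of move indices taken from this node."""
    player, rest = node
    if not isinstance(rest, list):
        return node, []                      # terminal payoff vector
    best = None
    for i, child in enumerate(rest):
        payoffs, path = backward_induce(child)
        # the mover keeps the child that maximizes their own payoff
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [i] + path)
    return best

# A short centipede: players 0 and 1 alternate; move 0 ("down") ends
# the game, move 1 ("across") passes play on with growing stakes.
centipede = (0, [(1, 0),
                 (1, [(0, 2),
                      (0, [(3, 1),
                           (2, 4)])])])
print(backward_induce(centipede))   # ((1, 0), [0]): stop at once
```

The unravelling is visible in the recursion: at the last node player 0 takes, so player 1 pre-empts at the middle node, so player 0 terminates immediately, which is exactly the counter-intuitive solution the surrounding abstracts debate.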
Backward induction has been the standard method of solving finite extensive-form games with perfect information, notwithstanding the fact that this procedure leads to counter-intuitive results in various games (iterated prisoner's dilemma, centipede, chain store, etc.). However, beginning in the late eighties, the method of backward induction became an object of criticism. It is claimed (most notably, by Reny 1988, 1989, Binmore 1987, Bicchieri 1989, and Pettit & Sugden 1989) that the assumptions needed for its defence are quite implausible, if not incoherent. It is therefore natural to ask for the justification of backward induction: Can one show that rational players who know the structure of the game, have trust in each other's practical rationality and reason correctly, will act in accordance with backward induction? Several researchers have tried a justification of this kind, but the argument presented in Robert Aumann's paper from 1995 is perhaps the most well-known and influential attempt to provide such a justification. Clausing (1999) provides a sustained discussion of the justification problem for backward induction. It is an excellent work and the criticism I will present below does not detract from this evaluation: the issues discussed by the author are complex and it is difficult to get everything right. Furthermore, I hope that the criticism to be presented may be instructive: even though it has not been Clausing's intention, his logical reconstruction of Aumann's defence of backward induction allows us to see very clearly what is wrong with that argument.
We analyze the sequential structure of dynamic games with perfect information. A three-stage account is proposed that specifies setup, reasoning and play stages. Accordingly, we define a player as a set of agents corresponding to these three stages. The notion of agent connectedness is introduced into a type-based epistemic model. Agent connectedness measures the extent to which agents' choices are sequentially stable. Describing dynamic games in this way allows us to understand strategic interaction over time more fully. In particular, we provide sufficient conditions for backward induction in terms of agent connectedness. Also, our framework makes explicit that the epistemic independence assumption involved in backward-induction reasoning is stronger than usually presumed, and makes accessible multiple-self interpretations for dynamic games.
We develop a logical system that captures two different interpretations of what extensive games model, and we apply this to a long-standing debate in game theory between those who defend the claim that common knowledge of rationality leads to backward induction or subgame perfect (Nash) equilibria and those who reject this claim. We show that a defense of the claim à la Aumann (1995) rests on a conception of extensive game playing as a one-shot event in combination with a principle of rationality that is incompatible with it, while a rejection of the claim à la Reny (1988) assumes a temporally extended, many-moment interpretation of extensive games in combination with implausible belief revision policies. In addition, the logical system provides an original inductive and implicit axiomatization of rationality in extensive games based on relations of dominance rather than the usual direct axiomatization of rationality as maximization of expected utility.
In evolutionary models of indirect reciprocity, reputation mechanisms can stabilize cooperation even in severe cooperation problems like the prisoner’s dilemma. Under certain circumstances, conditionally cooperative strategies, which cooperate iff their partner has a good reputation, cannot be invaded by any other strategy that conditions behavior only on own and partner reputation. The first point of this paper is to show that an evolutionary version of backward induction can lead to a breakdown of this kind of indirectly reciprocal cooperation. Backward induction, however, requires strategies that count and then cease to cooperate in the last, last but one, last but two, … game they play. These strategies are unlikely to exist in natural settings. We then present two new findings. (1) Surprisingly, the same kind of breakdown is also possible without counting. Strategies using rare golden opportunities for defection can invade conditional cooperators. This can create further golden opportunities, inviting the next wave of opportunists, and so on, until cooperation breaks down completely. (2) Cooperation can be stabilized against these opportunists by letting an individual’s initial reputation be inherited from that individual’s parent. This ‘inclusive reputation’ mechanism can cope with any observably opportunistic strategy. Offspring of opportunists who successfully exploited a conditional cooperator cannot repeat their parents’ success because they inherit a bad reputation, which forewarns conditional cooperators in later generations.
In the standard money pump, an agent with cyclical preferences can avoid exploitation if he shows foresight and solves his sequential decision problem using backward induction (BI). This way out is foreclosed in a modified money pump, which has been presented in Rabinowicz (2000). There, BI will lead the agent to behave in a self-defeating way. The present paper describes another sequential decision problem of this kind, the Centipede for an Intransitive Preferrer, which in some respects is even more striking than the modified pump. In the new problem, the BI reasoning that implies self-defeating behavior does not rest on the controversial robustness assumption concerning beliefs in one's future rationality. This strengthens the claim that foresight cannot save the intransitive preferrer from a self-defeating course of action.
If game theory is to be used as a negotiation support tool, it should be able to provide unambiguous recommendations for a target to aim at and for actions to reach this target. This need cannot be satisfied with the Nash equilibrium concept, based on the standard instrumental concept of rationality. These equilibria, as is well known, are generally multiple in a game. The concept of substantive or instrumental rationality has proved to be so pregnant, however, that researchers, instead of re-evaluating its use in game theory, have simply tried to design concepts related to the Nash equilibrium, but with the property of being unique in a game, i.e., they have devised ways of selecting among Nash equilibria. These concepts have been labeled refined Nash equilibria. The purpose of this paper is to show the following. The different types of refined Nash equilibria, based on the principle of backward induction, can lead to severe contradictions within the framework itself. This makes these concepts utterly unsatisfactory and calls for a new appraisal of the reasoning process of the players. The degree of confidence in the principle of backward induction depends upon the evaluation of potential deviations with respect to the extended Nash equilibrium concept used, and upon the possible interpretations of such deviations by the different players. Our goal is to show that the nature of these possible interpretations reinforces the argument that a serious conceptual reappraisal is necessary. Some form of forward induction should then become the real yardstick of rationality, extending Simonian procedural rationality towards the concept of cognitive rationality. This could open the way to a renewed game-theoretic approach to negotiation support systems. Such a research program, which would be a revision of the basic game-theoretic concepts, is dealt with at the end of the paper.
The article provides an evolutionary analysis of a finitely iterated Prisoner's Dilemma. The backward-induction reasoning for a breakdown of cooperation in this game is transformed to an evolutionary degradation effect. After the introduction of random variations in the strategies' population size, however, cyclical variations of cooperativeness may appear. A breakdown of cooperation is no longer inevitable. An analysis for all possible payoff relations in Prisoner's Dilemma matrices shows that only four qualitatively different dynamical flows can emerge.
Many philosophers and game theorists have been struck by the thought that the backward-induction argument (BIA) for the finite iterated prisoner’s dilemma (FIPD) recommends a course of action which is grossly counter-intuitive and certainly contrary to the way in which people behave in real-life FIPD-situations (Luce and Raiffa 1957, Pettit and Sugden 1989, Bovens 1997). Yet the backward-induction argument puts itself forward as binding upon rational agents. What are we to conclude from this? Is it that people in real-life FIPD-situations tend to act irrationally and that our own intuitions about what to do in such situations reveal us to be irrational? Alternatively, should we abandon game theory and decision theory as a guide to rationality? Or are there other ways in which the apparent disparity between the dictates of rationality and the reality of reasoning can be accommodated?
An agent whose preferences violate the Independence Axiom or for some other reason are not representable by an expected utility function can avoid 'dynamic inconsistency' either by foresight ('sophisticated choice') or by subsequent adjustment of preferences to the chosen plan of action ('resolute choice'). Contrary to McClennen and Machina, among others, it is argued that these two seemingly conflicting approaches to 'dynamic rationality' need not be incompatible. 'Wise choice' reconciles foresight with a possibility of preference adjustment by rejecting the two assumptions that create the conflict: Separability of Preferences in the case of sophisticated choice and Reduction to Normal Form in the case of resolute choice.
Seidenfeld (Seidenfeld, T. [1988a], Decision theory without 'Independence' or without 'Ordering', Economics and Philosophy 4: 267-290) gave an argument for Independence based on a supposition that admissibility of a sequential option is preserved under substitution of indifferents at choice nodes (S). To avoid a natural complaint that (S) begs the question against a critic of Independence, he provided an independent proof of (S) in his (Seidenfeld, T. [1988b], Rejoinder [to Hammond and McClennen], Economics and Philosophy 4: 309-315). In reply to my (Rabinowicz, W., To have one's cake and eat it too: Sequential choice and expected-utility violations, The Journal of Philosophy 92: 586-620), in which I argue that the proof is invalid, Seidenfeld (Seidenfeld, T., Substitution of indifferent options at choice nodes and admissibility: A reply to Rabinowicz, Theory and Decision 48: 305-310, this issue) submits that I fail to give due consideration to one of the underlying assumptions of his derivation: it is meant to apply only to those cases in which the agent's preferences are stable throughout the sequential decision process. The purpose of this note is to clarify the notion of preference stability so as to meet this objection.
This paper reports on an experimental test of the Principle of Optimality in dynamic decision problems. This Principle, which states that the decision-maker should always choose the optimal decision at each stage of the decision problem, conditional on behaving optimally thereafter, underlies many theories of optimal dynamic decision making, but is normally difficult to test empirically without knowledge of the decision-maker's preference function. In the experiment reported here we use a new experimental procedure to get round this difficulty, which also enables us to shed some light on the decision process that the decision-maker is using if he or she is not using the Principle of Optimality - which appears to be the case in our experiments.
An important approach to game theory is to examine the consequences of beliefs that agents may have about each other. This paper investigates respect for public preferences. Consider an agent A who believes that B strictly prefers an option a to an option b. Then A respects B’s preference if A assigns probability 1 to the choice of a given that B chooses a or b. Respect for public preferences requires that if it is common belief that B prefers a to b, then it is common belief that all other agents respect that preference. Along the lines of Blume, Brandenburger and Dekel and Asheim, I treat respect for public preferences as a constraint on lexicographic probability systems. The main result is that given respect for public preferences and perfect recall, players choose in accordance with Iterated Backward Inference. Iterated Backward Inference is a procedure that generalizes standard backward-induction reasoning for games of both perfect and imperfect information. From Asheim’s characterization of proper rationalizability it follows that properly rationalizable strategies are consistent with respect for public preferences; hence strategies eliminated by Iterated Backward Inference are not properly rationalizable.
An important approach to game theory is to examine the consequences of beliefs that rational agents may have about each other. This paper considers respect for public preferences. Consider an agent A who believes that B strictly prefers an option a to an option b. Then A respects B’s preference if A considers the choice of a “infinitely more likely” than the choice of b; equivalently, if A assigns probability 1 to the choice of a given that B chooses a or b. Respect for public preferences requires that if it is common belief that B prefers a to b, then it is common belief that all other agents respect that preference. Along the lines of Blume, Brandenburger and Dekel and Asheim, I treat respect for public preferences as a constraint on lexicographic probability systems. The main result is that if respect for public preferences and perfect recall obtain, then players choose in accordance with Iterated Backward Inference. Iterated Backward Inference is a procedure that generalizes standard backward-induction reasoning for games of both perfect and imperfect information. From Asheim’s characterization of proper rationalizability it follows that properly rationalizable strategies are consistent with respect for public preferences; hence strategies eliminated by Iterated Backward Inference are not properly rationalizable.
We present an axiomatic approach for a class of finite, extensive form games of perfect information that makes use of notions like “rationality at a node” and “knowledge at a node.” We distinguish between the game theorist's and the players' own “theory of the game.” The latter is a theory that is sufficient for each player to infer a certain sequence of moves, whereas the former is intended as a justification of such a sequence of moves. While in general the game theorist's theory of the game is not and need not be axiomatized, the players' theory must be an axiomatic one, since we model players as analogous to automatic theorem provers that play the game by inferring (or computing) a sequence of moves. We provide the players with an axiomatic theory sufficient to infer a solution for the game (in our case, the backward-induction equilibrium), and prove its consistency. We then inquire what happens when the theory of the game is augmented with information that a move outside the inferred solution has occurred. We show that a theory that is sufficient for the players to infer a solution and still remains consistent in the face of deviations must be modular. By this we mean that players have distributed knowledge of it. Finally, we show that whenever the theory of the game is group-knowledge (or common knowledge) among the players (i.e., it is the same at each node), a deviation from the solution gives rise to inconsistencies and therefore forces a revision of the theory at later nodes. On the contrary, whenever a theory of the game is modular, a deviation from equilibrium play does not induce a revision of the theory.
Backwards induction is an intriguing form of argument. It is used in a number of different contexts. One of these is the surprise exam paradox. Another is game theory. But its use is problematic, at least sometimes. The purpose of this paper is to determine what, exactly, backwards induction is, and hence to evaluate it. Let us start by rehearsing informally some of its problematic applications.
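As a concrete point of reference for the procedure these papers evaluate, backwards induction on a finite perfect-information game can be sketched as follows. This is a minimal illustration only: the tree encoding, payoffs, and function names are hypothetical and are not drawn from any of the papers above.

```python
def backwards_induction(node):
    """Return (payoff_profile, chosen_path) for the subtree rooted at node.

    A leaf is a tuple of payoffs, e.g. (2, 1); an internal node is a dict
    {"player": i, "moves": {label: child, ...}}, where player i picks the
    move that maximizes her own payoff in the induced subgame.
    """
    if isinstance(node, tuple):  # leaf: payoffs are final
        return node, []
    player = node["player"]
    best = None
    for label, child in node["moves"].items():
        payoffs, path = backwards_induction(child)
        # Solve the subgame first, then let the mover compare outcomes.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [label] + path)
    return best

# A hypothetical two-stage game: player 0 moves first, then player 1.
game = {"player": 0, "moves": {
    "L": {"player": 1, "moves": {"l": (3, 1), "r": (0, 0)}},
    "R": {"player": 1, "moves": {"l": (1, 2), "r": (2, 3)}},
}}
payoffs, path = backwards_induction(game)
print(payoffs, path)  # prints (3, 1) ['L', 'l']
```

The sketch makes the order of reasoning explicit: later subgames are solved before earlier choices, which is exactly the step the surprise-exam and finite-repetition arguments iterate from the final stage backwards.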
In this three-part paper, my concern is to expound and defend a conception of science, close to Einstein's, which I call aim-oriented empiricism. I argue that aim-oriented empiricism has the following virtues. (i) It solves the problem of induction; (ii) it provides decisive reasons for rejecting van Fraassen's brilliantly defended but intuitively implausible constructive empiricism; (iii) it solves the problem of verisimilitude, the problem of explicating what it can mean to speak of scientific progress given that science advances from one false theory to another; (iv) it enables us to hold that appropriate scientific theories, even though false, can nevertheless legitimately be interpreted realistically, as providing us with genuine, even if only approximate, knowledge of unobservable physical entities; (v) it provides science with a rational, even though fallible and non-mechanical, method for the discovery of fundamental new theories in physics. In the third part of the paper I show that Einstein made essential use of aim-oriented empiricism in scientific practice in developing special and general relativity. I conclude by considering to what extent Einstein came explicitly to advocate aim-oriented empiricism in his later years.
The pessimistic induction holds that successful past scientific theories are completely false, so successful current ones are completely false too. I object that past science did not perform as poorly as the pessimistic induction depicts. A close study of the history of science entitles us to construct an optimistic induction that would neutralize the pessimistic induction. Also, even if past theories were completely false, it does not even inductively follow that the current theories will also turn out to be completely false, because the current theories are more successful and have better birth qualities than the past theories. Finally, the extra success and better birth qualities justify an anti-induction in favor of the present theories.
In this paper I adduce a new argument in support of the claim that IBE is an autonomous (indispensable) form of inference, based on a familiar yet surprisingly under-discussed problem for Hume's theory of induction. I then use some insights thereby gleaned to argue for the (reductionist) claim that induction is really IBE, and draw some normative conclusions.
A Mug's Game? Solving the Problem of Induction with Metaphysical Presuppositions (Nicholas Maxwell, University College London). This paper argues that a view of science, expounded and defended elsewhere, solves the problem of induction. The view holds that we need to see science as accepting a hierarchy of metaphysical theses concerning the comprehensibility and knowability of the universe, these theses asserting less and less as we go up the hierarchy. It may seem that this view must suffer from vicious circularity, in so far as accepting physical theories is justified by an appeal to metaphysical theses in turn justified by the success of science. But this is rebutted. A thesis high up in the hierarchy asserts that the universe is such that the element of circularity, just indicated, is legitimate and justified, and not vicious. Acceptance of the thesis is in turn justified without appeal to the success of science. It may seem that the practical problem of induction can only be solved along these lines if there is a justification of the truth of the metaphysical theses in question. It is argued that this demand must be rejected as it stems from an irrational conception of science.
Necessity holds that, if a proposition A supports another proposition B, then necessarily A supports B. John Greco contends that one can resolve Hume's Problem of Induction only if one rejects Necessity in favor of reliabilism. If Greco's contention is correct, we would have good reason to reject Necessity and endorse reliabilism about inferential justification. Unfortunately, Greco's contention is mistaken. I argue that there is a plausible reply to Hume's Problem that both endorses Necessity and is at least as good as Greco's alternative. Hence, Greco provides a good reason for neither rejecting Necessity nor endorsing inferential reliabilism.
In this paper, I consider the pessimistic induction construed as a deductive argument (specifically, reductio ad absurdum) and as an inductive argument (specifically, inductive generalization). I argue that both formulations of the pessimistic induction are fallacious. I also consider another possible interpretation of the pessimistic induction, namely, as pointing to counterexamples to the scientific realist's thesis that success is a reliable mark of (approximate) truth. I argue that this interpretation of the pessimistic induction fails, too. If this is correct, then the pessimistic induction is an utter failure that should be abandoned by scientific anti-realists.
Israel (2004) claims that numerous philosophers have misinterpreted Goodman's original 'New Riddle of Induction', and weakened it in the process, because they do not define 'grue' as referring to past observations. Both claims are false: Goodman clearly took the riddle to concern the maximally general problem of "projecting" any type of characteristic from a given realm of objects into another, and since this problem subsumes Israel's, Goodman formulated a stronger philosophical challenge than the latter surmises.
In 1955, Goodman set out to 'dissolve' the problem of induction, that is, to argue that the old problem of induction is a mere pseudoproblem not worthy of serious philosophical attention. I will argue that, under naturalistic views of the reflective equilibrium method, it cannot provide a basis for a dissolution of the problem of induction. This is because naturalized reflective equilibrium is -- in a way to be explained -- itself an inductive method, and thus renders Goodman's dissolution viciously circular. This paper, then, examines how the old problem of induction crept back in while nobody was looking.