In this paper I discuss the phenomenon of self-deception and its connection with the notion of rationality linked to the dual process theories. I present the intentionalist and non-intentionalist accounts of self-deception and aim to show how the debate between them can be resolved in a more comprehensive and satisfactory manner if it is placed in the frame of the dual process theories. The dual model usually accepts two kinds of reasoning processes, heuristic and analytic, referred to two different systems, S1 and S2. These processes compete for control of our inferences and actions, but sometimes they interact and collaborate. It is argued that when the S1 response predominates, the pattern of self-deception can be seen as a form of heuristic reasoning and not merely as a causal link. It is suggested that in a dual model, the evaluations in terms of the rationality of the process will depend on the way in which self-deception participates in the reasoning process and on the system from which the evaluation takes place.
An important trend in contemporary epistemology centers on elaborating an old idea of pragmatist pedigree: theory selection (and in general the process of changing view and fixing beliefs) presupposes epistemic values. This article focuses on analyzing the case where epistemic values are indeterminate or where the sources of valuation are multiple (epistemic values like coherence and simplicity need not order options in compatible ways). According to the theory that thus arises, epistemic alternatives need not be fully ordered by an underlying notion of information-value, and therefore the usual economic techniques of optimization cannot be applied in order to compute optimal contractions. But in cases of this sort it is still rational to maximize, i.e. to deem an option choosable when it is not known to be worse than any other. We present here basic results about a notion of liberal contraction based on maximizing quasi-orderings. This requires the previous solution of some open problems in the theory of rational choice functions, namely a full characterization of choice functions rationalizable in terms of maximization of quasi-transitive relations. We conclude by discussing the problem of what the adequate feasible set is for calculating maximizing solutions for contraction problems, and by considering the epistemological roots of some counterexamples against the most fundamental axioms on choice functions (like α). While the first part of the paper shows how economic insights can be used to improve our understanding of the principles of belief formation and change, this final section reverses this strategy by showing the utility of epistemological insights and techniques for providing invariance conditions capable of regulating the applicability of the pure principles of choice.
The paper focuses on extending to the first order case the semantical program for modalities first introduced by Dana Scott and Richard Montague. We focus on the study of neighborhood frames with constant domains and we offer in the first part of the paper a series of new completeness results for salient classical systems of first order modal logic. Among other results we show that it is possible to prove strong completeness results for normal systems without the Barcan Formula (like FOL + K) in terms of neighborhood frames with constant domains. The first order models we present permit the study of many epistemic modalities recently proposed in computer science as well as the development of adequate models for monadic operators of high probability. Models of this type are either difficult or impossible to build in terms of relational Kripkean semantics. We conclude by introducing general first order neighborhood frames with constant domains and we offer a general completeness result for the entire family of classical first order modal systems in terms of them, circumventing some well-known problems of propositional and first order neighborhood semantics (mainly the fact that many classical modal logics are incomplete with respect to an unmodified version of either neighborhood or relational frames). We argue that the semantical program that thus arises offers the first complete semantic unification of the family of classical first order modal logics.
How to accept a conditional? F. P. Ramsey proposed the following test in (Ramsey 1990). (RT) 'If A, then B' must be accepted with respect to the current epistemic state iff the minimal hypothetical change of it needed to accept A also requires accepting B.
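The test can be written schematically; the following sketch uses standard belief-revision notation (K for the current belief set, K*_A for its minimal revision by A), which is an assumption of this note rather than something fixed by the passage above:

```latex
% Ramsey test (RT), schematically: the conditional A > B is accepted
% in the epistemic state K iff B belongs to the minimal revision of K
% needed to accept A.
(A > B) \in K \iff B \in K^{*}_{A}
```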
The article focuses on representing different forms of non-adjunctive inference as sub-Kripkean systems of classical modal logic, where the inference from □A and □B to □A ∧ B fails. In particular we prove a completeness result showing that the modal system that Schotch and Jennings derive from a form of non-adjunctive inference in (Schotch and Jennings, 1980) is a classical system strictly stronger than EMN and weaker than K (following the notation for classical modalities presented in Chellas, 1980). The unified semantical characterization in terms of neighborhoods permits comparisons between different forms of non-adjunctive inference. For example, we show that the non-adjunctive logic proposed in (Schotch and Jennings, 1980) is not adequate in general for representing the logic of high probability operators. An alternative interpretation of the forcing relation of Schotch and Jennings is derived from the proposed unified semantics and utilized in order to propose a more fine-grained measure of epistemic coherence than the one presented in (Schotch and Jennings, 1980). Finally we propose a syntactic translation of the purely implicative part of Jaśkowski's system D₂ into a classical system preserving all the theorems (and non-theorems) explicitly mentioned in (Jaśkowski, 1969). The translation method can be used in order to develop epistemic semantics for a larger class of non-adjunctive (discursive) logics than the ones historically investigated by Jaśkowski.
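The failure of adjunction mentioned above can be checked numerically for high-probability operators. The following toy model (the worlds, probabilities, and threshold are assumptions of this note, not taken from the paper) reads □E as "Pr(E) ≥ t":

```python
from fractions import Fraction as F

# A 3-world probability space (assumed for illustration).
pr = {"w1": F(6, 10), "w2": F(2, 10), "w3": F(2, 10)}

def prob(event):
    return sum(pr[w] for w in event)

A = {"w1", "w2"}   # Pr(A) = 8/10
B = {"w1", "w3"}   # Pr(B) = 8/10
t = F(7, 10)       # high-probability threshold

def box(event):
    # Read box(E) as "Pr(E) >= t".
    return prob(event) >= t

# Box(A) and Box(B) hold, but Box(A and B) fails: adjunction is invalid.
print(box(A), box(B), box(A & B))  # True True False
```

This is why a logic adequate for high-probability operators must give up the rule from □A and □B to □A ∧ B.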
It is now well known that, on pain of triviality, the probability of a conditional cannot be identified with the corresponding conditional probability. This surprising impossibility result has a qualitative counterpart. In fact, Peter Gärdenfors showed that believing ‘If A then B’ cannot be equated with the act of believing B on the supposition that A — as long as supposing obeys minimal Bayesian constraints. Recent work has shown that in spite of these negative results, the question ‘how to accept a conditional?’ has a clear answer. Even if conditionals are not truth-carriers, they do have precise acceptability conditions. Nevertheless most epistemic models of conditionals do not provide acceptance conditions for iterated conditionals. One of the main goals of this essay is to provide a comprehensive account of the notion of epistemic conditionality covering all forms of iteration. First we propose an account of the basic idea of epistemic conditionality, by studying the conditionals validated by epistemic models where iteration is permitted but not constrained by special axioms. Our modeling does not presuppose that epistemic states should be represented by belief sets (we only assume that to each epistemic state corresponds an associated belief state). A full encoding of the basic epistemic conditionals (encompassing all forms of iteration) is presented and a representation result is proved. In the second part of the essay we argue that the notion of change involved in the evaluation of conditionals is suppositional, and that such notion should be distinguished from the notion of updating (modelled by AGM and other methods). We conclude by considering how some of the recent modellings of iterated change fare as methods for iterated supposing.
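The impossibility result alluded to above is usually credited to David Lewis; a schematic statement (a sketch, with notation assumed here) is:

```latex
% Lewis-style triviality, schematically: suppose that for all A, B
% with P(A) > 0,
P(A \rightarrow B) \;=\; P(B \mid A),
% and that this identity is preserved under conditioning on arbitrary
% evidence. Then P is trivial: it assigns positive probability to at
% most two pairwise incompatible propositions.
```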
Daniel Ellsberg presented in Ellsberg (The Quarterly Journal of Economics 75:643–669, 1961) various examples questioning the thesis that decision making under uncertainty can be reduced to decision making under risk. These examples constitute one of the main challenges to the received view on the foundations of decision theory offered by Leonard Savage in Savage (1972). Craig Fox and Amos Tversky have, nevertheless, offered an indirect defense of Savage. They provided in Fox and Tversky (1995) an explanation of Ellsberg’s two-color problem in terms of a psychological effect: ambiguity aversion. The ‘comparative ignorance’ hypothesis articulates how this effect works and explains why it is important to an understanding of the typical pattern of responses associated with Ellsberg’s two-color problem. In the first part of this article we challenge Fox and Tversky’s explanation. We present first an experiment that extends Ellsberg’s two-color problem where certain predictions of the comparative ignorance hypothesis are not confirmed. In addition the hypothesis seems unable to explain how the subjects resolve trade-offs between security and expected pay-off when vagueness is present. Ellsberg offered an explanation of the typical behavior elicited by his examples in terms of these trade-offs and in section three we offer a model of Ellsberg’s trade-offs. The model takes seriously the role of imprecise probabilities in explaining Ellsberg’s phenomenon. The so-called three-color problem was also considered in Fox and Tversky (1995). We argue that Fox and Tversky’s analysis of this case breaks a symmetry with their analysis of the two-color problem. We propose a unified treatment of both problems and we present an experiment that confirms our hypothesis.
Gerd Gigerenzer and Thomas Sturm have recently proposed a modest form of what they describe as a normative, ecological and limited naturalism. The basic move in their argument is to infer that certain heuristics we tend to use should be used in the right ecological setting. To address this argument, we first consider the case of a concrete heuristic called Take the Best (TTB). There are at least two variants of the heuristic which we study by making explicit the choice functions they induce, extending these variants of TTB beyond binary choice. We argue that the naturalistic argument can be applied to only one of the two variants of the heuristic; we also argue that the argument for the extension requires paying attention to other “rational” virtues of heuristics aside from efficacy, speed, and frugality. This notwithstanding, we show that there is a way of extending the right variant of TTB to obtain a very well behaved heuristic that could be used to offer a stronger case for the naturalistic argument (in the sense that if this heuristic is used, it is also a heuristic that we should use). The second part of the article considers attempts to extend the naturalistic argument from algorithms dealing with inference to heuristics dealing with choice. Our focus is the so-called Priority Heuristic, which we extend from risk to uncertainty. In this setting, the naturalist argument seems more difficult to formulate, if it remains feasible at all. Normativity seems in this case extrinsic to the heuristic, whose main virtue seems to be its ability to describe actual patterns of choice. But it seems that a new version of the naturalistic argument used with partial success in the case of inference is unavailable to solve the normative problem of whether we should exhibit the patterns of choice that we actually display.
We present a decision-theoretically motivated notion of contraction which, we claim, encodes the principles of minimal change and entrenchment. Contraction is seen as an operation whose goal is to minimize losses of informational value. The operation is also compatible with the principle that in contracting A one should preserve the sentences better entrenched than A (when the belief set contains A). Even though the principle of minimal change and the latter motivation for entrenchment figure prominently among the basic intuitions in the works of, among others, Quine and Ullian (1978), Levi (1980, 1991), Harman (1988) and Gärdenfors (1988), formal accounts of belief change (AGM, KM – see Gärdenfors (1988); Katsuno and Mendelzon (1991)) have abandoned both principles (see Rott (2000)). We argue for the principles and we show how to construct a contraction operation which obeys both. An axiom system is proposed. We also prove that the decision-theoretic notion of contraction can be completely characterized in terms of the given axioms. Proving this type of completeness result is a well-known open problem in the field, whose solution requires employing both decision-theoretical techniques and logical methods recently used in belief change.
This article elaborates on foundational issues in the social sciences and their impact on the contemporary theory of belief revision. Recent work in the foundations of economics has focused on the role external social norms play in choice. Amartya Sen has argued in [Sen93] that the traditional rationalizability approach used in the theory of rational choice has serious problems accommodating the role of social norms. Sen's more recent work [Sen96, Sen97] proposes how one might represent social norms in the theory of choice, and in a very recent article [BS07] Walter Bossert and Kotaro Suzumura develop Sen's proposal, offering an extension of the classical theory of choice that is capable of dealing with social norms. The first part of this article offers an alternative functional characterization of the extended notion of rationality employed by Bossert and Suzumura in [BS07]. This characterization, unlike the one offered in [BS07], represents a norm-sensitive notion of rationality in terms of a pure functional constraint unmediated by a notion of revealed preference (something that is crucial for the application developed in the second part of this article). This functional characterization is formulated for general domains (as is Bossert and Suzumura's characterization) and is therefore empirically more applicable than usual characterizations of rationality. Interestingly, the functional constraint we propose is a variant of a condition first entertained in [AGM85] by Carlos Alchourrón, Peter Gärdenfors and David Makinson in the area of belief change. The second part of this article applies the theory developed in the first part to the realm of belief change. We first point out that social norms can be invoked to concoct counterexamples against some postulates of belief change (like postulate (*7)) that are necessary for belief change to be relational.
These examples constitute the epistemological counterpart of Sen's counterexamples against condition α in rational choice (as a matter of fact, Rott has shown in [Rot01] that condition α and postulate (*7) are mutually mappable). These examples are variants of examples Rott has recently presented in [Rot04]. One of our main goals in this article consists in applying the theory developed in the first part to develop a theory of norm-inclusive belief change that circumvents the counterexamples. We offer a new axiomatization for belief change and we furnish correspondence results relating constraints of rational choice to postulates of belief change.
Following the pioneer work of Bruno De Finetti, conditional probability spaces (allowing for conditioning with events of measure zero) have been studied since (at least) the 1950's. Perhaps the most salient axiomatizations are Karl Popper's and Alfred Rényi's. Nonstandard probability spaces are a well known alternative to this approach. Vann McGee proposed a result relating both approaches by showing that the standard values of infinitesimal probability functions are representable as Popper functions, and that every Popper function is representable in terms of the standard real values of some infinitesimal measure. Our main goal in this article is to study the constraints on (qualitative and probabilistic) change imposed by an extended version of McGee's result. We focus on an extension capable of allowing for iterated changes of view. Such an extension, we argue, seems to be needed in almost all considered applications. Since most of the available axiomatizations stipulate (definitionally) important constraints on iterated change, we propose a non-question-begging framework, Iterative Probability Systems (IPS), and we show that every Popper function can be regarded as a Bayesian IPS. A generalized version of McGee's result is then proved and several of its consequences considered. In particular we note that our proof requires the imposition of Cumulativity, i.e. the principle that a proposition that is accepted at any stage of an iterative process of acceptance will continue to be accepted at any later stage. The plausibility and range of applicability of Cumulativity is then studied. In particular we appeal to a method for defining belief from conditional probability (first proposed and then slightly modified in subsequent work) in order to characterize the notion of qualitative change induced by Cumulative models of probability kinematics.
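For orientation, the characteristic clause of the two-place (Popper) functions mentioned above can be sketched as follows; the exact axiom set varies across the cited axiomatizations, so this is one representative axiom, not a full system:

```latex
% Multiplication axiom for Popper functions: P(· | C) is defined for
% every condition C, even when an absolute measure would give C
% probability zero.
P(A \wedge B \mid C) \;=\; P(A \mid B \wedge C)\, P(B \mid C)
```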
The resulting cumulative notion is then compared with existing axiomatizations of belief change and probabilistic supposition. We also consider applications in the probabilistic accounts of conditionals.
One of the main applications of the logic of theory change is to the epistemic analysis of conditionals via the so-called Ramsey test. In the first part of the present note this test is studied in the “limiting case” where the theory being revised is inconsistent, and it is shown that this case manifests an intrinsic incompatibility between the Ramsey test and the AGM postulate of “success”. The paper then analyses the use of the postulate of success, and a weakening of it, generating axioms of conditional logic via the test, and it is shown that for certain purposes both success and weak success are quite superfluous. This suggests the proposal of abandoning both success and weak success entirely, thus permitting retention of the postulate of “preservation” discarded by Gärdenfors.
The paper studies first order extensions of classical systems of modal logic (see (Chellas, 1980, part III)). We focus on the role of the Barcan formulas. It is shown that these formulas correspond to fundamental properties of neighborhood frames. The results have interesting applications in epistemic logic. In particular we suggest that the proposed models can be used in order to study monadic operators of probability (Kyburg, 1990) and likelihood (Halpern-Rabin, 1987).
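The Barcan formula and its converse, whose frame correspondents the paper studies, can be sketched as follows (a standard statement, given here for orientation):

```latex
% Barcan formula (BF) and converse Barcan formula (CBF):
\forall x\, \Box A \;\rightarrow\; \Box \forall x\, A \qquad \text{(BF)}
\Box \forall x\, A \;\rightarrow\; \forall x\, \Box A \qquad \text{(CBF)}
```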
The paper provides a framework for representing belief-contravening hypotheses in games of perfect information. The resulting t-extended information structures are used to encode the notion that a player has the disposition to behave rationally at a node. We show that there are models where the condition of all players possessing this disposition at all nodes (under their control) is both necessary and sufficient for them to play the backward induction solution in centipede games. To obtain this result, we do not need to assume that rationality is commonly known (as is done in [Aumann (1995)]) or commonly hypothesized by the players (as done in [Samet (1996)]). The proposed model is compared with the account of hypothetical knowledge presented by Samet in [Samet (1996)] and with other possible strategies for extending information structures with conditional propositions.
How to accept a conditional? F. P. Ramsey proposed the following test: 'If A, then B' must be accepted with respect to the current epistemic state iff the minimal hypothetical change of it needed to accept A also requires accepting B. In this article we propose a formulation of the test which, unlike some of its predecessors, is compatible with our best theory of belief revision, the so-called AGM theory. The new test, which, we claim, encodes some of the crucial insights defended by F. P. Ramsey, is used to study the conditionals epistemically validated by the AGM postulates. Our notion of validity (PV) is compared with the notion of negative validity (NV) used by Gärdenfors. It is observed that the notions of PV and NV will in general differ and that when these differences arise it is the notion of PV that is preferable. Finally we compare our formulation of the Ramsey test with a previous formulation offered by Gärdenfors. We show that any attempt to interpret the latter as delivering acceptance conditions for Ramsey's conditionals is doomed to failure.
In (Hertwig et al., 2003) Hertwig et al. draw a distinction between decisions from experience and decisions from description. In a decision from experience an agent does not have a summary description of the possible outcomes or their likelihoods. A career choice, deciding whether to back up a computer hard drive, cross a busy street, etc., are typical examples of decisions from experience. In such decisions agents can rely only on their encounters with the corresponding prospects. By contrast, an agent furnished with information sources such as drug-package inserts or mutual-fund brochures—all of which describe risky prospects—will often make decisions from description. In (Hertwig et al., 2003) it is shown (empirically) that decisions from experience and decisions from description can lead to dramatically different choice behavior. Most of these results (summarized and analyzed in (Hertwig, 2009)) are concerned with the role of risk in decision making. This article presents some preliminary results concerning the role of uncertainty in decision-making. We focus on Ellsberg’s two-color problem and consider a chance setup based on double sampling. We report empirical results which indicate that decisions from description where subjects select between a clear urn, the chance setup based on double sampling and Ellsberg’s vague urn, are such that subjects perceive the chance setup at least as an intermediate option between clear and vague choices (and there is evidence indicating that the double sampling chance setup is seen as operationally indistinguishable from the vague urn). We then suggest how the iterated chance setup can be used in order to study decisions from experience in the case of uncertainty.
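A double-sampling chance setup of the kind discussed above can be sketched numerically (the urn size and the uniform prior over compositions are assumptions of this note, not the parameters of the reported experiment): the composition of the urn is drawn first, then a ball is drawn from that urn.

```python
from fractions import Fraction as F

def prob_red(n):
    """Chance of drawing red under double sampling: the number of red
    balls among n is itself drawn uniformly from {0, 1, ..., n}, then
    a single ball is drawn from the resulting urn."""
    return sum(F(1, n + 1) * F(k, n) for k in range(n + 1))

# By symmetry the overall chance of red matches the clear 50/50 urn,
# even though the process resembles the vague urn.
print(prob_red(10))  # 1/2
```

This symmetry is one way to see why such a setup is a natural candidate for an option intermediate between the clear and the vague urn.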
Carlos Alchourrón, Peter Gärdenfors and David Makinson published in 1985 a seminal article on belief change in the Journal of Symbolic Logic. Researchers from various disciplines, from computer science to mathematical economics to philosophical logic, have continued the work first presented in this seminal paper during the last two decades. This paper explores some salient foundational trends that interpret the act of changing view as a decision. We will argue that some of these foundational trends are already present, although only tacitly, in the original article by the AGM trio. Other accounts decidedly depart from the view of contraction and revision presented in this seminal paper. I shall survey various types of theories that progressively depart from the axiomatic treatment defended by AGM. First, I consider theories where rational agents are considered as maximizers as opposed to optimizers. Second, I consider which feasible set to use in contraction understood as a cognitive decision. This leads to rethinking the very notion of what minimal change in contraction is. I shall conclude with some philosophical reflections concerning the sort of epistemological voluntarism that is tacit in seeing change in view as a rational choice.
The "Ellsberg phenomenon" has played a significant role in research on imprecise probabilities. Fox and Tversky  have attempted to explain this phenomenon in terms of their "comparative ignorance" hypothesis. We challenge that explanation and present empirical work suggesting an explanation that is much closer to Ellsberg's own diagnosis.
In a series of recent articles Angelika Kratzer has argued that the standard account of modality along Kripkean lines is inadequate for representing context-dependent modals. In particular she argues that the standard account is unable to deliver a non-trivial account of modality capable of overcoming inconsistencies of the underlying conversational background.
The anti-Humean proposal of constructing desire as belief about what would be good must be abandoned on pain of triviality. Our central result shows that if an agent's belief-desire state is represented by Jeffrey's expected value theory enriched with the Desire as Belief Thesis (DAB), then, provided that three pairwise inconsistent propositions receive non-zero probability, the agent must view with indifference any proposition whose probability is greater than zero. Unlike previous results against DAB, our Opinionation or Indifference Theorem is a purely synchronic one that depends in no way on the properties of Jeffrey conditionalization.
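The DAB thesis can be stated schematically (V is Jeffrey's desirability function, P the agent's probability function, and A° abbreviates "A would be good"; the notation is an assumption of this note):

```latex
% Desire as Belief (DAB), schematically:
V(A) \;=\; P(A^{\circ})
% The Opinionation or Indifference result above: if three pairwise
% inconsistent propositions each receive positive probability, DAB
% forces V to take the same value on every A with P(A) > 0.
```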
Sven-Ove Hansson and Erik Olsson studied in Hansson and Olsson (1995, pp. 103–119) the logical properties of an operation of contraction first proposed by Isaac Levi. They provided a completeness result for the simplest version of contraction, which they call Levi-contraction, but left open the problem of characterizing axiomatically the more complex operation of value-based contraction or saturatable contraction. In this paper we propose an axiomatization for this operation and prove a completeness result for it. We argue that the resulting operation is better behaved than various rival operations of contraction defined in recent years.
The paper focuses on extending to the first order case the semantical program for modalities first introduced by Dana Scott and Richard Montague. We focus on the study of neighborhood frames with constant domains and we offer in the first part of the paper a series of new completeness results for salient classical systems of first order modal logic. Among other results we show that it is possible to prove strong completeness results for normal systems without the Barcan Formula in terms of neighborhood frames with constant domains. The first order models we present permit the study of many epistemic modalities recently proposed in computer science as well as the development of adequate models for monadic operators of high probability. Models of this type are either difficult or impossible to build in terms of relational Kripkean semantics. We conclude by introducing general first order neighborhood frames with constant domains and we offer a general completeness result for the entire family of classical first order modal systems in terms of them, circumventing some well-known problems of propositional and first order neighborhood semantics. We argue that the semantical program that thus arises offers the first complete semantic unification of the family of classical first order modal logics.
This special issue presents a series of articles focusing on recent work in formal epistemology and formal philosophy. The articles in the latter category elaborate on the notion of context and content and their relationships. This work is not unrelated to recent developments in formal epistemology. Logical models of context, when connected with the representation of epistemic context, are clearly relevant for many issues considered by formal epistemologists. For example, the semantic framework Joe Halpern uses in his article for this issue has been applied elsewhere to solve problems in interactive epistemology.
Daniel Ellsberg presented various examples questioning the thesis that decision making under uncertainty can be reduced to decision making under risk. These examples constitute one of the main challenges to the received view on the foundations of decision theory offered by Leonard Savage. Craig Fox and Amos Tversky have, nevertheless, offered an indirect defense of Savage. They provided an explanation of Ellsberg’s two-color problem in terms of a psychological effect: ambiguity aversion. The ‘comparative ignorance’ hypothesis articulates how this effect works and explains why it is important to an understanding of the typical pattern of responses associated with Ellsberg’s two-color problem. In the first part of this article we challenge Fox and Tversky’s explanation. We present first an experiment that extends Ellsberg’s two-color problem where certain predictions of the comparative ignorance hypothesis are not confirmed. In addition the hypothesis seems unable to explain how the subjects resolve trade-offs between security and expected pay-off when vagueness is present. Ellsberg offered an explanation of the typical behavior elicited by his examples in terms of these trade-offs and in section three we offer a model of Ellsberg’s trade-offs. The model takes seriously the role of imprecise probabilities in explaining Ellsberg’s phenomenon. The so-called three-color problem was also considered by Fox and Tversky. We argue that Fox and Tversky’s analysis of this case breaks a symmetry with their analysis of the two-color problem. We propose a unified treatment of both problems and we present an experiment that confirms our hypothesis.
One of the reasons for adopting hyperbolic discounting is to explain preference reversals. Another is that this value structure suggests an elegant theory of the will. I examine the capacity of the theory to solve Newcomb's problem. In addition, I compare Ainslie's account with other procedural theories of choice that seem at least equally capable of accommodating reversals of preference.
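The preference reversals mentioned above can be reproduced with the standard hyperbolic discount function v/(1 + kt); the reward values, delays, and discount rate below are assumptions of this note chosen for illustration, not figures from Ainslie's work:

```python
# Minimal sketch of a preference reversal under hyperbolic discounting.
def hyperbolic(value, delay, k=1.0):
    """Present value of a reward of the given size at the given delay,
    discounted hyperbolically with rate k."""
    return value / (1.0 + k * delay)

small, large = 50.0, 100.0  # smaller-sooner vs larger-later reward

# Viewed far in advance (delays 10 and 12), the larger-later wins:
early = (hyperbolic(small, 10), hyperbolic(large, 12))
# Close to the sooner reward (delays 0 and 2), preference reverses:
late = (hyperbolic(small, 0), hyperbolic(large, 2))

print(early[0] < early[1], late[0] > late[1])  # True True
```

Exponential discounting, by contrast, preserves the ranking at every delay, which is why reversals of this kind are taken as evidence for the hyperbolic form.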
Normative accounts in terms of similarity can be deployed in order to provide semantics for systems of context-free default rules and other sophisticated conditionals. In contrast, procedural accounts of decision in terms of similarity (Rubinstein 1997) are hard to reconcile with the normative rules of rationality used in decision-making, even when suitably weakened.
The recent literature offers several models of the notion of matter-of-fact supposition revealed in the acceptance of the so-called indicative conditionals. Some of those models are qualitative [Collins 90], [Levi 96], [Stalnaker 84]. Other probabilistic models appeal either to infinitesimal probability or to two-place probability functions. Recent work has made it possible to understand the exact qualitative counterpart of the latter probabilistic models. In this article we show that the qualitative notion of change that thus arises is hypothetical revision, a notion previously axiomatized in [Arló-Costa 97] and [Arló-Costa & Thomason 96]. This notion is incompatible with AGM as well as with other standard methods of theory change. The way in which matter-of-fact supposition is modeled by hypothetical revision is illustrated via examples. The model is compared with other qualitative accounts of the notion of supposition encoded in two-place probability functions, with models of subjunctive supposition, as well as with some of the well-known models of learning. Applications in knowledge representation and in the theory of games and decisions are summarized.