Updating Principles
Summary | In Bayesian epistemology, an updating principle is a principle that specifies, or puts restrictions on, the changes in an agent’s belief state that follow (or should follow) some initial change in that belief state (usually, though perhaps not always, as a result of the agent being exposed to new evidence). Although in other academic fields much of the discussion of updating principles concerns their empirical fit to the way people actually update their beliefs, much of the relevant philosophical literature is normative. The central questions are whether, why, and in which contexts obeying different updating principles is rationally required. In the simplest (but not uncommon) case, where the agent’s belief state can be represented by a single probability distribution over a set of propositions and the initial change is that of learning a new proposition (represented as raising the probability of the learnt proposition to 1), the most popular updating rule is Bayesian Conditionalization. Richard Jeffrey offered a generalization of Bayesian Conditionalization, usually called “Jeffrey’s Conditionalization”, to cases in which there is some initial change in the agent’s belief state, but no proposition in the set has its probability raised to 1. Others have introduced, discussed and explored the formal features of further updating principles. 
These principles usually either cover cases to which Jeffrey’s Conditionalization does not apply (such as cases of “growing awareness”, in which the initial change is represented as an addition of new propositions to the set, or cases in which the agent’s initial belief state cannot be represented by a single probability distribution over a set of propositions), or constitute generalizations of, or alternatives to, Bayesian Conditionalization and Jeffrey’s Conditionalization in specific contexts. Examples of the latter include Adams’ Conditionalization for the case of learning conditional probabilities, Imaging, which in some contexts seems to fit better with other intuitive epistemic principles, and different types of pooling methods for the case of learning other agents’ beliefs. |
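The two standard rules mentioned above have simple formal statements: Bayesian Conditionalization sets the new probability of any proposition A to P(A | E) upon learning E, while Jeffrey’s Conditionalization sets it to the weighted sum Σᵢ qᵢ · P(A | Eᵢ) when the agent’s probabilities over a partition {Eᵢ} shift exogenously to new values qᵢ. A minimal sketch on a toy finite probability space (the function names, worlds, and numbers are illustrative, not from the literature):

```python
# Illustrative sketch of Bayesian Conditionalization and Jeffrey's
# Conditionalization over a finite set of "worlds". All names and
# numbers here are invented for the example.

def conditionalize(prior, evidence):
    """Bayesian Conditionalization: P_new(w) = P(w | E).

    `prior` maps worlds to probabilities; `evidence` is the set of
    worlds compatible with the learnt proposition E, whose new
    probability becomes 1.
    """
    p_e = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / p_e if w in evidence else 0.0)
            for w, p in prior.items()}

def jeffrey_conditionalize(prior, partition_weights):
    """Jeffrey's Conditionalization: P_new(w) = sum_i q_i * P(w | E_i).

    `partition_weights` maps each cell E_i of a partition (a frozenset
    of worlds) to its new probability q_i; the q_i must sum to 1.
    """
    new = {w: 0.0 for w in prior}
    for cell, q in partition_weights.items():
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            new[w] += q * prior[w] / p_cell
    return new

prior = {"rain": 0.3, "sun": 0.5, "snow": 0.2}

# Learning "no snow" with certainty: probabilities are renormalized
# over the remaining worlds.
post = conditionalize(prior, {"rain", "sun"})   # rain: 0.375, sun: 0.625

# An uncertain experience that shifts P(rain) to 0.6: the remaining
# 0.4 is distributed over {sun, snow} in proportion to the prior.
jpost = jeffrey_conditionalize(
    prior, {frozenset({"rain"}): 0.6, frozenset({"sun", "snow"}): 0.4})
```

Note that when some qᵢ = 1, Jeffrey’s rule collapses into Bayesian Conditionalization on Eᵢ, which is the sense in which it is a generalization.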
Key works | Jeffrey 1992 introduces the idea of probability kinematics and discusses its features. Some important discussions of Jeffrey’s rule include Field 1978, van Fraassen 1980 and Skyrms 1987. Bradley 2005 introduces and discusses Adams’ Conditionalization. Bradley 2017 also discusses growing awareness and the relation between belief updating and the updating of desires and preferences. Imaging was introduced and discussed in Lewis 1976 and Gärdenfors 1982. Leitgeb 2017 discusses the relation between Imaging and different belief aggregation methods. Discussions of problems associated with updating imprecise probabilities include White 2009, Joyce 2010 and Bradley & Steele 2014. |
Introductions | Jeffrey 2002 |