This chapter explores the topic of imprecise probabilities (IP) as it relates to model validation. IP is a family of formal methods that aim to provide a better representation of severe uncertainty than is possible with standard probabilistic methods. Among the methods discussed here are the use of sets of probabilities to represent uncertainty, and the use of functions that do not satisfy the additivity property. We discuss the basics of IP, some examples of IP in computer simulation contexts, possible interpretations of the IP framework, and some conceptual problems for the approach. We conclude with a discussion of IP in the context of model validation.
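The two formal ideas just mentioned can be made concrete in a few lines. This is my own illustration with made-up (dyadic, hence exactly representable) numbers, not an example from the chapter: a credal set is a set of probability functions, and the lower probability it induces is superadditive rather than additive.

```python
# A credal set: two probability functions over the worlds {w1, w2, w3}.
# (Illustrative numbers only; dyadic values keep the float arithmetic exact.)
credal_set = [
    {"w1": 0.25, "w2": 0.25, "w3": 0.5},
    {"w1": 0.5, "w2": 0.25, "w3": 0.25},
]

def lower_prob(event):
    """Lower probability: the minimum of P(event) over the credal set."""
    return min(sum(p[w] for w in event) for p in credal_set)

A, B = {"w1"}, {"w3"}                    # disjoint events
print(lower_prob(A), lower_prob(B))      # 0.25 0.25
print(lower_prob(A | B))                 # 0.75 > 0.25 + 0.25: additivity fails
```

For the disjoint events A and B the lower probability of the union strictly exceeds the sum of the lower probabilities, which is exactly the failure of additivity the abstract alludes to.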
There has been much recent interest in imprecise probabilities, models of belief that allow unsharp or fuzzy credence. There have also been some influential criticisms of this position. Here we argue, chiefly against Elga, that subjective probabilities need not be sharp. The key question is whether the imprecise probabilist can make reasonable sequences of decisions. We argue that she can. We outline Elga's argument and clarify the assumptions he makes and the principles of rationality to which he is implicitly committed. We argue that these assumptions are too strong and that rational imprecise choice is possible in their absence.
The sensitive dependence on initial conditions (SDIC) associated with nonlinear models imposes limitations on the models’ predictive power. We draw attention to an additional limitation that has been underappreciated, namely, structural model error (SME). A model has SME if the model dynamics differ from the dynamics in the target system. If a nonlinear model has even the slightest SME, then its ability to generate decision-relevant predictions is compromised. Given a perfect model, we can take the effects of SDIC into account by substituting probabilistic predictions for point predictions. This route is foreclosed in the case of SME, which puts us in a worse epistemic situation than SDIC does.
This paper considers a puzzling conflict between two positions that are each compelling: it is irrational for an agent to pay to avoid ‘free’ evidence before making a decision, and rational agents may have imprecise beliefs and/or desires. Indeed, we show that Good's theorem concerning the invariable choice-worthiness of free evidence does not generalise to the imprecise realm, given the plausible existing decision theories for handling imprecision. A key ingredient in the analysis, and a potential source of controversy, is the general approach taken for resolving sequential decision problems: we make explicit what the key alternatives are and defend our own approach. Furthermore, we endorse a resolution of the aforementioned puzzle: we privilege decision theories that merely permit avoiding free evidence over decision theories for which avoiding free evidence is uniquely admissible. Finally, we situate this particular result about free evidence within the broader ‘dynamic-coherence’ literature.
Econophysics is a new and exciting cross-disciplinary research field that applies models and modelling techniques from statistical physics to economic systems. It is not, however, without its critics: prominent figures in more mainstream economic theory have criticized some elements of the methodology of econophysics. One of the main lines of criticism concerns the nature of the modelling assumptions and idealizations involved, and particular targets are the ‘kinetic exchange’ approaches used to model the emergence of inequality within the distribution of individual monetary income. This article will consider such models in detail, and assess the warrant of the criticisms, drawing upon the philosophical literature on modelling and idealization. Our aim is to provide the first steps towards informed mediation of this important and interesting interdisciplinary debate, and our hope is to offer guidance with regard to both the practice of modelling inequality and the inequality of modelling practice. Outline: 1 Introduction (1.1 Econophysics and its discontents; 1.2 Against burglar economics); 2 Modelling Inequality (2.1 Mainstream economic models for income distribution; 2.2 Econophysics models for income distribution); 3 Idealizations in Kinetic Exchange Models (3.1 Binary interactions; 3.2 Conservation principles; 3.3 Exchange dynamics); 4 Fat Tails and Savings; 5 Evaluation.
In a recent article, Samir Okasha presented an argument that suggests that there is no rational way to choose among scientific theories. This would seriously undermine the view that science is a rational enterprise. In this article, I show how a suitably nuanced view of what scientific rationality requires allows us to sidestep this argument. In doing so, I present a new argument in favour of voluntarism of the type favoured by van Fraassen. I then show how such a view of scientific rationality gives a precise interpretation of what Thomas Kuhn thought. Outline: 1 Introduction; 2 Okasha’s Argument; 3 Rationality Can Be Silent; 4 Arrow Undermined; 5 The Informational-Basis Escape; 6 Theory Choice at the Level of the Individual Scientist; 7 Kuhn Vindicated; 8 Trade-offs and Partial Commensurability; 9 Conclusion.
We introduce ‘model migration’ as a species of cross-disciplinary knowledge transfer whereby the representational function of a model is radically changed to allow application to a new disciplinary context. Controversies and confusions that often derive from this phenomenon will be illustrated in the context of econophysics and phylogeographic linguistics. Migration can be usefully contrasted with the concept of ‘imperialism’, which has been influentially discussed in the context of geographical economics. In particular, imperialism, unlike migration, relies upon extension of the original model via an expansion of the domain of phenomena it is taken to adequately describe. The success of imperialism thus requires expansion of the justificatory sanctioning of the original idealising assumptions to a new disciplinary context. Contrastingly, successful migration involves the radical representational re-interpretation of the original model, rather than its extension. Migration thus requires ‘re-sanctioning’ of new ‘counterpart idealisations’ to allow application to an entirely different class of phenomena. Whereas legitimate scientific imperialism should be based on the pursuit of some form of ontological unification, no such requirement is needed to legitimate the practice of model migration. The distinction between migration and imperialism will thus be shown to have significant normative as well as descriptive value.
Imprecise probabilism—which holds that rational belief/credence is permissibly represented by a set of probability functions—apparently suffers from a problem known as dilation. We explore whether this problem can be avoided or mitigated by one of the following strategies: (a) modifying the rule by which the credal state is updated, (b) restricting the domain of reasonable credal states to those that preclude dilation.
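The dilation phenomenon this abstract refers to can be seen in a standard textbook-style case (a sketch of my own, not taken from the paper): learning the value of one variable can turn a sharp credence into a maximally imprecise one.

```python
# X records a fair coin; Y = X XOR Z, where Z is a bit of completely unknown
# bias p. The credal set contains one joint distribution for each p in [0, 1].

def joint(p):
    """Joint distribution over (X, Y) when Z has bias p and the coin is fair."""
    # Y == X exactly when Z == 0, which has probability 1 - p.
    return {(1, 1): 0.5 * (1 - p), (1, 0): 0.5 * p,
            (0, 0): 0.5 * (1 - p), (0, 1): 0.5 * p}

def prior_x1(p):
    """P(X = 1): the same sharp value 1/2 for every member of the credal set."""
    j = joint(p)
    return j[(1, 1)] + j[(1, 0)]

def posterior_x1_given_y1(p):
    """P(X = 1 | Y = 1) = 1 - p, which sweeps out the whole of [0, 1]."""
    j = joint(p)
    return j[(1, 1)] / (j[(1, 1)] + j[(0, 1)])
```

Before the update, every prior in the set assigns X = 1 probability exactly 1/2; after conditioning on Y = 1, the posteriors range over the whole unit interval as p varies. The credal state has strictly widened, which is what makes dilation look like an anti-learning defect.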
We review the question of whether objective chances are compatible with determinism. We first outline what we mean by chance and what we mean by determinism. We then look at the alleged incompatibility between those concepts. Finally, we look at some ways that one might attempt to overcome the incompatibility.
There is currently much discussion about how decision making should proceed when an agent's degrees of belief are imprecise, that is, represented by a set of probability functions. I show that decision rules recently discussed by Sarah Moss, Susanna Rinard, and Rohan Sud all suffer from the same defect: they all struggle to rationalize diachronic ambiguity aversion. Since ambiguity aversion is among the motivations for imprecise credence, this suggests that the search for an adequate imprecise decision rule is not yet over.
Rational credence should be coherent in the sense that your attitudes should not leave you open to a sure loss. Rational credence should be such that you can learn when confronted with relevant evidence. Rational credence should not be sensitive to irrelevant differences in the presentation of the epistemic situation. We explore the extent to which orthodox probabilistic approaches to rational credence can satisfy these three desiderata and find them wanting. We demonstrate that an imprecise probability approach does better. Along the way we shall demonstrate that the problem of “belief inertia” is not an issue for a large class of IP credences, and provide a solution to van Fraassen’s box factory puzzle.
This paper presents a decision problem called the holiday puzzle. The decision problem is one that involves incommensurable goods and sequences of choices. This puzzle points to a tension between three prima facie plausible, but jointly incompatible claims. I present a way out of the trilemma which demonstrates that it is possible for agents to have incomplete preferences and to be dynamically rational. The solution also suggests that the relationship between preference and rational permission is more subtle than standardly assumed.
When do probability distribution functions (PDFs) about future climate misrepresent uncertainty? How can we recognise when such misrepresentation occurs and thus avoid it in reasoning about or communicating our uncertainty? And when we should not use a PDF, what should we do instead? In this paper we address these three questions. We start by providing a classification of types of uncertainty and using this classification to illustrate when PDFs misrepresent our uncertainty in a way that may adversely affect decisions. We then discuss when it is reasonable and appropriate to use a PDF to reason about or communicate uncertainty about climate. We consider two perspectives on this issue. On one, which we argue is preferable, available theory and evidence in climate science basically excludes using PDFs to represent our uncertainty. On the other, PDFs can legitimately be provided when resting on appropriate expert judgement and recognition of associated risks. Once we have specified the border between appropriate and inappropriate uses of PDFs, we explore alternatives to their use. We briefly describe two formal alternatives, namely imprecise probabilities and possibilistic distribution functions, as well as informal possibilistic alternatives. We suggest that the possibilistic alternatives are preferable.
This volume is a serious attempt to open up the subject of European philosophy of science to real thought, to provide the structural basis for the interdisciplinary development of its specialist fields, and to provoke reflection on the idea of ‘European philosophy of science’. These efforts should foster contemporaneous reflection on what might be meant by philosophy of science in Europe and European philosophy of science, and on how awareness of it could assist philosophers in interpreting and motivating their research through a stronger collective identity. The overarching aim is to set the background for a collaborative project organising, systematising, and ultimately forging an identity for European philosophy of science by creating research structures and developing research networks across Europe to promote its development.
Imprecise probabilities are an increasingly popular way of reasoning about rational credence. However, they appear to fail to exhibit convincing inductive learning. This paper demonstrates that a small modification to the update rule for IP allows us to overcome this problem, albeit at the cost of satisfying only a weaker concept of coherence.
It is well known that the convex hull of the classical truth value functions contains all and only the probability functions. Work by Paris and Williams has shown that this also holds for various kinds of nonclassical logics too. This note summarises the formal details of this topic and extends the results slightly.
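The containment claim in the first sentence can be checked concretely for a two-atom language (a sketch of my own, using a tiny hand-rolled syntax rather than a general parser): any convex mixture of the four classical valuations behaves like a probability function, satisfying, for instance, P(p or q) = P(p) + P(q) - P(p and q).

```python
import itertools
import random

# The four classical valuations of a two-atom language {p, q}.
valuations = [{"p": vp, "q": vq}
              for vp, vq in itertools.product([0, 1], repeat=2)]

def tv(v, sentence):
    """Truth value (0 or 1) of a sentence under valuation v."""
    if sentence == "p":
        return v["p"]
    if sentence == "q":
        return v["q"]
    if sentence == "p&q":
        return v["p"] * v["q"]
    if sentence == "p|q":
        return max(v["p"], v["q"])
    raise ValueError(sentence)

def mixture(weights):
    """The convex combination of the valuations with the given weights."""
    return lambda s: sum(w * tv(v, s) for w, v in zip(weights, valuations))

random.seed(0)
raw = [random.random() for _ in valuations]
P = mixture([x / sum(raw) for x in raw])    # an arbitrary convex mixture

# The mixture satisfies finite additivity (up to float tolerance):
assert abs(P("p|q") - (P("p") + P("q") - P("p&q"))) < 1e-12
```

The identity holds because each 0/1 valuation already satisfies it (max(a, b) = a + b - a*b for bits), and the property is preserved under convex combination; the converse direction, that every probability function arises this way, is the substantive half of the result the note summarises.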
There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how we cope with some of these sources of error through sophisticated modelling techniques. I show how we maintain confidence in the face of error.
In a recent paper, Samir Okasha presented an argument that suggests that there is no rational way to choose among scientific theories. This would seriously undermine the view that science is a rational enterprise. In this paper I show how a suitably nuanced view of what scientific rationality requires allows us to avoid Okasha’s conclusion. I go on to argue that making further assumptions about the space of possible scientific theories allows us to make scientific rationality more contentful. I then show how such a view of scientific rationality fits with what Thomas Kuhn thought.
In this paper we argue that there is a problem with the conjunction of David Lewis' account of counterfactual conditionals and his account of laws of nature. This is a pressing problem since both accounts are individually plausible, and popular.
"Chance" crops up all over philosophy, and in many other areas. It is often assumed -- without argument -- that chances are probabilities. I explore the extent to which this assumption is really sanctioned by what we understand by the concept of chance.
It is important to have an adequate model of uncertainty, since decisions must be made before the uncertainty can be resolved. For instance, flood defenses must be designed before we know the future distribution of flood events. It is standardly assumed that probability theory offers the best model of uncertain information. I think there are reasons to be sceptical of this claim. I criticise some arguments for the claim that probability theory is the only adequate model of uncertainty. In particular, I critique Dutch book arguments, representation theorems, and accuracy-based arguments. Then I put forward my preferred model: imprecise probabilities. These are sets of probability measures. I offer several motivations for this model of uncertain belief, and suggest a number of interpretations of the framework. I also defend the model against some criticisms, including the so-called problem of dilation. I apply this framework to decision problems in the abstract. I discuss some decision rules from the literature including Levi’s E-admissibility and the more permissive rule favoured by Walley, among others. I then point towards some applications to climate decisions. My conclusions are largely negative: decision making under such severe uncertainty is inevitably difficult. I finish with a case study of scientific uncertainty. Climate modellers attempt to offer probabilistic forecasts of future climate change. There is reason to be sceptical that the model probabilities offered really do reflect the chances of future climate change, at least at regional scales and long lead times. Indeed, scientific uncertainty is multi-dimensional, and difficult to quantify. I argue that probability theory is not an adequate representation of the kinds of severe uncertainty that arise in some areas in science. I claim that this requires that we look for a better framework for modelling uncertainty.
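Levi's E-admissibility rule mentioned above can be sketched in a few lines (the states, options, and utilities are invented for illustration and echo the flood-defence example only loosely): an option is E-admissible iff it maximises expected utility relative to at least one probability function in the credal set.

```python
# Two states, three options, and a credal set of two sharp priors.
# All names and numbers are made up for illustration.
options = {
    "high_defence": {"flood": 10, "no_flood": -5},
    "low_defence":  {"flood": -20, "no_flood": 5},
    "hedge":        {"flood": 0, "no_flood": 0},
}
credal_set = [{"flood": 0.8, "no_flood": 0.2},
              {"flood": 0.1, "no_flood": 0.9}]

def expected_utility(option, p):
    return sum(p[s] * u for s, u in options[option].items())

def e_admissible(options, credal_set):
    """Options that maximise expected utility under SOME prior in the set."""
    admissible = set()
    for p in credal_set:
        top = max(expected_utility(o, p) for o in options)
        admissible |= {o for o in options if expected_utility(o, p) == top}
    return admissible

print(sorted(e_admissible(options, credal_set)))
# → ['high_defence', 'low_defence']
```

Under the first prior only high_defence is optimal, under the second only low_defence, and hedge is optimal under no member of the set; so hedge is E-inadmissible even though it is never the worst option. This illustrates why E-admissibility is less permissive than rules that merely rule out dominated options.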