
Values and Uncertainty in Simulation Models

Abstract

In this paper I argue for a distinction between the subjective and the value-laden aspects of judgements, showing why equating the former with the latter has the potential to confuse matters when the goal is uncovering the influence of political considerations on scientific practice. I will focus on three separate but interrelated issues. The first concerns the issue of ‘verification’ in computational modelling. This is a practice that involves a number of formal techniques, but as I show, even these allegedly objective methods ultimately rely on subjective estimation and evaluation of different types of parameters. This has implications for my second point, which relates to uncertainty quantification, an assessment of the degree of uncertainty present in a particular modelling scenario. I argue that while this practice also involves subjective elements, in no way does that detract from its status as an epistemic exercise. Finally I discuss the relation between accuracy and uncertainty and how each relates to judgements that embody social/ethical/political concerns, in particular those associated with high-consequence systems.

Notes

  1. I will have more to say about V&V below.

  2. See, for example, Winsberg (2012).

  3. The way in which discretization errors can result in uncertainties will be discussed below.

  4. I will distinguish below between this kind of error and the more technical use of the term which refers to the deviation from the true value of a quantity.

  5. An important question regarding epistemic uncertainty is how to deal with uncertainty that arises as a result of decisions to idealise or ignore features of a system when constructing a conceptual or mathematical model. Since this is a constant feature of scientific practice, this type of uncertainty will always be present regardless of whether the quantities in question involve approximations or not. A similar situation arises when one discretizes the mathematical model, since a loss of information will also accompany such a process. I will have more to say about discretization error below in the discussion of verification and validation.

  6. As a result the method of manufactured solutions (MMS) is often used. It involves, as the label suggests, manufacturing an exact solution to a modified equation. The general method is to choose a solution a priori and then operate the governing PDEs onto the chosen solution. Essentially it is a solution to a backward problem: given an original set of equations and a chosen solution, find a modified set of equations that the chosen solution will satisfy. Although the solutions need not be physically realistic (since verification essentially deals with mathematical considerations) they should nevertheless obey physical constraints that are built into the code. For details of the method along with a discussion of benefits and drawbacks see Oberkampf and Roy (2010, pp. 225–234).
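To make the backward character of the method concrete, here is a minimal sketch of my own, not drawn from the paper: the governing equation (the 1D steady heat equation) and the chosen solution are assumptions made purely for illustration.

```python
# Illustrative sketch of the method of manufactured solutions (MMS) for the
# 1D steady heat equation -u''(x) = f(x); equation and solution chosen here
# only for illustration, not taken from the paper.
import sympy as sp

x = sp.symbols('x')

# Step 1: choose a solution a priori (it need not be physically realistic).
u_manufactured = sp.sin(sp.pi * x) + x**2

# Step 2: operate the governing equation on the chosen solution to obtain
# the source term that makes it an exact solution of the modified problem.
f = sp.simplify(-sp.diff(u_manufactured, x, 2))

print(f)  # pi**2*sin(pi*x) - 2

# A code that discretises -u'' = f can now be run with this source term;
# comparing its numerical output against u_manufactured gives an exact error
# measure, which is what code verification (e.g. checking the observed order
# of accuracy) requires.
```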

  7. A further problem arises when a rigorously verified code (i.e. verified to second-order accuracy) is applied to a new problem. In these cases there is no estimate of the accuracy or confidence interval (the size of the error as opposed to the order of the error) unless grid convergence tests have been performed, which will band the numerical error (Roache 1997, 138). In that sense the use of a verified code is not enough to guarantee that the result is accurate.
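As a rough illustration of how such grid convergence tests band the numerical error, the following sketch (mine, with invented solution values and an assumed refinement ratio) computes an observed order of accuracy and a Richardson-style error estimate from three systematically refined grids.

```python
import math

# Hypothetical values of a quantity of interest computed on three grids,
# each refined by a constant factor r (numbers invented for illustration).
f_coarse, f_medium, f_fine = 1.0260, 1.0065, 1.0016
r = 2.0  # grid refinement ratio

# Observed order of accuracy p from the ratio of successive solution changes.
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Richardson-style estimate of the fine-grid discretisation error, i.e. an
# estimate of the size of the error rather than merely its order.
error_fine = (f_medium - f_fine) / (r**p - 1)

print(f"observed order p = {p:.2f}")          # close to 2 for a 2nd-order code
print(f"estimated fine-grid error = {error_fine:.5f}")
```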

  8. Verification of input data is also an important aspect of solution verification and includes such things as checks for consistency between model choices and verification of any software used to generate input data. Although these verification procedures are important, the issues they raise are less philosophically interesting than those associated with numerical error estimation.

  9. Although this type of problem is certainly simulation related, it is an instance of a more general computer science problem that arises from dealing with floating-point numbers, where exact values cannot be stored. In computing, floating point describes a method of representing real numbers that can support a wide range of values: numbers are, in general, represented approximately to a fixed number of significant digits and scaled using an exponent, the base for the scaling normally being 2, 10 or 16. The advantage of floating-point representation over fixed-point and integer representation is that it can support a much wider range of values. For example, a fixed-point representation that has seven decimal digits with two decimal places can represent the numbers 12345.67, 123.45, 1.23 and so on, whereas a floating-point representation (such as the IEEE 754 decimal32 format) with seven decimal digits could in addition represent 1.234567, 123456.7, 0.00001234567, 1234567000000000, and so on. The floating-point format needs slightly more storage (to encode the position of the radix point), so when stored in the same space, floating-point numbers achieve their greater range at the expense of precision. Floating-point values can thus be stored, and results calculated, to many decimal places of precision, but round-off error will always be present; to ensure that the results of floating-point routines are meaningful, the round-off error of such routines must always be quantified.
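A small self-contained illustration of the round-off behaviour described here, using Python's binary floating point; the particular numbers are my own and chosen only to make the effect visible.

```python
import math

# 0.1 has no exact binary floating-point representation, so repeated
# addition accumulates round-off error.
total = sum(0.1 for _ in range(10))
print(total)             # 0.9999999999999999, not 1.0
print(total == 1.0)      # False

# Quantifying the error of the routine rather than ignoring it:
print(abs(total - 1.0))          # ~1.1e-16, the accumulated round-off
print(math.isclose(total, 1.0))  # True once an explicit tolerance is used
```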

  10. Smith (1985) has discussed the problem of ‘programme verification’ (aka code verification) and argues against the relevant practices being identified as verification. He claims that the fact that a programme has been “proven correct” is no guarantee that it will do what you intend; a claim that contributes to his larger argument that, for fundamental reasons, there are inherent limitations to what can be proven about computers and their programmes. One of the fundamental reasons he cites has to do with the role of models and the lack of any guarantee about the relation between the models embedded in the computer programme and the world. Although this latter problem is addressed in contemporary methodology by the validation step, Cantwell Smith makes an interesting point about the nature of what he calls the “correctness” of the programme. A proof of correctness is simply a proof that any system that obeys the programme will satisfy the specification, where the latter is a formal description, based on the model, that specifies what the proper or expected behaviour should be. The programme details how that behaviour is to be achieved. His example is a specification for a milk delivery system: “Make one milk delivery at each store, driving the shortest possible distance in total”. There may be different ways to accomplish this task, and those instructions are what is embedded in the programme. So the correctness proof is simply a proof that two characterisations of something are compatible or consistent. But, as Cantwell Smith points out, this isn’t “correctness” in any strong sense of the term; it is simply relative consistency, which should be taken as indicating nothing more than the fact that the programme is reliable in designated situations for a substantial period of time.

  11. All of the literature on simulation characterizes the difference between verification and validation as a mathematics versus physics problem.

  12. For a complete discussion of validation experiments and the notion of a validation hierarchy see Oberkampf and Roy (2010).

  13. An interesting discussion, but not one that I can go into here, is the role of type 1 and type 2 errors in the validation process. See Oberkampf and Trucano (2002, 257) for an account of this and other difficulties related to hypothesis testing.

  14. It is important to point out that for any particular mathematical model of a system we should distinguish between parametric uncertainty and model form uncertainty. The former can be entirely aleatoric, relating to stochastic parameters in the model. The latter is fundamentally epistemic and concerns actual changes in the model or the selection of one model in a class. Although they usually occur together, the concern here is parametric uncertainty only.
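For illustration only, the following sketch (my own toy model, not from the paper) shows what propagating purely parametric uncertainty looks like: the model form is held fixed while a stochastic parameter is sampled; changing the function itself would instead be a matter of model form uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy model: the quantity of interest depends on a single parameter k.
def model(k):
    return 1.0 - np.exp(-k)

# Parametric (aleatoric) uncertainty: k is stochastic, so its assumed
# distribution is propagated through the fixed model form by Monte Carlo.
k_samples = rng.normal(loc=2.0, scale=0.3, size=10_000)
outputs = model(k_samples)
print(f"mean = {outputs.mean():.3f}, std = {outputs.std():.3f}")

# Model form uncertainty, by contrast, would involve changing model() itself
# (e.g. swapping in a different functional form), not just its parameters.
```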

  15. One such approach, discussed by Oberkampf and Roy (2010), involves the construction of a validation metric that compares the estimated mean of the computational results with the estimated mean of the experimental measurements. A statistical confidence interval is then computed that reflects the confidence in the estimation of the model accuracy given the uncertainty in the experimental data. Without going into the details of the construction of a validation metric, let me simply say that there are different methods one can employ, one of which is to compute the area between the probability boxes (p-boxes) resulting from the model data and the experimental measurements. The comparison in this case would be between the shapes of the two cumulative distribution functions, with the discrepancies between them capable of being measured.
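To give a rough sense of an area-based comparison of cumulative distribution functions, here is a simplified sketch of my own with invented data; a genuine validation metric would work with confidence intervals and p-boxes rather than the raw empirical CDFs used here.

```python
import numpy as np

# Invented samples standing in for repeated model runs and experimental
# measurements of the same quantity.
model_samples = np.array([10.2, 10.5, 10.8, 11.0, 11.3])
exper_samples = np.array([10.6, 10.9, 11.1, 11.4, 11.8])

def empirical_cdf(samples, grid):
    """Fraction of samples less than or equal to each grid point."""
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

# Evaluate both empirical CDFs on a common grid spanning the data.
grid = np.linspace(9.5, 12.5, 1001)
cdf_model = empirical_cdf(model_samples, grid)
cdf_exper = empirical_cdf(exper_samples, grid)

# Area between the two CDFs: a scalar, in the units of the quantity itself,
# measuring the discrepancy between model and experiment.
dx = grid[1] - grid[0]
area_metric = np.sum(np.abs(cdf_model - cdf_exper)) * dx
print(f"area validation metric = {area_metric:.3f}")
```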

  16. See Post (2004) for a discussion of this and other problems in computational science.

  17. I would like to thank Anthony Chemero and an anonymous referee for very helpful comments. I would also like to thank Angela Potochnik for organizing the conference and for her generous hospitality.

References

  • Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67, 559–579.

  • Jeffrey, R. C. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 22, 237–246.

  • Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, values, and frames. New York: Cambridge University Press.

  • Oberkampf, W. L., & Barone, M. F. (2006). Measures of agreement between computation and experiment: Validation metrics. Journal of Computational Physics, 217, 5–36.

  • Oberkampf, W. L., & Roy, C. J. (2010). Verification and validation in scientific computing. Cambridge: Cambridge University Press.

  • Oberkampf, W. L., & Trucano, T. (2002). Verification and validation in computational fluid dynamics. Progress in Aerospace Sciences, 38, 209–272.

  • Oberkampf, W. L., Trucano, T., & Hirsch, C. (2004). Verification, validation and predictive capability in computational engineering and physics. Applied Mechanics Review, 57, 345–384.

  • Post, D. (2004). The coming crisis in computational science. In: Proceedings of the IEEE international conference on high performance computer architecture. Madrid, Feb 2004. Los Alamos Report LA-UR-04-0388.

  • Roache, P. J. (1997). Quantification of uncertainty in computational fluid dynamics. Annual Review of Fluid Mechanics, 29, 123–160.

  • Rudner, R. (1953). The scientist Qua scientist makes value judgments. Philosophy of Science, 20, 1–6.

  • Smith, B. C. (1985). Limits of correctness in computers. SIGCAS, 14(4), 18–26.

  • Winsberg, E. (2012). Values and uncertainties in the predictions of global climate models. Kennedy Institute of Ethics Journal, 2, 111–137.

Author information

Correspondence to Margaret Morrison.

Cite this article

Morrison, M. Values and Uncertainty in Simulation Models. Erkenntnis 79 (Suppl 5), 939–959 (2014). https://doi.org/10.1007/s10670-013-9537-1
