A number of recent discussions comparing computer simulation and traditional experimentation have focused on the significance of “materiality.” I challenge several claims emerging from this work and suggest that computer simulation studies are material experiments in a straightforward sense. After discussing some of the implications of this material status for the epistemology of computer simulation, I consider the extent to which materiality (in a particular sense) is important when it comes to making justified inferences about target systems on the basis of experimental results.
We critically engage two traditional views of scientific data and outline a novel philosophical view that we call the pragmatic-representational view of data. On the PR view, data are representations that are the product of a process of inquiry, and they should be evaluated in terms of their adequacy or fitness for particular purposes. Some important implications of the PR view for data assessment, related to misrepresentation, context-sensitivity, and complementary use, are highlighted. The PR view provides insight into the common but little-discussed practices of iteratively reusing and repurposing data, which result in many datasets’ having a phylogeny—an origin and complex evolutionary history—that is relevant to their evaluation and future use. We relate these insights to the open-data and data-rescue movements, and highlight several future avenues of research that build on the PR view of data.
This article explores some of the roles of computer simulation in measurement. A model-based view of measurement is adopted and three types of measurement—direct, derived, and complex—are distinguished. It is argued that while computer simulations on their own are not measurement processes, in principle they can be embedded in direct, derived, and complex measurement practices in such a way that simulation results constitute measurement outcomes. Atmospheric data assimilation is then considered as a case study. This practice, which involves combining information from conventional observations and simulation-based forecasts, is characterized as a complex measuring practice that is still under development. The case study reveals challenges that are likely to resurface in other measuring practices that embed computer simulation. It is also noted that some practices that embed simulation are difficult to classify; they suggest a fuzzy boundary between measurement and non-measurement. Contents: 1 Introduction; 2 A Contemporary View of Measurement; 3 Three Types of Measurement; 4 Can Computer Simulations Measure Real-World Target Systems?; 5 Case Study: Atmospheric Data Assimilation; 5.1 Why data assimilation?; 5.2 A complex measuring practice under development; 5.3 Epistemic iteration; 6 The Boundaries of Measurement; 7 Epistemology, Not Terminology.
This article identifies conditions under which robust predictive modeling results have special epistemic significance (related to truth, confidence, and security) and considers whether those conditions hold in the context of present-day climate modeling. The findings are disappointing. When today’s climate models agree that an interesting hypothesis about future climate change is true, it cannot be inferred, via the arguments considered here anyway, that the hypothesis is likely to be true or that scientists’ confidence in the hypothesis should be significantly increased or that a claim to have evidence for the hypothesis is now more secure.
To study Earth’s climate, scientists now use a variety of computer simulation models. These models disagree in some of their assumptions about the climate system, yet they are used together as complementary resources for investigating future climatic change. This paper examines and defends this use of incompatible models. I argue that climate model pluralism results both from uncertainty concerning how to best represent the climate system and from difficulties faced in evaluating the relative merits of complex models. I describe how incompatible climate models are used together in ‘multi-model ensembles’ and explain why this practice is reasonable, given scientists’ inability to identify a ‘best’ model for predicting future climate. Finally, I characterize climate model pluralism as involving both an ontic competitive pluralism and a pragmatic integrative pluralism.
Philosophers continue to debate both the actual and the ideal roles of values in science. Recently, Eric Winsberg has offered a novel, model-based challenge to those who argue that the internal workings of science can and should be kept free from the influence of social values. He contends that model-based assignments of probability to hypotheses about future climate change are unavoidably influenced by social values. I raise two objections to Winsberg’s argument, neither of which can wholly undermine its conclusion but each of which suggests that his argument exaggerates the influence of social values on estimates of uncertainty in climate prediction. I then show how a more traditional challenge to the value-free ideal seems tailor-made for the climate context.
Lloyd (2009) contends that climate models are confirmed by various instances of fit between their output and observational data. The present paper argues that what these instances of fit might confirm are not climate models themselves, but rather hypotheses about the adequacy of climate models for particular purposes. This required shift in thinking—from confirming climate models to confirming their adequacy-for-purpose—may sound trivial, but it is shown to complicate the evaluation of climate models considerably, both in principle and in practice.
We call attention to an underappreciated way in which non-epistemic values influence evidence evaluation in science. Our argument draws upon some well-known features of scientific modeling. We show that, when scientific models stand in for background knowledge in Bayesian and other probabilistic methods for evidence evaluation, conclusions can be influenced by the non-epistemic values that shaped the setting of priorities in model development. Moreover, it is often infeasible to correct for this influence. We further suggest that, while this value influence is not particularly prone to the problem of wishful thinking, it could have problematic non-epistemic consequences in some cases.
After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.
Simulation-based weather and climate prediction now involves the use of methods that reflect a deep concern with uncertainty. These methods, known as ensemble prediction methods, produce multiple simulations for predictive periods of interest, using different initial conditions, parameter values and/or model structures. This paper provides a non-technical overview of current ensemble methods and considers how the results of studies employing these methods should be interpreted, paying special attention to probabilistic interpretations. A key conclusion is that, while complicated inductive arguments might be given for the trustworthiness of probabilistic weather forecasts obtained from ensemble studies, analogous arguments are out of reach in the case of long-term climate prediction. In light of this, the paper considers how predictive uncertainty should be conveyed to decision makers.
This paper critically examines Weisberg’s weighted feature matching account of model-world similarity. A number of concerns are raised, including that Weisberg provides an account of what underlies scientific judgments of relative similarity, when what is desired is an account of the sorts of model-target similarities that are necessary or sufficient for achieving particular types of modeling goal. Other concerns relate to the details of the account, in particular to the content of feature sets, the nature of shared features and the assumed independence of feature weightings.
Today’s most sophisticated simulation studies of future climate employ not just one climate model but a number of models. I explain why this “ensemble” approach has been adopted—namely, as a means of taking account of uncertainty—and why a comprehensive investigation of uncertainty remains elusive. I then defend a middle ground between two camps in an ongoing debate over the transformation of ensemble results into probabilistic predictions of climate change, highlighting requirements that I refer to as ownership, justification, and robustness.
Increasingly there are calls for climate services to be “co-produced” with users, taking into account not only the basic information needs of users but also their value systems and decision contexts. What does this mean in practice? One way that user values can be incorporated into climate services is in the management of inductive risk. This involves understanding which errors in climate service products would have particularly negative consequences from the users’ perspective (e.g., underestimating rather than overestimating the change in an impact variable) and then prioritizing the avoidance of those errors. This essay shows how inductive risk could be managed in climate services in ways that serve user values and argues that there are both ethical and practical reasons in favor of doing so.
The theoretical foundations of climate science have received little attention from philosophers thus far, despite a number of outstanding issues. We provide a brief, non-technical overview of several of these issues – related to theorizing about climates, climate change, internal variability and more – and attempt to make preliminary progress in addressing some of them. In doing so, we hope to open a new thread of discussion in the emerging area of philosophy of climate science, focused on theoretical foundations.
Allan Franklin has identified a number of strategies that scientists use to build confidence in experimental results. This paper shows that Franklin's strategies have direct analogues in the context of computer simulation and then suggests that one of his strategies—the so-called 'Sherlock Holmes' strategy—deserves a privileged place within the epistemologies of experiment and simulation. In particular, it is argued that while the successful application of even several of Franklin's other strategies (or their analogues in simulation) may not be sufficient for justified belief in results, the successful application of a slightly elaborated version of the Sherlock Holmes strategy is sufficient.
Climate change fingerprint studies investigate the causes of recent climate change. I argue that these studies have much in common with Steel’s (2008) streamlined comparative process tracing, illustrating a mechanisms-based approach to extrapolation in which the mechanisms of interest are simulated rather than physically instantiated. I then explain why robustness and variety-of-evidence considerations turn out to be important for understanding the evidential value of climate change fingerprint studies.
We explore three questions about Earth system modeling that are of both scientific and philosophical interest: What kind of understanding can be gained via complex Earth system models? How can the limits of understanding be bypassed or managed? How should the task of evaluating Earth system models be conceptualized?
In 1904, Norwegian physicist Vilhelm Bjerknes published what would become a landmark paper in the history of meteorology. In that paper, he proposed that daily weather forecasts could be made by calculating later states of the atmosphere from an earlier state using the laws of hydrodynamics and thermodynamics (Bjerknes 1904). He outlined a set of differential equations to be solved and advocated the development of graphical and numerical solution methods, since analytic solution was out of the question. Using these theory-based equations to produce daily forecasts, however, turned out to be more difficult than anticipated. Graphical solution techniques had limited success, and a first attempt to use ..
An uncertainty report describes the extent of an agent’s uncertainty about some matter. We identify two basic requirements for uncertainty reports, which we call faithfulness and completeness. We then discuss two pitfalls of uncertainty assessment that often result in reports that fail to meet these requirements. The first involves adopting a one-size-fits-all approach to the representation of uncertainty, while the second involves failing to take account of the risk of surprises. In connection with the latter, we respond to the objection that it is impossible to account for the risk of genuine surprises. After outlining some steps that both scientists and the bodies who commission uncertainty assessments can take to help avoid these pitfalls, we explain why striving for faithfulness and completeness is important.
As a device used by scientists in the course of performing research, the digital computer might be considered a scientific instrument. But if so, what is it an instrument for? This paper explores a number of answers to this question, focusing on the use of computers in a simulating mode.
Only a decade ago, the topic of scientific understanding remained one that philosophers of science largely avoided. Earlier discussions by Hempel and others had branded scientific understanding a mere subjective state or feeling, one to be studied by psychologists perhaps, but not an important or fruitful focus for philosophers of science. Even as scientific explanation became a central topic in philosophy of science, little attention was given to understanding. Over the last decade, however, this situation has changed. Analyses of scientific understanding that do not treat it as a subjective state or feeling have been offered and debated, and both the epistemic value and the pitfalls of purported psychological ..
Computer simulation modeling is an important part of contemporary scientific practice but has not yet received much attention from philosophers. The present project helps to fill this lacuna in the philosophical literature by addressing three questions that arise in the context of computer simulation of Earth's climate. Computer simulation experimentation commonly is viewed as a suspect methodology, in contrast to the trusted mainstay of material experimentation. Are the results of computer simulation experiments somehow deeply problematic in ways that the results of material experiments are not? I argue against categorical skepticism toward the results of computer simulation experiments by revealing important parallels in the epistemologies of material and computer simulation experimentation. It has often been remarked that simple computer simulation models (but not complex ones) contribute substantially to our understanding of the atmosphere and climate system. Is this view of the relative contributions of simple and complex models tenable? I show that both simple and complex climate models can promote scientific understanding and argue that the apparent contribution of simple models depends upon whether a causal or deductive account of scientific understanding is adopted. When two incompatible scientific theories are under consideration, they typically are viewed as competitors, and we seek evidence that refutes at least one of the theories. In the study of climate change, however, logically incompatible computer simulation models are accepted as complementary resources for investigating future climate. How can we make sense of this use of incompatible models? I show that a collection of incompatible climate models persists in part because of difficulties faced in evaluating and comparing climate models.
I then discuss the rationale for using these incompatible models together and argue that this climate model pluralism has both competitive and integrative components.
Proceedings of the Pittsburgh Workshop in History and Philosophy of Biology, Center for Philosophy of Science, University of Pittsburgh, March 23–24, 2001. Session 2: Female Orgasms and Evolutionary Theory.
While sexual and gender minorities are at increased risk for poor health outcomes, there is limited data regarding patient-provider interactions. In this study, we explored the perspectives of LGBTQ patients and their encounters with physicians in order to improve our understanding of patient-physician experiences. Using purposive selection of self-identified LGBTQ patients, we performed fourteen in-depth semi-structured interviews on topics of sexual orientation and gender identity, as well as their perceived role in the patient-provider relationship. Coding using a modified grounded theory approach was performed to generate themes. We identified three major themes that demonstrate the complexity of LGBTQ patient experiences. The first, Lacking trust, identifies mistrust and loss of the physician-patient relationship resulting from physicians’ poor or judgmental communication, or from physicians making assumptions about gender, using incorrect pronouns, and not recognizing heterogeneity within the transgender community. A second theme, Being vulnerable, describes the challenges and fears related to comfort of patients with disclosing their sexual orientation and/or gender identity. A final theme, Navigating discrimination, outlines racial or ethnic discrimination which creates an additional burden on top of illness and stigmatized identity. Our results reveal the complex needs of individuals with multiple stigmatized identities when developing relationships with providers. By using an intersectional perspective that appreciates the plurality of patients’ identities, providers can help to improve their relationships with LGBTQ patients. Incorporating intersectional training for medical students and residents could greatly benefit both LGBTQ patients and their physicians.
Computer simulation and philosophy of science. Wendy S. Parker, Department of Philosophy, Ellis Hall 202, Ohio University, Athens, OH 45701, USA. Metascience, pp. 1–4. DOI: 10.1007/s11016-011-9567-8. Online ISSN 1467-9981; Print ISSN 0815-0796.
Can computer simulation results be evidence for hypotheses about real-world systems and phenomena? If so, what sort of evidence? Can we gain genuinely new knowledge of the world via simulation? I argue that evidence from computer simulation is aptly characterized as higher-order evidence: it is evidence that other evidence regarding a hypothesis about the world has been collected. Insofar as particular epistemic agents do not have this other evidence, it is possible that they will gain genuinely new knowledge of the world via simulation. I illustrate with examples inspired by uses of simulation in meteorology and astrophysics.
This chapter identifies conditions under which robust predictive modeling results have special epistemic significance—related to truth, confidence, and security—and considers whether those conditions are met in the context of climate modeling today. The findings are disappointing. When today’s climate models agree that an interesting hypothesis about future climate change is true, it cannot be inferred, via the arguments considered here anyway, that the hypothesis is likely to be true, nor that confidence in the hypothesis should be significantly increased, nor that a claim to have evidence for the hypothesis is now more secure. In some other modeling contexts, the prospects for such arguments are brighter.
The Supreme Court ended its last term by making unconstitutional a choice Brown v. Board of Education once required - the voluntary, and race conscious, pursuit of integration - to little public outcry. As a society, we continue to find comfort in segregation. This Article argues that this acceptance is wrong, both educationally and constitutionally. It does so through the lens of teacher segregation, a topic all but ignored in the current literature. The first step of this argument is demonstrating, by an original empirical study, the segregation of teachers, thereby proving a more profound school segregation than is generally recognized. The second step establishes, through an extensive review of the existing social science literature, that the typical minority student benefits from integration because it ensures a fundamental resource - experienced teachers. Lastly, the inherent inequality of segregation should have constitutional implications. While the Rehnquist Court utilized an interest balancing approach to the Equal Protection Clause and recognized the constitutional harms of segregation, the Roberts Court has begun to minimize the Equal Protection Clause to concern only capitalizing individual treatment and has erred in creating a "constitutional chill" toward the value of integration.