Models as Mediators discusses the ways in which models function in modern science, particularly in the fields of physics and economics. Models play a variety of roles in the sciences: they are used in the development, exploration and application of theories and in measurement methods. They also provide instruments for using scientific concepts and principles to intervene in the world. The editors provide a framework which covers the construction and function of scientific models, and explore the ways in which they enable us to learn about both theories and the world. The contributors to the volume offer their own individual theoretical perspectives to cover a wide range of examples of modelling, from physics, economics and chemistry. These papers provide ideal case-study material for understanding both the concepts and typical elements of modelling, using analytical approaches from the philosophy and history of science.
A general account of modeling in physics is proposed. Modeling is shown to involve three components: denotation, demonstration, and interpretation. Elements of the physical world are denoted by elements of the model; the model possesses an internal dynamic that allows us to demonstrate theoretical conclusions; these in turn need to be interpreted if we are to make predictions. The DDI account can be readily extended in ways that correspond to different aspects of scientific practice.
Non-actual model systems discussed in scientific theories are compared to fictions in literature. This comparison may help with the understanding of similarity relations between models and real-world target systems. The ontological problems surrounding fictions in science may be particularly difficult, however. A comparison is also made to ontological problems that arise in the philosophy of mathematics.
This article discusses minimal model explanations, which we argue are distinct from the causal, mechanical, difference-making, and similar strategies prominent in the philosophical literature. We contend that what accounts for the explanatory power of these models is not that they have certain features in common with real systems. Rather, the models are explanatory because of a story about why a class of systems will all display the same large-scale behavior: the details that distinguish them are irrelevant. This story explains patterns across extremely diverse systems and shows how minimal models can be used to understand real systems.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. Here, however, agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling, given its focus on the relationship between a single model and its supposed target system, and its neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
Most scientific models are not physical objects, and this raises important questions. What sort of entity are models, what is truth in a model, and how do we learn about models? In this paper I argue that models share important aspects with literary fiction, and that theories of fiction can therefore be brought to bear on these questions. In particular, I argue that the pretence theory as developed by Walton has the resources to answer these questions. I introduce this account, outline the answers that it offers, and develop a general picture of scientific modelling based on it.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view, it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as “measuring instruments” and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation, and experiment, one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies, and highlighting the ways in which those strategies are essential features of experimentation, allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
Mechanistic explanation has an impressive track record of advancing our understanding of complex, hierarchically organized physical systems, particularly biological and neural systems. But not every complex system can be understood mechanistically. Psychological capacities are often understood by providing cognitive models of the systems that underlie them. I argue that these models, while superficially similar to mechanistic models, in fact have a substantially more complex relation to the real underlying system. They are typically constructed using a range of techniques for abstracting the functional properties of the system, which may not coincide with its mechanistic organization. I describe these techniques and show that despite being non-mechanistic, these cognitive models can satisfy the normative constraints on good explanations.
Models are of central importance in many scientific contexts. The centrality of models such as the billiard ball model of a gas, the Bohr model of the atom, the MIT bag model of the nucleon, the Gaussian-chain model of a polymer, the Lorenz model of the atmosphere, the Lotka-Volterra model of predator-prey interaction, the double helix model of DNA, agent-based and evolutionary models in the social sciences, or general equilibrium models of markets in their respective domains are cases in point. Scientists spend a great deal of time building, testing, comparing and revising models, and much journal space is dedicated to introducing, applying and interpreting these valuable tools. In short, models are one of the principal instruments of modern science.
Most recent philosophical thought about the scientific representation of the world has focused on dyadic relationships between language-like entities and the world, particularly the semantic relationships of reference and truth. Drawing inspiration from diverse sources, I argue that we should focus on the pragmatic activity of representing, so that the basic representational relationship has the form: Scientists use models to represent aspects of the world for specific purposes. Leaving aside the terms "law" and "theory," I distinguish principles, specific conditions, models, hypotheses, and generalizations. I argue that scientists use designated similarities between models and aspects of the world to form both hypotheses and generalizations.
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account. Contents: 1. Introduction; 2. Volterra and Theoretical Modelling; 3. Drosophila as a Model Organism; 4. Generalizing from Work on Model Organisms; 5. Phylogenetic Inference and Model Organisms; 6. Further Roles of Model Organisms (6.1 Preparative experimentation; 6.2 Model organisms as paradigms; 6.3 Model organisms as theoretical models; 6.4 Inspiration for engineers; 6.5 Anchoring a research community); 7. Conclusion.
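The Lotka–Volterra model mentioned above is easy to state concretely. The following is a generic textbook sketch with a forward-Euler integrator; the function name and parameter values are illustrative choices of mine, not taken from the paper:

```python
def lotka_volterra(prey, pred, alpha, beta, delta, gamma, dt, steps):
    """Forward-Euler integration of dx/dt = ax - bxy, dy/dt = dxy - gy."""
    traj = [(prey, pred)]
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt
        dpred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        traj.append((prey, pred))
    return traj

# Illustrative parameters: prey grow at rate alpha and are eaten at rate beta;
# predators convert prey at rate delta and die off at rate gamma.
traj = lotka_volterra(prey=10.0, pred=5.0, alpha=1.1, beta=0.4,
                      delta=0.1, gamma=0.4, dt=0.01, steps=1000)
```

The resulting trajectory shows the familiar predator-prey oscillations, and the explicit, known analogy between these equations and a real population is exactly the feature that, on the authors' account, distinguishes theoretical modelling from extrapolation based on model organisms.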
Causal models provide a framework for making counterfactual predictions, making them useful for evaluating the truth conditions of counterfactual sentences. However, current causal models for counterfactual semantics face limitations compared to the alternative similarity-based approach: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper argues that these limitations arise from the theory of interventions where intervening on variables requires changing structural equations rather than the values of variables. Using an alternative theory of exogenous interventions, this paper extends the causal approach to counterfactuals to handle more complex counterfactuals, including backtracking counterfactuals and those with logically complex antecedents. The theory also validates familiar principles of counterfactual logic and offers an explanation for counterfactual disagreement and backtracking readings of forward counterfactuals.
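The contrast drawn here, between intervening by rewriting a structural equation and intervening by changing exogenous values, can be sketched with a toy structural-equation model. The variables and equations below are my own illustration, not the paper's:

```python
# A toy structural-equation model: U is exogenous; X := U; Y := X + 1.
model = [("X", lambda v: v["U"]),
         ("Y", lambda v: v["X"] + 1)]

def solve(model, exogenous):
    """Evaluate the equations in causal order, given exogenous values."""
    values = dict(exogenous)
    for var, f in model:
        values[var] = f(values)
    return values

def do(model, var, value):
    """Classical intervention: overwrite var's structural equation with a
    constant, severing its dependence on its causes."""
    return [(v, (lambda _values: value) if v == var else f) for v, f in model]

# Intervening on the equation: do(X = 5) leaves U untouched.
after_do = solve(do(model, "X", 5), {"U": 0})   # X is forced to 5, U stays 0

# Exogenous intervention: keep every equation intact and instead change the
# exogenous input so that X takes the desired value.
after_exo = solve(model, {"U": 5})              # X comes out as 5 via U
```

In this toy case both routes give X = 5 and Y = 6, but they disagree about the exogenous history; roughly this difference is what allows an exogenous-intervention theory to recover backtracking readings that equation-replacement rules out.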
A classification of models of reduction into three categories — theory reductionism, explanatory reductionism, and constitutive reductionism — is presented. It is shown that this classification helps clarify the relations between various explications of reduction that have been offered in the past, especially if a distinction is maintained between the various epistemological and ontological issues that arise. A relatively new model of explanatory reduction, one that emphasizes that reduction is the explanation of a whole in terms of its parts, is also presented in detail. Finally, the classification is used to clarify the debate over reductionism in molecular biology. It is argued there that while no model from the category of theory reduction might be applicable in that case, models of explanatory reduction might yet capture the structure of the relevant explanations.
While agreeing that dynamical models play a major role in cognitive science, we reject Stepp, Chemero, and Turvey's contention that they constitute an alternative to mechanistic explanations. We review several problems dynamical models face as putative explanations when they are not grounded in mechanisms. Further, we argue that the opposition of dynamical models and mechanisms is a false one and that those dynamical models that characterize the operations of mechanisms overcome these problems. By briefly considering examples involving the generation of action potentials and circadian rhythms, we show how decomposing a mechanism and modeling its dynamics are complementary endeavors.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: First, the 'purity' of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved 'vicariously', such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate (or inadequate) for particular purposes.
Batterman and Rice argue that minimal models possess explanatory power that cannot be captured by what they call ‘common features’ approaches to explanation. Minimal models are explanatory, according to Batterman and Rice, not in virtue of accurately representing relevant features, but in virtue of answering three questions that provide a ‘story about why large classes of features are irrelevant to the explanandum phenomenon’ (p. 356). In this article, I argue, first, that a method (the renormalization group) they propose to answer the three questions cannot answer them, at least not by itself. Second, I argue that answers to the three questions are unnecessary to account for the explanatoriness of their minimal models. Finally, I argue that a common features account, what I call the ‘generalized ontic conception of explanation’, can capture the explanatoriness of minimal models.
Scientific discourse is rife with passages that appear to be ordinary descriptions of systems of interest in a particular discipline. Equally, the pages of textbooks and journals are filled with discussions of the properties and the behavior of those systems. Students of mechanics investigate at length the dynamical properties of a system consisting of two or three spinning spheres with homogeneous mass distributions gravitationally interacting only with each other. Population biologists study the evolution of one species procreating at a constant rate in an isolated ecosystem. And when studying the exchange of goods, economists consider a situation in which there are only two goods, two perfectly rational agents, no restrictions on available information, no transaction costs, no money, and dealings are done immediately. Their surface structure notwithstanding, no competent scientist would mistake descriptions of such systems for descriptions of an actual system: we know very well that there are no such systems. These descriptions are descriptions of a model-system, and scientists use model-systems to represent parts or aspects of the world they are interested in. Following common practice, I refer to those parts or aspects as target-systems. What are we to make of this? Is discourse about such models merely a picturesque and ultimately dispensable façon de parler? This was the view of some early twentieth century philosophers. Duhem (1906) famously guarded against confusing model building with scientific theorizing and argued that model building has no real place in science, beyond a minor heuristic role. The aim of science was, instead, to construct theories, with theories understood as classificatory or representative structures systematically presented and formulated in a precise symbolic language.
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and the predictions/retrodictions, especially once supported within a variety-of-evidence framework. I present climate models of greenhouse gas global warming of the 20th century as an example, and emphasize climate scientists’ discussions of robust models and causal aspects. The account is intended to be applicable to a broad array of sciences that use complex modeling techniques.
This book offers a discussion of how people think, talk, learn, and explain things in causal terms, that is, in terms of action and manipulation. Sloman also reviews the role of causality, causal models, and intervention in the basic human cognitive functions: decision making, reasoning, judgement, categorization, inductive inference, language, and learning.
Scientists have used models for hundreds of years as a means of describing phenomena and as a basis for further analogy. In _Scientific Models in Philosophy of Science_, Daniela Bailer-Jones assembles an original and comprehensive philosophical analysis of how models have been used and interpreted in both historical and contemporary contexts. Bailer-Jones delineates the many forms models can take, and how they are put to use. She examines early mechanical models employed by nineteenth-century physicists such as Kelvin and Maxwell, describes their roots in the mathematical principles of Newton and others, and compares them to contemporary mechanistic approaches. Bailer-Jones then views the use of analogy in the late nineteenth century as a means of understanding models and of linking different branches of science. She reveals how analogies can also be models themselves, or can help to create them. The first half of the twentieth century saw little mention of models in the literature of logical empiricism. Focusing primarily on theory, logical empiricists believed that models were of temporary importance, flawed, and awaiting correction. The later contesting of logical empiricism, particularly the hypothetico-deductive account of theories, by philosophers such as Mary Hesse sparked a renewed interest in the importance of models during the 1950s that continues to this day. Bailer-Jones analyzes subsequent propositions of: models as metaphors; Kuhn's concept of a paradigm; the Semantic View of theories; and the case study approaches of Cartwright and Morrison, among others. She then engages current debates on topics such as phenomena versus data, the distinctions between models and theories, the concepts of representation and realism, and the discerning of falsities in models.
[Correction Notice: An erratum for this article was reported in Vol 109 of Psychological Review. Due to circumstances that were beyond the control of the authors, the studies reported in "Models of Ecological Rationality: The Recognition Heuristic," by Daniel G. Goldstein and Gerd Gigerenzer overlap with studies reported in "The Recognition Heuristic: How Ignorance Makes Us Smart," by the same authors and with studies reported in "Inference From Ignorance: The Recognition Heuristic". In addition, Figure 3 in the Psychological Review article was originally published in the book chapter and should have carried a note saying that it was used by permission of Oxford University Press.] One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
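The decision rule at issue is simple enough to state directly. Here is a minimal sketch; the function name and the city examples are my own illustration:

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one of two objects is recognized, infer that it has the
    higher criterion value; otherwise the heuristic stays silent."""
    if a in recognized and b not in recognized:
        return a
    if b in recognized and a not in recognized:
        return b
    return None  # both or neither recognized: other knowledge must decide

# Which city is larger? Someone who has only heard of Munich picks Munich.
choice = recognition_heuristic("Munich", "Bielefeld", recognized={"Munich"})
```

Note that a person who recognizes fewer objects can apply the rule in more comparisons; under the conditions the authors specify, this is what produces the less-is-more effect.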
The program of research now known as the heuristics and biases approach began with a study of the statistical intuitions of experts, who were found to be excessively confident in the replicability of results from small samples. The persistence of such systematic errors in the intuitions of experts implied that their intuitive judgments may be governed by fundamentally different processes than the slower, more deliberate computations they had been trained to execute. The ancient idea that cognitive processes can be partitioned into two main families--traditionally called intuition and reason--is now widely embraced under the general label of dual-process theories. Dual-process models come in many flavors, but all distinguish cognitive operations that are quick and associative from others that are slow and governed by rules. To represent intuitive and deliberate reasoning, we borrow the terms "system 1" and "system 2" from Stanovich and West. In the following section, we present an attribute-substitution model of heuristic judgment, which assumes that difficult questions are often answered by substituting the answer to an easier question. Following sections introduce a research design for studying attribute substitution and discuss the controversy over the representativeness heuristic in the context of a dual-system view that we endorse. The final section situates representativeness within a broad family of prototype heuristics, in which properties of a prototypical exemplar dominate global judgments concerning an entire set.
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit. Contents: 1. Introduction; 2. Remarks about Models and Adequacy-for-Purpose; 3. Evidence for Calibration Can Also Yield Comparative Confirmation (3.1 Double-counting I; 3.2 Double-counting II); 4. Climate Science Examples: Comparative Confirmation in Practice (4.1 Confirmation due to better and worse best fits; 4.2 Confirmation due to more and less plausible forcings values); 5. Old Evidence; 6. Doubts about the Relevance of Past Data; 7. Non-comparative Confirmation and Catch-Alls; 8. Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice; 9. Concluding Remarks.
This paper analyses and explicates the explanatory characteristics of Schelling's checkerboard model of segregation. It argues that the explanation of the emergence of segregation which is based on the checkerboard model is a partial potential (theoretical) explanation. Yet it is also argued that despite its partiality, the checkerboard model is valuable because it improves our chances to provide better explanations of particular exemplifications of residential segregation. The paper establishes this argument by examining the several ways in which the checkerboard model has been explored in the literature. The examination of the checkerboard model also supports the view that the relation between the real world and models is complex, and models should be considered as mediators, or as instruments of investigation.
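Schelling's checkerboard model can be reproduced in a few lines. The following is a generic sketch of the standard dynamics; the grid size, tolerance threshold, and movement rule are illustrative choices of mine, not the paper's:

```python
import random

def step(grid, threshold):
    """One round of Schelling's dynamics: every discontent agent moves to a
    randomly chosen empty cell. Cells hold 'A', 'B', or None (empty)."""
    n = len(grid)

    def discontent(i, j):
        me = grid[i][j]
        if me is None:
            return False
        like = other = 0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < n:
                    if grid[ni][nj] == me:
                        like += 1
                    elif grid[ni][nj] is not None:
                        other += 1
        return (like + other) > 0 and like / (like + other) < threshold

    movers = [(i, j) for i in range(n) for j in range(n) if discontent(i, j)]
    empties = [(i, j) for i in range(n) for j in range(n) if grid[i][j] is None]
    for i, j in movers:
        if not empties:
            break
        ei, ej = empties.pop(random.randrange(len(empties)))
        grid[ei][ej], grid[i][j] = grid[i][j], None
        empties.append((i, j))

# A 10x10 board: 40 'A' agents, 40 'B' agents, 20 empty cells.
random.seed(0)
cells = ["A"] * 40 + ["B"] * 40 + [None] * 20
random.shuffle(cells)
grid = [cells[i * 10:(i + 1) * 10] for i in range(10)]
for _ in range(20):
    step(grid, threshold=0.3)
```

Runs of this kind typically drive the board toward visibly clustered neighbourhoods even though each agent tolerates being in a local minority, which is the emergence-of-segregation result the checkerboard model is invoked, partially and potentially, to explain.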
If models can be true, where is their truth located? Giere (Explaining Science, University of Chicago Press, Chicago, 1998) has suggested an account of theoretical models on which models themselves are not truth-valued. This paper suggests modifying Giere’s account without going all the way to purely pragmatic conceptions of truth, while giving pragmatics a prominent role in modeling and truth-acquisition. The strategy of the paper is to ask: if I want to relocate truth inside models, how do I get it, and what else do I need to accept and reject? In particular, what ideas about model and truth do I need? The case used as an illustration is the world’s first economic model, that of von Thünen (1826/1842) on agricultural land use in the highly idealized Isolated State.
What is it for a group to believe something? A summative account assumes that for a group to believe that p most members of the group must believe that p. Accounts of this type are commonly proposed in interpretation of everyday ascriptions of beliefs to groups. I argue that a nonsummative account corresponds better to our unexamined understanding of such ascriptions. In particular I propose what I refer to as the joint acceptance model of group belief. I argue that group beliefs according to the joint acceptance model are important phenomena whose aetiology and development require investigation. There is an analogous phenomenon of social or group preference, which social choice theory tends to ignore.
Detailed examinations of scientific practice have revealed that the use of idealized models in the sciences is pervasive. These models play a central role in not only the investigation and prediction of phenomena, but in their received scientific explanations as well. This has led philosophers of science to begin revising the traditional philosophical accounts of scientific explanation in order to make sense of this practice. These new model-based accounts of scientific explanation, however, raise a number of key questions: Can the fictions and falsehoods inherent in the modeling practice do real explanatory work? Do some highly abstract and mathematical models exhibit a noncausal form of scientific explanation? How can one distinguish an exploratory "how-possibly" model explanation from a genuine "how-actually" model explanation? Do modelers face tradeoffs such that a model that is optimized for yielding explanatory insight, for example, might fail to be the most predictively accurate, and vice versa? This chapter explores the various answers that have been given to these questions.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling-practices, and model-based science. I argue that the various entities and practices called 'models' and 'modeling-practices' are too diverse, too context-sensitive, and serve too many scientific purposes and roles to allow for a general philosophical analysis. From this recognition an alternative view emerges that I shall dub model anarchism.
What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena--including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, should the mind be modeled by digital computers, or by parallel-processing systems more like brains? Do computer programs consist of meaningless patterns, or do they embody (and explain) genuine meaning?
Although some previous studies have investigated the relationship between moral foundations and moral judgment development, the methods used have not been able to fully explore the relationship. In the present study, we used Bayesian Model Averaging (BMA) in order to address the limitations in the traditional regression methods that have been used previously. Results were consistent with previous findings that binding foundations are negatively correlated with post-conventional moral reasoning and positively correlated with maintaining norms and personal interest schemas. Going beyond previous studies, our results also showed a positive correlation between individualizing foundations and post-conventional moral reasoning. Implications are discussed, along with a detailed explanation of the novel BMA method, so that others in the field of moral education can use it in their own studies.
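At its core, Bayesian Model Averaging is a posterior-weighted combination of model outputs. The toy numbers below are made up for illustration, and real BMA obtains each model's marginal likelihood by integrating over that model's parameters rather than taking it as given:

```python
def bayesian_model_average(priors, likelihoods, predictions):
    """Weight each model's prediction by its posterior probability,
    P(M | D) proportional to P(D | M) * P(M)."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    posteriors = [u / z for u in unnorm]
    averaged = sum(w * y for w, y in zip(posteriors, predictions))
    return posteriors, averaged

# Two hypothetical models with equal priors; the second fits the data better,
# so it dominates both the posterior weights and the averaged prediction.
posteriors, averaged = bayesian_model_average(
    priors=[0.5, 0.5], likelihoods=[0.25, 0.75], predictions=[1.0, 2.0])
```

The appeal over picking a single regression model is visible even in the toy case: no one model is trusted outright, and the averaged estimate reflects how strongly the data favour each candidate.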
In a recent paper, Kaplan (Synthese 183:339–373, 2011) takes up the task of extending Craver’s (Explaining the Brain, 2007) mechanistic account of explanation in neuroscience to the new territory of computational neuroscience. He presents the model-to-mechanism mapping (3M) criterion as a condition for a model’s explanatory adequacy. This mechanistic approach is intended to replace earlier accounts which posited a level of computational analysis conceived as distinct and autonomous from underlying mechanistic details. In this paper I discuss work in computational neuroscience that creates difficulties for the mechanist project. Carandini and Heeger (Nat Rev Neurosci 13:51–62, 2012) propose that many neural response properties can be understood in terms of canonical neural computations. These are “standard computational modules that apply the same fundamental operations in a variety of contexts.” Importantly, these computations can have numerous biophysical realisations, and so straightforward examination of the mechanisms underlying these computations carries little explanatory weight. Through a comparison between this modelling approach and minimal models in other branches of science, I argue that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation. Such explanations cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
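The acyclicity constraint can be made concrete: a Bayesian network factorizes a joint distribution as a product of conditionals over a topological order of its graph, and such an order exists only if the graph has no directed cycles. A minimal illustrative check (node names hypothetical, not from the paper):

```python
def topological_order(nodes, edges):
    """Return a topological order of the directed graph, or None if it
    contains a cycle, in which case no Bayesian-net factorization
    P(X1..Xn) = prod_i P(Xi | parents(Xi)) over such an order exists."""
    indegree = {v: 0 for v in nodes}
    for u, v in edges:
        indegree[v] += 1
    frontier = [v for v in nodes if indegree[v] == 0]
    order = []
    while frontier:
        u = frontier.pop()
        order.append(u)
        for a, b in edges:
            if a == u:
                indegree[b] -= 1
                if indegree[b] == 0:
                    frontier.append(b)
    return order if len(order) == len(nodes) else None

# A feedback loop, as in many biochemical mechanisms, defeats the ordering:
acyclic = topological_order(["A", "B", "C"], [("A", "B"), ("B", "C")])
cyclic = topological_order(["A", "B", "C"], [("A", "B"), ("B", "C"), ("C", "A")])
```

The `None` result for the cyclic graph is precisely the prima facie problem the paper addresses: a mechanism with feedback has no valid node ordering over which to factorize.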
Many accounts of scientific modelling assume that models can be decomposed into the contributions made by their accurate and inaccurate parts. These accounts then argue that the inaccurate parts of the model can be justified by distorting only what is irrelevant. In this paper, I argue that this decompositional strategy requires three assumptions that are not typically met by our best scientific models. In response, I propose an alternative view in which idealized models are characterized as holistically distorted representations that are justified by allowing for the application of various modelling techniques.
After experiments with various economic systems, we appear to have conceded, to misquote Winston Churchill, that "free enterprise is the worst economic system, except all the others that have been tried." Affirming that conclusion, I shall argue that in today's expanding global economy, we need to revisit our mind-sets about corporate governance and leadership to fit what will be new kinds of free enterprise. The aim is to develop a values-based model for corporate governance in this age of globalization that will be appropriate in a variety of challenging cultural and economic settings. I shall present an analysis of mental models from a social constructivist perspective. I shall then develop the notion of moral imagination as one way to revisit traditional mind-sets about values-based corporate governance and outline what I mean by systems thinking. I shall conclude with examples for modeling corporate governance in multi-cultural settings and draw tentative conclusions about globalization.
We develop an account of laboratory models, which have been central to the group selection controversy. We compare arguments for group selection in nature with Darwin's arguments for natural selection to argue that laboratory models provide important grounds for causal claims about selection. Biologists get information about causes and cause-effect relationships in the laboratory because of the special role their own causal agency plays there. They can also get information about patterns of effects and antecedent conditions in nature. But to argue that some cause is actually responsible in nature, they require an inference from knowledge of causes in the laboratory context and of effects in the natural context. This process, cause detection, forms the core of an analogical argument for group selection. We discuss the differing roles of mathematical and laboratory models in constructing selective explanations at the group level and apply our discussion to the units of selection controversy to distinguish between the related problems of cause determination and evaluation of evidence. Because laboratory models are at the intersection of the two problems, their study is crucial for framing a coherent theory of explanation for evolutionary biology.
This paper constructs a model of metaphysical indeterminacy that can accommodate a kind of ‘deep’ worldly indeterminacy that arguably arises in quantum mechanics via the Kochen-Specker theorem, and that is incompatible with prominent theories of metaphysical indeterminacy such as that in Barnes and Williams (2011). We construct a variant of Barnes and Williams's theory that avoids this problem. Our version builds on situation semantics and uses incomplete, local situations rather than possible worlds to build a model. We evaluate the resulting theory and contrast it with similar alternatives, concluding that our model successfully captures deep indeterminacy.
Recent studies of emotion mindreading reveal that for three emotions, fear, disgust, and anger, deficits in face-based recognition are paired with deficits in the production of the same emotion. What type of mindreading process would explain this pattern of paired deficits? The simulation approach and the theorizing approach are examined to determine their compatibility with the existing evidence. We conclude that the simulation approach offers the best explanation of the data. What computational steps might be used, however, in simulation-style emotion detection? Four alternative models are explored: a generate-and-test model, a reverse simulation model, a variant of the reverse simulation model that employs an “as if” loop, and an unmediated resonance model.
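The first of the four models can be rendered schematically. This is only an illustrative reconstruction of a generate-and-test loop, not the authors' own specification; the "facial feature" vectors and emotion prototypes are invented for the example:

```python
def generate_and_test(observed_face, candidates, simulate_expression):
    """Schematic generate-and-test mindreading: hypothesize an emotion,
    simulate the facial expression it would produce, and accept the
    candidate whose simulated expression best matches the observed face."""
    def mismatch(emotion):
        simulated = simulate_expression(emotion)
        return sum((a - b) ** 2 for a, b in zip(simulated, observed_face))
    return min(candidates, key=mismatch)

# Toy prototype expressions; entirely illustrative.
PROTOTYPES = {"fear": (1.0, 0.0), "anger": (0.0, 1.0), "disgust": (0.5, 0.5)}
guess = generate_and_test((0.9, 0.1), list(PROTOTYPES), PROTOTYPES.get)
```

The sketch makes the model's commitment visible: attribution runs through the observer's own production system (the simulation step), which is what would predict the paired recognition-production deficits.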
This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data treatment and visualization, and how these factors have led to the spread of simulation models since the 1950s. Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.
A model is a representation of something beyond itself in the sense of being used as a representative of that something, and in prompting questions of resemblance between the model and that something. Models are substitute systems that are directly examined in order to indirectly acquire information about their target systems. An experiment is an arrangement seeking to isolate a fragment of the world by controlling for causally relevant things outside that fragment. It is suggested that many theoretical models are 'thought' experiments, and that many ordinary experiments are 'material' models. The major difference between the two is that the controls effecting the required isolation are based on material manipulations in one case, and on assumptions in the other.
The papers collected in this volume were written over a period of some eight or nine years, with some still earlier material incorporated in one of them. Publishing them under the same cover does not make a continuous book of them. The papers are thematically connected with each other, however, in a way which has led me to think that they can naturally be grouped together. In any list of philosophically important concepts, those falling within the range of application of modal logic will rank high in interest. They include necessity, possibility, obligation, permission, knowledge, belief, perception, memory, hoping, and striving, to mention just a few of the more obvious ones. When a satisfactory semantics (in the sense of Tarski and Carnap) was first developed for modal logic, a fascinating new set of methods and ideas was thus made available for philosophical studies. The pioneers of this model theory of modality include prominently Stig Kanger and Saul Kripke. Several others were working in the same area independently and more or less concurrently. Some of the older papers in this collection, especially 'Quantification and Modality' and 'Modes of Modality', serve to clarify some of the main possibilities in the semantics of modal logics in general.
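The Kanger-Kripke model theory mentioned here is easy to render computationally: a model supplies a set of worlds, an accessibility relation, and a valuation, and 'necessarily p' holds at a world just in case p holds at every world accessible from it. A minimal sketch, with illustrative world names and valuation:

```python
def box(access, valuation, p, w):
    """Necessarily-p at w: p is true at every world accessible from w."""
    return all(valuation[v][p] for v in access.get(w, ()))

def diamond(access, valuation, p, w):
    """Possibly-p at w: p is true at some world accessible from w."""
    return any(valuation[v][p] for v in access.get(w, ()))

access = {"w1": {"w2", "w3"}, "w2": {"w2"}}          # accessibility relation
valuation = {"w1": {"p": False},                      # atomic truth at each world
             "w2": {"p": True},
             "w3": {"p": True}}
```

Note that at a dead-end world like "w3", with no accessible worlds, `box` is vacuously true and `diamond` false; different constraints on the accessibility relation (reflexivity, transitivity, and so on) yield the different modal systems the semantics was designed to compare.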
Kaplan and Craver claim that all explanations in neuroscience appeal to mechanisms. They extend this view to the use of mathematical models in neuroscience and propose a constraint such models must meet in order to be explanatory. I analyze a mathematical model used to provide explanations in dynamical systems neuroscience and indicate how this explanation cannot be accommodated by the mechanist framework. I argue that this explanation is well characterized by Batterman’s account of minimal model explanations and that it demonstrates how the relationships between explanatory models in neuroscience and the systems they represent are more complex than has been appreciated.
In this paper, I first argue against various attempts to justify the idealizations in explanatory scientific models by showing that they are harmless and isolable distortions of irrelevant features. In response, I propose a view in which idealized models are characterized as providing holistically distorted representations of their target system. I then suggest an alternative way that idealized modeling can be justified by appealing to universality.
In this paper, I distinguish three kinds of scientific models on the basis of their ontological status—material models, mathematical models, and fictional models—and develop and defend an account of fictional models as fictional objects, i.e. abstract objects that stand for possible concrete objects.
We critically engage two traditional views of scientific data and outline a novel philosophical view that we call the pragmatic-representational (PR) view of data. On the PR view, data are representations that are the product of a process of inquiry, and they should be evaluated in terms of their adequacy or fitness for particular purposes. Some important implications of the PR view for data assessment, related to misrepresentation, context-sensitivity, and complementary use, are highlighted. The PR view provides insight into the common but little-discussed practices of iteratively reusing and repurposing data, which result in many datasets’ having a phylogeny—an origin and complex evolutionary history—that is relevant to their evaluation and future use. We relate these insights to the open-data and data-rescue movements, and highlight several future avenues of research that build on the PR view of data.