This paper constructs a model of metaphysical indeterminacy that can accommodate a kind of ‘deep’ worldly indeterminacy that arguably arises in quantum mechanics via the Kochen-Specker theorem, and that is incompatible with prominent theories of metaphysical indeterminacy such as that in Barnes and Williams (2011). We construct a variant of Barnes and Williams's theory that avoids this problem. Our version builds on situation semantics and uses incomplete, local situations rather than possible worlds to build a model. We evaluate the resulting theory and contrast it with similar alternatives, concluding that our model successfully captures deep indeterminacy.
Models as Make-Believe offers a new approach to scientific modelling by looking to an unlikely source of inspiration: the dolls and toy trucks of children's games of make-believe.
A general account of modeling in physics is proposed. Modeling is shown to involve three components: denotation, demonstration, and interpretation. Elements of the physical world are denoted by elements of the model; the model possesses an internal dynamic that allows us to demonstrate theoretical conclusions; these in turn need to be interpreted if we are to make predictions. The resulting DDI (denotation, demonstration, interpretation) account can be readily extended in ways that correspond to different aspects of scientific practice.
Models as Mediators discusses the ways in which models function in modern science, particularly in the fields of physics and economics. Models play a variety of roles in the sciences: they are used in the development, exploration and application of theories and in measurement methods. They also provide instruments for using scientific concepts and principles to intervene in the world. The editors provide a framework which covers the construction and function of scientific models, and explore the ways in which they enable us to learn about both theories and the world. The contributors to the volume offer their own individual theoretical perspectives to cover a wide range of examples of modelling, from physics, economics and chemistry. These papers provide ideal case-study material for understanding both the concepts and typical elements of modelling, using analytical approaches from the philosophy and history of science.
Non-actual model systems discussed in scientific theories are compared to fictions in literature. This comparison may help with the understanding of similarity relations between models and real-world target systems. The ontological problems surrounding fictions in science may be particularly difficult, however. A comparison is also made to ontological problems that arise in the philosophy of mathematics.
Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: first, the ‘purity’ of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved ‘vicariously’, such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate for particular purposes.
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit. Outline: 1 Introduction; 2 Remarks about Models and Adequacy-for-Purpose; 3 Evidence for Calibration Can Also Yield Comparative Confirmation; 3.1 Double-counting I; 3.2 Double-counting II; 4 Climate Science Examples: Comparative Confirmation in Practice; 4.1 Confirmation due to better and worse best fits; 4.2 Confirmation due to more and less plausible forcings values; 5 Old Evidence; 6 Doubts about the Relevance of Past Data; 7 Non-comparative Confirmation and Catch-Alls; 8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice; 9 Concluding Remarks.
In this paper I propose an account of representation for scientific models based on Kendall Walton’s ‘make-believe’ theory of representation in art. I first set out the problem of scientific representation and respond to a recent argument due to Craig Callender and Jonathan Cohen, which aims to show that the problem may be easily dismissed. I then introduce my account of models as props in games of make-believe and show how it offers a solution to the problem. Finally, I demonstrate an important advantage my account has over other theories of scientific representation. All existing theories analyse scientific representation in terms of relations, such as similarity or denotation. By contrast, my account does not take representation in modelling to be essentially relational. For this reason, it can accommodate a group of models often ignored in discussions of scientific representation, namely models which are representational but which represent no actual object.
Mechanistic explanation has an impressive track record of advancing our understanding of complex, hierarchically organized physical systems, particularly biological and neural systems. But not every complex system can be understood mechanistically. Psychological capacities are often understood by providing cognitive models of the systems that underlie them. I argue that these models, while superficially similar to mechanistic models, in fact have a substantially more complex relation to the real underlying system. They are typically constructed using a range of techniques for abstracting the functional properties of the system, which may not coincide with its mechanistic organization. I describe these techniques and show that despite being non-mechanistic, these cognitive models can satisfy the normative constraints on good explanations.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as “measuring instruments” and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation, and experiment, one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies and highlighting the ways in which those strategies are essential features of experimentation allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
Although some previous studies have investigated the relationship between moral foundations and moral judgment development, the methods used have not been able to fully explore the relationship. In the present study, we used Bayesian Model Averaging (BMA) in order to address the limitations in traditional regression methods that have been used previously. Results showed consistency with previous findings that binding foundations are negatively correlated with post-conventional moral reasoning and positively correlated with maintaining norms and personal interest schemas. Going beyond previous studies, our results showed a positive correlation between individualizing foundations and post-conventional moral reasoning. Implications are discussed, along with a detailed explanation of the novel BMA method, in order to allow others in the field of moral education to use it in their own studies.
A classification of models of reduction into three categories — theory reductionism, explanatory reductionism, and constitutive reductionism — is presented. It is shown that this classification helps clarify the relations between various explications of reduction that have been offered in the past, especially if a distinction is maintained between the various epistemological and ontological issues that arise. A relatively new model of explanatory reduction, one that emphasizes that reduction is the explanation of a whole in terms of its parts, is also presented in detail. Finally, the classification is used to clarify the debate over reductionism in molecular biology. It is argued there that while no model from the category of theory reduction might be applicable in that case, models of explanatory reduction might yet capture the structure of the relevant explanations.
I argue that the contrast between models and theories is important for public policy issues. I focus especially on the way a mathematical model explains just one aspect of the data.
In this article, I explore the compatibility of inference to the best explanation (IBE) with several influential models and accounts of scientific explanation. First, I explore the different conceptions of IBE and limit my discussion to two: the heuristic conception and the objective Bayesian conception. Next, I discuss five models of scientific explanation with regard to each model’s compatibility with IBE. I argue that Philip Kitcher’s unificationist account supports IBE; Peter Railton’s deductive-nomological-probabilistic model, Wesley Salmon’s statistical-relevance model, and Bas van Fraassen’s erotetic account are incompatible with IBE; and Wesley Salmon’s causal-mechanical model is merely consistent with IBE. In short, many influential models of scientific explanation do not support IBE. I end by outlining three possible conclusions to draw: (1) either philosophers of science or defenders of IBE have seriously misconstrued the concept of explanation, (2) philosophers of science and defenders of IBE do not use the term ‘explanation’ univocally, and (3) the ampliative conception of IBE, which is compatible with any model of scientific explanation, deserves a closer look.
Detailed examinations of scientific practice have revealed that the use of idealized models in the sciences is pervasive. These models play a central role in not only the investigation and prediction of phenomena, but in their received scientific explanations as well. This has led philosophers of science to begin revising the traditional philosophical accounts of scientific explanation in order to make sense of this practice. These new model-based accounts of scientific explanation, however, raise a number of key questions: Can the fictions and falsehoods inherent in the modeling practice do real explanatory work? Do some highly abstract and mathematical models exhibit a noncausal form of scientific explanation? How can one distinguish an exploratory "how-possibly" model explanation from a genuine "how-actually" model explanation? Do modelers face tradeoffs such that a model that is optimized for yielding explanatory insight, for example, might fail to be the most predictively accurate, and vice versa? This chapter explores the various answers that have been given to these questions.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling given the focus on the relationship between a single model and its supposed target system, and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
What sort of claims do scientific models make and how do these claims then underwrite empirical successes such as explanations and reliable policy interventions? In this paper I propose answers to these questions for the class of models used throughout the social and biological sciences, namely idealized deductive ones with a causal interpretation. I argue that the two main existing accounts misrepresent how these models are actually used, and propose a new account.
Most of the economic models on basic income account just for pecuniary forms of work, i.e. “time spent making money”, in employment. This restriction is a drawback of these analyses and of the standard economic labor supply model itself. If one wants to understand the potential effects of basic income on individual and social welfare, one should not restrict observation to the pecuniary uses of time. The objective of this contribution is to rethink the meaning of work usually applied in economic models, based on contributions of other social scientists. This reassessment is undertaken through the development of a microeconomic model, which discusses the effects of basic income on time use and interprets work not just as a source of income, but also of non-pecuniary benefits. Further, we disentangle the usual work-leisure dichotomy into two further dichotomies.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling-practices, and model-based science. I argue that the various entities and practices called 'models' and 'modeling-practices' are too diverse, too context-sensitive, and serve too many scientific purposes and roles to allow for a general philosophical analysis. From this recognition an alternative view emerges that I shall dub model anarchism.
This paper introduces and defends an account of model-based science that I dub model pluralism. I argue that despite a growing awareness in the philosophy of science literature of the multiplicity, diversity, and richness of models and modeling practices, more radical conclusions follow from this recognition than have previously been inferred. Going against the tendency within the literature to generalize from single models, I explicate and defend the following two core theses: (1) any successful analysis of models must target sets of models, their multiplicity of functions within science, and their scientific context and history; and (2) for almost any aspect x of phenomenon y, scientists require multiple models to achieve scientific goal z.
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account. Outline: 1 Introduction; 2 Volterra and Theoretical Modelling; 3 Drosophila as a Model Organism; 4 Generalizing from Work on Model Organisms; 5 Phylogenetic Inference and Model Organisms; 6 Further Roles of Model Organisms; 6.1 Preparative experimentation; 6.2 Model organisms as paradigms; 6.3 Model organisms as theoretical models; 6.4 Inspiration for engineers; 6.5 Anchoring a research community; 7 Conclusion.
It appears that in the 30 years that business ethics has been a discipline in its own right, a model of business ethics has not been proffered. No one appears to have tried to explain the phenomenon known as ‘business ethics’ and the ways that we as a society interact with the concept; therefore, the authors have addressed this gap in the literature by proposing a model of business ethics that the authors hope will stimulate debate. The business ethics model consists of three principal components (i.e. expectations, perceptions and evaluations) that are interconnected by five sub-components (i.e. society expects; organizational values, norms and beliefs; outcomes; society evaluates; and reconnection). The introduced model makes a contribution to the creation of a conceptual framework for business ethics. A few tentative conclusions may be drawn from the introduced model of business ethics. The model aspires to be highly dynamic. The ultimate outcome is dependent upon the evolution of time and contexts. It is also dependent upon and provides reference to the behaviours and perceptions of people. The model proposes business ethics to be a continuous and iterative process. There is no actual end of the process, but a constant reconnection to the initiation of successive process iterations of the business ethics model. The principal components and sub-components of the model construct the dynamics of this continuous process. They provide guidance on what and how to explore in our common efforts to understand the phenomenon known as business ethics. The model provides opportunities for further research in the field of business ethics.
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists’ discussions of robust models and causal aspects. The account is intended as applicable to a broad array of sciences that use complex modeling techniques.
Causal models show promise as a foundation for the semantics of counterfactual sentences. However, current approaches face limitations compared to the alternative similarity theory: they only apply to a limited subset of counterfactuals and the connection to counterfactual logic is not straightforward. This paper addresses these difficulties using exogenous interventions, where causal interventions change the values of exogenous variables rather than structural equations. This model accommodates judgments about backtracking counterfactuals, extends to logically complex counterfactuals, and validates familiar principles of counterfactual logic. This combines the interventionist intuitions of the causal approach with the logical advantages of the similarity approach.
Most scientific models are not physical objects, and this raises important questions. What sort of entity are models, what is truth in a model, and how do we learn about models? In this paper I argue that models share important aspects with literary fiction, and that therefore theories of fiction can be brought to bear on these questions. In particular, I argue that the pretence theory as developed by Walton has the resources to answer these questions. I introduce this account, outline the answers that it offers, and develop a general picture of scientific modelling based on it.
Models are of central importance in many scientific contexts. The centrality of models such as the billiard ball model of a gas, the Bohr model of the atom, the MIT bag model of the nucleon, the Gaussian-chain model of a polymer, the Lorenz model of the atmosphere, the Lotka-Volterra model of predator-prey interaction, the double helix model of DNA, agent-based and evolutionary models in the social sciences, or general equilibrium models of markets in their respective domains is a case in point. Scientists spend a great deal of time building, testing, comparing and revising models, and much journal space is dedicated to introducing, applying and interpreting these valuable tools. In short, models are one of the principal instruments of modern science.
This article discusses minimal model explanations, which we argue are distinct from the various causal, mechanical, difference-making, and similar strategies prominent in the philosophical literature. We contend that what accounts for the explanatory power of these models is not that they have certain features in common with real systems. Rather, the models are explanatory because of a story about why a class of systems will all display the same large-scale behavior: the details that distinguish them are irrelevant. This story explains patterns across extremely diverse systems and shows how minimal models can be used to understand real systems.
While agreeing that dynamical models play a major role in cognitive science, we reject Stepp, Chemero, and Turvey's contention that they constitute an alternative to mechanistic explanations. We review several problems dynamical models face as putative explanations when they are not grounded in mechanisms. Further, we argue that the opposition of dynamical models and mechanisms is a false one and that those dynamical models that characterize the operations of mechanisms overcome these problems. By briefly considering examples involving the generation of action potentials and circadian rhythms, we show how decomposing a mechanism and modeling its dynamics are complementary endeavors.
We develop an account of laboratory models, which have been central to the group selection controversy. We compare arguments for group selection in nature with Darwin's arguments for natural selection to argue that laboratory models provide important grounds for causal claims about selection. Biologists get information about causes and cause-effect relationships in the laboratory because of the special role their own causal agency plays there. They can also get information about patterns of effects and antecedent conditions in nature. But to argue that some cause is actually responsible in nature, they require an inference from knowledge of causes in the laboratory context and of effects in the natural context. This process, cause detection, forms the core of an analogical argument for group selection. We discuss the differing roles of mathematical and laboratory models in constructing selective explanations at the group level and apply our discussion to the units of selection controversy to distinguish between the related problems of cause determination and evaluation of evidence. Because laboratory models are at the intersection of the two problems, their study is crucial for framing a coherent theory of explanation for evolutionary biology.
I analyse the three most interesting and extensive approaches to theoretical models: the classical ones proposed by Peter Achinstein and Michael Redhead, and the relatively rarely analysed approach of Ryszard Wójcicki, belonging to a later phase of his research, in which he gave up applying the conceptual apparatus of logical semantics. I take into consideration the approaches to theoretical models in which they are qualified as models representing reality. That is why I omit Max Black’s and Mary Hesse’s concepts of such models, as those two concepts belong to the analogue model group if we consider the main function of the model of a given class as its classification criterion. My main focus is on theoretical models with representative functions and, in a broader context, on the question of representation.
Batterman and Rice ([2014]) argue that minimal models possess explanatory power that cannot be captured by what they call ‘common features’ approaches to explanation. Minimal models are explanatory, according to Batterman and Rice, not in virtue of accurately representing relevant features, but in virtue of answering three questions that provide a ‘story about why large classes of features are irrelevant to the explanandum phenomenon’ ([2014], p. 356). In this article, I argue, first, that a method (the renormalization group) they propose to answer the three questions cannot answer them, at least not by itself. Second, I argue that answers to the three questions are unnecessary to account for the explanatoriness of their minimal models. Finally, I argue that a common features account, what I call the ‘generalized ontic conception of explanation’, can capture the explanatoriness of minimal models.
Many models in economics are very unrealistic. At the same time, economists put a lot of effort into making their models more realistic. I argue that in many cases, including the Modigliani-Miller irrelevance theorem investigated in this paper, the purpose of this process of concretization is explanatory. When evaluated in combination with its assumptions, a highly unrealistic model may well be true. The purpose of relaxing an unrealistic assumption, then, need not be to move from a false model to a true one. Instead, it may be providing an explanation of some phenomenon by invoking the factor that figures in the assumption. This idea is developed in terms of the contrastive account of explanation. It is argued that economists use highly unrealistic assumptions to determine a contrast that is worth explaining. The process of concretization also motivates new explanatory questions. A high degree of explanatory power, then, may well be due to a high number of unrealistic assumptions. Thus, highly unrealistic models can be powerful explanatory engines.
In this paper I discuss the relationship between models, theories, and laws in the practice of experimental scale modeling. The methodology of experimental scale modeling, also known as physical similarity, differs markedly from that of other kinds of models in ways that are important to issues in philosophy of science. Scale models are not discussed in much depth in mainstream philosophy of science. In this paper, I examine how scale models are used in making inferences. The main question I address is: how are fundamental laws involved in the construction of, and inferences drawn from, experimental scale models? We shall see that there is a refreshing alternative to the mainstream view that models can serve only as intermediaries between theory and experiment. Using the methodology of scale models, one can use observations on one piece of the world to make inferences about another piece of the world, without involving an intermediate abstract model about which one reasons. The philosophical significance of that point is that the method of physical similarity, which provides the basis for inferences based upon scale models, is a qualitatively different way in which fundamental laws can be used in analogical reasoning that is truly informative. Finally, as this method provides a formal basis for case-based reasoning, it may be helpful in formalizing methods used in some of the so-called “special sciences”.
One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
The geosciences include a wide spectrum of disciplines ranging from paleontology to climate science, and involve studies of a vast range of spatial and temporal scales, from the deep-time history of microbial life to the future of a system no less immense and complex than the entire Earth. Modeling is thus a central and indispensable tool across the geosciences. Here, we review both the history and current state of model-based inquiry in the geosciences. Research in these fields makes use of a wide variety of models, such as conceptual, physical, and numerical models, and more specifically cellular automata, artificial neural networks, agent-based models, coupled models, and hierarchical models. We note the increasing demands to incorporate biological and social systems into geoscience modeling, challenging the traditional boundaries of these fields. Understanding and articulating the many different sources of scientific uncertainty – and finding tools and methods to address them – has been at the forefront of most research in geoscience modeling. We discuss not only structural model uncertainties, parameter uncertainties, and solution uncertainties, but also the diverse sources of uncertainty arising from the complex nature of geoscience systems themselves. Without an examination of the geosciences, our philosophies of science and our understanding of the nature of model-based science are incomplete.
The program of research now known as the heuristics and biases approach began with a study of the statistical intuitions of experts, who were found to be excessively confident in the replicability of results from small samples. The persistence of such systematic errors in the intuitions of experts implied that their intuitive judgments may be governed by fundamentally different processes than the slower, more deliberate computations they had been trained to execute. The ancient idea that cognitive processes can be partitioned into two main families--traditionally called intuition and reason--is now widely embraced under the general label of dual-process theories. Dual-process models come in many flavors, but all distinguish cognitive operations that are quick and associative from others that are slow and governed by rules. To represent intuitive and deliberate reasoning, we borrow the terms "system 1" and "system 2" from Stanovich and West. In the following section, we present an attribute-substitution model of heuristic judgment, which assumes that difficult questions are often answered by substituting an answer to an easier one. Following sections introduce a research design for studying attribute substitution, as well as discuss the controversy over the representativeness heuristic in the context of a dual-system view that we endorse. The final section situates representativeness within a broad family of prototype heuristics, in which properties of a prototypical exemplar dominate global judgments concerning an entire set.
The paper studies the topography of the model landscape of the physics in the Higgs sector, both within the Standard Model of Elementary Particle Physics and beyond, in the months before the discovery of an SM Higgs boson. At first glance, this landscape appears fragmented into a large number of different models and research communities. But it also clusters around certain guiding ideas, among them supersymmetry or dynamical symmetry breaking, in which representative and narrative features of the models are combined. These models do not stand for themselves, waiting to be experimentally confirmed and elevated to the status of theory. Rather, quite in the sense advocated by Morgan and Morrison, they enjoy a far-reaching autonomy. Typically, models in the Higgs sector entertain three types of mediating relationships. First, they mediate between the SM and the data in those instances where the SM contains some uncertainty in the values of its basic parameters. Second, they mediate between BSM physics and the data by instantiating the core ideas behind these often speculative generalizations of the SM as stories—in Hartmann’s sense—that motivate or justify the respective model. Third, the fact that Higgs models within BSM physics reproduce the SM predictions in the low-energy limit functions as a consistency constraint that does not involve any additional autonomy. Due to the second type of mediating relationship, the representative features of BSM Higgs models are complex.
One striking feature of the contemporary modelling practice is its interdisciplinary nature. The same equation forms, and mathematical and computational methods, are used across different disciplines, as well as within the same discipline. Are there, then, differences between intra- and interdisciplinary transfer, and can the comparison between the two provide more insight on the challenges of interdisciplinary theoretical work? We will study the development and various uses of the Ising model within physics, contrasting them to its applications to socio-economic systems. While the renormalization group methods justify the transfer of the Ising model within physics – by ascribing the systems modelled to the same universality class – its application to socio-economic phenomena has no such theoretical grounding. As a result, the insights gained by modelling socio-economic phenomena by the Ising model may remain limited.
I provide a theory of causation within the causal modeling framework. In contrast to most of its predecessors, this theory is model-invariant in the following sense: if the theory says that C caused (didn't cause) E in a causal model, M, then it will continue to say that C caused (didn't cause) E once we've removed an inessential variable from M. I suggest that, if this theory is true, then we should understand a cause as something which transmits deviant or non-inertial behavior to its effect.
In this topical section, we highlight the next step of research on modeling, aiming to contribute to the emerging literature that radically refrains from approaching modeling as a scientific endeavor. Modeling surpasses “doing science” because it is frequently incorporated into decision-making processes in politics and management, i.e., areas which are not solely epistemically oriented. We do not refer to the production of models in academia for abstract or imaginary applications in practical fields, but instead highlight the real entwinement of science and policy and the real erosion of their boundaries. Models in decision making – due to their strong entwinement with policy and management – are utilized differently than models in science; they are employed for different purposes and with different constraints. We claim that “being a part of decision-making” implies that models are elements of a very particular situation, in which knowledge about the present and the future is limited but the dependence of decisions on the future is distinct. Emphasis on the future indicates that decisions are made about actions that have severe and lasting consequences. In these specific situations, models not only enable the acquisition of knowledge (the primary goal of science) but also enable deciding upon actions that change the course of events. As a result, there are specific ways to construct effective models and justify their results. Although some studies have explored this topic, our understanding of how models contribute to decision making outside of science remains fragmentary. This topical section aims to fill this gap in research and formulate an agenda for additional and more systematic investigations in the field.
Kripke models, interpreted realistically, have difficulty making sense of the thesis that there might have existed things that do not in fact exist, since a Kripke model in which this thesis is true requires a model structure in which there are possible worlds with domains that contain things that do not exist. This paper argues that we can use Kripke models as representational devices that allow us to give a realistic interpretation of a modal language. The method of doing this is sketched, with the help of an analogy with a Galilean relativist theory of spatial properties and relations.
If models can be true, where is their truth located? Giere (Explaining Science, University of Chicago Press, Chicago, 1988) has suggested an account of theoretical models on which models themselves are not truth-valued. The paper suggests modifying Giere’s account without going all the way to purely pragmatic conceptions of truth—while giving pragmatics a prominent role in modeling and truth-acquisition. The strategy of the paper is to ask: if I want to relocate truth inside models, how do I get it, and what else do I need to accept and reject? In particular, what ideas about model and truth do I need? The case used as an illustration is the world’s first economic model, that of von Thünen (1826/1842) on agricultural land use in the highly idealized Isolated State.
Recent accounts of scientific method suggest that a model, or analogy, for an axiomatized theory is another theory, or postulate set, with an identical calculus. The present paper examines five central theses underlying this position. In the light of examples from physical science it seems necessary to distinguish between models and analogies and to recognize the need for important revisions in the position under study, especially in claims involving an emphasis on logical structure and similarity in form between theory and analogy. While formal considerations are often relevant in the employment of an analogy they are neither as extensive as proponents of this viewpoint suggest, nor are they in most cases sufficient for allowing analogies to fulfill the roles imputed to them. Of major importance, and what these authors generally fail to consider, are physical similarities between analogue and theoretical object. Such similarities, which are characteristic in varying degrees of most analogies actually employed, play an important role in affording a better understanding of concepts in the theory and also in the development of the theoretical assumptions.
Continuous Model Theory. Chapter I: Topological Preliminaries. Notation: Throughout the monograph our mathematical notation does not differ drastically from ...
Scientists have used models for hundreds of years as a means of describing phenomena and as a basis for further analogy. In Scientific Models in Philosophy of Science, Daniela Bailer-Jones assembles an original and comprehensive philosophical analysis of how models have been used and interpreted in both historical and contemporary contexts. Bailer-Jones delineates the many forms models can take (ranging from equations to animals; from physical objects to theoretical constructs), and how they are put to use. She examines early mechanical models employed by nineteenth-century physicists such as Kelvin and Maxwell, describes their roots in the mathematical principles of Newton and others, and compares them to contemporary mechanistic approaches. Bailer-Jones then views the use of analogy in the late nineteenth century as a means of understanding models and of linking different branches of science. She reveals how analogies can also be models themselves, or can help to create them. The first half of the twentieth century saw little mention of models in the literature of logical empiricism. Focusing primarily on theory, logical empiricists believed that models were of temporary importance, flawed, and awaiting correction. The later contesting of logical empiricism, particularly the hypothetico-deductive account of theories, by philosophers such as Mary Hesse, sparked a renewed interest in the importance of models during the 1950s that continues to this day. Bailer-Jones analyzes subsequent propositions of: models as metaphors; Kuhn's concept of a paradigm; the Semantic View of theories; and the case study approaches of Cartwright and Morrison, among others. She then engages current debates on topics such as phenomena versus data, the distinctions between models and theories, the concepts of representation and realism, and the discerning of falsities in models.