The focus in the literature on scientific explanation has shifted in recent years towards model-based approaches. The idea that there are simple and true laws of nature has met with objections from philosophers such as Nancy Cartwright (1983) and Paul Teller (2001), and this has made a strictly Hempelian D-N style explanation largely irrelevant to the explanatory practices of science (Hempel & Oppenheim, 1948). Much of science does not involve subsuming particular events under laws of nature. It is increasingly recognized that science across the disciplines is to some degree a patchwork of scientific models, with different methods and strategies, and with varying degrees of successful prediction and explanation. Accounts of scientific explanation have reflected this change of perspective, and model-based approaches have flourished in the explanation literature (Batterman, 2002b; Bokulich, 2008; Craver, 2006; Woodward, 2003).
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy committed by those defending instrumentalism about the FEP. We call it the literalist fallacy: the fallacy of inferring the truth of instrumentalism from the claim that the properties of FEP models do not literally map onto real-world target systems. We conclude that scientific realism about the FEP is a live and tenable option.
It is sometimes said that simulation can serve as an epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim that, in experimentation, no less than in simulation, it is not the system under study that is manipulated but a system that ‘stands in’ for it. The other highlights the pervasive use of models in experimentation. It will be argued that these arguments, as compelling as they might seem, are each based on a mistaken interpretation of experimentation and that, far from simulation and experimentation being epistemically on a par, they do not have the same epistemic function and do not produce the same kind of epistemic results.
First, I propose a new argument in favor of the Dappled World perspective introduced by Nancy Cartwright. There are systems for which detailed models cannot exist in the natural world, and this has nothing to do with the limitations of human minds or technical resources. The limitation is built into the very principle of modeling: we are trying to replace some system by another one, and in full detail this may be impossible. Secondly, I try to refine the Dappled World perspective by applying the correct distinction between models and theories. At the level of models, because of the above-mentioned limitations, we will always have only a patchwork of models, each very restricted in its scope of application. And at the level of theories, we will never have a single complete Theory of Everything that would allow us, without additional postulates, to generate all the models we may need for surviving in this world.
What is a model? Surprisingly, in philosophical texts, this question is sometimes asked but almost never answered. Instead of a general answer, usually some classification of models is considered. The broadest possible definition of modeling could sound as follows: a model is anything that is (or could be) used, for some purpose, in place of something else. If the purpose is “answering questions”, then one has a cognitive model. Could such a broad definition be useful? Isn't it empty? Can one derive useful consequences from it? I try to show that there are many of them.
This paper represents a philosophical experiment inspired by the formalist philosophy of mathematics. In the formalist picture of cognition, the principal act of knowledge generation is represented as tentative postulation – as the introduction of a new knowledge construct followed by exploration of the consequences that can be derived from it. Depending on the result, the new construct may be accepted as normative, rejected, modified, etc. Languages and means of reasoning are generated and selected in a similar process. In the formalist picture, all kinds of “truth” are detected intra-theoretically. A knowledge construct may be considered “true” if it is accepted in a particular normative knowledge system. It may be considered persistently true if it remains invariant during the evolution of some knowledge system for a sufficiently long time. And, if you wish, you may consider some knowledge construct absolutely true if you do not intend to abandon it in your knowledge system. Finally, in the formalist picture, all kinds of ontologies generated by humans can be demystified by reconstructing them within the basic solipsist ontology simply as hypothetical branches of it.
This paper constitutes a radical departure from the existing philosophical literature on models, modeling-practices, and model-based science. I argue that the various entities and practices called 'models' and 'modeling-practices' are too diverse and too context-sensitive, and serve too many scientific purposes and roles, to allow for a general philosophical analysis. From this recognition an alternative view emerges, which I shall dub model anarchism.
In this paper I argue against the deflationist view that, as representational vehicles, symbols and models do their jobs in essentially the same way. I argue that symbols are conventional vehicles whose chief function is denotation, while models are epistemic vehicles whose chief function is showing what their targets are like in the relevant aspects. It is further pointed out that models usually do not rely on similarity or some such relation to relate to their targets. For that referential relation they rely instead on symbols (names and labels) given to them and their parts. A Goodmanian view on pictures of fictional characters reveals the distinction between symbolic and model representations.
Contemporary literature in philosophy of science has begun to emphasize the practice of modeling, which differs in important respects from other forms of representation and analysis central to standard philosophical accounts. This literature has stressed the constructed nature of models, their autonomy, and the utility of their high degrees of idealization. What this new literature about modeling lacks, however, is a comprehensive account of the models that figure into the practice of modeling. This paper offers a new account of both concrete and mathematical models, with special emphasis on the intentions of theorists, which are necessary for evaluating the model-world relationship during the practice of modeling. Although mathematical models form the basis of most contemporary modeling, my discussion begins with more traditional, concrete models such as the San Francisco Bay model.
The philosophical literatures on models and thought experiments have been developing exponentially, and independently, for decades. This independence is surprising, given how similar models and thought experiments are. They each have “lives of their own,” they sit between theory and experience, they are important for both pedagogy and cutting-edge science, they galvanize conceptual changes and paradigm shifts, and they involve entertaining imaginary scenarios and working out what happens. Recently, philosophers have begun to highlight these similarities. This entry aims to take the idea further by systematically identifying places where insights from one literature can be taken up in the other. Along the way, important differences will also be highlighted.
Contemporary action theory is generally concerned with giving theories of action ontology. In this paper, we make the novel proposal that the standard view in action theory—the Causal Theory of Action—should be recast as a “model”, akin to the models constructed and investigated by scientists. Such models often consist in fictional, hypothetical, or idealized structures, which are used to represent a target system indirectly via some resemblance relation. We argue that recasting the Causal Theory as a model can not only accomplish the goals of causal theorists, but also give the theory greater flexibility in responding to common objections.
In this article we argue that idealizations and limiting cases in models play an exploratory role in science. Four senses of exploration are presented: exploration of the structure and representational capacities of theory; proof-of-principle demonstrations; potential explanations; and exploring the suitability of target systems. We illustrate our claims through three case studies: the Aharonov-Bohm effect, the emergence of anyons and fractional quantum statistics, and the Hubbard model of the Mott phase transitions. We end by reflecting on how our case studies and claims compare to accounts of idealization in the philosophy of science literature, such as Michael Weisberg’s three-fold taxonomy.
Maps and mapping raise questions about models and modeling in science. This chapter archives map discourse in the founding generation of philosophers of science (e.g., Rudolf Carnap, Nelson Goodman, Thomas Kuhn, and Stephen Toulmin) and in the subsequent generation (e.g., Philip Kitcher, Helen Longino, and Bas van Fraassen). In focusing on these two original framing generations of philosophy of science, I intend to remove us from the heat of contemporary discussions of abstraction, representation, and the practice of science, and thereby see in a more distant and neutral light the many productive ways in which maps can stand in analytically for scientific theories and models. The chapter concludes by complementing the map analogy – i.e., a scientific theory is a map of the world – with a model analogy, viz., a scientific model is a vehicle for understanding.
According to a widely held view, scientific modelling consists in entertaining a set of model descriptions that specify a model. Rather than studying the phenomenon of interest directly, scientists investigate the phenomenon indirectly via a model in the hope of learning about some of the phenomenon’s features. I call this view the description-driven modelling (DDM) account. I argue that although the DDM account accurately describes much of scientific research, it is found wanting as regards the mechanistic modelling found in many branches of biology. By analysing research practices in cancer immunology concerning the development of mechanistic models of the process of cancer metastasis, this paper presents and argues for a complementary account of scientific modelling, herein called the experimentation-driven modelling (EDM) account. In EDM, scientists investigate a set of experimental systems and then integrate the results obtained from experiments into a mechanistic model. While EDM shares some key features with DDM, the two are epistemically very different approaches to research.
The practice of scientific modelling often resorts to hypothetical, false, idealised, targetless, partial, generalised, and other types of modelling that appear to have at least partially non-actual targets. In this paper, I will argue that we can avoid a commitment to non-actual targets by sketching a framework where models are understood as having networks of possibilities as their targets. This raises a further question: what are the truthmakers for the modal claims that we can derive from models? I propose that we can find truthmakers for the modal claims derived from models in actuality, even in the case of supposedly non-actual targets. I then put this framework to use by examining a case study concerning the modelling of superheavy elements.
Previously, I (Boesch 2017) described a notion called “representational licensing”—the set of activities of scientific practice by which scientists establish the intended representational use of a vehicle. In this essay, I expand and develop this concept of representational licensing. I begin by showing how the concept is of value for both pragmatic and substantive approaches to scientific representation. Then, through the examination of a case study of the Mississippi River Basin Model, I point out and explain some of the activities of representational licensing that help to establish the representational nature of this model. Throughout the exploration of the case study, I pause to identify some important lessons that apply more generally to the nature of representational licensing in science.
A well-known conception of axiomatization has it that an axiomatized theory must be interpreted, or otherwise coordinated with reality, in order to acquire empirical content. An early version of this account is often ascribed to key figures in the logical empiricist movement, and to central figures in the early “formalist” tradition in mathematics as well. In this context, Reichenbach’s “coordinative definitions” are regarded as investing abstract propositions with empirical significance. We argue that over-emphasis on the abstract elements of this approach fails to appreciate a rich tradition of empirical axiomatization in the late nineteenth and early twentieth centuries, evident in particular in the work of Moritz Pasch, Heinrich Hertz, David Hilbert, and Reichenbach himself. We claim that such over-emphasis leads to a misunderstanding of the role of empirical facts in Reichenbach’s approach to the axiomatization of a physical theory, and of the role of Reichenbach’s coordinative definitions in particular.
Sallie McFague’s theological models construct a tensive relationship between conceptual structures and symbolic, metaphorical language to interpret the defining and elusive aspects of theological phenomena and loci. Computational models of language can extend and formalize the conceptual structures of theological models to develop computer-augmented interpretations of theological texts. Previously unclear is whether computational models can retain the tensive symbolism essential for theological investigation. I demonstrate affirmatively by constructing a computational topic model of the moral theology of Thomas Aquinas from the Summa Theologica (Second Part, in English translation), useful for interpreting not only the Thomistic text but also recent papal encyclicals.
While discussions of the imagination have been limited in philosophy of science, this is beginning to change. In recent years, a vast literature on imagination in science has emerged. This paper surveys the current field, including the changing attitudes towards the scientific imagination, the fiction view of models, how the imagination can lead to knowledge and understanding, and the value of different types of imagination. It ends with a discussion of the gaps in the current literature, indicating avenues for future research.
This paper defends three claims about concrete or physical models: these models remain important in science and engineering; they are often essentially idealized, in a sense to be made precise; and despite these essential idealizations, some of these models may be reliably used for the purpose of causal explanation. This discussion of concrete models is pursued using a detailed case study of some recent models of landslide-generated impulse waves. Practitioners show a clear awareness of the idealized character of these models, and yet address these concerns through a number of methods. This paper focuses on experimental arguments that show how certain failures to accurately represent feature X are consistent with accurately representing some causes of feature Y, even when X is causally relevant to Y. To analyse these arguments, the claims generated by a model must be carefully examined and grouped into types. Only some of these types can be endorsed by practitioners, but I argue that these endorsed claims are sufficient for limited forms of causal explanation.
Abstraction and idealization are the two notions most often discussed in the context of assumptions employed in the process of model building. These notions are also routinely used in philosophical debates, such as that on the mechanistic account of explanation. Indeed, an objection to the mechanistic account has recently been formulated precisely on these grounds: mechanists cannot account for the common practice of idealizing difference-making factors in models in molecular biology. In this paper I revisit the debate and argue that the objection does not stand up to scrutiny, because it is riddled with a number of conceptual inconsistencies. By attempting to resolve the tensions, I also draw several general lessons regarding the difficulties of applying abstraction and idealization in scientific practice. Finally, I argue that more care is needed only when speaking of abstraction and idealization in a context in which these concepts play an important role in an argument, such as that on mechanistic explanation.
Models are of central importance in many scientific contexts. The centrality of models such as inflationary models in cosmology, general-circulation models of the global climate, the double-helix model of DNA, evolutionary models in biology, agent-based models in the social sciences, and general-equilibrium models of markets in their respective domains is a case in point (the Other Internet Resources section at the end of this entry contains links to online resources that discuss these models). Scientists spend significant amounts of time building, testing, comparing, and revising models, and much journal space is dedicated to interpreting and discussing the implications of models. In short, models are one of the principal instruments of modern science.
Manuel García Carpintero defends a form of antirealism for explicit talk and thought about both fictional entities and scientific models: a version of Stephen Yablo’s figuralist brand of fictionalism. He argues that, in contrast with pretense-theoretic fictionalist proposals, on his view utterances in those discourses are straightforward assertions with straightforward truth-conditions, involving a particular kind of metaphor or figurative language. But given that the relevant metaphors are all but “dead”, this might suggest that the view is after all realist, committed to referents of some sort for singular terms in the relevant discourses. He revisits these issues from the perspective of more recent work on them and applies his view to recent debates in semantics on the role and adequacy of supervaluationist models of indeterminacy.
The aim of this paper is to grasp the relevant distinctions between various ways in which models and simulations in Artificial Intelligence (AI) relate to cognitive phenomena. In order to get a systematic picture, a taxonomy is developed that is based on the coordinates of formal versus material analogies and theory-guided versus pre-theoretic models in science. These distinctions have parallels in the computational versus mimetic aspects and in analytic versus exploratory types of computer simulation. The proposed taxonomy cuts across the traditional dichotomies between symbolic and embodied AI, between general intelligence and cognitive simulation, and between human-like and non-human-like AI.

According to the taxonomy proposed here, one can distinguish between four distinct general approaches that figured prominently in early and classical AI, and that have partly developed into distinct research programs: first, phenomenal simulations (e.g., Turing’s “imitation game”); second, simulations that explore general-level formal isomorphisms in pursuit of a general theory of intelligence (e.g., logic-based AI); third, simulations as exploratory material models that serve to develop theoretical accounts of cognitive processes (e.g., Marr’s stages of visual processing and classical connectionism); and fourth, simulations as strictly formal models of a theory of computation that postulates cognitive processes to be isomorphic with computational processes (strong symbolic AI).

In continuation of pragmatic views of the modes of modeling and simulating world affairs, this taxonomy of approaches to modeling in AI helps to elucidate how available computational concepts and simulational resources contribute to the modes of representation and theory development in AI research – and what made that research program uniquely dependent on them.
This paper discusses modeling from the artifactual perspective. The artifactual approach conceives models as erotetic devices. They are purpose-built systems of dependencies that are constrained in view of answering a pending scientific question, motivated by theoretical or empirical considerations. In treating models as artifacts, the artifactual approach is able to address the various languages of the sciences that are overlooked by the traditional accounts that concentrate on the relationship of representation in an abstract and general manner. In contrast, the artifactual approach focuses on the epistemic affordances of different kinds of external representational and other tools employed in model construction. In doing so, the artifactual account gives a unified treatment of different model types, as it circumvents the tendency of the fictional and other representational approaches to separate model systems from their “model descriptions”.
The epistemic value of models has traditionally been approached from a representational perspective. This paper argues that the artifactual approach evades the problem of accounting for representation and better accommodates the modal dimension of modeling. From an artifactual perspective, models are viewed as erotetic vehicles constrained by their construction and the available representational tools. The modal dimension of modeling is approached through two case studies. The first portrays mathematical modeling in economics, while the other discusses the modeling practice of synthetic biology, which exploits and combines models in various modes and media. Neither model is intended to represent any actual target system. Rather, they are constructed to study possible mechanisms through the construction of a model system with built-in dependencies.
Fiora Salis compares the fictional and the artifactual views of models. She argues that both accounts contain several deep insights concerning the nature of scientific models, but that they also face some difficult challenges. She then puts forward an account of the ontology of models intended to incorporate the benefits of both views while avoiding their main difficulties. Her key idea is that models are human-made artifacts akin to literary works of fiction. In this view, models are complex objects constituted by a model description and the model content generated within a game of make-believe. As per the fiction view, model descriptions are construed as props in a game of make-believe, where props are concrete objects that prescribe certain imaginings. As per the artifactual view, model descriptions are construed as concrete representational tools that enable and constrain a scientist’s cognitive processes and provide intersubjective epistemic access to their imaginings.
How do models represent reality? There are two conditions that scientific models must satisfy to be representations of real systems: the aboutness condition and the epistemic condition. In this article, I critically assess the two main fictionalist theories of models as representations, the indirect fiction view and the direct fiction view, with respect to these conditions. And I develop a novel proposal, what I call ‘the new fiction view of models’. On this view, models are akin to fictional stories; they represent real-world phenomena if they stand in a denotation relation with reality; and they enable knowledge of reality via the generation of theoretical hypotheses, model–world comparisons, and direct attributions.
Several philosophers of science claim that scientific toy models afford knowledge of possibility, but answers to the question of why toy models can be expected to competently play this role are scarce. The main line of reply is that toy models support possibility claims insofar as they are credible. I raise a challenge for this credibility-thesis, drawing on a familiar problem for imagination-based modal epistemologies, and argue that it remains unanswered in the current literature. The credibility-thesis has a long way to go if it is to account for the epistemic merits of toy models.
Biologists, climate scientists, and economists all rely on models to move their work forward. In this book, I explore the use of models in these and other fields to introduce readers to the various philosophical issues that arise in scientific modeling. I show that paying attention to models plays a crucial role in appraising scientific work.

After surveying a wide range of models from a number of different scientific disciplines, I demonstrate how focusing on models sheds light on many perennial issues in philosophy of science and in philosophy in general. For example, reviewing the range of views on how models represent their targets introduces readers to the key issues in debates on representation, not only in science but in the arts as well. Also, standard epistemological questions are cast in new and interesting ways when we confront the question, "What makes for a good (or bad) model?".
This monograph offers a critical introduction to current theories of how scientific models represent their target systems. Representation is important because it allows scientists to study a model to discover features of reality. The authors provide a map of the conceptual landscape surrounding the issue of scientific representation, arguing that it consists of multiple intertwined problems. They provide an encyclopaedic overview of existing attempts to answer these questions, and they assess their strengths and weaknesses. The book also presents a comprehensive statement of their alternative proposal, the DEKI account of representation, which they have developed over the last few years. They show how the account works in the case of material as well as non-material models; how it accommodates the use of mathematics in scientific modelling; and how it sheds light on the relation between representation in science and art. The issue of representation has generated a sizeable literature, which has been growing fast in particular over the last decade. This makes it hard for novices to get a handle on the topic because so far there is no book-length introduction that would guide them through the discussion. Likewise, researchers may require a comprehensive review that they can refer to for critical evaluations. This book meets the needs of both groups.
We explore three questions about Earth system modeling that are of both scientific and philosophical interest: What kind of understanding can be gained via complex Earth system models? How can the limits of understanding be bypassed or managed? How should the task of evaluating Earth system models be conceptualized?
In Simulation and Similarity, Michael Weisberg offers a similarity-based account of the model–world relation, which is the relation in virtue of which successful models are successful. Weisberg’s main idea is that models are similar to targets in virtue of sharing features. An important concern about Weisberg’s account is that it remains silent on what it means for models and targets to share features, and consequently on how feature-sharing contributes to models’ epistemic success. I consider three potential ways of concretizing the concept of shared features: as identical, quantitatively sufficiently close, and sufficiently similar features. I argue that each of these concretizations faces significant challenges, leaving unclear how Weisberg’s account substantially contributes to elucidating the relation in virtue of which successful models are successful. Against this background, I outline a pluralistic revision and argue that this revision may not only help Weisberg's account evade several of the problems that I raise, but also offers a novel perspective on the model–world relation more generally.
Over the last decades, network-based approaches have become highly popular in diverse fields of biology, including neuroscience, ecology, molecular biology, and genetics. While these approaches continue to grow very rapidly, some of their conceptual and methodological aspects still require a programmatic foundation. This challenge particularly concerns the question of whether a generalized account of explanatory, organisational, and descriptive levels of networks can be applied universally across the biological sciences. To this end, this highly interdisciplinary theme issue focuses on the definition, motivation, and application of key concepts in biological network science, such as the explanatory power of distinctively network explanations, network levels, and network hierarchies.
Three metascientific concepts that have been objects of philosophical analysis are the concepts of law, model, and theory. The aim of this article is to present the explication of these concepts, and of their relationships, made within the framework of Sneedean or Metatheoretical Structuralism (Balzer et al. 1987), and their application to a case from the realm of biology: Population Dynamics. The analysis carried out will make it possible to support, contrary to what some philosophers of science in general and of biology in particular hold, the following claims: a) there are "laws" in biological sciences, b) many of the heterogeneous and different "models" of biology can be accommodated under some "theory", and c) this is exactly what confers great unifying power on biological theories.
This paper defends the thesis of learning from non-causal models: viz. that the study of some model can prompt justified changes in one’s confidence in empirical hypotheses about a real-world target in the absence of any known or predicted similarity between model and target with regards to their causal features. Recognizing that we can learn from non-causal models matters not only to our understanding of past scientific achievements, but also to contemporary debates in the philosophy of science. At one end of the philosophical spectrum, my thesis undermines the views of those who, like Cartwright (Erkenntnis 70:45–58, 2009), follow Hesse (Models and Analogies in Science, Notre Dame, University of Notre Dame Press, 1963) in restricting the possibility of learning from models to only those situations where a model identifies some causal factors present in the target. At the other end of the spectrum, my thesis also helps undermine some extremely permissive positions, e.g., Grüne-Yanoff’s (Erkenntnis 70(1):81–99, 2009, Philos Sci 80(5): 850–861, 2013) claim that learning from a model is possible even in the absence of any similarity at all between model and target. The thesis that we can learn from non-causal models offers a cautious middle ground between these two extremes.
Ecological-enactive approaches to cognition aim to explain cognition in terms of the dynamic coupling between agent and environment. Accordingly, cognition of one's immediate environment (which is sometimes labeled "basic" cognition) depends on enaction and the picking up of affordances. However, ecological-enactive views supposedly fail to account for what is sometimes called "higher" cognition, i.e., cognition about potentially absent targets, which therefore can only be explained by postulating representational content. This challenge levelled against ecological-enactive approaches highlights a putative explanatory gap between basic and higher cognition. In this paper, we examine scientific cognition—a paradigmatic case of higher cognition—and argue that it shares fundamental features with basic cognition, for enaction and affordance selection are central to the scientific enterprise. Our argument focuses on modeling, and on how models promote scientific understanding. We base our argument on a non-representational account of scientific understanding and on material engagement theory, according to which models are conceived as material objects designed for scientific engagements. On this basis, we conclude that the explanatory gap is significantly less threatening to the ecological-enactive approach than it might appear.
Theoretical models are widely held as sources of knowledge of reality. Imagination is vital to their development and to the generation of plausible hypotheses about reality. But how can imagination, which is typically held to be completely free, effectively instruct us about reality? In this paper I argue that the key to answering this question lies in constrained uses of imagination. More specifically, I identify make-believe as the right notion of imagination at work in modelling. I propose the first overarching taxonomy of types of constraints on scientific imagination that enable knowledge of reality. And I identify two main kinds of knowledge enabled by models: knowledge of the imaginary scenario specified by a model, and knowledge of reality.
What are theoretical models and how do they contribute to a scientific understanding of reality? In this chapter, I will argue that models are akin to fictional stories in that they are human-made artifacts created through the imaginative activities of scientists. And I will suggest that the sort of imagination involved in modeling is make-believe and that this is constrained in three main ways which, together, enable knowledge of reality. I will conclude by addressing recent criticisms against the fiction view of models and the relevance of scientific imagination in modeling put forward by Weisberg and Knuuttila.
Discussion of modeling within the philosophy of science has focused on how models, understood as finished products, represent the world. This approach has difficulty accounting for the value of modeling in situations where there is controversy as to what should be the object of representation. In this work I show that a historical analysis of modeling complements the aforementioned representational program, since it allows us to examine processes of integration of analogies that play a role in the generation of criteria of relevance, which are important for the configuration of the object of research. This, in turn, shows that there are norms in modeling practices whose historical reconstruction is relevant to their philosophical analysis.
Many philosophers have drawn parallels between scientific models and fictions. In this paper I will be concerned with a recent version of the analogy, which compares models to the imagined characters of fictional literature. Though versions of the position differ, the shared idea is that modeling essentially involves imagining concrete systems analogously to the way that we imagine characters and events in response to works of fiction. Advocates of this view argue that imagining concrete systems plays an ineliminable role in the practice of modeling that cannot be captured by other accounts. The approach thus leaves open what we should say about the ontological status of model-systems, and here advocates differ among themselves, defending a variety of realist or anti-realist positions. I argue that this debate over the ontological status of model-systems is misguided. If model-systems are the kinds of objects fictional realists posit, they can play no role in explaining the epistemology of modeling for an advocate of this approach. So they are at best superfluous. Defenders of the approach should focus on developing an account of the epistemological role of imagining model-systems.
In recent decades, philosophers of science have devoted considerable effort to understanding what models represent. One popular position is that models represent fictional situations. Another position states that, though models often involve fictional elements, they represent real objects or scenarios. Though these two positions may seem to be incompatible, I believe it is possible to reconcile them. Using a threefold distinction between different signs proposed by Peirce, I develop an argument based on a proposal recently made by Kralemann and Lattmann (Synthese 190:3397–3420, 2013) that shows that the two aforementioned positions can be reconciled by distinguishing different ways in which a model representation can be used. In particular, on the basis of Peirce's distinction between icons, indices, and symbols, I argue that models can function sometimes as icons, sometimes as indices, and sometimes as symbols, depending on the context in which they are considered and the use for which they are developed, since models have iconic, indexical, and symbolic features. In addition, I show that conceiving of models as signs enables us to develop an account of scientific representation that meets the main desiderata that Shech (Synthese 192:3463–3485, 2015) presents.
The aim of this dissertation is to comprehensively study the various robustness arguments proposed in the literature, from Levins to Lloyd, as well as the opposition offered to them, and to inquire into the degree of epistemic virtue they confer on model prediction results with respect to climate science and modeling. Another critical issue that this dissertation strives to examine is that of the actual epistemic notion that is operational when scientists and philosophers appeal to robustness. In attempting to explicate this idea, the discussion turns to the arguments provided by Schupbach, who rejects probabilistic independence outright in favour of explanatory reasoning; Stegenga and Menon, who still see some value in probabilistic independence; and Winsberg, who applies Schupbach's account to climate science, going beyond models to involve multi-modal evidence. After an exhaustive discussion of these arguments, this dissertation attempts to provide a thorough and updated notion of robustness in climate modeling and climate science.
Many scientific models are representations. Building on Goodman and Elgin's notion of representation-as, we analyse what this claim involves by providing a general definition of what makes something a scientific model and by formulating a novel account of how models represent. We call the result the DEKI account of representation, which offers a complex kind of representation involving an interplay of denotation, exemplification, keying-up of properties, and imputation. Throughout we focus on material models, and we illustrate our claims with the Phillips-Newlyn machine. In the conclusion we suggest that, mutatis mutandis, the DEKI account can be carried over to other kinds of models, notably fictional and mathematical models.
Margaret Morrison (2015), Reconstructing Reality: Models, Mathematics, and Simulations. Oxford University Press, New York. Scientific models, mathematical equations, and computer simulations are indispensable to scientific practice. Through the use of models, scientists are able to effectively learn about how the world works, and to discover new information. However, there is a challenge in understanding how scientists can generate knowledge from their use, stemming from the fact that models and computer simulations are necessarily incomplete representations, and partial descriptions, of their target systems. In order to construct a model, one must make idealizations, approximations, and abstractions. Given these constraints in constructing models, there is a question of whether they can provide new insight into how real systems actually work. So, how is it that highly abstract models inform us about the nature of the world, and more specifically, how do they provide explanatory knowledge? In Reconstructing Reality: Models, Mathematics, and Simulations, philosopher Margaret Morrison undertakes this task of ...
It is plausible to think that, in order to actively employ models in their inquiries, scientists should be aware of their existence. The question of how such awareness is possible is especially puzzling for realists in the case of abstract models, since it is not obvious how one could be aware of an abstract entity. Interestingly, though, this question has drawn little attention in the relevant literature. Perhaps the most obvious option for a realist is to appeal to intuition. In this paper, I argue that if scientific models were abstract entities, one could not be aware of them intuitively. I develop my argument by building on Chudnoff's elaboration of intuitive awareness. Furthermore, I briefly discuss some other options to which realists could turn in order to address the question of awareness.
In this paper, I argue that the newly developed network approach in neuroscience and biology provides a basis for formulating a unique type of realization, which I call topological realization. Some of its features, and its relation to one of the dominant paradigms of realization and explanation in the sciences, i.e. the mechanistic one, are already being discussed in the literature. But the detailed features of topological realization, its explanatory power, and its relation to another prominent view of realization, namely the semantic one, have not yet been discussed. I argue that topological realization is distinct from the mechanistic and semantic ones because the realization base in this framework rests not on local realizers, regardless of the scale, but on global realizers. In the mechanistic approach, the realization base is always at the local level, in both ontic and epistemic accounts. The explanatory power of the realization relation in the mechanistic approach comes directly from the realization relation itself, either by showing how a model is mapped onto a mechanism or by describing some ontic relations that are explanatory in themselves. Similarly, the semantic approach requires that concepts at different scales logically satisfy microphysical descriptions, which are at the local level. In the topological framework the realization base can be found at different scales, but whatever the scale, the realization base is global within that scale, not local. Furthermore, topological realization enables us to answer "why" questions, which, according to Polger (2010), makes it explanatory. The explanatoriness of topological realization stems from understanding the mathematical consequences of different topologies, not from the mere fact that a system realizes them.