The Representational Theory of Measurement conceives measurement as establishing homomorphisms from empirical relational structures into numerical relational structures, called models. There are two different approaches to the justification of a model: an axiomatic and an empirical approach. The axiomatic approach verifies whether a given relational structure satisfies certain axioms to secure homomorphic mapping. The empirical approach conceives models to function as measuring instruments by transferring observations of a phenomenon under investigation into quantitative facts about that phenomenon. These facts are evaluated by their accuracy and precision. Precision is generally achieved by least squares methods and accuracy by calibration. For calibration, standards are needed. Then two polar strategies can be distinguished: white-box modeling and black-box modeling. The first strategy aims at estimating the invariant equations of the phenomenon, thereby fulfilling Hertz's correctness requirement. The latter strategy is to use known stable facts about the phenomenon to adjust the model parameters, thereby fulfilling Hertz's appropriateness requirement. For this latter strategy, the requirement of models as homomorphic mappings has been dropped. While the axiomatic approach is more often used for measurement in the laboratory, the empirical approach is more appropriate for measurement outside the laboratory. The reason for this is that measurement of phenomena outside the laboratory also needs to take account of the environment to achieve accurate results. Environments are generally too relation-rich for an axiomatic approach, which is only applicable to relation-poor systems. The white-box modeling strategy, reflecting the complexity of the environment because of its correctness requirement, will, however, lead to immensely large models. To avoid this problem, modular design is an appropriate strategy for reducing this complexity. Modular design is a grey-box modeling strategy. Grey-box models are assemblies of modules; these are black boxes with standard interfaces. It should be noted that the structure of the assemblage need not be homomorphic to the relations describing the interaction between phenomenon and environment. These three modeling strategies map out the possible designs for computer simulations as measuring instruments. Whether a simulation is based on a white-box, grey-box, or black-box model is determined only by the relationship between the phenomenon and its environment and not by, e.g., its materiality or physicality.
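To make the black-box strategy concrete, here is a minimal sketch under invented assumptions: a hypothetical phenomenon, hypothetical calibration standards, and a polynomial model whose parameters carry no homomorphism claim. Precision comes from least squares fitting and accuracy is checked against the standards, in the spirit of the strategy described above rather than as its definitive implementation.

```python
# Minimal sketch of the black-box strategy: a model whose parameters are
# adjusted against known calibration standards by least squares.
# The phenomenon, the quadratic model form, and all names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# "Calibration standards": stimulus levels with known reference responses.
stimulus = np.linspace(0.0, 10.0, 25)
reference = 2.0 * stimulus + 0.5 * stimulus**2 + rng.normal(0.0, 0.3, stimulus.size)

# Black-box model: a polynomial whose coefficients carry no claim of
# homomorphism to the phenomenon's internal structure.
coeffs = np.polyfit(stimulus, reference, deg=2)       # precision via least squares
calibrated_model = np.poly1d(coeffs)

# Accuracy check against the standards (residuals after calibration).
rmse = np.sqrt(np.mean((calibrated_model(stimulus) - reference) ** 2))
print(f"fitted coefficients: {coeffs}, RMSE against standards: {rmse:.3f}")
```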
Experimental modeling in biology involves the use of living organisms (not necessarily so-called "model organisms") in order to model or simulate biological processes. I argue here that experimental modeling is a bona fide form of scientific modeling that plays an epistemic role that is distinct from that of ordinary biological experiments. What distinguishes such models from ordinary experiments is that they use what I call "in vivo representations", where one kind of causal process is used to stand in for a physically different kind of process. I discuss the advantages of this approach in the context of evolutionary biology.
Both von Neumann and Wiener were outsiders to biology. Both were inspired by biology and both proposed models and generalizations that proved inspirational for biologists. Around the same time in the 1940s von Neumann developed the notion of self-reproducing automata and Wiener suggested an explication of teleology using the notion of negative feedback. These efforts were similar in spirit. Both von Neumann and Wiener used mathematical ideas to attack foundational issues in biology, and the concepts they articulated had lasting effect. But there were significant differences as well. Von Neumann presented a how-possibly model, which sparked interest by mathematicians and computer scientists, while Wiener collaborated more directly with biologists, and his proposal influenced the philosophy of biology. The two cases illustrate different strategies by which mathematicians, the "professional outsiders" of science, can choose to guide their engagement with biological questions and with the biological community, and illustrate different kinds of generalizations that mathematization can contribute to biology. The different strategies employed by von Neumann and Wiener and the types of models they constructed may have affected the fate of von Neumann's and Wiener's ideas – as well as the reputation, in biology, of von Neumann and Wiener themselves.
John Maynard Smith is the person most responsible for the use of game theory in evolutionary biology, having introduced and developed its major concepts, and later surveyed its uses. In this paper I look at some rhetorical work done by Maynard Smith and his co-author G.R. Price to make game theory a standard and common modelling tool for the evolutionary study of behavior. The original presentation of the ideas — in a 1973 Nature article — is frequently cited but almost certainly rarely read. It took reformulation of the approach to create a usable model and an object of study. Perhaps paradoxically, the new model dealt with more abstract objects than did its predecessor, but because of that a better case could be made for its realism. The particular strategy of abstraction allowed game-theoretic modelling to gain a certain measure of autonomy from empirical problems, and thus to flourish.
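For readers unfamiliar with the reformulated model alluded to here, the following sketch shows the textbook Hawk–Dove form and Maynard Smith's evolutionarily stable strategy (ESS) condition. The payoff values are illustrative assumptions, not drawn from the 1973 paper's own examples.

```python
# A sketch of the reformulated game-theoretic model in its textbook Hawk-Dove
# form. Payoff values (V = resource value, C = injury cost) are illustrative;
# the ESS check follows Maynard Smith's standard definition.
V, C = 2.0, 3.0

# Payoff to the row strategy when played against the column strategy.
payoff = {
    ("H", "H"): (V - C) / 2,
    ("H", "D"): V,
    ("D", "H"): 0.0,
    ("D", "D"): V / 2,
}

def is_ess(s, strategies=("H", "D")):
    """Check the ESS conditions for a pure strategy s."""
    for t in strategies:
        if t == s:
            continue
        if payoff[(s, s)] < payoff[(t, s)]:
            return False
        if payoff[(s, s)] == payoff[(t, s)] and payoff[(s, t)] <= payoff[(t, t)]:
            return False
    return True

print({s: is_ess(s) for s in ("H", "D")})  # with C > V, neither pure strategy is an ESS
```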
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, idealised models of the brain that are capable of providing genuine explanations of cognitive phenomena.
This paper examines creative strategies employed in scientific modelling. It is argued that being creative is not a discrete event, but rather an ongoing effort consisting of many individual 'creative acts'. These take place over extended periods of time and can be carried out by different people, working on different aspects of the same project. The example of extended extragalactic radio sources shows that, in order to model a complicated phenomenon in its entirety, the modelling task is split up into smaller problems that result in several sub-models. This is a way of using cognitive resources efficiently and in a way which overcomes their limitations. Another aspect of modelling that requires creativity is the employment of visualisation in order to reassemble, i.e. recreate the unity of, the various sub-models. This illustrates how the creative effort required to deal with the complexity of the complicated phenomenon of radio sources is channelled in order to use cognitive resources efficiently and to stay within their capacity.
In 1966, Richard Levins argued that there are different strategies in model building in population biology. In this paper, I reply to Orzack and Sober's (1993) critiques of Levins, and argue that his views on modeling strategies apply also in the context of evolutionary genetics. In particular, I argue that there are different ways in which models are used to ask and answer questions about the dynamics of evolutionary change, prospectively and retrospectively, in classical versus molecular evolutionary genetics. Further, I argue that robustness analysis is a tool for, if not confirmation, then something near enough, in this discipline.
Interest in the computational aspects of modeling has been steadily growing in philosophy of science. This paper aims to advance the discussion by articulating the way in which modeling and computational errors are related and by explaining the significance of error management strategies for the rational reconstruction of scientific practice. To this end, we first characterize the role and nature of modeling error in relation to a recipe for model construction known as Euler's recipe. We then describe a general model that allows us to assess the quality of numerical solutions in terms of measures of computational errors that are completely interpretable in terms of modeling error. Finally, we emphasize that this type of error analysis involves forms of perturbation analysis that go beyond the basic model-theoretical and statistical/probabilistic tools typically used to characterize the scientific method; this demands that we revise and complement our reconstructive toolbox in a way that can affect our normative image of science.
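The distinction the abstract draws between modeling error and computational error can be illustrated with a toy example. The sketch below is not the authors' general model, nor Euler's recipe in their sense; the target system, the idealised model, and the step size are all invented for illustration.

```python
# Toy illustration of modeling error (model vs. phenomenon) versus
# computational error (numerical solution vs. exact model solution).
# The target system, the model, and the step size are made up.
import numpy as np

# "True" system: exponential decay with rate 1.05 (stands in for the phenomenon).
true_rate = 1.05
t_end, h = 2.0, 0.1
ts = np.arange(0.0, t_end + h, h)
truth = np.exp(-true_rate * ts)

# Model: exponential decay with an idealised rate of 1.0 (modeling error).
model_rate = 1.0
exact_model_solution = np.exp(-model_rate * ts)

# Numerical solution of the model by forward Euler (computational error).
y = np.empty_like(ts)
y[0] = 1.0
for i in range(len(ts) - 1):
    y[i + 1] = y[i] + h * (-model_rate * y[i])

modeling_error = np.max(np.abs(exact_model_solution - truth))
computational_error = np.max(np.abs(y - exact_model_solution))
total_error = np.max(np.abs(y - truth))
print(modeling_error, computational_error, total_error)
```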
Two controversies exist regarding the appropriate characterization of hierarchical and adaptive evolution in natural populations. In biology, there is the Wright-Fisher controversy over the relative roles of random genetic drift, natural selection, population structure, and interdemic selection in adaptive evolution begun by Sewall Wright and Ronald Aylmer Fisher. There is also the Units of Selection debate, spanning both the biological and the philosophical literature and including the impassioned group-selection debate. Why do these two discourses exist separately, and interact relatively little? We postulate that the reason for this schism can be found in the differing focus of each controversy, a deep difference itself determined by distinct general styles of scientific research guiding each discourse. That is, the Wright-Fisher debate focuses on adaptive process, and tends to be instructed by the mathematical modeling style, while the focus of the Units of Selection controversy is adaptive product, and is typically guided by the function style. The differences between the two discourses can be usefully tracked by examining their interpretations of two contested strategies for theorizing hierarchical selection: horizontal and vertical averaging.
In this paper I argue that the appropriate analogy for “understanding what makes simulation results reliable” in global climate modeling is not with scientific experimentation or measurement, but—at least in the case of the use of global climate models for policy development—with the applications of science in applied design problems. The prospects for using this analogy to argue for the quantitative reliability of GCMs are assessed and compared with other potential strategies.
The strategies of action employed by a human subject in order to perceive simple 2-D forms on the basis of tactile sensory feedback have been modelled by an explicit computer algorithm. The modelling process has been constrained and informed by the capacity of human subjects both to consciously describe their own strategies, and to apply explicit strategies; thus, the strategies effectively employed by the human subject have been influenced by the modelling process itself. On this basis, good qualitative and semi-quantitative agreement has been achieved between the trajectories produced by a human subject, and the traces produced by a computer algorithm. The advantage of this reciprocal modelling option, besides facilitating agreement between the algorithm and the empirically observed trajectories, is that the theoretical model provides an explanation, and not just a description, of the active perception of the human subject.
A theory of vagueness gives a model of vague language and of reasoning within the language. Among the models that have been offered are Degree Theorists' numerical models that assign values between 0 and 1 to sentences, rather than simply modelling sentences as true or false. In this paper, I ask whether we can benefit from employing a rich, well-understood numerical framework, while ignoring those aspects of it that impute a level of mathematical precision that is not present in the modelled phenomenon of vagueness. Can we ignore apparent implications for the phenomena by pointing out that it is just a model and that the unwanted features are mere artefacts? I explore the distinction between representors and artefacts and criticise the strategy of appealing to features as mere artefacts in defence of a theory. I focus largely on theories using numerical resources, but also consider other, related theories and strategies, including theories appealing to non-linear structures.
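A minimal sketch of the kind of degree-theoretic model at issue may help. It uses one common fuzzy-logic choice of connectives (min, max, 1 − x); the membership function for "tall" and its precise endpoints are invented, and those exact cutoffs are precisely the kind of feature a defender might dismiss as a mere artefact.

```python
# Sketch of a degree-theoretic model of vagueness with one common choice of
# connectives. The linear membership function for "tall" and its 170-190 cm
# ramp are invented; their precise endpoints are artefact-laden by design.
def degree_tall(height_cm: float) -> float:
    if height_cm <= 170:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 170) / 20.0

def f_and(a, b): return min(a, b)
def f_or(a, b): return max(a, b)
def f_not(a): return 1.0 - a

tim, sue = 178.0, 186.0
print(degree_tall(tim))                                   # 0.4: borderline case
print(f_and(degree_tall(tim), f_not(degree_tall(tim))))   # 0.4, not 0 as in classical logic
print(f_or(degree_tall(sue), f_not(degree_tall(sue))))    # 0.8, not 1
```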
The paper presents the most general aspects of scientific modeling and shows that social systems naturally include different belief systems. Belief systems differ in a variety of respects, most notably in the selection of suitable qualities to encode and the internal structure of the observables. The following results emerge from the analysis: conflict is explained by showing that different models encode different qualities, which implies that they model different realities; explicitly connecting models to the realities that they encode makes it possible to clarify the relations among models; by understanding that social systems are complex one knows that there is no chance of developing a maximal model of the system; the distinction among different levels of depth implicitly includes a strategy for inducing change; identity-preserving models are among the most difficult to modify; since models do not customarily generate internal signals of error, strategies with which to determine when models are out of synch with their situations are especially valuable; changing the form of power from a zero sum game to a positive sum game helps transform the nature of conflicts.
Response strategy is a key for preventing widespread corruption vulnerabilities in the public construction sector. Although several studies have been devoted to this area, the effectiveness of response strategies has seldom been evaluated in China. This study aims to fill this gap by investigating the effectiveness of response strategies for corruption vulnerabilities through a survey in the Chinese public construction sector. Survey data obtained from selected experts involved in the Chinese public construction sector were analyzed by factor analysis and partial least squares-structural equation modeling. Analysis results showed that the four response strategies of leadership, rules and regulations, training, and sanctions achieved only an acceptable level of effectiveness in preventing corruption vulnerabilities in the Chinese public construction sector. This study contributes to knowledge by improving the understanding of the effectiveness of response strategies for corruption vulnerabilities in the public construction sector of developing countries.
Loop analysis is a method of qualitative modeling anticipated by Sewall Wright and systematically developed by Richard Levins. In Levins' (1966) distinctions between modeling strategies, loop analysis sacrifices precision for generality and realism. Besides criticizing the clarity of these distinctions, Orzack and Sober (1993) argued qualitative modeling is conceptually and methodologically problematic. Loop analysis of the stability of ecological communities shows this criticism is unjustified. It presupposes an overly narrow view of qualitative modeling and underestimates the broad role models play in scientific research, especially in helping scientists represent and understand complex systems.
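The following sketch is in the spirit of loop analysis, though it replaces Levins' graphical rules with a crude numerical stand-in: only the signs of the interactions in a small community matrix are specified, and stability is probed by sampling magnitudes consistent with those signs. The three-species sign structure is invented for illustration.

```python
# Sketch in the spirit of loop analysis: a sign-specified community matrix is
# probed for stability by sampling magnitudes and checking eigenvalues. The
# sign structure (predator-prey chain with self-damping) is invented, and the
# sampling check is a stand-in for Levins' graphical loop rules.
import numpy as np

signs = np.array([[-1, -1,  0],    # species 1: self-damped, eaten by 2
                  [ 1, -1, -1],    # species 2: eats 1, self-damped, eaten by 3
                  [ 0,  1, -1]])   # species 3: eats 2, self-damped

rng = np.random.default_rng(1)
stable, trials = 0, 1000
for _ in range(trials):
    magnitudes = rng.uniform(0.1, 1.0, signs.shape)
    community_matrix = signs * magnitudes
    if np.all(np.linalg.eigvals(community_matrix).real < 0):
        stable += 1

print(f"{stable}/{trials} sampled matrices with this sign structure are stable")
```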
Webb has articulated a clear, multi-dimensional framework for discussing simulation models and modelling strategies. This framework will likely co-evolve with modelling. As such, it will be important to continue to clarify these dimensions and perhaps add to them. I discuss the dimension of generality and suggest that a dimension of integrativeness may also be needed.
College instructors use a variety of approaches to teach students to reason more effectively about issues with a moral dimension and achieve mixed results. This pre–post study of 423 undergraduate students examined the effects of morally explicit and implicit curricular content and of selected pedagogical strategies on moral reasoning development. Using causal modelling to control for a range of student background variables as well as Time 1 scores, 52% of the variance in moral reasoning scores was explained; we found that these scores were affected by type of curricular content and by three pedagogical strategies (active learning, reflection and faculty–student interaction). Students who experienced more negative interactions with diverse peers were the least likely to show positive change in moral reasoning as a result of participating in any course. Implications for the design of intervention studies are discussed, including the need to attend to selection and attenuation effects.
Multilevel research strategies characterize contemporary molecular inquiry into biological systems. We outline conceptual, methodological, and explanatory dimensions of these multilevel strategies in microbial ecology, systems biology, protein research, and developmental biology. This review of emerging lines of inquiry in these fields suggests that multilevel research in molecular life sciences has significant implications for philosophical understandings of explanation, modeling, and representation.
Biologists and economists use models to study complex systems. This similarity between these disciplines has led to an interesting development: the borrowing of various components of model-based theorizing between the two domains. A major recent example of this strategy is economists' utilization of the resources of evolutionary biology in order to construct models of economic systems. This general strategy has come to be called evolutionary economics and has been a source of much debate among economists. Although philosophers have developed literatures on the nature of models and modeling, the unique issues surrounding this kind of interdisciplinary model building have yet to be independently investigated. In this paper, we utilize evolutionary economics as a case study in the investigation of more general issues concerning interdisciplinary modeling. We begin by critiquing the distinctions currently used within the evolutionary economics literature and propose an alternative carving of the conceptual terrain. We then argue that the three types of evolutionary economics we distinguish capture distinctions that will be important whenever resources of model-based theorizing are borrowed across distinct scientific domains. Our analysis of these model-building strategies identifies several of the unique methodological and philosophical issues that confront interdisciplinary modeling.
Is there something specific about modelling that distinguishes it from many other theoretical endeavours? We consider Michael Weisberg's thesis that modelling is a form of indirect representation through a close examination of the historical roots of the Lotka–Volterra model. While Weisberg discusses only Volterra's work, we also study Lotka's very different design of the Lotka–Volterra model. We will argue that while there are elements of indirect representation in both Volterra's and Lotka's modelling approaches, they are largely due to two other features of contemporary model construction processes that Weisberg does not explicitly consider: the methods-drivenness and outcome-orientedness of modelling. Outline: 1 Introduction; 2 Modelling as Indirect Representation; 3 The Design of the Lotka–Volterra Model by Volterra; 3.1 Volterra's method of hypothesis; 3.2 The construction of the Lotka–Volterra model by Volterra; 4 The Design of the Lotka–Volterra Model by Lotka; 4.1 Physical biology according to Lotka; 4.2 Lotka's systems approach and the Lotka–Volterra model; 5 Philosophical Discussion: Strategies and Tools of Modelling; 5.1 Volterra's path from the method of isolation to the method of hypothesis; 5.2 The template-based approach of Lotka; 5.3 Modelling: methods-driven and outcome-oriented; 6 Conclusion.
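For reference, the canonical Lotka–Volterra predator–prey equations under discussion are dx/dt = ax − bxy and dy/dt = cxy − dy. The sketch below integrates them numerically; the parameter values and initial conditions are illustrative assumptions, not drawn from Volterra's or Lotka's own applications.

```python
# The canonical Lotka-Volterra predator-prey equations,
#   dx/dt = a*x - b*x*y   (prey),   dy/dt = c*x*y - d*y   (predator),
# integrated numerically. Parameters and initial conditions are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 0.075, 1.5

def lotka_volterra(t, state):
    x, y = state
    return [a * x - b * x * y, c * x * y - d * y]

solution = solve_ivp(lotka_volterra, (0.0, 40.0), [10.0, 5.0],
                     t_eval=np.linspace(0.0, 40.0, 400))
prey, predators = solution.y
print(f"prey range: {prey.min():.1f}-{prey.max():.1f}, "
      f"predator range: {predators.min():.1f}-{predators.max():.1f}")
```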
[Correction Notice: An erratum for this article was reported in Vol 109 of Psychological Review. Due to circumstances that were beyond the control of the authors, the studies reported in "Models of Ecological Rationality: The Recognition Heuristic," by Daniel G. Goldstein and Gerd Gigerenzer overlap with studies reported in "The Recognition Heuristic: How Ignorance Makes Us Smart," by the same authors and with studies reported in "Inference From Ignorance: The Recognition Heuristic". In addition, Figure 3 in the Psychological Review article was originally published in the book chapter and should have carried a note saying that it was used by permission of Oxford University Press.] One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
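The less-is-more effect can be illustrated with the standard expected-accuracy decomposition for the recognition heuristic: pairs of objects are decided by recognition alone, by knowledge, or by guessing. The validity values below are illustrative assumptions; the decomposition into the three pair types is the standard one for this heuristic.

```python
# Sketch of the recognition heuristic and the less-is-more effect: with N
# objects of which n are recognized, pairs are decided by recognition
# (validity alpha), by knowledge (validity beta), or by guessing.
def expected_accuracy(n: int, N: int, alpha: float, beta: float) -> float:
    pairs = N * (N - 1) / 2
    guess_pairs = (N - n) * (N - n - 1) / 2      # neither object recognized
    recog_pairs = n * (N - n)                    # exactly one recognized
    know_pairs = n * (n - 1) / 2                 # both recognized
    return (0.5 * guess_pairs + alpha * recog_pairs + beta * know_pairs) / pairs

N, alpha, beta = 100, 0.8, 0.6                   # recognition more valid than knowledge
accuracies = {n: round(expected_accuracy(n, N, alpha, beta), 3) for n in (0, 50, 100)}
print(accuracies)   # recognizing half the objects beats recognizing all of them
```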
This paper contrasts and compares strategies of model-building in condensed matter physics and biology, with respect to their alleged unequal susceptibility to trade-offs between different theoretical desiderata. It challenges the view, often expressed in the philosophical literature on trade-offs in population biology, that the existence of systematic trade-offs is a feature that is specific to biological models, since unlike physics, biology studies evolved systems that exhibit considerable natural variability. By contrast, I argue that the development of ever more sophisticated experimental, theoretical, and computational methods in physics is beginning to erode this contrast, since condensed matter physics is now in a position to measure, describe, model, and manipulate sample-specific features of individual systems – for example at the mesoscopic level – in a way that accounts for their contingency and heterogeneity. Model-building in certain areas of physics thus turns out to be more akin to modeling in biology than has been supposed and, indeed, has traditionally been the case.
Winsberg's "handshaking" account of inter-model relations is a well-known theory of multiscale modeling in physical systems. Winsberg argues that relations among the component models in a multiscale modeling system are not related mereologically, but rather by empirically determined algorithms. I argue that while the handshaking account does demonstrate the existence of non-mereological relationships among component models, Winsberg does not attend to the different ways in which handshaking algorithms are developed. By overlooking the distinct strategies employed in different (...) handshake models, Winsberg's account fails to capture the central feature of effective multiscale modeling practices, namely, how the dominant behaviors of the modeled systems vary across the different scales, and how this variation constrains the ways modelers can combine component models. Using Winsberg's example of nanoscale crack propagation, I distinguish two modes of handshaking and show how the different modes arise from the scale-dependent physics involved in each component model. (shrink)
Life scientists increasingly rely upon abstraction-based modeling and reasoning strategies for understanding biological phenomena. We introduce the notion of constraint-based reasoning as a fruitful tool for conceptualizing some of these developments. One important role of mathematical abstractions is to impose formal constraints on a search space for possible hypotheses and thereby guide the search for plausible causal models. Formal constraints are, however, not only tools for biological explanations but can be explanatory by virtue of clarifying general dependency-relations and patterning between functions and structures. We describe such situations as constraint-based explanations and argue that these differ from mechanistic strategies in important respects. While mechanistic explanations emphasize change-relating causal features, constraint-based explanations emphasize formal dependencies and generic organizational features that are relatively independent of lower-level changes in causal details. Our distinction between mechanistic and constraint-based explanations is pragmatically motivated by the wish to understand scientific practice. We contend that delineating the affordances and assumptions of different explanatory questions and strategies helps to clarify tensions between diverging scientific practices and the innovative potentials in their combination. Moreover, we show how constraint-based explanation integrates several features shared by otherwise different philosophical accounts of abstract explanatory strategies in biology.
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance and have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target is to blame in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling and simulation needs to be made. Holding on to that distinction, I propose to relate verification to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
Are there relationships between consciousness and the material world? Empirical evidence for such a connection was reported in several meta-analyses of mind-matter experiments designed to address this question. In this paper we consider such meta-analyses from a statistical modeling perspective, emphasizing strategies to validate the models and the associated statistical procedures. In particular, we explicitly model increased data variability and selection mechanisms, which permits us to estimate 'selection profiles' and to reassess the experimental effect in view of other potential effects. An application to the data pool considered in the influential meta-analysis of Radin and Nelson (1989) yields indications of the presence of random and selection effects. Adjustment for possible selection is found to render the experimental effect, which is significant without such an adjustment, non-significant. Somewhat different conclusions apply to a subset of the data deserving separate consideration. The actual origin of the data features that are described as experimental, random, or selection effects within the proposed model cannot be clarified by our approach and remains open.
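As a point of comparison for the kind of meta-analytic modeling discussed here, the following sketch computes a generic random-effects pooled estimate that allows for increased between-study variability. It is not the selection model of the paper, and the effect sizes and standard errors are invented for illustration.

```python
# Generic DerSimonian-Laird random-effects meta-analysis sketch.
# Effect sizes and standard errors below are invented for illustration.
import numpy as np

effects = np.array([0.10, 0.25, 0.05, 0.40, 0.15])   # per-study effect estimates
ses = np.array([0.08, 0.12, 0.06, 0.20, 0.10])       # their standard errors

w = 1.0 / ses**2                                      # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)                # heterogeneity statistic
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (ses**2 + tau2)                          # random-effects weights
random_effect = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {random_effect:.3f} +/- {1.96 * se_re:.3f}")
```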
The processes of wound healing and bone regeneration and problems in tissue engineering have been an active area for mathematical modeling in the last decade. Here we review a selection of recent models which aim at deriving strategies for improved healing. In wound healing, the models have particularly focused on the inflammatory response in order to improve the healing of chronic wounds. For bone regeneration, the mathematical models have been applied to design optimal and new treatment strategies for normal and specific cases of impaired fracture healing. For the field of tissue engineering, we focus on mathematical models that analyze the interplay between cells and their biochemical cues within the scaffold to ensure optimal nutrient transport and maximal tissue production. Finally, we briefly comment on numerical issues arising from simulations of these mathematical models.
In this article we describe research methods that are used for the study of individual multiattribute evaluation processes. First we explain that a multiattribute evaluation problem involves the evaluation of a set of alternatives, described by their values on a number of attributes. We discuss a number of evaluation strategies that may be applied to arrive at a conclusion about the attractiveness or suitability of the alternatives, and next introduce two main research paradigms in this area, structural modelling and process tracing. We argue that the techniques developed within these paradigms all have their advantages and disadvantages, and conclude that the most promising technique to detect the true nature of the evaluation strategy used by a judge seems to be the analysis of verbal protocols. At the same time we think it is wise not to rely on just one technique, but to use a multimethod approach to the study of multiattribute evaluation processes whenever that is possible.
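Two textbook evaluation strategies of the kind surveyed here are sketched below on the same invented choice problem: a compensatory weighted-additive rule and a non-compensatory lexicographic rule. The alternatives, attributes, and weights are made up, and the point is only that the two rules can pick different alternatives.

```python
# Weighted-additive (compensatory) versus lexicographic (non-compensatory)
# evaluation of the same invented alternatives.
alternatives = {
    "A": {"price": 0.9, "quality": 0.2, "delivery": 0.3},
    "B": {"price": 0.7, "quality": 0.9, "delivery": 0.8},
    "C": {"price": 0.6, "quality": 0.7, "delivery": 0.7},
}
weights = {"price": 0.5, "quality": 0.3, "delivery": 0.2}

def weighted_additive(options, w):
    scores = {name: sum(w[a] * v for a, v in attrs.items()) for name, attrs in options.items()}
    return max(scores, key=scores.get), scores

def lexicographic(options, attribute_order):
    remaining = dict(options)
    for attr in attribute_order:
        best = max(v[attr] for v in remaining.values())
        remaining = {k: v for k, v in remaining.items() if v[attr] == best}
        if len(remaining) == 1:
            break
    return next(iter(remaining))

print(weighted_additive(alternatives, weights))                       # trades off attributes: picks B
print(lexicographic(alternatives, ["price", "quality", "delivery"]))  # decides on price alone: picks A
```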
Commentary on our target article centers around six main topics: (1) strategies in modeling the neurobehavioral foundation of human behavioral traits; (2) clarification of the construct of affiliation; (3) developmental aspects of affiliative bonding; (4) modeling disorders of affiliative reward; (5) serotonin and affiliative behavior; and (6) neural considerations. After an initial important research update in section R1, our Response is organized around these topics in the following six sections, R2 to R7.
A simple numerical procedure is presented for the problem of estimating the parameters of models for the distribution of eggs oviposited in a host. The modelling is extended to incorporate both host density and time dependence to produce a remarkably parsimonious structure with only seven parameters to describe a data set of over 3,000 observations. This is further refined using a mixed model to accommodate several large outliers. Both models show that the level of superparasitism declines with increasing host density, and the rate declines over time. It is proposed that the differing behaviours represented by the mixed model may reflect a balance between behavioural strategies of different selective benefit.
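A generic sketch of this kind of parameter estimation is given below: maximum-likelihood fitting of a count model for eggs per host whose mean depends on host density. It is not the paper's actual model; the Poisson form, the log-linear mean, and the simulated data are stand-ins chosen only to illustrate the estimation step.

```python
# Generic maximum-likelihood sketch: a Poisson model for eggs per host with a
# density-dependent mean. Not the paper's model; data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(2)
host_density = rng.uniform(1.0, 20.0, 300)
true_b0, true_b1 = 1.2, -0.05                     # mean eggs declines with density
eggs = rng.poisson(np.exp(true_b0 + true_b1 * host_density))

def neg_log_likelihood(params):
    b0, b1 = params
    mu = np.exp(b0 + b1 * host_density)
    return -np.sum(eggs * np.log(mu) - mu - gammaln(eggs + 1))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
print("estimated (b0, b1):", fit.x)               # should be close to (1.2, -0.05)
```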
How do people reason about their opponent in turn-taking games? Often, people do not make the decisions that game theory would prescribe. We present a logic that can play a key role in understanding how people make their decisions, by delineating all plausible reasoning strategies in a systematic manner. This in turn makes it possible to construct a corresponding set of computational models in a cognitive architecture. These models can be run and fitted to the participants' data in terms of decisions, response times, and answers to questions. We validate these claims on the basis of an earlier game-theoretic experiment about the turn-taking game "Marble Drop with Surprising Opponent", in which the opponent often starts with a seemingly irrational move. We explore two ways of segregating the participants into reasonable "player types". The first way is based on latent class analysis, which divides the players into three classes according to their first decisions in the game: Random players, Learners, and Expected players, who make decisions consistent with forward induction. The second way is based on participants' answers to a question about their opponent, classified according to levels of theory of mind: zero-order, first-order and second-order. It turns out that increasing levels of decisions and theory of mind both correspond to increasing success as measured by monetary awards and increasing decision times. Next, we use the logical language to express different kinds of strategies that people apply when reasoning about their opponent and making decisions in turn-taking games, as well as the 'reasoning types' reflected in their behavior. Then, we translate the logical formulas into computational cognitive models in the PRIMs architecture. Finally, we run two of the resulting models, corresponding to the strategy of only being interested in one's own payoff and to the myopic strategy, in which one can only look ahead to a limited number of nodes. It turns out that the participant data fit to the own-payoff strategy, not the myopic one. The article closes the circle from experiments via logic and cognitive modelling back to predictions about new experiments.
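To give a feel for two of the strategies contrasted above, the toy sketch below applies full backward induction and a simple "own-payoff" rule (which heads for the branch containing one's highest possible payoff, ignoring the opponent's rationality) to a small invented two-player game tree. It is neither the Marble Drop game nor the authors' PRIMs models.

```python
# Toy two-player turn-taking game: backward induction versus an own-payoff rule.
# Leaf: (payoff_player0, payoff_player1); internal node: (player, left, right).
game = (0,
        (1, (4, 1), (0, 3)),       # if player 0 goes left, player 1 moves next
        (1, (1, 2), (2, 1)))       # if player 0 goes right

def backward_induction(node):
    if len(node) == 2:
        return node                               # leaf: return payoff pair
    player, left, right = node
    left_val, right_val = backward_induction(left), backward_induction(right)
    return left_val if left_val[player] >= right_val[player] else right_val

def best_own_payoff(node, player):
    if len(node) == 2:
        return node[player]
    _, left, right = node
    return max(best_own_payoff(left, player), best_own_payoff(right, player))

player, left, right = game
own_choice = "left" if best_own_payoff(left, player) >= best_own_payoff(right, player) else "right"
print("backward induction outcome:", backward_induction(game))   # (1, 2): player 0 goes right
print("own-payoff player 0 chooses:", own_choice)                # left: chasing a 4 the opponent never concedes
```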
A crisis continues to brew within the pharmaceutical research and development (R&D) enterprise: productivity continues declining as costs rise, despite ongoing, often dramatic scientific and technical advances. To reverse this trend, we offer various suggestions for both the expansion and broader adoption of modeling and simulation (M&S) methods. We suggest strategies and scenarios intended to enable new M&S use cases that directly engage R&D knowledge generation and build actionable mechanistic insight, thereby opening the door to enhanced productivity. What M&S requirements must be satisfied to access and open the door, and begin reversing the productivity decline? Can current methods and tools fulfill the requirements, or are new methods necessary? We draw on the relevant, recent literature to provide and explore answers. In so doing, we identify essential, key roles for agent-based and other methods. We assemble a list of requirements necessary for M&S to meet the diverse needs distilled from a collection of research, review, and opinion articles. We argue that to realize its full potential, M&S should be actualized within a larger information technology framework—a dynamic knowledge repository—wherein models of various types execute, evolve, and increase in accuracy over time. We offer some details of the issues that must be addressed for such a repository to accrue the capabilities needed to reverse the productivity decline.
Pickering & Garrod (P&G) explain dialogue dynamics in terms of forward modeling and prediction-by-simulation mechanisms. Their theory dissolves a strict segregation between production and comprehension processes, and it links dialogue to action-based theories of joint action. We propose that the theory can also incorporate intentional strategies that increase communicative success: for example, signaling strategies that help remaining predictable and forming common ground.
In the philosophy of science and epistemology literature, robustness analysis has become an umbrella term that refers to a variety of strategies. One of the main purposes of this paper is to argue that different strategies rely on different criteria for justification. More specifically, I will claim that: i) robustness analysis differs from de-idealization even though the two concepts have often been conflated in the literature; ii) the comparison of different model frameworks requires different justifications than the comparison of models that differ only in the assumption under test; iii) the replacement of specific assumptions with different ones can encounter specific difficulties in scientific practice. These claims will be supported by a case study in population ecology and a case study in geographical economics.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as "measuring instruments" and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation and experiment one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies and highlighting the ways in which those strategies are essential features of experimentation allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
Abstraction is seen as an active process which both enlightens and obscures. Abstractions are not true or false but relatively enlightening or obscuring according to the problem under study; different abstractions may grasp different aspects of a problem. Abstractions may be useless if they can answer questions only about themselves. A theoretical enterprise explores reality through a cluster of abstractions that use different perspectives, temporal and horizontal scales, and assume different givens.
This paper revisits the concept of fiction employed in recent debates about the reality of theoretical entities in the philosophy of science. From an anti-realist perspective the dependence of evidence for some scientific entities on mediated forms of observation and modelling strategies reflects a degree of construction that is argued to closely resemble fiction. As a realist's response to this debate, this paper provides an analysis of fictional entities in comparison to real ones. I argue that the distinction between fictional and real entities is reflected in their different relations toward their representations. This is particularly evident when it comes to the investigation of properties not explicitly given in a representation but that rely on knowledge external to it. A comparison of the resulting difference in the interpretation of fictional and real entities is then shown to provide guidelines for the assessment of when a realist claim can be made for model-based inferences to theoretical entities in science. At the end of this paper I advocate a pluralistic view on scientific realism by showing that representational pluralism, far from posing a problem for a realist interpretation of scientific practice, serves as an indicator for the reality of scientific entities.
Explaining the complex dynamics exhibited in many biological mechanisms requires extending the recent philosophical treatment of mechanisms that emphasizes sequences of operations. To understand how nonsequentially organized mechanisms will behave, scientists often advance what we call dynamic mechanistic explanations. These begin with a decomposition of the mechanism into component parts and operations, using a variety of laboratory-based strategies. Crucially, the mechanism is then recomposed by means of computational models in which variables or terms in differential equations correspond to properties of its parts and operations. We provide two illustrations drawn from research on circadian rhythms. Once biologists identified some of the components of the molecular mechanism thought to be responsible for circadian rhythms, computational models were used to determine whether the proposed mechanisms could generate sustained oscillations. Modeling has become even more important as researchers have recognized that the oscillations generated in individual neurons are synchronized within networks; we describe models being employed to assess how different possible network architectures could produce the observed synchronized activity.
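The "can the proposed mechanism generate sustained oscillations?" step can be illustrated with a minimal Goodwin-type negative-feedback oscillator. This is not any specific published circadian model; the rate constants and the Hill coefficient n = 10 are illustrative (with equal decay rates, a steep nonlinearity, roughly n > 8, is known to be needed for sustained oscillations).

```python
# Minimal Goodwin-type negative-feedback oscillator, integrated to check
# whether the mechanism can generate sustained oscillations. Parameters are
# illustrative, not taken from any published circadian model.
import numpy as np
from scipy.integrate import solve_ivp

n, decay = 10, 0.4

def goodwin(t, state):
    x, y, z = state                       # mRNA, protein, nuclear repressor
    return [1.0 / (1.0 + z**n) - decay * x,
            x - decay * y,
            y - decay * z]

sol = solve_ivp(goodwin, (0.0, 200.0), [0.1, 0.1, 0.1],
                t_eval=np.linspace(0.0, 200.0, 4000), rtol=1e-8)
late = sol.y[2][sol.t > 150.0]            # repressor concentration after transients
print(f"late-time repressor range: {late.min():.2f} to {late.max():.2f}")
print("sustained oscillations" if late.max() - late.min() > 0.1 else "damped to steady state")
```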
Behavior oftentimes allows for many possible interpretations in terms of mental states, such as goals, beliefs, desires, and intentions. Reasoning about the relation between behavior and mental states is therefore considered to be an effortful process. We argue that people use simple strategies to deal with high cognitive demands of mental state inference. To test this hypothesis, we developed a computational cognitive model, which was able to simulate previous empirical findings: In two-player games, people apply simple strategies at first. They only start revising their strategies when these do not pay off. The model could simulate these findings by recursively attributing its own problem solving skills to the other player, thus increasing the complexity of its own inferences. The model was validated by means of a comparison with findings from a developmental study in which the children demonstrated similar strategic developments.
The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and limitations inherent in each of these approaches.
Accounts of the relation between theories and models in biology concentrate on mathematical models. In this paper I consider the dual role of models as representations of natural systems and as a material basis for theorizing. In order to explicate the dual role, I develop the concept of a remnant model, a material entity made from parts of the natural system(s) under study. I present a case study of an important but neglected naturalist, Joseph Grinnell, to illustrate the extent to which mundane practices in a museum setting constitute theorizing. I speculate that historical and sociological analyses of institutions can play a specific role in the philosophical analysis of model-building strategies.
Whether any non-human animal can attribute mental states to others remains the subject of extensive debate. This despite the fact that several species have behaved as if they have a 'theory of mind' in various behavioral tasks. In this paper, we review the reasons of skeptics for their doubts: That existing experimental setups cannot distinguish between 'mind readers' and 'behavior readers', that results that seem to indicate 'theory of mind' may come from studies that are insufficiently controlled, and that our own intuitive biases may lead us to interpret behavior more 'cognitively' than is necessary. The merits of each claim and suggested solution are weighed. The conclusion is that while it is true that existing setups cannot conclusively demonstrate 'theory of mind' in non-human animals, focusing on this fact is unlikely to be productive. Instead, the more interesting question is how sophisticated their social reasoning can be, whether it is about 'unobservable inner experiences' or not. Therefore, it is important to address concerns about the setup and interpretation of specific experiments. To alleviate the impact of intuitive biases, various strategies have been proposed in the literature. These include a deeper understanding of associative learning, a better knowledge of the limited 'theory of mind' humans actually use, and thinking of animal cognition in an embodied, embedded way; that is, being aware that constraints outside of the brain, and outside of the body, may naturally predispose individuals to produce behavior that looks smart without requiring complex cognition. To enable this kind of thinking, a powerful methodological tool is advocated: Computational modeling, namely agent-based modeling and, particularly, cognitive modeling. By explicitly simulating the rules and representations that underlie animal performance on specific tasks, it becomes much easier to look past one's own biases and to see what cognitive processes might actually be occurring.
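As an illustration of the kind of simulation advocated here, the sketch below shows an agent that learns by simple associative (Rescorla–Wagner) updating, with no theory of mind, yet comes to choose the rewarded option reliably. The task (two cues, one rewarded 80% of the time), the choice rule, and the learning parameters are invented for illustration.

```python
# Minimal associative-learning agent: epsilon-greedy choice plus
# Rescorla-Wagner updating on an invented two-cue discrimination task.
import random

random.seed(0)
learning_rate, epsilon = 0.1, 0.1
value = {"cue_A": 0.0, "cue_B": 0.0}           # associative strengths
reward_prob = {"cue_A": 0.8, "cue_B": 0.2}

choices_last_100 = []
for trial in range(500):
    if random.random() < epsilon:
        cue = random.choice(list(value))        # occasional exploration
    else:
        cue = max(value, key=value.get)         # otherwise pick the stronger cue
    reward = 1.0 if random.random() < reward_prob[cue] else 0.0
    value[cue] += learning_rate * (reward - value[cue])   # Rescorla-Wagner update
    if trial >= 400:
        choices_last_100.append(cue)

print(value)                                    # strengths approach 0.8 and 0.2
print("choices of cue_A in the last 100 trials:", choices_last_100.count("cue_A"))
```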