Experimental activity is traditionally identified with testing the empirical implications of models, or of their numerical simulations, against data. In critical reaction to this ‘tribunal view’ of experiments, this essay shows the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it focuses on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
The article first addresses the importance of cognitive modeling, in terms of its value to cognitive science (as well as other social and behavioral sciences). In particular, it emphasizes the use of cognitive architectures in this undertaking. Based on this approach, the article addresses, in detail, the idea of a multi-level approach that ranges from social to neural levels. In the physical sciences, a rigorous set of theories forms a hierarchy of descriptions/explanations, in which causal relationships among entities at a high level can be reduced to causal relationships among simpler entities at a more detailed level. We argue that a similar hierarchy makes possible an equally productive approach toward cognitive modeling. The levels of models that we conceive in relation to cognition include, at the highest level, sociological/anthropological models of collective human behavior, behavioral models of individual performance, cognitive models involving detailed mechanisms, representations, and processes, as well as biological/physiological models of neural circuits, brain regions, and other detailed biological processes.
This paper describes the processes of cognitive modeling and representation of human expertise for developing an ontology and knowledge base of an expert system. An ontology is an organization and classification of knowledge. Ontological engineering in artificial intelligence (AI) has the practical goal of constructing frameworks for knowledge that allow computational systems to tackle knowledge-intensive problems and that support knowledge sharing and reuse. Ontological engineering is also a process that facilitates construction of the knowledge base of an intelligent system, which can be defined as a computer program that can duplicate the problem-solving capabilities of human experts in specific areas. This paper presents the processes of knowledge acquisition, analysis, and representation, which laid the basis for ontology construction. In this case, the processes are applied in ontological engineering for construction of an expert system in the domain of monitoring of a petroleum production and separation facility. The acquired knowledge was also formally represented in two knowledge acquisition tools.
Modeling and simulation clearly have an upside. My discussion here will deal with the inevitable downside of modeling — the sort of things that can go wrong. It will set out a taxonomy for the pathology of models — a catalogue of the various ways in which model contrivance can go awry. In the course of that discussion, I also call on some of my past experience with models and their vulnerabilities.
The Lotka–Volterra predator–prey model is a widely known example of model-based science. Here we reexamine Vito Volterra’s and Umberto D’Ancona’s original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D’Ancona chose modeling because the problem at hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a “how possibly” to a “how actually” model. We discuss how and to what extent Volterra and D’Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin’s model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a “how actually” model.
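For readers who know the model by name but not by its equations, the standard textbook form couples a prey density x and a predator density y (the notation below is the modern conventional one, not necessarily Volterra's own):

\[ \frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y, \]

where \(\alpha\) is the prey growth rate, \(\beta\) the predation rate, \(\delta\) the conversion of consumed prey into new predators, and \(\gamma\) the predator death rate.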
This study examined how ethical case study content and the process for working through case material influenced training effectiveness. Specifically, the effects of behavioral modeling content and the use of forecasting prompt questions on knowledge acquisition and transfer were tested. Graduate students participating in a case-based ethics training course read a case where the main actor demonstrated key behaviors effectively (mastery model), some behaviors effectively and some ineffectively (mixed model), or no behaviors (no model). The students then responded to forecasting or summarizing prompts. Results revealed a main effect for modeling content. Explicitly modeling key behaviors within a case improved constraint analyses, sensemaking, and decision ethicality on a transfer task. The mastery model using effective behaviors was most beneficial. Forecasting prompts resulted in better transfer performance when the main actor used a mix of ineffective and effective behaviors. Implications for designing ethics training programs are discussed.
This paper investigates the relationship between reality and model, information and truth. It argues that meaningful data need not be true in order to constitute information. Information to which no truth-value can be ascribed, partially true information, or even false information can lead to interesting outcomes such as technological innovation or scientific breakthrough. In the research process, during the transition between two theoretical frameworks, there is a dynamic mixture of old and new concepts in which truth is not well defined. Instead of veridicity, correctness of a model and its appropriateness within a context are commonly required. Although empirical models are in general only truthlike, they are nevertheless capable of producing results from which conclusions can be drawn and adequate decisions made.
The credibility of digital computer simulations has always been a problem. Today, through the debate on verification and validation, it has become a key issue. I review the existing theses on that question. I show that, owing to the role of epistemological beliefs in science, no general agreement can be found on this matter. Hence, the complexity of the construction of the sciences must be acknowledged. I illustrate these claims with a recent historical example. Finally, I temper this diversity by pointing to recent trends in the environmental sciences and in the industrial sciences.
Both von Neumann and Wiener were outsiders to biology. Both were inspired by biology, and both proposed models and generalizations that proved inspirational for biologists. Around the same time in the 1940s, von Neumann developed the notion of self-reproducing automata and Wiener suggested an explication of teleology using the notion of negative feedback. These efforts were similar in spirit. Both von Neumann and Wiener used mathematical ideas to attack foundational issues in biology, and the concepts they articulated had a lasting effect. But there were significant differences as well. Von Neumann presented a how-possibly model, which sparked interest among mathematicians and computer scientists, while Wiener collaborated more directly with biologists, and his proposal influenced the philosophy of biology. The two cases illustrate different strategies by which mathematicians, the “professional outsiders” of science, can choose to guide their engagement with biological questions and with the biological community, and they illustrate different kinds of generalizations that mathematization can contribute to biology. The different strategies employed by von Neumann and Wiener and the types of models they constructed may have affected the fate of their ideas, as well as the reputation, in biology, of von Neumann and Wiener themselves.
This paper uses a number of examples of diverse types and functions of models in evolutionary biology to argue that the demarcation between theory and practice, or "theory model" and "data model," is often difficult to make. It is shown how both mathematical and laboratory models function as plausibility arguments, existence proofs, and refutations in the investigation of questions about the pattern and process of evolutionary history. I consider the consequences of this for the semantic approach to theories and theory confirmation. The paper attempts to reconcile the insights of both critics and advocates of the semantic approach to theories.
Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus.
We explore the interaction between oculomotor control and language comprehension on the sentence level using two well-tested computational accounts of parsing difficulty. Previous work (Boston, Hale, Vasishth, & Kliegl, 2011) has shown that surprisal (Hale, 2001; Levy, 2008) and cue-based memory retrieval (Lewis & Vasishth, 2005) are significant and complementary predictors of reading time in an eyetracking corpus. It remains an open question how the sentence processor interacts with oculomotor control. Using a simple linking hypothesis proposed in Reichle, Warren, and McConnell (2009), we integrated both measures with the eye movement model EMMA (Salvucci, 2001) inside the cognitive architecture ACT-R (Anderson et al., 2004). We built a reading model that could initiate short “Time Out regressions” (Mitchell, Shen, Green, & Hodgson, 2008) that compensate for slow postlexical processing. This simple interaction enabled the model to predict the re-reading of words based on parsing difficulty. The model was evaluated in different configurations on the prediction of frequency effects on the Potsdam Sentence Corpus. The extension of EMMA with postlexical processing improved its predictions and reproduced re-reading rates and durations with a reasonable fit to the data. This demonstration, based on simple and independently motivated assumptions, serves as a foundational step toward a precise investigation of the interaction between high-level language processing and eye movement control.
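For reference, the surprisal measure cited above (Hale, 2001; Levy, 2008) is standardly defined as the negative log-probability of a word given its preceding context,

\[ \mathrm{surprisal}(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1}), \]

so that less predictable words are expected to incur longer reading times.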
Recent findings indicate that the constituent digits of multi-digit numbers are processed in a decomposed fashion, into units, tens, and so on, rather than integrated into one entity. This is suggested by interfering effects of unit-digit processing on two-digit number comparison. In the present study, we extended the computational model of two-digit number magnitude comparison of Moeller, Huber, Nuerk, and Willmes (2011a) to the case of three-digit number comparison (e.g., 371_826). In a second step, we evaluated how hundred-decade and hundred-unit compatibility effects were moderated by varying the percentage of within-hundred (e.g., 539_582) and within-hundred-and-decade filler items (e.g., 483_489). From the results we predict that numerical distance as well as compatibility effects should indeed be modulated by the relevance of tens and units in three-digit number magnitude comparison: while in particular the hundred distance effect should decrease, we predict hundred-decade and hundred-unit compatibility effects to increase with the relevance of tens and units.
Despite efforts from regulatory agencies (e.g. NIH, FDA), recent systematic reviews of randomised controlled trials (RCTs) show that top medical journals continue to publish trials without requiring authors to report the details readers need to evaluate early stopping decisions carefully. This article presents a systematic way of modelling and simulating interim monitoring decisions of RCTs. By taking an approach that is both general and rigorous, the proposed framework models and evaluates early stopping decisions of RCTs based on a clear and consistent set of criteria. The framework allows decision analysts to generate and quickly answer ‘what-if’ questions by simulating alternative trial scenarios. I illustrate the framework with a case study of an RCT that was stopped early due to harm: a trial of vitamin A supplementation in relation to mother-to-child HIV transmission through breastfeeding.
This paper examines the role of mathematical idealization in describing and explaining various features of the world. It examines two cases: first, briefly, the modeling of shock formation using the idealization of the continuum; second, and in more detail, the breaking of droplets from the points of view of both analytic fluid mechanics and molecular dynamical simulations at the nano-level. It argues that the continuum idealizations are explanatorily ineliminable and that a full understanding of certain physical phenomena cannot be obtained through completely detailed, nonidealized representations.
Contemporary literature in the philosophy of science has begun to emphasize the practice of modeling, which differs in important respects from other forms of representation and analysis central to standard philosophical accounts. This literature has stressed the constructed nature of models, their autonomy, and the utility of their high degrees of idealization. What this new literature about modeling lacks, however, is a comprehensive account of the models that figure into the practice of modeling. This paper offers a new account of both concrete and mathematical models, with special emphasis on the intentions of theorists, which are necessary for evaluating the model-world relationship during the practice of modeling. Although mathematical models form the basis of most contemporary modeling, my discussion begins with more traditional, concrete models such as the San Francisco Bay model.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling, given the focus on the relationship between a single model and its supposed target system and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
Philosophy can shed light on mathematical modeling and on the juxtaposition of modeling and empirical data. This paper explores three philosophical traditions concerning the structure of scientific theory—Syntactic, Semantic, and Pragmatic—to show that each illuminates mathematical modeling. The Pragmatic View identifies four critical functions of mathematical modeling: (1) unification of both models and data, (2) fitting of models to data, (3) identification of mechanisms accounting for observations, and (4) prediction of future observations. These facets are explored using a recent exchange between two groups of mathematical modelers in plant biology. Scientific debate can arise from different modeling philosophies.
Making sense of modeling: beyond representation. Isabelle Peschard (Philosophy Department, San Francisco State University). European Journal for Philosophy of Science 1(3): 335–352. DOI 10.1007/s13194-011-0032-8.
The goal of the present article is to contribute to the epistemology and methodology of computer simulations. The central thesis is that the process of simulation modeling takes the form of an explorative cooperation between experimenting and modeling. This characteristic mode of modeling turns simulations into autonomous mediators in a specific way; namely, it makes it possible for the phenomena and the data to exert a direct influence on the model. The argumentation will be illustrated by a case study of the general circulation models of meteorology, the major simulation models in climate research.
Cellular Automata (CA) based simulations are widely used in a great variety of domains, from statistical physics to social science. They allow for spectacular displays and numerical predictions. Are they, for all that, a revolutionary modeling tool, allowing for “direct simulation”, or for the simulation of “the phenomenon itself”? Or are they merely models “of a phenomenological nature rather than of a fundamental one”? How do they compare to other modeling techniques? In order to answer these questions, we present a systematic exploration of CA’s various uses.
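To make concrete how little machinery a CA involves, here is a minimal, self-contained sketch of an elementary one-dimensional automaton (Wolfram's rule 30); it is an illustration only, not one of the models surveyed in the paper.

# Minimal elementary cellular automaton (rule 30), for illustration only.
RULE = 30
WIDTH, STEPS = 31, 15

def step(cells):
    """Apply the rule to every cell, using wrap-around (periodic) boundaries."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * WIDTH
row[WIDTH // 2] = 1          # single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)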
Modeling involves the use of false idealizations, yet there is typically a belief or hope that modeling somehow manages to deliver true information about the world. The paper discusses one possible way of reconciling truth and falsehood in modeling. The key trick is to relocate truth claims by reinterpreting an apparently false idealizing assumption in order to make clear what possibly true assertion is intended when using it. These include interpretations in terms of negligibility, applicability, tractability, early-step, and more. Elaborations are suggested about their precise formulations, mutual relationships, and truth-aptness.
Efforts to bridge emotion theory with neurobiology can be facilitated by dynamic systems (DS) modeling. DS principles stipulate higher-order wholes emerging from lower-order constituents through bidirectional causal processes – offering a common language for psychological and neurobiological models. After identifying some limitations of mainstream emotion theory, I apply DS principles to emotion–cognition relations. I then present a psychological model based on this reconceptualization, identifying trigger, self-amplification, and self-stabilization phases of emotion-appraisal states, leading to consolidating traits. The article goes on to describe neural structures and functions involved in appraisal and emotion, as well as DS mechanisms of integration by which they interact. These mechanisms include nested feedback interactions, global effects of neuromodulation, vertical integration, action-monitoring, and synaptic plasticity, and they are modeled in terms of both functional integration and temporal synchronization. I end by elaborating the psychological model of emotion–appraisal states with reference to neural processes. Key Words: appraisal; bidirectional causality; cognition; dynamic systems; emotion; neurobiology; part–whole relations; self-organization.
This article briefly reviews the fundamentals of structural equation modeling for readers unfamiliar with the technique, then goes on to offer a review of the Martin and Cullen paper. In summary, a number of fit indices reported by the authors reveal that the data do not fit their theoretical model, and thus the authors' conclusion that the model was “promising” is unwarranted.
Modeling in biology and economics. Michael Weisberg (University of Pennsylvania), Samir Okasha (University of Bristol), and Uskali Mäki (University of Helsinki). Biology and Philosophy 26(5): 613–615. DOI 10.1007/s10539-011-9271-5.
The fate of optimality modeling is typically linked to that of adaptationism: the two are thought to stand or fall together (Gould and Lewontin, Proc R Soc Lond B 205:581–598, 1979; Orzack and Sober, Am Nat 143(3):361–380, 1994). I argue here that this is mistaken. The debate over adaptationism has tended to focus on one particular use of optimality models, which I refer to here as their strong use. The strong use of an optimality model involves the claim that selection is the only important influence on the evolutionary outcome in question and is thus linked to adaptationism. However, biologists seldom intend this strong use of optimality models. One common alternative, which I term the weak use, simply involves the claim that an optimality model accurately represents the role of selection in bringing about the outcome. This and other weaker uses of optimality models insulate the optimality approach from criticisms of adaptationism, and they account for the prominence of optimality modeling (broadly construed) in population biology. The centrality of these uses of optimality models ensures a continuing role for the optimality approach, regardless of the fate of adaptationism.
Mathematical models are a well-established tool in most natural sciences. Although models were neglected by the philosophy of science for a long time, their epistemological status as a link between theory and reality is now fairly well understood. Regarding the epistemological status of mathematical models in the social sciences, however, considerable unclarity remains. In my paper I argue that this results from specific challenges that mathematical models, and especially computer simulations, face in the social sciences. The most important difference between the social sciences and the natural sciences with respect to modeling is that powerful and well-confirmed background theories (like Newtonian mechanics, quantum mechanics, or the theory of relativity in physics) do not exist in the social sciences. Therefore, an epistemology of models that takes physics as its role model may not be appropriate for the social sciences. I discuss the challenges that modeling faces in the social sciences and point out their epistemological consequences. The most important consequences are that greater emphasis must be placed on empirical validation than on theoretical validation and that the relevance of purely theoretical simulations is strongly limited.
Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols.
My aim in this paper is to articulate an account of scientific modeling that reconciles pluralism about modeling with a modest form of scientific realism. The central claim of this approach is that the models of a given physical phenomenon can present different aspects of the phenomenon. This allows us, in certain special circumstances, to be confident that we are capturing genuine features of the world, even when our modeling occurs in the absence of a fundamental theory. This framework is illustrated using models from contemporary meteorology.
The emphasis on models has not completely eliminated laws from scientific discourse and philosophical discussion. Still, I want to argue that much of physics lies beyond the strict domain of laws. I shall argue that in important cases the physics, or physical understanding, does not lie either in laws or in their properties, such as universality, consistency, and symmetry. I shall argue that the domain of application commonly attributed to laws is too narrow; that is, laws can still play an important, though peculiar, role outside their strict domain of validity. I shall also argue that, by way of a trade-off, the actual domain of application of laws should be seen as much broader. At the same time, what I call ‘anomic’ representational elements reveal themselves as central to the descriptive and explanatory power of theories and models: boundary conditions, state descriptions, structures, constraints, limits, and mechanisms. I conclude with a brief consideration of how my discussion has consequences for discussions of understanding, unification, approximation, and dispositional properties. I focus on examples from physics, macroscopic and microscopic, phenomenological and fundamental: shock waves, propagation of cracks, symmetry breaking, and others. This law-eccentric kind of knowledge is central to both modeling the world and intervening in it.
As noticed recently by Winsberg (2003), how computer models and simulations get their epistemic credentials remains in need of epistemological scrutiny. My aim in this paper is to contribute to filling this gap by discussing underappreciated features of simulations (such as “path-dependency” and plasticity) which, I’ll argue, affect their validation. The focus will be on composite modeling of complex real-world systems in astrophysics and cosmology. The analysis leads to a reassessment of the epistemic goals actually achieved by this kind of modeling: I’ll show in particular that its realistic ambition and the possibility of empirical confirmation pull in opposite directions.
Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists’ guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations often are conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems, since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so—not by independent evidence that they describe actual operations in mental mechanisms.
Webb distinguishes two endeavors she calls animal modeling and animat modeling and advocates for the former. I share her preference and point to additional virtues of modeling actual biological mechanisms (animal modeling). As Webb argues, animat modeling should be regarded as modeling of specific, but made-up, biological mechanisms. I contend that modeling made-up mechanisms in situations in which we have some knowledge of the actual mechanisms involved is modeling with one hand—the good one—tied behind one’s back. The hand that is used in animat modeling is constructing and evaluating models by whether they behave in the right way—do they exhibit the particular phenomenon one is trying to understand? The good hand that is disavowed seeks to use evidence about the mechanism employed in real living systems both for inspiration in designing the model and for evaluating the model. Denying oneself use of one’s good hand both limits one’s access to valuable evidence for evaluating a model and denies oneself access to a potent discovery strategy. Webb draws attention to one reason to employ the good hand—if models are to be relevant to biology (and not just characterize hypothetical mechanisms), then the component parts and operations specified in the model must in some way map onto those in actual biological organisms. Especially if one accepts the possibility of multiple realizations, then if one only uses behavior to evaluate the model one may well have described an alternative realization rather than the one found in real organisms. To determine that one has modeled the actual realization, it is necessary to compare the proposed mechanism with the actual mechanism—does it…
Drawing substantive conclusions from linear causal models that perform acceptably on statistical tests is unreasonable if it is not known how alternatives fare on these same tests. We describe a computer program, TETRAD, that helps the user search rapidly for plausible alternatives to a given causal structure. The program is based on principles from statistics, graph theory, philosophy of science, and artificial intelligence. We describe these principles, discuss how TETRAD employs them, and argue that these principles make TETRAD an effective tool. Finally, we illustrate TETRAD's effectiveness by applying it to a multiple-indicator model of political and industrial development. A pilot version of the TETRAD program is described in this paper. The current version is described in our forthcoming Discovering Causal Structure: Artificial Intelligence for Statistical Modeling.
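For readers unfamiliar with the program's namesake: among the constraints TETRAD exploits are vanishing tetrad differences, equations over the covariances of four measured variables that are implied when, for example, a single latent common cause generates all four,

\[ \sigma_{12}\,\sigma_{34} - \sigma_{13}\,\sigma_{24} = 0. \]

Testing which tetrad constraints hold in the data narrows the space of plausible causal structures.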
Models are a principal instrument of modern science. They are built, applied, tested, compared, revised, and interpreted in an expansive scientific literature. Throughout this paper, I will argue that models are also a valuable tool for the philosopher of science. In particular, I will discuss how the methodology of Bayesian networks can elucidate two central problems in the philosophy of science. The first thesis I will explore is the variety-of-evidence thesis, according to which the more varied the supporting evidence, the greater the degree of confirmation for a given hypothesis. However, when investigated using Bayesian methodology, this thesis turns out not to be sacrosanct. In fact, under certain conditions, a hypothesis receives more confirmation from evidence that is obtained from one rather than more instruments, and from evidence that confirms one rather than more testable consequences of the hypothesis. The second challenge that I will investigate is scientific theory change. This application highlights a different virtue of modeling methodology. In particular, I will argue that Bayesian modeling illustrates how two seemingly unrelated aspects of theory change, namely the (Kuhnian) stability of (normal) science and the ability of anomalies to overturn that stability and lead to theory change, are in fact united by a single underlying principle, in this case coherence. In the end, I will argue that these two examples bring out some metatheoretical reflections regarding the following questions: What are the differences between modeling in science and modeling in philosophy? What is the scope of the modeling method in philosophy? And what does this imply for our understanding of Bayesianism?
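As a toy illustration of the kind of Bayesian bookkeeping such arguments rely on (the numbers are invented; the paper's counterintuitive results arise in richer networks where instrument reliability is itself uncertain, which this sketch omits), consider a hypothesis H and positive reports that are conditionally independent given H.

# Toy Bayesian calculation with illustrative numbers only.
P_H = 0.3                    # prior probability of the hypothesis
P_E_given_H = 0.9            # a positive report is likely if H is true
P_E_given_notH = 0.2         # and unlikely otherwise

def posterior(n_reports: int) -> float:
    """P(H | n positive reports), assuming conditional independence given H."""
    like_H = P_E_given_H ** n_reports * P_H
    like_notH = P_E_given_notH ** n_reports * (1 - P_H)
    return like_H / (like_H + like_notH)

print(posterior(1))   # ~0.66 : one report already confirms H
print(posterior(2))   # ~0.90 : a second independent report confirms it further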
The optimality approach to modeling natural selection has been criticized by many biologists and philosophers of biology. For instance, Lewontin (1979) argues that the optimality approach is a shortcut that will be replaced by models incorporating genetic information, if and when such models become available. In contrast, I think that optimality models have a permanent role in evolutionary study. I base my argument for this claim on what I think it takes to best explain an event. In certain contexts, optimality and game-theoretic models best explain some central types of evolutionary phenomena.
In this paper I argue that the appropriate analogy for “understanding what makes simulation results reliable” in Global Climate Modeling is not with scientific experimentation or measurement, but—at least in the case of the use of global climate models for policy development—with the applications of science in engineering design problems. The prospects for using this analogy to argue for the quantitative reliability of GCMs are assessed and compared with other potential strategies.
Causal modeling methods such as path analysis, used in the social and natural sciences, are also highly relevant to philosophical problems of probabilistic causation and statistical explanation. We show how these methods can be effectively used (1) to improve and extend Salmon's S-R basis for statistical explanation, and (2) to repair Cartwright's resolution of Simpson's paradox, clarifying the relationship between statistical and causal claims.
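For readers unfamiliar with the paradox mentioned above, a small made-up numerical example shows how a treatment can look better within every stratum yet worse in the aggregate when a confounder (here, severity) determines who gets treated. The numbers are hypothetical and chosen only so that the reversal is visible.

# Hypothetical counts: (recovered, total) for treated vs. control in two strata.
data = {
    "mild":   {"treated": (9, 10),  "control": (70, 90)},
    "severe": {"treated": (30, 90), "control": (2, 10)},
}

def rate(recovered, total):
    return recovered / total

for stratum, groups in data.items():
    t, c = rate(*groups["treated"]), rate(*groups["control"])
    print(f"{stratum}: treated {t:.2f} vs control {c:.2f}")   # treated wins in both strata

# Aggregating over strata reverses the direction, because treatment was given
# mostly to the severe cases (the confounder).
agg = {g: tuple(map(sum, zip(*(data[s][g] for s in data)))) for g in ("treated", "control")}
print("overall: treated {:.2f} vs control {:.2f}".format(rate(*agg["treated"]), rate(*agg["control"])))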
This paper aims at integrating the work on analogical reasoning in Cognitive Science into the long trend of philosophical interest, in this century, in analogical reasoning as a basis for scientific modeling. In the first part of the paper, three simulations of analogical reasoning proposed in cognitive science are presented: Gentner's Structure Mapping Engine, Mitchell's and Hofstadter's COPYCAT, and the Analogical Constraint Mapping Engine proposed by Holyoak and Thagard. The differences and controversial points in these simulations are highlighted in order to make explicit their presuppositions concerning the nature of analogical reasoning. In the last part, this debate in cognitive science is applied to some traditional philosophical accounts of formal and material analogies as a basis for scientific modeling, like Mary Hesse's, and to more recent ones that already draw from the work in Artificial Intelligence, like that proposed by Aronson, Harré, and Way.
Rather than taking the ontological fundamentality of an ideal microphysics as a starting point, this article sketches an approach to the problem of levels that swaps assumptions about ontology for assumptions about inquiry. These assumptions can be implemented formally via computational modeling techniques that will be described below. It is argued that these models offer a way to save some of our prominent commonsense intuitions concerning levels. This strategy offers a way of exploring the individuation of higher-level properties in a systematic and formally constrained manner.
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. That perspective is exploited here to address the question whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and the related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead us to reject the concept of accuracy as non-operational, or to maintain it as only qualitative, derive from an unclear distinction among three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) whereas uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need of error-based and uncertainty-based modeling emerges.
This paper develops a formal framework to model a process in which the formation of individual opinions is embedded in a deliberative exchange with others. The paper opts for a low-resolution modeling approach and abstracts away from most of the details of the social-epistemic process. Taking a bird's-eye view allows us to analyze the chances for the truth to be found and broadly accepted under conditions of cognitive division of labour combined with a social exchange process. Cognitive division of labour means that only some individuals are active truth seekers, possibly with different capacities. Both mathematical tools and computer simulations are used to investigate the model. As an analytical result, the Funnel Theorem states that under rather weak conditions on the social process, a consensus on the truth will be reached if all individuals possess an arbitrarily small capacity to go for the truth. The Leading-the-Pack Theorem states that under certain conditions even a single truth seeker may lead all individuals to the truth. Systematic simulations analyze how close agents can get to the truth depending upon the frequency of truth seekers, their capacities as truth seekers, the position of the truth (more to the extreme or more in the centre of an opinion space), and the willingness to take into account the opinions of others when exchanging and updating opinions.
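A minimal sketch of the kind of bounded-confidence update with truth seekers that such a framework studies; the parameter names and values here are illustrative choices of ours, not those of the original model.

import random

TRUTH, EPSILON, ROUNDS = 0.7, 0.2, 50   # illustrative values

def update(opinions, alphas):
    """One round: each agent averages peers within EPSILON of its own opinion,
    then mixes in the truth with weight alpha (alpha = 0 for non-seekers)."""
    new = []
    for i, x in enumerate(opinions):
        peers = [y for y in opinions if abs(y - x) <= EPSILON]
        social = sum(peers) / len(peers)
        new.append(alphas[i] * TRUTH + (1 - alphas[i]) * social)
    return new

random.seed(1)
opinions = [random.random() for _ in range(20)]
alphas = [0.1 if i < 5 else 0.0 for i in range(20)]   # five weak truth seekers
for _ in range(ROUNDS):
    opinions = update(opinions, alphas)
print(min(opinions), max(opinions))   # how close the whole group ends up to TRUTH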
Aristotle saw ethics as a habit that is modeled and developed through practice. Shelley's Victor Frankenstein, though well intentioned in his goals, failed to model ethical behavior for his creation, abandoning it to its own devices. Today we live in an era of unfettered mergers and acquisitions in which once separate and independent media are increasingly concentrated under the control and leadership of the fictitious but legal personhood of a few conglomerated corporations. This paper will explore the impact of mega-media mergers on ethical modeling in journalism. It will diagram the behavioral context underlying the development of ethical habits, discuss leadership theory as it applies to management, and address the question of whether the creation of mega-media conglomerates will result in responsible corporate citizens or monsters who turn on their creators.
Evidence is an objective matter. This is the prevailing view within science, and confirmation theory should aim to capture the objective nature of scientific evidence. Modeling an objective evidence relation in a probabilistic framework faces two challenges: the probabilities must have the right epistemic foundation, and they must be specifiable given the hypotheses and data under consideration. Here I will explore how Sober's (2008, 2009) approach to confirmation handles these challenges of foundation and specification. In particular, I will argue that the specification problem proves especially difficult, and undermines the law of likelihood as an adequate representation of the objective nature of scientific evidence.
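For reference, the law of likelihood at issue says that evidence E favors hypothesis H1 over hypothesis H2 just in case

\[ P(E \mid H_1) > P(E \mid H_2), \]

with the likelihood ratio \( P(E \mid H_1) / P(E \mid H_2) \) measuring the strength of that favoring; the specification problem discussed in the abstract concerns how such likelihoods are to be fixed objectively.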
An essential aspect of conceptual data modeling methodologies is the expressiveness of the language, which must represent the subject domain as precisely as possible in order to obtain good-quality models and, consequently, software. To gain better insight into the characteristics of the main conceptual modeling languages, we conducted a comparison between ORM, ORM2, UML, ER, and EER with the aid of Description Logic languages of the DLR family and the new, formally defined generic conceptual data modeling language CMcom that is based on DLRifd. ORM, ER, EER, and UML class diagrams are proper fragments of ORM2, and CMcom has the most expressive common denominator with these languages. CMcom simplifies prospects for automated, online interoperability among the considered languages, so that modelers not only can continue using their preferred modeling language while remaining compatible with the other ones, but also have a common ground that eases database and software integration based on commonly used conceptual data models.
Process modeling is ubiquitous in business and industry. While a great deal of effort has been devoted to the formal and philosophical investigation of processes, surprisingly little research connects this work to real-world process modeling. The purpose of this paper is to begin making such a connection. To do so, we first develop a simple mathematical model of activities and their instances, a simple language for describing these entities, a semantics for the latter in terms of the former, and a set of axioms for the semantics based upon the model theory for the NIST Process Specification Language (PSL). On the basis of this foundation, we then develop a general notion of a process model, and an account of what it is for such a model to be realized by a collection of events.
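An informal sketch of the activity-versus-instance distinction the paper builds on; this is plain Python of our own devising, not the PSL axioms, and the class and field names are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    """A repeatable activity type, e.g. 'drill hole'."""
    name: str

@dataclass(frozen=True)
class Occurrence:
    """A dated instance of an activity; many occurrences may realize one activity."""
    activity: Activity
    begin: float
    end: float

drill = Activity("drill hole")
runs = [Occurrence(drill, 0.0, 2.5), Occurrence(drill, 3.0, 5.0)]

def realizes(events, process_model):
    """A toy check: every activity in the model has at least one occurrence among the events."""
    return all(any(o.activity == a for o in events) for a in process_model)

print(realizes(runs, [drill]))   # True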
Biologists and economists use models to study complex systems. This commonality between the disciplines has led to an interesting development: the borrowing of various components of model-based theorizing between the two domains. A major recent example of this strategy is economists' utilization of the resources of evolutionary biology in order to construct models of economic systems. This general strategy has come to be called evolutionary economics and has been a source of much debate among economists. Although philosophers have developed literatures on the nature of models and modeling, the unique issues surrounding this kind of interdisciplinary model building have yet to be independently investigated. In this paper, we use evolutionary economics as a case study in the investigation of more general issues concerning interdisciplinary modeling. We begin by critiquing the distinctions currently used within the evolutionary economics literature and propose an alternative carving of the conceptual terrain. We then argue that the three types of evolutionary economics we distinguish capture distinctions that will be important whenever the resources of model-based theorizing are borrowed across distinct scientific domains. Our analysis of these model-building strategies identifies several of the unique methodological and philosophical issues that confront interdisciplinary modeling.
In a recent article, “Wayward Modeling: Population Genetics and Natural Selection,” Bruce Glymour claims that population genetics is burdened by serious predictive and explanatory inadequacies and that the theory itself is to blame. Because Glymour overlooks a variety of formal modeling techniques in population genetics, his arguments do not quite undermine a major scientific theory. However, his arguments are extremely valuable, as they provide definitive proof that those who would deploy classical population genetics over natural systems must do so with careful attention to interactions between individual population members and environmental causes. Glymour’s arguments have deep implications for causation in classical population genetics.
Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.
Artificial Life (ALife) has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models. The second studies whether we can artificially create living things in computational media, realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and the other is “wet” ALife, which uses biological material to realize what has only been simulated on computers; effectively, wet ALife uses biological material as a kind of computer. This is challenging for the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies of wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles, which has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.
The book Modeling Reality covers a wide range of fascinating subjects, accessible to anyone who wants to learn about the use of computer modeling to solve a diverse range of problems but who does not possess specialized training in mathematics or computer science. The material presented is pitched at the level of high-school graduates, even though it covers some advanced topics (cellular automata, Shannon's measure of information, deterministic chaos, fractals, game theory, neural networks, genetic algorithms, and Turing machines). These advanced topics are explained in terms of well-known simple concepts: cellular automata via the Game of Life, Shannon's formula via the game of twenty questions, game theory via a television quiz, and so on. The book is unique in explaining in a straightforward, yet complete, fashion many important ideas related to various models of reality and their applications. Twenty-five programs, written especially for this book, are provided on an accompanying CD. They greatly enhance its pedagogical value and make learning even the more complex topics an enjoyable pleasure.
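One of the pairings mentioned above, Shannon's formula explained via the game of twenty questions, reduces to a one-line calculation; the sketch below is ours, and the book's own presentation may differ.

from math import log2

def entropy(probs):
    """Shannon entropy in bits: the average number of well-chosen yes/no questions needed."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Identifying one object among 2**20 equally likely candidates requires about
# 20 bits of information, i.e. twenty well-chosen yes/no questions.
n = 2 ** 20
print(entropy([1 / n] * n))   # ~20.0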
Recent controversy over the existence of biological laws raises questions about the cognitive aims of theoretical modeling in biology. If there are no laws for successful theoretical models to approximate, then what is it that successful theories do? One response is to regard theoretical models as tools. But this instrumental reading cannot accommodate the explanatory role that theories are supposed to play. Yet accommodating the explanatory function, as articulated by Brandon and Sober for example, seems to involve us once again in a reliance on laws. The paper concludes that we must rethink both the nature of laws and the nature of theoretical explanation in biology.
I propose that a sociological and historical examination of nanotechnologists can contribute more to an understanding of nanotechnology than an ontological definition. Nanotechnology emerged from the convergent evolution of numerous "technical knowledge communities": networks of tightly interconnected people who operate between disciplines and individual research groups. I demonstrate this proposition by sketching the co-evolution of computational chemistry and computational nanotechnology. Computational chemistry arose in the 1950s but eventually segregated into an ab initio, basic-research, physics-oriented flavor and an industry-oriented, molecular modeling and visualization, biochemical flavor. Computational nanotechnology arose in the 1990s as a synthesis of these two subgroups, infused by people and practices from computational materials science, engineering, computer science, and elsewhere. I show that to understand the aims and outcomes of computational nanotechnology, and of nanotechnology more generally, we need to understand relationships between different, but related, nano knowledge communities and their dependence on particular practices, artifacts, and theories.
Thought experiments have played a prominent role in numerous cases of conceptual change in science. I propose that research in cognitive psychology into the role of mental modeling in narrative comprehension can illuminate how and why thought experiments work. In thought experimenting, a scientist constructs and manipulates a mental simulation of the experimental situation. During this process, she makes use of inferencing mechanisms, existing representations, and general world knowledge to make realistic transformations from one possible physical state to the next. The simulation reveals the impossibility of integrating multiple constraints drawn from existing representations and the world and pinpoints the locus of the required conceptual reform.
The processes of wound healing and bone regeneration and problems in tissue engineering have been an active area for mathematical modeling in the last decade. Here we review a selection of recent models which aim at deriving strategies for improved healing. In wound healing, the models have particularly focused on the inflammatory response, in order to improve the healing of chronic wounds. For bone regeneration, the mathematical models have been applied to design optimal and new treatment strategies for normal and specific cases of impaired fracture healing. For the field of tissue engineering, we focus on mathematical models that analyze the interplay between cells and their biochemical cues within the scaffold, to ensure optimal nutrient transport and maximal tissue production. Finally, we briefly comment on numerical issues arising from simulations of these mathematical models.
What is computational cognitive modeling? What exactly can it contribute to cognitive science? What has it contributed thus far? Where is it going? Answering such questions may sound overly defensive to insiders of computational cognitive modeling, and may even seem so to some other cognitive scientists, but the questions very much need to be answered in a volume like this, because they lie at the very foundation of the field. Many insiders and outsiders alike would like to take a balanced and rational look at these questions, without indulging in excessive cheerleading, which, as one would expect, sometimes happens amongst computational modeling enthusiasts.
Information modeling (also known as conceptual modeling or semantic data modeling) may be characterized as the formulation of a model in which information aspects of objective and subjective reality are presented (the application), independent of the datasets and processes by which they may be realized (the system). A methodology for information modeling should incorporate a number of concepts which have appeared in the literature, but it should also be formulated in terms of constructs which are understandable to, and expressible by, the system user as well as the system developer. This is particularly desirable in connection with certain intimate relationships, such as being the same as or being a part of.
Several key areas in modeling the cardiovascular and respiratory control systems are reviewed and examples are given which reflect the research state of the art in these areas. Attention is given to the interrelated issues of data collection, experimental design, and model application including model development and analysis. Examples are given of current clinical problems which can be examined via modeling, and important issues related to model adaptation to the clinical setting.
Commentary on our target article centers around six main topics: (1) strategies in modeling the neurobehavioral foundation of human behavioral traits; (2) clarification of the construct of affiliation; (3) developmental aspects of affiliative bonding; (4) modeling disorders of affiliative reward; (5) serotonin and affiliative behavior; and (6) neural considerations. After an initial important research update in section R1, our Response is organized around these topics in the following six sections, R2 to R7.
Cancer is a complex disease, necessitating research on many different levels: at the subcellular level to identify genes, proteins, and signaling pathways associated with the disease; at the cellular level to identify, for example, cell-cell adhesion and communication mechanisms; at the tissue level to investigate disruption of homeostasis and interaction with the tissue of origin or the settlement of metastases; and finally at the systems level to explore its global impact, e.g. through the mechanism of cachexia. Mathematical models have been proposed to identify key mechanisms that underlie dynamics and events at every scale of interest, and increasing effort is now being devoted to multi-scale models that bridge the different scales. With more biological data becoming available and with increased interdisciplinary efforts, theoretical models are becoming suitable tools for predicting the origin and course of the disease. The ultimate aims of cancer models, however, are to illuminate our conception of the carcinogenesis process and to assist in the design of treatment protocols that can reduce mortality and improve patient quality of life. Conventional treatment of cancer is surgery combined with radiotherapy or chemotherapy for localized tumors, or systemic treatment of advanced cancers, respectively. Although radiation is widely used as treatment, most scheduling is based on empirical knowledge and less on the predictions of sophisticated growth-dynamical models of treatment response. Part of the failure to translate modeling research to the clinic may stem from language barriers, exacerbated by often esoteric model renderings with inaccessible parameterization. Here we discuss some ideas for combining tractable dynamical tumor growth models with radiation response models using biologically accessible parameters, to provide a more intuitive and exploitable framework for understanding the complexity of radiotherapy treatment and failure.
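A minimal sketch of the kind of coupling the authors advocate: logistic tumor growth combined with the standard linear-quadratic (LQ) model of cell survival after each radiation fraction. The parameter values below are illustrative placeholders, not clinical values, and the growth law is only one tractable choice.

from math import exp

# Illustrative parameters (not clinical values).
r, K = 0.05, 1e9            # growth rate per day, carrying capacity (cells)
alpha, beta = 0.3, 0.03     # LQ radiosensitivity parameters (per Gy, per Gy^2)
dose, interval = 2.0, 1     # 2 Gy fractions, one per day

def grow(n, days):
    """Logistic growth, advanced in one-day Euler steps."""
    for _ in range(days):
        n += r * n * (1 - n / K)
    return n

def irradiate(n, d):
    """Linear-quadratic surviving fraction after a single dose d."""
    return n * exp(-(alpha * d + beta * d ** 2))

n = 1e8
for fraction in range(30):          # 30 daily fractions
    n = irradiate(grow(n, interval), dose)
print(f"surviving cells after treatment: {n:.3e}")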
Our aim in this paper is to bring the woefully neglected literature on predictive modeling to bear on some central questions in the philosophy of science. The lesson of this literature is straightforward: for a very wide range of prediction problems, statistical prediction rules (SPRs), often rules that are very easy to implement, make predictions that are as reliable as, and typically more reliable than, those of human experts. We will argue that the success of SPRs forces us to reconsider our views about what is involved in understanding, explanation, and good reasoning, and about how we ought to do philosophy of science.
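As a concrete illustration of how simple such rules can be, the sketch below implements a generic "improper" unit-weighted linear rule of the kind this literature often discusses. The cue values and directions in the example are hypothetical and are not drawn from the paper.

```python
import numpy as np

def unit_weighted_spr(feature_matrix, higher_is_better):
    """Score cases by summing standardized cues with unit weights.

    feature_matrix: (n_cases, n_cues) array of cue values.
    higher_is_better: sequence of +1/-1 giving each cue's assumed direction.
    Returns one score per case; rank or threshold these to make predictions.
    """
    X = np.asarray(feature_matrix, dtype=float)
    signs = np.asarray(higher_is_better, dtype=float)
    # standardize each cue so unit weights are comparable across cues
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return (Z * signs).sum(axis=1)

# Hypothetical example: three cues observed for two candidates
scores = unit_weighted_spr([[3.2, 1, 0.4], [2.8, 0, 0.9]], [+1, +1, -1])
print(scores)
```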
After a historical sketch of the dynamical hypothesis, we stress that it is a functionalist hypothesis. We then tackle the point of a dynamical approach to constituent structures and emphasize that dynamical modeling must be coupled with morphological analysis.
Appropriate enablers are essential for management of intellectual capital. Through the use of structural equation modeling, we investigate whether organic renewal environments, interactive behaviors, and trust are conducive to intellectual capital management processes, as they each depend upon the establishment of a climate emphasizing mutual respect. Owing to a lack of clarity in the literature, we tested the ordering of the variables and found statistical significance for two ordering alternatives. However, the sequence presented in this article provides the best statistical fit: an organic renewal environment provides a foundation for interactive behaviors, which leads to trust, and thus is consistent with the development of intellectual capital management processes within the organization.
The scientific methodology underlying model-building is critically investigated. The modeling views of Popper and Samuelson and their prototypes are critically examined in the light of the theme of the moral law of unity of knowledge and unity of the world-system configured by the meta-epistemology of organic unity of knowledge. Upon such critical examination of the received methodology of model-building in economics, the extended perspective, namely that of integrating the moral law derived from divine roots as the meta-epistemology, is rigorously studied. The example of the Islamic prerogative in interpreting the holistic world-system through model-building in economics is highlighted. A religio-philosophical approach is adopted to exemplify some approaches in Islamic model-building. A special focus is placed here on grassroots types of financing and activities. The critique of these models within the existing Islamic scholarship is carried out. The result is new dimensions of macroeconomic analysis that emanate in a logical way from the meta-epistemological approach and oppose the mainstream ideas, both in received and in Islamic economic thinking as of now.
The distinction between the modeling of information and the modeling of data in the creation of automated systems has historically been important because the development tools available to programmers have been wedded to machine-oriented data types and processes. However, advances in software engineering, particularly the move toward data abstraction in software design, allow activities reasonably described as information modeling to be performed in the software creation process. An examination of the evolution of programming languages and the development of general programming paradigms, including object-oriented design and implementation, suggests that while data modeling will necessarily continue to be a programmer's concern, more and more of the programming process itself is coming to be characterized by information modeling activities.
First, a principal distinction between two different kinds of semiotic investigations is introduced, both required in the study of living signs and signs of life. Then, the attempt within the new field of Artificial Life to model and synthesise computationally based living systems is discussed, with special attention paid to the possible emergence of genuine life-like behaviour in such models of, for instance, self-reproduction. Remarks will be made on a seemingly odd aspect of the biological concept of life: that it is not as coherent as normally conceived. In general, biosemiotic emergence of new sign functions is distinguished from other kinds of emergence that pertain to the domain of the observer and the modeling relation.
Richard Levins has advocated the scientific merits of qualitative modeling throughout his career. He believed an excessive and uncritical focus on emulating the models used by physicists and maximizing quantitative precision was hindering biological theorizing in particular. Greater emphasis on qualitative properties of modeled systems would help counteract this tendency, and Levins subsequently developed one method of qualitative modeling, loop analysis, to study a wide variety of biological phenomena. Qualitative modeling has been criticized for being conceptually and methodologically problematic. As a clear example of a qualitative modeling method, loop analysis shows this criticism is indefensible. The method has, however, some serious limitations. This paper describes loop analysis and its limitations, and attempts to clarify the differences between quantitative and qualitative modeling, in content and objective. Loop analysis is but one of numerous types of qualitative analysis, so its limitations do not detract from the currently underappreciated and underdeveloped role qualitative modeling could have within science.
Loop analysis is a method of qualitative modeling anticipated by Sewall Wright and systematically developed by Richard Levins. In Levins’ (1966) distinctions between modeling strategies, loop analysis sacrifices precision for generality and realism. Besides criticizing the clarity of these distinctions, Orzack and Sober (1993) argued that qualitative modeling is conceptually and methodologically problematic. Loop analysis of the stability of ecological communities shows this criticism is unjustified. It presupposes an overly narrow view of qualitative modeling and underestimates the broad role models play in scientific research, especially in helping scientists represent and understand complex systems.
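For readers unfamiliar with the technique, the sketch below conveys the spirit of sign-only community analysis numerically: it fixes only the signs of the interactions in a hypothetical three-species community and asks how often randomly sampled magnitudes yield a locally stable system. This is a numerical stand-in for Levins' algebraic loop-analysis criteria, not his method itself; the sign structure and sampling ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sign structure of a hypothetical three-species community:
# predator (0) eats consumer (1), consumer eats resource (2); all self-damped.
sign_matrix = np.array([
    [-1, +1,  0],
    [-1, -1, +1],
    [ 0, -1, -1],
])

def fraction_stable(sign_matrix, n_samples=10_000):
    """Sample random magnitudes consistent with the sign structure and report
    how often the resulting community matrix is locally stable
    (all eigenvalues have negative real part)."""
    n = sign_matrix.shape[0]
    stable = 0
    for _ in range(n_samples):
        magnitudes = rng.uniform(0.01, 1.0, size=(n, n))
        A = sign_matrix * magnitudes
        if np.all(np.linalg.eigvals(A).real < 0):
            stable += 1
    return stable / n_samples

print(f"Fraction of sampled parameterizations that are stable: {fraction_stable(sign_matrix):.2f}")
```

If the fraction is 1.0 regardless of magnitudes, the sign structure alone settles the stability question, which is exactly the kind of conclusion qualitative modeling aims for.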
This article applies the concept of prudence to develop the characteristics of responsible risk-modeling practices in the insurance industry. A critical evaluation of the risk-modeling process suggests that ethical judgments are emergent rather than static, vague rather than clear, particular rather than universal, and still defensible according to the discipline’s established theory, which will support a range of judgments. Thus, positive moral guides for responsible behavior are of limited practical value. Instead, by being prudent, modelers can improve their ability to deal with the ethical and technical complexity of the risk-modeling process. While the application of prudence to resolve ethical challenges in risk modeling, an issue of practical importance to managers, is a first in the literature, the practice of applying an ethical lens to issues of pragmatic importance for managers is well established in Maak and Pless (J Bus Ethics 66:99–115, 2006a; Responsible leadership, 2006b), among others.
There are many different kinds of model, and scientists do all kinds of things with them. This diversity of model type and model use is a good thing for science. Indeed, it is crucial especially for the biological and cognitive sciences, which have to solve many different problems at many different scales, ranging from the most concrete of the structural details of a DNA molecule to the most abstract and generic principles of self-organization in networks. Getting a grip (or more likely many separate grips) on this range of topics calls for a teeming forest of techniques, including many different modeling techniques. Barbara Webb’s target article strikes us as a proposal for clear-cutting the forest. We think clear-cutting here would be as good for science as it is for non-metaphorical forests. Our argument for this is primarily a recitation of a few of the ways that diversity has been useful. Recently, looking at the actual practice of artificial life modelers, one of us distinguished four uses of simulation models, classified in terms of the position the models take up between theory and data (see Figure 1). The classification is not exhaustive, and the barriers between kinds are not absolute. Rather, the purpose of the taxonomy is to open up the view for an epistemic ecology of modeling practices. First, and closest to the empirical domain, there are mechanistic models, in which there is an almost one-to-one correspondence between variables in the model and observables in the target system and its environment. Webb’s ...
Are there relationships between consciousness and the material world? Empirical evidence for such a connection was reported in several meta-analyses of mind-matter experiments designed to address this question. In this paper we consider such meta-analyses from a statistical modeling perspective, emphasizing strategies to validate the models and the associated statistical procedures. In particular, we explicitly model increased data variability and selection mechanisms, which permits us to estimate 'selection profiles' and to reassess the experimental effect in view of other potential effects. An application to the data pool considered in the influential meta-analysis of Radin and Nelson (1989) yields indications of the presence of random and selection effects. Adjustment for possible selection is found to render the experimental effect, which is significant without such an adjustment, non-significant. Somewhat different conclusions apply to a subset of the data deserving separate consideration. The actual origin of the data features that are described as experimental, random, or selection effects within the proposed model cannot be clarified by our approach and remains open.
Our ability to process spatial information is fundamental for understanding and interacting with the environment, and it pervades other components of cognitive functioning from language to mathematics. Moreover, technological advances have produced new capabilities that have created research opportunities and astonishing applications. In this Topic on Modeling Spatial Cognition, research crossing a variety of disciplines and methodologies is described, all focused on developing models to represent the capacities and limitations of human spatial cognition.
Edited by Daniel Rothbart of George Mason University in Virginia, this book is a collection of Rom Harré's work on modeling in science (particularly physics and psychology). With over 28 authored books and 240 articles and book chapters, Rom Harré of Georgetown University in Washington, DC is a towering figure in philosophy, linguistics, and social psychology. He has inspired a generation of scholars, both for the ways in which his research is carried out and for his profound insights. For Harré, the stunning discoveries of research demand a kind of thinking that is found in the construction and control of models. Iconic modeling is pivotal for representing real-world structures, explaining phenomena, manipulating instruments, constructing theories, and acquiring data. This volume in the new Elsevier book series Studies in Multidisciplinarity includes major topics on the structure and function of models, the debates over scientific realism, explanation through analogical modeling, a metaphysics for physics, the rationale for experimentation, and modeling in social encounters. * A multidisciplinary work of sweeping scope about the nature of science * A revolutionary interpretation that challenges conventional wisdom about the character of scientific thinking * Profound insights about fundamental challenges to contemporary physics * Brilliant discoveries into the nature of social interaction and human identity * Presents a rational conception of methods for acquiring knowledge of remote regions of the world * Written by one of the great thinkers of our time.
The widely used notion of activity is increasingly present in computer science. However, because this notion is used in specific contexts, it becomes vague. Here, the notion of activity is scrutinized in various contexts and, accordingly, put in perspective. It is discussed through four scientific disciplines: computer science, biology, economics, and epistemology. The definition of activity usually used in simulation is extended to new qualitative and quantitative definitions. In computer science, biology, and economics, the new simulation-based definition of activity is first applied critically. Then, activity is discussed more generally. In epistemology, activity is discussed, in a prospective way, as a possible framework in models of human beliefs and knowledge.
Scientific reasoning has long been an integral part of critical thinking taxonomies. In practice, however, it is frequently limited to induction, hypothesis testing and experimental design, thereby neglecting the central importance of modeling to contemporary scientific reasoning. In this paper, I wish to establish that this neglect undermines the possibility of critical engagement with the public discourse surrounding scientific reasoning. As a step towards rectifying that disconnect, I present one resource that I have developed to teach modeling in an introductory critical thinking course.
Lewis proposes a “reconceptualization” of how to link the psychology and neurobiology of emotion and cognitive-emotional interactions. His main proposed themes have actually been actively and quantitatively developed in the neural modeling literature for more than 30 years. This commentary summarizes some of these themes and points to areas of particularly active research in this area.
Borrett, Kelly and Kwan claim to provide neural-network models of important aspects of subjective human experience. To sidestep the long-standing and supposedly insurmountable problems with providing models of inner experience, they turn to a body-centered interpretation of experience, drawn from the work of Merleau-Ponty. This body-centered interpretation makes experience more tractable by linking it closely with bodily movement. However, when it comes to modeling, Borrett et al. ignore this body-centered interpretation and revert to the traditional view of inner experience as existing apart from the body. The result is uninteresting on two counts. The models that they present cannot be taken seriously as models of real inner experience. Additionally, these models do not apply to or extend the idea of a different, body-centered interpretation of experience either.
This commentary gives a personal perspective on modeling and modeling developments in cognitive science, starting in the 1950s, but focusing on the author’s personal views of modeling since training in the late 1960s, and particularly on advances since the official founding of the Cognitive Science Society. The range and variety of modeling approaches in use today are remarkable and, for many, bewildering. Yet to come to anything approaching adequate insights into the infinitely complex fields of mind, brain, and intelligent systems, an extremely wide array of modeling approaches is vital.
The moral ideology of banking and insurance employees in Spain was examined, along with supervisor role modeling and ethics-related policies and procedures, for their association with ethical behavioral intent. In addition to main effects, we found evidence supporting the person–situation interactionist perspective: supervisor role modeling had a stronger positive relationship with ethical intention among employees with a relativist moral ideology. Also as hypothesized, formal ethical policies and procedures were positively related to ethical intention among those with universal beliefs, but the relationship was much weaker among relativists. Thus, firms wishing to optimally promote ethical attitudes and behavior must tailor their organization-based initiatives to the individual characteristics of their employees.
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or develop further. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models are accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
Modeling a complex phenomenon such as the mind presents tremendous computational complexity challenges. Modeling field theory (MFT) addresses these challenges in a non-traditional way. The main idea behind MFT is to match the level of uncertainty of the model (or of a problem or theory) with the level of uncertainty of the evaluation criterion used to identify that model. As a model becomes more certain, the evaluation criterion is adjusted dynamically to match that change in the model. This process is called the Dynamic Logic of Phenomena (DLP) for model construction, and it mimics processes of the mind and of natural evolution. This paper provides a formal description of DLP by specifying its syntax, semantics, and reasoning system. We also outline links between DLP and other logical approaches. Computational complexity issues that motivate this work are presented using an example of polynomial models.
This paper introduces a new approach to economic analysis. We show how to move from deductive to inductive modeling and thereby reunite economics with approaches used in the natural sciences. This paper presents the empathy-generosity-punishment model as an example of research based on observation, experimentation, and the elimination of alternatives. Inductive modeling in neuroeconomics allows the identification of the physiologic mechanisms that produce behavior. Unlike most neuroeconomics studies, we show how to establish causation by using drugs to manipulate brain activity. This approach is demonstrated using three experiments that circumscribe the brain processes behind prosocial behavior.
The book reveals different dimensions of modeling in the historical sciences. Papers collected in the first part (Ontology of the Historical Process) consider different models of historical reality and discuss their status. The second part (Modeling in the Methodology of History) presents various forms of idealization in historiographic research. The papers in the third part (Modeling in the Research Practice) present various models of past reality (e.g. of Poland, Central Europe and the general history of the feudal system) put forward by historians. Other papers consider the status of scientific laws and historical generalizations. The volume will be of interest to those who study analytical philosophy of history, methodology of history and social sciences, social philosophy, as well as theory and history of historiography.
Recent philosophical studies of probabilistic causation and statistical explanation have opened up the possibility of unifying philosophical approaches with causal modeling as practiced in the social and biological sciences. This unification rests upon the statistical tools employed, the principle of common cause, the irreducibility of causation to statistics, and the idea of causal process as a suitable framework for understanding causal relationships. These four areas of contact are discussed with emphasis on the relevant aspects of causal modeling.
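To illustrate the principle of the common cause that figures in this discussion, the following minimal sketch simulates two effects of a single binary cause and checks that the association between them disappears once the cause is conditioned on (screening off). All probabilities are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical common cause C with two effects A and B.
C = rng.random(n) < 0.5
A = np.where(C, rng.random(n) < 0.8, rng.random(n) < 0.2)
B = np.where(C, rng.random(n) < 0.7, rng.random(n) < 0.1)

def p(event):
    return event.mean()

# Unconditionally, A and B are correlated...
print("P(A&B) =", round(p(A & B), 3), " P(A)P(B) =", round(p(A) * p(B), 3))

# ...but conditioning on the common cause screens the correlation off.
for value in (True, False):
    mask = C == value
    pa, pb, pab = p(A[mask]), p(B[mask]), p(A[mask] & B[mask])
    print(f"C={value}: P(A&B|C)={pab:.3f}  P(A|C)P(B|C)={pa*pb:.3f}")
```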
The article presents a methodological strategy for guiding the educational process so as to develop the intellectual skill of modeling in students; it reveals the strategy's main epistemic foundations, its stages, and its essential actions. To this end, various scientific research methods were applied. Verification of the results provides positive evidence of the strategy's pertinence, given the co-participatory and co-protagonist character that educational influences acquire within the institutional context in the direction of a unified educational process.
In this paper we review some problems with traditional approaches to acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that, in a first step, it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces be designed that enable individual experiences and skill development. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.
In “Can Models of God Compete?”, J. R. Hustwit engages with fundamental questions regarding the epistemological foundations of modeling God. He argues that the approach of fallibilism best captures the criteria he employs to choose among different “models of God-modeling,” including one criterion that I call the Descriptive Criterion. I argue that Hustwit’s case for fallibilism should include both a stronger defense of the Descriptive Criterion and an explanation of why fallibilism does not run afoul of this criterion in virtue of its apparent inability to make sense of debates among models of God extant in religious communities. This paper was delivered during the APA Pacific 2007 Mini-Conference on Models of God.
In international relations theory, there is a long history of Richardson-like modeling of the evolution of military capability. Usually, such models are deterministic and predictive and do not allow for the representation of the transition from competitive peace to shooting war. More recently, models have been developed which attempt to represent the evolution of the relationship between nations. The relationship between nations, varying from friendship to hostility, is taken to be synonymous with the intent of nations towards each other, varying from good will to malice. Generally, these relationship models do not include capability, though common sense would indicate that capability and mutual intent should profoundly influence each other. A model is presented here which combines these two fundamental attributes of international relations and attempts to represent the outbreak of war in the world system by the onset of deterministic chaos in the extended model.
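The following toy sketch illustrates the general shape of such a coupled model: two nations' capabilities follow Richardson-style reaction terms, while a hostility variable drifts with the capability gap and feeds back into arming. The equations, coupling form, and parameter values are illustrative assumptions on my part, not the model presented in the paper.

```python
import numpy as np

def simulate_dyad(steps=1000, dt=0.01,
                  k=0.8,    # reaction to the other's capability
                  m=0.5,    # fatigue / cost of one's own capability
                  g=0.1,    # baseline grievance
                  c=0.6):   # coupling: hostile intent accelerates arming
    """Toy two-nation model: capabilities x, y react to each other
    (Richardson-style), while hostility h grows with the capability gap
    and otherwise decays. All parameters are illustrative."""
    x, y, h = 1.0, 1.5, 0.0   # capabilities and hostility (0 = neutral)
    trajectory = []
    for _ in range(steps):
        dx = k * y - m * x + g + c * h * y
        dy = k * x - m * y + g + c * h * x
        dh = 0.05 * abs(x - y) - 0.02 * h
        x, y, h = x + dt * dx, y + dt * dy, h + dt * dh
        trajectory.append((x, y, h))
    return np.array(trajectory)

print(simulate_dyad()[-1])   # final capabilities and hostility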
Modeling cognition by structural analysis of representation leads to systematic difficulties which are not resolvable. We analyse the merits and limits of a representation-based methodology for modeling cognition, treating Jackendoff's Consciousness and the Computational Mind as a good case study. We note the effects this choice of methodology has on the view of consciousness he proposes, and give a more detailed consideration of the computational mind. The fundamental difficulty we identify is the conflict between the desire for modular processors which map directly onto representations and the need for dynamically interacting control. Our analysis of this approach to modeling cognition is primarily directed at separating merits from problems and inconsistencies by a critique internal to this approach; we also step outside the framework to note the issues it ignores.
The following list contains a survey of some important and recent research in modeling face-to-face conversation. The list below is presented as a guide to the literature by topic and date; we include complete citations afterwards in alphabetical order. For brevity, research works are keyed by first author and date only (we use these keys on the slides as well as in this list). Of course, most papers are multiply authored. The list is not intended to be exhaustive. Our primary aim is simply to provide bibliographic information for all the research that we will refer to during the ESSLLI class itself. The entries also provide a sampling from ongoing research projects so that you can get an overall sense of the state of the field and begin to follow up topics of particular interest to you.
Computer graphic modeling forms an increasing part of archaeological practice, implicated in modes of recording objects and spaces, interpretation of types, management of three-dimensional information, creation of artificial experiences of place for interpretation, and representation of archaeological ideas to a broader public. In all spheres of life computer graphics are increasingly influential; by some estimates, computed visions constitute the "dominant medium of thought" (Gooding 2008, p. 1). Archaeological computer graphics build on a long tradition of physical model building for the development of understanding and the representation of conclusions. Such physical models are now finding a renewed significance as ...
Levelt et al. attempt to “model their theory” with WEAVER++. Modeling theories requires a model theory. The time is ripe for a methodology for building, testing, and evaluating computational models. We propose a tentative, five-step framework for tackling this problem, within which we discuss the potential strengths and weaknesses of Levelt et al.'s modeling approach.
Modeling is a relatively new topic in biblical and related subjects (it was first introduced in the 1970s), and it is controversial because the application of social-scientific models raises the difficult question of the cultural gap between present societies, where the models are usually developed, and the ancient cultural context to which the models are applied. Because biblical and related studies may not belong to the most familiar scholarly fields of the readers of this journal, I first sketch an overall picture of the development of the discipline and its main methods (Section 2). The subsequent sections summarize the arguments presented for and against modeling in biblical studies (Section 3), and discuss ...
The image of a ball rolling along a series of hills and valleys is an effective heuristic by which to communicate stability concepts in ecology. However, the dynamics of this landscape model have little to do with ecological systems. Other landscape representations are nonetheless possible. These include the particle on an energy landscape, the potential landscape, and the Lyapunov function landscape. I discuss the dynamics that these representations admit, and the application of each to ecological modeling and the analysis and representation of stability.
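As a minimal illustration of the potential-landscape idea, the sketch below follows a one-dimensional gradient system downhill on a double-well potential; in such a gradient system the potential itself serves as a Lyapunov function, so the state settles into one of the local minima. The particular potential and parameters are textbook assumptions, not drawn from the paper.

```python
# Hypothetical 1-D gradient system dx/dt = -dV/dx with a double-well potential,
# the analogue of the "ball on a landscape" picture.
def V(x):
    return 0.25 * x**4 - 0.5 * x**2     # wells at x = -1 and x = +1

def dVdx(x):
    return x**3 - x

def roll(x0, steps=5000, dt=0.01):
    """Follow the gradient dynamics; the state settles into a local minimum of V."""
    x = x0
    for _ in range(steps):
        x -= dt * dVdx(x)
    return x

for start in (-2.0, 0.1, 2.0):
    eq = roll(start)
    print(f"start {start:+.1f} -> equilibrium {eq:+.3f}, V = {V(eq):.3f}")
```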
Subjective experience is transformed into objective reality for societal members through cultural idea systems that can be represented with theory and data models. A theory model shows relationships and their logical implications that structure a cultural idea system. A data model expresses patterning found in ethnographic observations regarding the behavioral implementation of cultural idea systems. An example of this duality for modeling cultural idea systems is illustrated with Arabic proverbs that structurally link friend and enemy as concepts through a culturally defined computational system. Computational systems also generate new concepts, as will be illustrated through a theory model for the structure of a ...
In this study, a decision modeling approach is used to measure the relative importance of four social responsibility components. When given information concerning the economic, legal, ethical and philanthropic activities of 16 hypothetical organizations, 159 junior and senior management students judged the social responsibility of these firms. The study used two types of analysis: first, a within-subject regression, then a between-subject ANOVA. Results showed ethical behavior to be most important in judging social responsibility; legal behavior was second, discretionary behavior third, and economic behavior was least important. In addition, all but one rater consistently applied the social responsibility components. The implications of these results and suggestions for future research are discussed.
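The within-subject regression step can be illustrated with a small sketch: each rater judges a full 2^4 factorial of hypothetical firm profiles, and regressing those 16 judgments on the four component indicators yields that rater's relative importance weights. The simulated rater and all numbers below are made up for illustration; this is not the study's data.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# 16 hypothetical firm profiles: presence/absence of economic, legal,
# ethical, and philanthropic activity (a full 2^4 factorial).
profiles = np.array(list(product([0, 1], repeat=4)), dtype=float)

def rater_weights(judgments, profiles=profiles):
    """Within-subject regression: regress one rater's 16 judgments on the
    four component indicators; the coefficients are that rater's relative
    importance weights."""
    X = np.column_stack([np.ones(len(profiles)), profiles])
    coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    return coef[1:]   # drop the intercept

# Simulated rater who happens to weight the third component most heavily.
true_w = np.array([0.5, 1.0, 2.0, 0.8])
judgments = profiles @ true_w + rng.normal(0, 0.2, size=16)
print(np.round(rater_weights(judgments), 2))
```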
The Representational Theory of Measurement conceives measurement as establishing homomorphisms from empirical relational structures into numerical relational structures, called models. There are two different approaches to dealing with the justification of a model: an axiomatic and an empirical approach. The axiomatic approach verifies whether a given relational structure satisfies certain axioms to secure homomorphic mapping. The empirical approach conceives models as functioning as measuring instruments by transferring observations of a phenomenon under investigation into quantitative facts about that phenomenon. These facts are evaluated by their accuracy and precision. Precision is generally achieved by least squares methods and accuracy by calibration. For calibration, standards are needed. Then two polar strategies can be distinguished: white-box modeling and black-box modeling. The first strategy aims at estimating the invariant (structural) equations of the phenomenon, thereby fulfilling Hertz’s correctness requirement. The second strategy is to use known stable facts about the phenomenon to adjust the model parameters, thereby fulfilling Hertz’s appropriateness requirement. For this latter strategy, the requirement of models as homomorphic mappings has been dropped. Where one will find the axiomatic approach more often used for measurement in the laboratory, the empirical approach is more appropriate for measurement outside the laboratory. The reason for this is that for measurement of phenomena outside the laboratory, one also needs to take account of the environment to achieve accurate results. Environments are generally too relation-rich for an axiomatic approach, which is only applicable to relation-poor systems (laboratories). The white-box modeling strategy, reflecting the complexity of the environment due to its correctness requirement, will, however, lead to immensely large models. To avoid this problem, modular design is an appropriate strategy to reduce this complexity. Modular design is a grey-box modeling strategy. Grey-box models are assemblies of modules; these are black boxes with standard interfaces. It should be noted that the structure of the assemblage need not be homomorphic to the relations describing the interaction between phenomenon and environment. These three modeling strategies map out the possible designs for computer simulations as measuring instruments. Whether a simulation is based on a white-box, grey-box or black-box model is determined only by (the complexity of) the relationship between the phenomenon and its environment, and not by, e.g., its materiality or physicality.
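The black-box strategy described above can be made concrete with a small sketch: a simple parametric response is fitted by least squares to a handful of calibration standards and then used as a measuring instrument. The linear form, the calibration points, and the numbers are all hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Hypothetical "known stable facts": observed responses of a phenomenon
# at a few calibration points (made-up numbers for illustration).
calibration_inputs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
calibration_outputs = np.array([0.1, 0.9, 2.1, 2.9, 4.2])

# Black-box model: a simple linear response y = a*x + b whose parameters
# carry no structural interpretation; they are adjusted to fit the standards.
A = np.column_stack([calibration_inputs, np.ones_like(calibration_inputs)])
(a, b), residuals, *_ = np.linalg.lstsq(A, calibration_outputs, rcond=None)

def measure(x):
    """Use the calibrated black-box model as a measuring instrument."""
    return a * x + b

print(f"calibrated parameters: a={a:.3f}, b={b:.3f}")
print("measurement at x=2.5:", round(measure(2.5), 3))
```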
Agent-based modeling is starting to crack problems that have resisted treatment by analytical methods. Many of these are in the physical and biological sciences, such as the growth of viruses in organisms, flocking and migration patterns, and models of neural interaction. In the social sciences, agent-based models have had success in such areas as modeling epidemics, traffic patterns, and the dynamics of battlefields. And in recent years, the methodology has begun to be applied to economics, simulating such phenomena as energy markets and the design of auctions.
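For readers new to the approach, the following minimal sketch shows the basic shape of an agent-based epidemic model: individual agents carry their own state, mix at random, and aggregate dynamics emerge from local interaction rules rather than from a closed-form equation. All parameters are illustrative.

```python
import random

random.seed(0)

# Minimal agent-based epidemic sketch: agents mix at random each step;
# an infected agent transmits to each contact with fixed probability and
# recovers after a fixed number of steps.
N, CONTACTS, P_TRANSMIT, RECOVERY_STEPS, STEPS = 1000, 5, 0.05, 10, 100

state = ["S"] * N          # S = susceptible, I = infected, R = recovered
infected_since = [0] * N
state[0] = "I"             # seed one infection

for t in range(STEPS):
    for i in range(N):
        if state[i] != "I":
            continue
        for _ in range(CONTACTS):
            j = random.randrange(N)
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
                infected_since[j] = t
        if t - infected_since[i] >= RECOVERY_STEPS:
            state[i] = "R"
    if t % 20 == 0:
        print(t, state.count("S"), state.count("I"), state.count("R"))

print("final recovered:", state.count("R"))
```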