We argue from the Church-Turing thesis (Kleene, Mathematical Logic. New York: Wiley, 1967) that a program can be considered equivalent to a formal language similar to the predicate calculus, where predicates can be taken as functions. We can relate such a calculus to Wittgenstein's first major work, the Tractatus, and use the Tractatus and its theses as a model of the formal classical definition of a computer program. However, Wittgenstein found flaws in his initial great work and explored these flaws in a new thesis described in his second great work, the Philosophical Investigations. The question we address is: "can computer science make the same leap?" We propose, because of the flaws identified by Wittgenstein, that computers will never have the possibility of natural communication with people unless they become active participants in human society. The essential difference between formal models used in computing and human communication is that formal models are based upon rational sets, whereas people are not so restricted. We introduce irrational sets as a concept that requires the use of an abductive inference system. However, formal models remain central to our means of using hypotheses, through deduction, to make predictions about the world. These formal models must be continually updated in response to changes in people's ways of seeing the world. We propose that one mechanism used to keep track of these changes is the Peircean abductive loop.
This paper critically assesses several model accounts written in the 1990s by epistemologists and philosophers of science by relating them to a specific but crucial example of model building, namely Hicks's (1937) construction of the first version of the IS-LM model, and by examining how far these accounts apply to this case. The paper thereby contributes to answering why and how economists build models. The view crystallizes that economists build models not only to facilitate the conceptual exploration of theory, but also to inform our understanding of the world. Elements of model building, such as analogies, metaphors, stories, theoretical notions, empirical findings and mathematizations, but also the mode of representation, shape the model and largely determine how much can be learned about theory and the real world by using the model as a tool.
The purpose of this paper is to provide an analysis of the concept of model as it is applied in the physical sciences, and to show that this analysis is fruitful insofar as it can be used as an acceptable account of the role of models in physical explanation. A realist interpretation of theories is adopted as a point of departure. A distinction between theories and models is drawn on the basis of this interpretation. The relation between model and prototype is expressed in terms of the concepts of access and accessibility, and four conditions are proposed as an analysis of the concept of model. It is concluded that models are introduced when approximate methods are used.
The present paper argues that ‘mature mathematical formalisms’ play a central role in achieving representation via scientific models. A close discussion of two contemporary accounts of how mathematical models apply—the DDI account (according to which representation depends on the successful interplay of denotation, demonstration and interpretation) and the ‘matching model’ account—reveals shortcomings of each, which, it is argued, suggests that scientific representation may be ineliminably heterogeneous in character. In order to achieve a degree of unification that is compatible with successful representation, scientists often rely on the existence of a ‘mature mathematical formalism’, where the latter refers to a—mathematically formulated and physically interpreted—notational system of locally applicable rules that derive from (but need not be reducible to) fundamental theory. As mathematical formalisms undergo a process of elaboration, enrichment, and entrenchment, they come to embody theoretical, ontological, and methodological commitments and assumptions. Since these are enshrined in the formalism itself, they are no longer readily obvious to either the novice or the proficient user. At the same time as formalisms constrain what may be represented, they also function as inferential and interpretative resources.
Edited by Daniel Rothbart of George Mason University in Virginia, this book is a collection of Rom Harré's work on modeling in science (particularly physics and psychology). The author of over 28 books and 240 articles and book chapters, Rom Harré of Georgetown University in Washington, DC is a towering figure in philosophy, linguistics, and social psychology. He has inspired a generation of scholars, both for the ways in which his research is carried out and for his profound insights. For Harré, the stunning discoveries of research demand a kind of thinking that is found in the construction and control of models. Iconic modeling is pivotal for representing real-world structures, explaining phenomena, manipulating instruments, constructing theories, and acquiring data. This volume in the new Elsevier book series Studies in Multidisciplinarity includes major topics on the structure and function of models, the debates over scientific realism, explanation through analogical modeling, a metaphysics for physics, the rationale for experimentation, and modeling in social encounters. * A multidisciplinary work of sweeping scope about the nature of science * A revolutionary interpretation that challenges conventional wisdom about the character of scientific thinking * Profound insights into fundamental challenges to contemporary physics * Brilliant discoveries about the nature of social interaction and human identity * Presents a rational conception of methods for acquiring knowledge of remote regions of the world * Written by one of the great thinkers of our time.
In "Bayesian Confirmation of Theories that Incorporate Idealizations", Michael Shaffer argues that, in order to show how idealized hypotheses can be confirmed, Bayesians must develop a coherent proposal for how to assign prior probabilities to counterfactual conditionals. This paper develops a Bayesian reply to Shaffer's challenge that avoids the issue of how to assign prior probabilities to counterfactuals by treating idealized hypotheses as abstract descriptions. The reply allows Bayesians to assign non-zero degrees of confirmation to idealized hypotheses and to capture (...) the intuition that less idealized hypotheses tend to be better confirmed than their more idealized counterparts. (shrink)
Scientists have used models for hundreds of years as a means of describing phenomena and as a basis for further analogy. In Scientific Models in Philosophy of Science, Daniela Bailer-Jones assembles an original and comprehensive philosophical analysis of how models have been used and interpreted in both historical and contemporary contexts. Bailer-Jones delineates the many forms models can take (ranging from equations to animals; from physical objects to theoretical constructs), and how they are put to use. She examines early mechanical models employed by nineteenth-century physicists such as Kelvin and Maxwell, describes their roots in the mathematical principles of Newton and others, and compares them to contemporary mechanistic approaches. Bailer-Jones then views the use of analogy in the late nineteenth century as a means of understanding models and of linking different branches of science. She reveals how analogies can also be models themselves, or can help to create them. The first half of the twentieth century saw little mention of models in the literature of logical empiricism. Focusing primarily on theory, logical empiricists believed that models were of temporary importance, flawed, and awaiting correction. The later contesting of logical empiricism, particularly the hypothetico-deductive account of theories, by philosophers such as Mary Hesse, sparked a renewed interest in the importance of models during the 1950s that continues to this day. Bailer-Jones analyzes subsequent proposals: models as metaphors; Kuhn's concept of a paradigm; the Semantic View of theories; and the case study approaches of Cartwright and Morrison, among others. She then engages current debates on topics such as phenomena versus data, the distinctions between models and theories, the concepts of representation and realism, and the discerning of falsities in models.
This paper examines two recent approaches to the nature and functioning of economic models: models as isolating representations and models as credible constructions. The isolationist view conceives of economic models as surrogate systems that isolate some of the causal mechanisms or tendencies of their respective target systems, while the constructionist approach treats them rather like pure constructions or fictional entities that nevertheless license different kinds of inferences. I will argue that whereas the isolationist view is still tied to the representationalist understanding of models that takes the model-target dyad as the basic unit of analysis, the constructionist perspective can better accommodate the way we actually acquire knowledge through them. Using the example of Tobin’s ultra-Keynesian model I will show how many of the epistemic characteristics of modelling tend to go unrecognised if too much focus is placed on the model-target dyad.
Tarja Knuuttila (Theoretical Philosophy, University of Helsinki) and Mieke Boon (Department of Philosophy, University of Twente), "How do models give us knowledge? The case of Carnot’s ideal heat engine", European Journal for Philosophy of Science 1(3): 309-334. DOI 10.1007/s13194-011-0029-3.
Models carry the meaning of science. This puts a tremendous burden on the process of model selection. In general practice, models are selected on the basis of their relative goodness of fit to data penalized by model complexity. However, this may not be the most effective approach for selecting models to answer a specific scientific question because model fit is sensitive to all aspects of a model, not just those relevant to the question. Model Structural Adequacy analysis is proposed as a means to select models based on their ability to answer specific scientific questions given the current understanding of the relevant aspects of the real world.
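The "goodness of fit penalized by complexity" practice this abstract criticizes has standard instances such as the Akaike information criterion, AIC = 2k - 2 ln L. A minimal sketch of that conventional selection step (the candidate models and numbers are our illustrative assumptions; this is the practice being questioned, not the proposed Model Structural Adequacy analysis):

```python
# Conventional model selection: rank candidates by fit penalized by
# complexity, here via AIC (lower is better). Numbers are hypothetical.

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

# Model B fits slightly better but spends many more parameters to do so.
candidates = {
    "A (2 parameters)": aic(log_likelihood=-100.0, n_params=2),
    "B (9 parameters)": aic(log_likelihood=-98.5, n_params=9),
}

for name, score in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"model {name}: AIC = {score:.1f}")   # A (204.0) beats B (215.0)
```

Note that nothing in this calculation asks whether the winning model's structure is adequate to the scientific question at hand, which is exactly the gap the abstract points to.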
In his 1966 paper "The Strategy of Model Building in Population Biology", Richard Levins argues that no single model in population biology can be maximally realistic, precise and general at the same time. This is because these desirable model properties trade off against one another. Recently, philosophers have developed Levins's claims, arguing that trade-offs between these desiderata are generated by practical limitations on scientists, or by formal aspects of models and how they represent the world. However, this project is not complete. The trade-offs discussed by Levins had a noticeable effect on modelling in population biology, but not on other sciences. This raises the question why such a difference holds. I claim that, in order to explain this finding, we must pay due attention to the properties of the systems, or targets, modelled by the different branches of science.
Despite their best efforts, scientists may be unable to construct models that simultaneously exemplify every theoretical virtue. One explanation for this is the existence of tradeoffs: relationships of attenuation that constrain the extent to which models can have such desirable qualities. In this paper, we characterize three types of tradeoffs theorists may confront. These characterizations are then used to examine the relationships between parameter precision and two types of generality. We show that several of these relationships exhibit tradeoffs and discuss what consequences those tradeoffs have for theoretical practice.
Investigating random homicides involves constructing models of an odd sort. While the differences between these models and scientific models are radical, calling them models is justified both by functional and structural similarities. Serial homicide investigations illustrate the marked difference between theoretical models in science and the models applied in these criminal investigations. This is further illustrated by considering Glymourian bootstrapping in attempts to solve such homicides. The solutions that result differ radically from explanations in science that are confirmed or disconfirmed by occurrences. Unlike the scientist, the flatfoot gumshoe is also barefoot: he is bereft of a general, determinative theoretical frame. This result shows that criminal investigations do not apply science in the Galilean sense.
In this article we defend the inferential view of scientific models and idealisation. Models are seen as “inferential prostheses” (instruments for surrogative reasoning) constructed by means of an idealisation-concretisation process, which we understand essentially as a kind of counterfactual deformation procedure (also analysed in inferential terms). The value of scientific representation is understood in terms not only of the success of the inferential outcomes arrived at with its help, but also of the heuristic power of representations and their capacity to correct and improve our models. This provides us with an argument against Sugden’s account of credible models: the likelihood or realisticness of models (their “credibility”) is not always a good measure of their acceptability. As opposed to “credibility” we propose the notion of “enlightening”, the capacity to give us understanding in the sense of an inferential ability.
The philosophy of measurement studies the conceptual, ontological, epistemic, and technological conditions that make measurement possible and reliable. A new wave of philosophical scholarship has emerged in the last decade that emphasizes the material and historical dimensions of measurement and the relationships between measurement and theoretical modeling. This essay surveys these developments and contrasts them with earlier work on the semantics of quantity terms and the representational character of measurement. The conclusions highlight four characteristics of the emerging research program in philosophy of measurement: it is epistemological, coherentist, practice oriented, and model based.
This work develops an epistemology of measurement, that is, an account of the conditions under which measurement and standardization methods produce knowledge as well as the nature, scope, and limits of this knowledge. I focus on three questions: (i) how is it possible to tell whether an instrument measures the quantity it is intended to? (ii) what do claims to measurement accuracy amount to, and how might such claims be justified? (iii) when is disagreement among instruments a sign of error, and when does it imply that instruments measure different quantities? Based on a series of case studies conducted in collaboration with the US National Institute of Standards and Technology (NIST), I argue for a model-based approach to the epistemology of physical measurement. To measure a physical quantity, I argue, is to estimate the value of a parameter in an idealized model of a physical process. Such estimation involves inference from the final state (‘indication’) of a process to the value range of a parameter (‘outcome’) in light of theoretical and statistical assumptions. Contrary to contemporary philosophical views, measurement outcomes cannot be obtained by mapping the structure of indications. Instead, measurement outcomes as well as claims to accuracy, error and quantity individuation can only be adjudicated relative to a choice of idealized modelling assumptions.
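The inference from 'indication' to 'outcome' described above can be rendered as a toy calculation (our illustration, not drawn from the work itself; the linear instrument model, the calibration constants GAIN and OFFSET, and all readings are hypothetical): the outcome is not the indications themselves but a parameter estimate made under idealized modelling assumptions, reported with an uncertainty.

```python
# Toy model-based measurement: indications are linked to the measured
# quantity by an idealized instrument model,
#     indication = GAIN * value + OFFSET + noise,
# and the measurement outcome is the inferred value plus an uncertainty.

import statistics

GAIN, OFFSET = 2.0, 0.5   # assumed fixed by prior calibration

indications = [20.9, 21.1, 21.0, 20.8, 21.2]   # repeated final states

# Invert the idealized model to infer the quantity from each indication.
estimates = [(x - OFFSET) / GAIN for x in indications]

outcome = statistics.mean(estimates)
u = statistics.stdev(estimates) / len(estimates) ** 0.5  # standard uncertainty
print(f"outcome: {outcome:.3f} +/- {u:.3f}")   # 10.250 +/- 0.035
```

Changing the modelling assumptions (say, a nonlinear instrument model) changes the outcome even for identical indications, which is the sense in which outcomes are adjudicated relative to idealized modelling choices.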
Contrary to the claim that measurement standards are absolutely accurate by definition, I argue that unit definitions do not completely fix the referents of unit terms. Instead, idealized models play a crucial semantic role in coordinating the theoretical definition of a unit with its multiple concrete realizations. The accuracy of realizations is evaluated by comparing them to each other in light of their respective models. The epistemic credentials of this method are examined and illustrated through an analysis of the contemporary standardization of time. I distinguish among five senses of ‘measurement accuracy’ and clarify how idealizations enable the assessment of accuracy in each sense.
This paper aims at integrating the work on analogical reasoning in Cognitive Science into the long trend of philosophical interest, in this century, in analogical reasoning as a basis for scientific modeling. In the first part of the paper, three simulations of analogical reasoning, proposed in cognitive science, are presented: Gentner's Structure Matching Engine, Mitchell's and Hofstadter's Copycat, and the Analogical Constraint Mapping Engine, proposed by Holyoak and Thagard. The differences and controversial points in these simulations are highlighted in order to make explicit their presuppositions concerning the nature of analogical reasoning. In the last part, this debate in cognitive science is applied to some traditional philosophical accounts of formal and material analogies as a basis for scientific modeling, like Mary Hesse's, and to more recent ones that already draw from the work in Artificial Intelligence, like that proposed by Aronson, Harré and Way.
Recent accounts of scientific method suggest that a model, or analogy, for an axiomatized theory is another theory, or postulate set, with an identical calculus. The present paper examines five central theses underlying this position. In the light of examples from physical science it seems necessary to distinguish between models and analogies and to recognize the need for important revisions in the position under study, especially in claims involving an emphasis on logical structure and similarity in form between theory and analogy. While formal considerations are often relevant in the employment of an analogy they are neither as extensive as proponents of this viewpoint suggest, nor are they in most cases sufficient for allowing analogies to fulfill the roles imputed to them. Of major importance, and what these authors generally fail to consider, are physical similarities between analogue and theoretical object. Such similarities, which are characteristic in varying degrees of most analogies actually employed, play an important role in affording a better understanding of concepts in the theory and also in the development of the theoretical assumptions.
In order to account for the actual function of analogue models in extending theories to new domains, we argue that it is necessary to analyze the inference involved into a complex two-dimensional form. This form must go horizontally from descriptions of entities used as a model to redescriptions of entities in the new domain, and it must go vertically from an observation language to a theoretical language having a different and exclusive logical syntax. This complex inference can only be intelligible if we interpret theoretical terms in a Platonic manner, à la Körner.
Metaphors and models involve correspondences between events in separate domains. They differ in the form and precision of how the correspondences are expressed. Examples include correspondences between phylogenic and ontogenic selection, and wave and particle metaphors of the mathematics of quantum physics. An implication is that the target article's metaphors of resistance to change may have heuristic advantages over those of momentum.
This paper examines the hypothesis that analogies may play a role in the generation of new ideas that are built into new explanatory theories. Methods of theory construction by analogy, by failed analogy, and by modular components from several analogies are discussed. Two different analyses of analogy are contrasted: direct mapping (Mary Hesse) and shared abstraction (Michael Genesereth). The structure of Charles Darwin's theory of natural selection shows various analogical relations. Finally, an "abstraction for selection theories" is shown to be the structure of a number of theories.
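The "shared abstraction" idea can be made concrete with a runnable sketch (ours, not the paper's formalization; the loop, its parameter names, and the numeric instantiation are all illustrative assumptions): theories as different as Darwinian evolution and selectionist learning instantiate one schema of variation, differential success, and retention, which can be written once and filled in per theory.

```python
# A generic 'selection theory' abstraction: evaluate, retain the fitter,
# vary the retained. Any particular selection theory supplies its own
# population, fitness, and variation operator.

import random

def selection_process(population, fitness, vary, generations=50):
    """Shared abstraction: differential success plus heredity-with-variation."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]           # differential success
        offspring = [vary(random.choice(survivors))      # heredity + variation
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return population

# One toy instantiation of the schema: numbers evolving toward a target.
result = selection_process(
    population=[random.uniform(0, 100) for _ in range(20)],
    fitness=lambda x: -abs(x - 42),
    vary=lambda x: x + random.gauss(0, 1),
)
print(f"best individual: {max(result, key=lambda x: -abs(x - 42)):.2f}")
```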
The purpose of this paper is to present two kinds of analogical representational change, both occurring early in the analogy-making process, and then, using these two kinds of change, to present a model unifying one sort of analogy-making and categorization. The proposed unification rests on three key claims: (1) a certain type of rapid representational abstraction is crucial to making the relevant analogies (this is the first kind of representational change; a computer model is presented that demonstrates this kind of abstraction), (2) rapid abstractions are induced by retrieval across large psychological distances, and (3) both categorizations and analogies supply understandings of perceptual input via construing, which is a proposed type of categorization (this is the second kind of representational change). It is construing that finalizes the unification.
Six questions are posed that are really specific versions of this question: How can Leech et al.'s system be extended to handle adult-level analogies that frequently combine concepts from semantically distant domains sharing few relational labels and that involve the production of abstractions? It is Leech et al. who stress development; finding such an extension would seem to have to be high on their priority list.
Sometimes analogy researchers talk as if the freshness of an experience of analogy resides solely in seeing that something is like something else -- seeing that the atom is like a solar system, that heat is like flowing water, that paint brushes work like pumps, or that electricity is like a teeming crowd. But analogy is more than this. Analogy isn't just seeing that the atom is like a solar system; rather, it is seeing something new about the atom, an observation enabled by 'looking' at atoms from the perspective of one's understanding of solar systems. The question for analogy researchers then is this: Where does this new knowledge about atoms come from? How can an analogy provide new knowledge and new understanding?
Analogical reminding in humans and machines is a great source of chance discoveries because analogical reminding can produce representational change and thereby produce insights. Here we present a new kind of representational change associated with analogical reminding, called packing. We derived the algorithm in part from human data we have on packing. We explain packing and its role in analogy making, and then present a computer model of packing in a micro-domain. We conclude that packing is likely used in human chance discoveries, and is needed if our machines are to make their own chance discoveries.
The uses of analogy are ancient. It can even be argued that analogical thinking is the most basic cognitive tool humans have to move from the unknown to the known (Gentner et al. 2001). As Olson succinctly puts it, “analogies are useful when it is desired to compare an unfamiliar system with one that is better known” (Olson 1943, p. i). Analogical thinking is thus ubiquitous and found in many texts at least since Homer in Antiquity (Lloyd 1966). For example, it is well known that to explain the properties of atoms, Aristotle compared them to the letters of alphabets, something much better known to his readers than invisible atoms (Hallyn 2000). Many studies have looked at particular uses of analogies among the ...
Analogy making from examples is a central task in intelligent system behavior. A lot of real-world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking. These models can be divided into high-level approaches as used in cognitive science and low-level models as used in neural networks. Applications range over the spectrum of recognition, categorization and analogical reasoning. A major part of legal reasoning could be formally interpreted as an analogy-making process. Because it is not the same as reasoning in mathematics or the physical sciences, it is necessary to use a method that incorporates, first, the ability to specify likelihood and, second, the opportunity to include known court decisions. We model the analogy-making process in legal reasoning with neural networks and fuzzy systems. In the first part of the paper a neural network is described to identify precedents of immaterial damages. The second application presents a fuzzy system for determining the required waiting period after traffic accidents. Both examples demonstrate how to model reasoning in legal applications analogous to recent decisions: first, by training a system with court decisions, and second, by analyzing, modelling and testing the decision making with a fuzzy system.
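To make the second application's flavor tangible, here is a generic single-input fuzzy sketch (entirely our illustration; the membership functions, rule base, input scale, and outputs are hypothetical and are not the system described in the paper): a crisp severity score is fuzzified, fuzzy rules fire to degrees, and a weighted average defuzzifies the result into a waiting period.

```python
# Generic Sugeno-style fuzzy inference: fuzzify input, fire rules,
# defuzzify by weighted average. All memberships and rules are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def waiting_period(severity):
    # Rule base: IF severity is LOW / MEDIUM / HIGH THEN wait 0 / 7 / 30 days.
    rules = [
        (tri(severity, -1, 0, 5), 0.0),    # low severity  -> no wait
        (tri(severity, 2, 5, 8), 7.0),     # medium        -> one week
        (tri(severity, 5, 10, 11), 30.0),  # high          -> one month
    ]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0

print(f"severity 6/10 -> wait {waiting_period(6):.1f} days")   # ~12.3 days
```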
Language understanding is one of the most important capacities of human beings. As a pervasive phenomenon in natural language, metaphor is not only an essential mode of thinking but also an ingredient of the human conceptual system. Many of our ways of thinking and experiences are represented metaphorically. With the development of cognitive research on metaphor, it is urgent to formulate a computational model for metaphor understanding based on the cognitive mechanism, especially with a view to promoting natural language understanding. Much work has been done in pragmatics and cognitive linguistics, especially the discussions of the metaphor understanding process in pragmatics and of metaphor mapping representation in cognitive linguistics. In this paper, a theoretical framework for metaphor understanding based on the embodied mechanism of concept inquiry is proposed. Based on this framework, ontology is introduced as the knowledge representation method in metaphor understanding, and metaphor mapping is formulated as ontology mapping. In line with conceptual blending theory, a revised conceptual blending framework is presented by adding a lexical ontology and context as the fifth mental space, and a metaphor mapping algorithm is proposed.
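The core idea that metaphor mapping can be treated as a mapping between ontologies admits a very small illustration (ours; the paper's algorithm and its five-space blending framework are not reproduced here). Using the well-known Lakoff-style conceptual metaphor ARGUMENT IS WAR, source-domain terms are aligned with target-domain counterparts and metaphoric sentences are interpreted through that alignment:

```python
# Toy metaphor mapping as a source->target ontology alignment.
# ARGUMENT IS WAR: war vocabulary (source) maps to argument concepts (target).

mapping = {
    "attack": "criticize",
    "defend": "justify",
    "win": "persuade",
    "opponent": "interlocutor",
}

def interpret(sentence):
    """Rewrite source-domain (war) terms as target-domain (argument) terms."""
    return " ".join(mapping.get(word, word) for word in sentence.lower().split())

print(interpret("She will attack every weak point"))
# -> she will criticize every weak point
```

A real system would align structured concepts and relations rather than isolated words, but the word-level substitution shows the direction of the mapping.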
This paper presents a new algorithm to find an appropriate similarity under which we can apply legal rules analogically. Since there may exist many similarities between the premises of a rule and the case under inquiry, we have to select an appropriate similarity that is relevant to both the legal rule and the top goal of our legal reasoning. For this purpose, a new criterion to distinguish the appropriate similarities from the others is proposed and tested. The criterion is based on Goal-Dependent Abstraction (GDA), which selects a similarity such that an abstraction based on the similarity never loses the information necessary to prove the ground (the purpose of legislation) of the legal rule. In order to cope with the huge space of similarities, our GDA algorithm uses constraints to prune useless similarities.
Richard Goldschmidt famously rejected the notion of atomic and corpuscular genes, arranged on the chromosome like beads-on-a-string. I provide an exegesis of Goldschmidt’s intuition by analyzing his repeated and extensive use of metaphorical language and analogies in his attempts to convey his notion of the nature of the genetic material and specifically the significance of chromosomal pattern. The paper concentrates on Goldschmidt’s use of metaphors in publications spanning 1940-1955.
In this paper, I begin with a discussion of Giere’s recent work arguing against taking models as works of fiction. I then move on to explore a spectrum of scientific models that runs from the obviously fictional to the not so obviously fictional. I then discuss the modeling of the unobservable and make a case for the idea that, despite the difficulties of defining them, unobservable systems are modeled in a fundamentally different way than observable systems. While idealization and approximation are key to the making of models of observable systems, they are inoperable, or at least not straightforwardly applicable, for models of the unobservable. Because of this point, which has so far been neglected in the literature, I speculate that fictionalism may have a better chance with models for the unobservable.
This article presents an analogical account of the meaning of function attributions in biology. To say that something has a function analogizes it with an artifact, but since the analogy rests on a necessary (but possibly insufficient) basis, function statements can still be assessed as true or false in an objective sense.
This paper develops a new version of instrumentalism, in light of progress in the realism debate in recent decades, and thereby defends the view that instrumentalism remains a viable philosophical position on science. The key idea is that talk of unobservable objects should be taken literally only when those objects are assigned properties (or described in terms of analogies involving things) with which we are experientially (or otherwise) acquainted. This is derivative from the instrumentalist tradition in so far as the distinction between unobservable and observable is taken to have significance with respect to meaning.
... one takes to be the most salient, any pair could be judged more similar to each other than to the third. Goodman uses this second problem to show that there can be no context-free similarity metric, either in the trivial case or in a scientifically ...
In this paper, a criticism of the traditional theories of approximation and idealization is given as a summary of previous works. After identifying the real purpose and measure of idealization in the practice of science, it is argued that the best way to characterize idealization is not to formulate a logical model – something analogous to Hempel's D-N model for explanation – but to study its different guises in the praxis of science. A case study is then made in thermostatistical physics. After a brief sketch of the theories for phase transitions and critical phenomena, I examine the various idealizations that go into the making of models at three different levels. The intended result is to induce a deeper appreciation of the complexity and fruitfulness of idealization in the praxis of model-building, not to give an abstract theory of it.
Based on a formalization of constructive empiricism’s core concept of empirical adequacy, I show that some previous discussions rest on misunderstandings of empirical adequacy. Using one of the inspirations for constructive empiricism, I generalize the concept of a theory to avoid implausible presumptions about the relations of theoretical concepts and observations, and I generalize empirical adequacy to allow for lack of knowledge, approximations, and successive gains of knowledge and precision. As a test case, I provide an application of the concepts to a simple interference phenomenon.
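For orientation, the standard van Fraassen-style formulation of empirical adequacy that such formalizations take as their starting point can be stated as follows (the paper's own generalized version, which relaxes these presumptions, is not reproduced here):

```latex
% A theory T is empirically adequate iff some model of T contains empirical
% substructures to which all appearances (observed phenomena) are isomorphic.
\[
  \mathit{EA}(T) \;\iff\; \exists\, M \in \mathrm{Mod}(T)\;
  \forall A \in \mathcal{A}\;
  \exists\, E \in \mathrm{Emp}(M):\; A \cong E
\]
% Mod(T): the class of models of T;  A ranges over the appearances;
% Emp(M): the empirical substructures of the model M.
\]
```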