Edited by James Nguyen (Stockholm University, London School of Economics, School of Advanced Study, University of London)
About this topic
Summary
Modeling is an increasingly important method in many fields of science. Scientific models are taken to be only partially similar to the phenomena they are used to study. Several philosophical questions result. For one, philosophers investigate how it is that models represent phenomena despite their differences, and what is responsible for models' epistemic success. This dovetails with questions about the nature of the representation relation. Philosophers also investigate abstraction and idealization in modeling, and some accord a further role to fictions. Finally, models are also significant in a different sense for the semantic view of theories.
We present an argument about the methodology of ethics, broadly conceived, drawing on recent research on modelling in the philosophy of science. More specifically, we argue that normative ethics should adopt the methodology of modelling. We make our case in two parts. First, despite the perhaps unfamiliar terminology, modelling already happens in ethics. We identify it, and argue that its practice could be improved by recognising that it is modelling and by adopting some methodological lessons from philosophy of science. Second, modelling should be adopted more widely within normative ethics, because it fits well with various methodological ends we shall identify. Models can be used to investigate ethical questions in a manner that is systematic but relatively free of foundational theoretical commitments in first-order ethics. Models are more local, and less ambitious, than theories. They can be used to break deadlocks, by focusing attention on the particularities of a sub-domain and by providing a common tool, the surrogate model system, which each side can use to make their principles precise, illustrate the implications of their view, and identify sources of disagreement or points of agreement. We are pluralists about method, so this is not a call to abandon other philosophical methods. It is simply a plea for modelling, motivated by the method's independent benefits and its fruitfulness in resolving some persistent methodological problems in ethics.
This chapter concerns modal modeling practices: scientific modeling practices that are explicitly said to deliver, or should arguably be interpreted as delivering, support for modal conclusions. That includes, for instance, conclusions concerning possible causes, potential properties, and counterfactual histories. The chapter first outlines and gives examples of modal modeling practices and stresses the fact that such practices encompass a number of different kinds of modality, including both epistemic and objective modalities. It then describes three distinct but related sets of methodological and epistemological issues raised by modal modeling and briefly reviews some possible ways to approach them. The chapter ends by highlighting some lacunae in the literature where further work is needed.
This is a paper about model-building and overfitting in normative ethics. Overfitting is recognized as a methodological error in modeling in the philosophy of science and scientific practice, but this concern has not been brought to bear on the practice of normative ethics. I first argue that moral inquiry shares similarities with scientific inquiry in that both may productively rely on model-building, and, as such, overfitting worries should apply to both fields. I then offer a diagnosis of the problems of overfitting in moral inquiry and explain how our current practice seems worryingly susceptible to such problems. I conclude by giving suggestions for how we might avoid overfitting when doing normative ethics.
This article critically evaluates Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson, and David Schmeidler's account of economic models. First, it gives a selective overview of their argument, highlighting its emphasis on similarity and its neglect of the role of idealizations in economics. Second, it proposes a sketch of an account of models as arguments and argumentative devices. This account not only sheds light on Gilboa et al.'s approach, including its shortcomings, but also identifies key challenges in model-based inference, suggesting a fresh perspective on the uses of models in economics for diverse objectives.
I argue that dimensional analysis provides an answer to a skeptical challenge to the theory of model mediated measurement. The problem arises when considering the task of calibrating a novel measurement procedure, with greater range, to the results of a prior measurement procedure. The skeptical worry is that the agreement of the novel and prior measurement procedures in their shared range may only be apparent due to the emergence of systematic error in the exclusive range of the novel measurement procedure. Alternatively: what if the two measurement procedures are not in fact measuring the same quantity? The theory of model mediated measurement can only say that we _assume_ that there is a common quantity. In contrast, I show that the satisfaction of dimensional homogeneity across the metrological extension is independent evidence for this assumption. This is illustrated by the use of dimensional analysis in high pressure experiments. The result is an extension of the theory of model mediated measurement, in which a common quantity in metrological extension is no longer assumed, but hypothesized.
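For readers unfamiliar with the dimensional-homogeneity requirement the abstract invokes, here is a generic textbook illustration (not drawn from the paper's high-pressure case): every additive term in a physically meaningful equation must carry the same dimensions, so an equation relating quantities across an extended range can be checked term by term.

```latex
% Dimensional homogeneity, illustrated with hydrostatic pressure:
\[
  p = \rho g h, \qquad
  [p] = \mathsf{M}\,\mathsf{L}^{-1}\,\mathsf{T}^{-2}, \qquad
  [\rho g h] = (\mathsf{M}\,\mathsf{L}^{-3})(\mathsf{L}\,\mathsf{T}^{-2})(\mathsf{L})
             = \mathsf{M}\,\mathsf{L}^{-1}\,\mathsf{T}^{-2}.
\]
```

Both sides carry the dimensions of pressure, so the equation is dimensionally homogeneous; a failure of such a check across an extended measurement range would be a warning sign of the kind the paper discusses.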
This work sits at the intersection of two major epistemological problems. On the one hand, the problem of scientific demarcation, which consists in identifying what intrinsically distinguishes a scientific system (a statement, a theory, ...) from a non-scientific or pseudo-scientific one. On the other hand, the problem of the epistemological unity of the sciences, which consists in asking whether all disciplines with scientific ambitions can be seen as instantiations of a single notion of scientificity. Both problems have generated numerous debates that have brought to light a substantial set of difficulties. The term "scientific" in fact covers empirical methods, theoretical constructions, and research practices so heterogeneous that the search for an easily circumscribed definition seems doomed to fail. Moreover, the objects of the scientific disciplines are themselves of very diverse natures, which seems likewise to render obsolete any search for a single concept of science that could apply independently of the discipline in question. In this thesis, I set out to push against this state of affairs by defending the possibility and relevance of a unitary model of scientificity, while restricting myself to a comparative epistemological approach between the physical sciences and the social sciences. To defend my position, I mobilize two types of response to the observation presented above. On the one hand, responses of principle, in which I examine and oppose theoretical arguments for the impossibility, or at least the difficulty, of defining scientificity in general, and for the claim that the social sciences require a separate definition. On the other hand, I also mobilize responses by example. I study in more detail the so-called "analytical" approach in sociology. This current is interesting for my purposes because it does not seem to require an epistemology alternative to the one at work in, say, physics or biology, while still claiming to produce knowledge about the social world. It is thus a concrete and manifest counterexample to the thesis that sociology cannot enjoy the same type of epistemology as other disciplines. More concretely, I elaborate a unitary (meta-)model of scientificity by concentrating on a well-circumscribed unit of analysis: models. Within models I distinguish, classically, two main components: an empirical component, whose vocation is to identify regularities in the reality we access through data, and a theoretical component, with explanatory and classificatory aims. I then propose to construct formally a global degree of scientificity which combines the maximization of a certain quantity of information defined on the empirical component with a criterion of structural invariance defined on the theoretical component. These various constructions, though formal, effectively illuminate the epistemological questions with which I began, this work being intended as a further step toward a unitary model of scientificity.
Many recent AI systems take inspiration from biological episodic memory. Here, we ask how these 'episodic-inspired' AI systems might inform our understanding of biological episodic memory. We discuss work showing that these systems implement some key features of episodic memory whilst differing in important respects, and appear to enjoy behavioural advantages in the domains of strategic decision-making, fast learning, navigation, exploration and acting over temporal distance. We propose that these systems could be used to evaluate competing theories of episodic memory's operations and function. However, further work is needed to validate them as models of episodic memory and isolate the contributions of their memory systems to their behaviour. More immediately, we propose that these systems have a role to play in directing episodic memory research by highlighting novel or neglected hypotheses as pursuit-worthy. In this vein, we propose that the evidence reviewed here highlights two pursuit-worthy hypotheses about episodic memory's function: that it plays a role in planning that is independent of future-oriented simulation, and that it is adaptive in virtue of its contributions to fast learning in novel, sparse-reward environments.
It's standard in epistemology to approach questions about knowledge and rational belief using idealized, simplified models. But while the practice of constructing idealized models in epistemology is old, metaepistemological reflection on that practice is not. Greco argues that the fact that epistemologists build idealized models isn't merely a metaepistemological observation that can leave first-order epistemological debates untouched. Rather, once we view epistemology through the lens of idealization and model-building, the landscape looks quite different. Constructing idealized models is likely the best epistemologists can do. Once one starts using epistemological categories like belief, knowledge, and confidence, the realm of idealization and model-building is entered. We can object to a model of knowledge by pointing to a better model, but in the absence of a better model, the fact that a framework for epistemological theorizing involves simplifications, approximations, and other inaccuracies (the fact of its status as an idealized model) is not in itself objectionable. Once we accept that theorizing in epistemological terms is inescapably idealized, a number of intriguing possibilities open up. Greco defends a package of epistemological views that might otherwise have looked indefensibly dismissive of our cognitive limitations: a package according to which we know a wide variety of facts with certainty, including what our evidence is, what we know and don't know, and what follows from our knowledge.
Modern life sciences research is increasingly relying on artificial intelligence (AI) approaches to model biological systems, primarily centered around the use of machine learning (ML) models. Although ML is undeniably useful for identifying patterns in large, complex data sets, its widespread application in biological sciences represents a significant deviation from traditional methods of scientific inquiry. As such, the interplay between these models and scientific understanding in biology is a topic with important implications for the future of scientific research, yet it is a subject that has received little attention. Here, we draw from an epistemological toolkit to contextualize recent applications of ML in biological sciences under modern philosophical theories of understanding, identifying general principles that can guide the design and application of ML systems to model biological phenomena and advance scientific knowledge. We propose that conceptions of scientific understanding as information compression, qualitative intelligibility, and dependency relation modelling provide a useful framework for interpreting ML-mediated understanding of biological systems. Through a detailed analysis of two key application areas of ML in modern biological research – protein structure prediction and single cell RNA-sequencing – we explore how these features have thus far enabled ML systems to advance scientific understanding of their target phenomena, how they may guide the development of future ML models, and the key obstacles that remain in preventing ML from achieving its potential as a tool for biological discovery. Consideration of the epistemological features of ML applications in biology will improve the prospects of these methods to solve important problems and advance scientific understanding of living systems.
Default logic has been a very active research topic in artificial intelligence since the early 1980s, but has not received as much attention in the philosophical literature thus far. This paper shows one way in which the technical tools of artificial intelligence can be applied in contemporary epistemology by modeling a paradigmatic case of deep disagreement using default logic. In §1 model-building viewed as a kind of philosophical progress is briefly motivated, while §2 introduces the case of deep disagreement we aim to model. On the heels of this, §3 defines our formal framework, viz., a refined Horty-style default logic. §4 then uses the framework to model deep disagreement, and finally §5 provides a critical discussion of the result.
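To give a flavour of the machinery involved, here is a minimal Python sketch of default-rule application. It uses a naive fixpoint rather than a full Reiter/Horty extension semantics (which requires a guess-and-verify step), and all names and the toy scenario are illustrative, not the paper's formalism.

```python
# Minimal sketch of default-rule application (illustrative simplification,
# NOT the paper's refined Horty-style framework).
from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    prerequisite: frozenset   # literals that must already be derived
    justifications: frozenset # literals whose negations must not be derived
    conclusion: str           # literal added when the rule fires

def neg(lit: str) -> str:
    """Negate a literal written with a '~' prefix."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def closure(facts: set, defaults: list) -> set:
    """Repeatedly fire defaults whose prerequisites hold and whose
    justifications are consistent with everything derived so far."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for d in defaults:
            if (d.prerequisite <= derived
                    and all(neg(j) not in derived for j in d.justifications)
                    and d.conclusion not in derived):
                derived.add(d.conclusion)
                changed = True
    return derived

# Toy deep disagreement: both parties share the facts but endorse different
# defaults, so each closure models one party's view of what may be concluded.
facts = {"peer_asserts_p"}
party_a = [Default(frozenset({"peer_asserts_p"}), frozenset({"p"}), "p")]
party_b = [Default(frozenset({"peer_asserts_p"}), frozenset({"~p"}), "~p")]
print(closure(facts, party_a))  # {'peer_asserts_p', 'p'}
print(closure(facts, party_b))  # {'peer_asserts_p', '~p'}
```

The point of such formalizations, as the abstract suggests, is that the two parties' divergent conclusions can be traced to the defaults they accept rather than to the shared facts.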
In an influential paper, Wendy Parker argues that agreement across climate models isn't a reliable marker of confirmation in the context of cutting-edge climate science. In this paper, I argue that while Parker's conclusion is generally correct, there is an important class of exceptions. Broadly speaking, agreement is not a reliable marker of confirmation when the hypotheses under consideration are mutually consistent—when, e.g., we're concerned with overlapping ranges. Since many cutting-edge questions in climate modeling require making distinctions between mutually consistent hypotheses, agreement across models will be generally unreliable in this domain. In cases where we are only concerned with mutually exclusive hypotheses, by contrast, agreement across climate models is plausibly a reliable marker of confirmation.
This paper aims to develop an account of the pursuitworthiness of models based on a view of models as epistemic tools. The paper is motivated by the historical question of why, in the 1960s, when many scientists hardly found them attractive, some pharmaceutical scientists pursued Quantitative Structure–Activity Relationship (QSAR) models despite the lack of potential for theoretical development or empirical success. This paper addresses this question by focusing on how models perform their heuristic functions as epistemic tools rather than as potential theories. I argue that models perform their heuristic function by "constructing" phenomena from data in the sense that they allow the model users who interact with the medium of the models to recognise the phenomena as such. The constructed phenomena assist model users in identifying which conditional hypotheses focused on low-level regularities concerning entities such as chemical compounds are more "testworthy," a concept that links the costs associated with hypothesis testing with the fertility of the hypothesis.
Counterfactual reasoning has been used to account for many aspects of scientific reasoning. More recently, it has also been used to account for the scientific practice of modeling. Truth in a model is truth in a situation considered as counterfactual. When we reason with models, we reason with counterfactuals. Focusing on selected models like Bohr's atom model or models of population dynamics, I present an account of how the imaginative development of a counterfactual supposition leads us from reality to interesting model assumptions; how it guides our reasoning from these assumptions to interesting consequences for the model scenario via counterfactual entailment; and how it leads us back to conclusions on real target phenomena.
This book, which inaugurates the "Philosophy and Science" collection of the National University of Quilmes Publishing House, contains almost all the papers presented at the First International Meeting "Current Perspectives on Metatheoretical Structuralism", held in Zacatecas, Mexico, from February 16 to 20, 1998, with the purpose of gathering a small group of distinguished Spanish-speaking philosophers interested in discussing the epistemological and methodological problems of science from the perspective of the structuralist view.
Financial modelling is an essential tool for studying the possibility of financial transactions. This paper argues that financial models are conventional tools widely used in formulating and establishing possibility claims about a prospective investment transaction, from a set of governing possibility assumptions. What is distinctive about financial models is that they articulate how a transaction possibly could occur in a non-actual investment scenario given a limited base of possibility conditions assumed in the model. For this reason, it is argued that the epistemic contribution of financial models is that of enabling the model users to envision exactly how a prospective investment could be achieved in various ways through a detailed understanding of the available transaction mechanisms. Thus, financial models provide information about the possibility of an investment scenario by showing how a specific transaction mechanism could result from a small set of initial possibility conditions assumed in the model.
Cancer biology features the ascription of normal functions to parts of cancers. At least some ascriptions of function in cancer biology track local normality of parts within the global abnormality of the aberration to which those parts belong. That is, cancer biologists identify as functions activities that, in some sense, parts of cancers are supposed to perform, despite cancers themselves having no purpose. The present paper provides a theory to accommodate these normal function ascriptions—I call it the Modeling Account of Normal Function (MA). MA comprises two claims. First, normal functions are activities whose performance by the function-bearing part contributes to the self-maintenance of the whole system and, thereby, results in the continued presence of that part. Second, MA holds that models of system-level activities that are (partly) constitutive of self-maintenance are improved by including a representation of the relevant function-bearing part and by making reference to the activity/activities that part performs which contribute(s) to those system-level activities. I contrast MA with two other accounts that seek to explicate the ascription of normal functions in biology, namely, the organizational account and the selected effects account. Both struggle to extend to cancer biology. However, I offer ecumenical readings which allow them to recover some ascriptions of normal function to parts of cancers. So, though I contend that MA excels in this respect, the purpose of this paper is served if it provides materials for bridging the gap between cancer biology, philosophy of cancer, and the literature on function.
Predictive processing (PP) and embodied cognition (EC) have emerged as two influential approaches within cognitive science in recent years. Not only have PP and EC been heralded as "revolutions" and "paradigm shifts" but they have motivated a number of new and interesting areas of research. This has prompted some to wonder how compatible the two views might be. This paper looks to weigh in on the issue of PP-EC compatibility. After outlining two recent proposals, I argue that further clarity can be achieved on the issue by considering a model of scientific progress. Specifically, I suggest that Larry Laudan's "problem solving model" can provide important insights into a number of outstanding challenges that face existing accounts of PP-EC compatibility. I conclude by outlining additional implications of the problem solving model for PP and EC more generally.
Modern science is, to a large extent, a model-building activity. But how are models constructed? How are they related to theories and data? How do they explain complex scientific phenomena, and what role do computer simulations play here? These questions have kept philosophers of science busy for many years, and much work has been done to identify modeling as the central activity of theoretical science. At the same time, these questions have been addressed by methodologically-minded scientists, albeit from a different point of view. While philosophers typically have an eye on general aspects of scientific modeling, scientists typically take their own science as the starting point and are often more concerned with specific methodological problems. There is, however, also much common ground in the middle, where philosophers and scientists can engage in a productive dialogue, as the present volume demonstrates.
Lisciandra poses a challenge for robustness analysis (RA) as applied to economic models. She argues that substituting tractability assumptions risks altering the main mathematical structure of the model, thereby preventing the possibility of meaningfully evaluating the same model under different assumptions. In such cases RA is argued to be inapplicable. However, Lisciandra is mistaken to take the goal of RA as keeping the mathematical properties of tractability assumptions intact. Instead, RA really aims to keep the modeling component while varying the corresponding mathematical formulation. Thus, her argument concerning whether the associated mathematical properties of certain assumptions can be kept intact is irrelevant to the success of RA. Furthermore, we explicate and develop Lloyd's account of "model robustness" to provide solutions to Lisciandra's challenges. Our solutions are error analysis and independent empirical support. We conclude that although complex economic models do face potential dangers, there are solutions, and robustness analysis need not be given up.
This is a book on the epistemology of sociology. Its aim is to apply analytical methods to clarify vocabulary, make explicit non-apparent relations between concepts, bring out the scope of a method, or highlight the incoherences of a research programme. The thorny questions are not set aside: How can confused notions be clarified? Can sociological concepts be mathematized? Can sociology be practiced the way the natural sciences are? What is the place of determinism? Each question is examined both in its logical structure and through concrete cases. Mathematization is studied through aggregative mechanisms and models of the diffusion of innovations. Experimental sociology, generally overlooked in French sociology, is studied through rehousing programmes, the diffusion of information, and the genesis of social solidarity. The book tackles head-on the questions of determinism, naturalism, materialism, and scientism, reputed to be untenable in sociology. The inquiry shows that their rejection generally results from conceptual confusions. Once these are cleared away, few obstacles stand in the way of their use in sociology. The object of this book is fundamental sociology: the set of mechanisms that structure the production of basic sociological knowledge from concepts, programmes, or principles, that is, everything that does not stem from immediate experience of the field and of social worlds.
Convergence of model projections is often considered by climate scientists to be an important objective in so far as it may indicate the robustness of the models' core hypotheses. Consequently, the range of climate projections from a multi-model ensemble, called "model spread", is often expected to reduce as climate research moves forward. However, the successive Assessment Reports of the Intergovernmental Panel on Climate Change indicate no reduction in model spread, whereas it is indisputable that climate science has made improvements in its modelling. In this paper, after providing a detailed explanation of the situation, we describe an epistemological setting in which a steady model spread is not doomed to be seen as negative, and is indeed compatible with a desirable evolution of climate models taken individually. We further argue that, from the perspective of collective progress, as far as the improvement of the products of a multi-model ensemble is concerned, reduction of model spread is of lower priority than model independence.
Empiricist modal epistemologies can be attractive, but are often limited in the range of modal knowledge they manage to secure. In this paper, I argue that one such account – similarity-based modal empiricism – can be extended to also cover justification of many scientifically interesting possibility claims. Drawing on recent work on modelling in the philosophy of science, I suggest that scientific modelling is usefully seen as the creation and investigation of relevantly similar epistemic counterparts of real target systems. On the basis of experiential knowledge of what is actually the case with the models, one can draw justified conclusions about what is de re possible for the target systems.
Much has been written about the free energy principle (FEP), and much misunderstood. The principle has traditionally been put forth as a theory of brain function or biological self-organisation. Critiques of the framework have focused on its lack of empirical support and a failure to generate concrete, falsifiable predictions. I take both positive and negative evaluations of the FEP thus far to have been largely in error, and appeal to a robust literature on scientific modelling to rectify the situation. A prominent account of scientific modelling distinguishes between model structure and model construal. I propose that the FEP be reserved to designate a model structure, to which philosophers and scientists add various construals, leading to a plethora of models based on the formal structure of the FEP. An entailment of this position is that demands placed on the FEP that it be falsifiable or that it conform to some degree of biological realism rest on a category error. To this end, I deliver first an account of the phenomenon of model transfer and the breakdown between model structure and model construal. In the second section, I offer an overview of the formal elements of the framework, tracing their history of model transfer and illustrating how the formalism comes apart from any interpretation thereof. Next, I evaluate existing comprehensive critical assessments of the FEP, and hypothesise as to potential sources of existing confusions in the literature. In the final section, I distinguish between what I hold to be the FEP—taken to be a modelling language or modelling framework—and what I term "FEP models".
The second edition of the work by the Brazilian physicist Paulo C. Abrantes (2016), entitled Images of Nature, Images of Science, is a good option for students of the history and philosophy of science. The reason lies in Abrantes' twofold thesis in this work: to defend that the development of scientific knowledge depends on the influence of the different images of "nature" and "science" existing throughout the history of Western scientific-philosophical thought, and to advocate that the historian of science study the reasons that allowed the adoption of such images at a given time. Through the analysis of historical cases of scientific thought, Abrantes argues that the consolidation of research programmes in different subareas of natural science, such as biology, physics and chemistry, was only possible due to the influence, at particular times, of images of nature and science.
Although there is a substantial philosophical literature on dynamical systems theory in the cognitive sciences, the same is not the case for neuroscience. This paper attempts to motivate increased discussion via a set of overlapping issues. The first aim is primarily historical and is to demonstrate that dynamical systems theory is currently experiencing a renaissance in neuroscience. Although dynamical concepts and methods are becoming increasingly popular in contemporary neuroscience, the general approach should not be viewed as something entirely new to neuroscience. Instead, it is more appropriate to view the current developments as making central again approaches that facilitated some of neuroscience's most significant early achievements, namely, the Hodgkin–Huxley and FitzHugh–Nagumo models. The second aim is primarily critical and defends a version of the "dynamical hypothesis" in neuroscience. Whereas the original version centered on defending a noncomputational and nonrepresentational account of cognition, the version I have in mind is broader and includes both cognition and the neural systems that realize it as well. In view of that, I discuss research on motor control as a paradigmatic example demonstrating that the concepts and methods of dynamical systems theory are increasingly and successfully being applied to neural systems in contemporary neuroscience. More significantly, such applications are motivating a stronger metaphysical claim, that is, understanding neural systems as being dynamical systems, which includes not requiring appeal to representations to explain or understand those phenomena. Taken together, the historical claim and the critical claim demonstrate that the dynamical hypothesis is undergoing a renaissance in contemporary neuroscience.
This article distinguishes nine senses of polarization and provides formal measures for each one to refine the methodology used to describe polarization in distributions of attitudes. Each distinct concept is explained through a definition, formal measures, examples, and references. We then apply these measures to GSS data regarding political views, opinions on abortion, and religiosity—topics described as revealing social polarization. Previous breakdowns of polarization include domain-specific assumptions and focus on a subset of the distribution's features. This has conflated multiple, independent features of attitude distributions. The current work aims to extract the distinct senses of polarization and demonstrate that by becoming clearer on these distinctions we can better focus our efforts on substantive issues in social phenomena.
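To make the idea of formal polarization measures concrete, here is a hedged Python sketch of two simple statistics on a bounded attitude scale. These are generic illustrations of the genre, not the article's nine measures, and the 1-to-7 scale is an assumption for the example.

```python
# Illustrative polarization statistics on attitudes measured on a [1, 7] scale.
import numpy as np

def dispersion(attitudes: np.ndarray) -> float:
    """Spread of attitudes, normalized by the maximum possible standard
    deviation on the scale (all mass split between the two endpoints)."""
    max_std = (7 - 1) / 2
    return float(np.std(attitudes) / max_std)

def group_divergence(attitudes: np.ndarray) -> float:
    """Normalized distance between the means of the lower and upper halves
    of the sorted sample: a crude proxy for bimodality."""
    s = np.sort(attitudes)
    half = len(s) // 2
    return float((s[half:].mean() - s[:half].mean()) / (7 - 1))

# A moderate, unimodal population vs. a two-camp, polarized one.
moderate = np.random.default_rng(0).normal(4, 0.8, 1000).clip(1, 7)
polarized = np.concatenate([np.full(500, 1.5), np.full(500, 6.5)])
print(dispersion(moderate), dispersion(polarized))            # low vs. high
print(group_divergence(moderate), group_divergence(polarized))  # low vs. high
```

As the abstract notes, such statistics can come apart: a distribution can score high on dispersion while scoring low on bimodality, which is exactly why conflating the distinct senses is a methodological problem.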
For constructivism, science and cognition share similar interests. Both domains can be described as two intertwined systems that mutually activate and modulate each other through an internal feedback loop, a loop that operates through internal representational dynamics in the case of cognition and through the dynamics of theoretical development in the case of science. Each of these domains, science and cognition, seeks to generate an adequate framework of interaction that guarantees, on the side of science, predictive success from the models used in research and, on the side of cognition, a wide range of functionally successful strategies with which to safeguard a viable image of the world. In thinking of science as an extension of our cognitive openness to the world, constructivism adopts the notion of viability, that is, of functional fit with the environment, as fundamental to the correct approach to the study of both cognition and modelling, since, just as with the structures and processes that make up the cognitive architecture of any observing system, more or less evolved, models in science are limited by their own theoretical structure as well as by their operative dynamics. Adopting constructivism as a feasible philosophy of science, this paper aims to study the phenomenon of modelling and to examine the role played by certain tropological strategies inherent in scientific activity, such as analogy and metaphor, in configuring models and formulating the hypotheses and conjectures that serve as approximations for the investigation and study of the empirical world.
Tony Lawson, founder of the Cambridge Social Ontology Group and the Cambridge Realist Workshop, has proposed critical realism as a way to reorient economics. The transformation of the social world that Lawson attempts emerges from his adherence to critical realism, that is, from taking Roy Bhaskar's transcendental realism into the social realm. With the purpose of deepening the criticisms of this movement, we specify what critical realism is, and what, according to this author, the philosophical assumptions of the mainstream are. We set out the criticisms concerning: a) the notion of mainstream economics, b) the possibilities of an economics based on social ontology, c) the realism of economic models, and d) the notions of isolation and abstraction.
The present paper tries to show that in the discussion of whether or not it is better to model in order to capture truth in the social world, that is not what is mainly at issue. We put forward that the main question in this discussion is essentially ontological, not methodological. As a representative of the "to model" position we refer to Uskali Mäki's Possible Realism, and as a representative of the "not to model" position we consider Tony Lawson's Critical Realism. What will be argued is that the main differences between these positions as regards the methodology for accessing the social world lie in their different ontologies of the social realm. We also consider whether there is any possibility of an "in between" position, or at least the chance of a dialogue between these different epistemological trends.
This work offers a critical analysis of Uskali Mäki's philosophy of economics, in particular of his scientific-realist account of economics. Throughout the text an attempt is made to answer, in some way, the questions Lehtinen poses in the introduction: "Are economists aspiring to truth at all, or are they merely playing an intellectual game in which such assumptions are acceptable for some mysterious reason? Are they studying the economy seriously? Are they simply uninterested in truth? Or is there perhaps some other way of viewing their modelling practices?" (p. 1).
Understanding something about a world that presents itself to us in a disordered and incomplete way constitutes a good part of the task of philosophy and of science. Rationality, models, and the social world introduce concerns proper to the philosophy of science in general and to the epistemology of economics in particular. The contributions of Popper, Lawson, Mäki, Hayek, and Cartwright appear in these sketches as open attempts to come to understand our world.
In the debate on the realism of models in economics, the Austrian School, and Hayek in particular, seem in a certain way to have remained outside. If neoclassical models are assumed to be unrealistic, the theory of the market as a process looks like a more realistic proposal. However, one of the fundamental issues in Hayek's dissent is not so much the unrealism of the assumptions, but that the theory of market equilibrium was not correctly posed, especially with regard to the perfect-knowledge assumption. Despite this, in this setting and in line with a previous paper (Zanotti & Borella, 2015), we will argue that Hayek's spontaneous order may be understood as the Austrian School's "model", assuming Mäki's MISS account of models (Models as Isolations and Surrogate Systems) and emphasizing the place of the ontological foundation of Hayek's proposal when assessing his model.
Models, information and meaning. Marc Artiga - 2020 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 82:101284.
There has recently been an explosion of formal models of signalling, which have been developed to learn about different aspects of meaning. This paper discusses whether that success can also be used to provide an original naturalistic theory of meaning in terms of information or some related notion. In particular, it argues that, although these models can teach us a lot about different aspects of content, at the moment they fail to support the idea that meaning just is some kind of information. As an alternative, I suggest a more modest approach to the relationship between informational notions used in models and semantic properties in the natural world.
Rather than assume a unitary cybernetics, I ask how its disunity mattered to the history of the human sciences in the United States from about 1940 to 1980. I compare the work of four prominent social scientists – Herbert Simon, George Miller, Karl Deutsch, and Talcott Parsons – who created cybernetic models in psychology, economics, political science, and sociology with the work of anthropologist Gregory Bateson, and relate their interpretations of cybernetics to those of such well-known cyberneticians as Norbert Wiener, Warren McCulloch, W. Ross Ashby, and Heinz von Foerster. I argue that viewing cybernetics through the lens of disunity – asking what was at stake in choosing a specific cybernetic model – shows the complexity of the relationship between first-order cybernetics and the postwar human sciences, and helps us rethink the history of second-order cybernetics.
In this article I give a critical evaluation of the use and limitations of null-model-based hypothesis testing as a research strategy in the biological sciences. According to this strategy, the null model based on a randomization procedure provides an appropriate null hypothesis stating that the existence of a pattern is the result of random processes or can be expected by chance alone, and proponents of other hypotheses should first try to reject this null hypothesis in order to demonstrate their own hypotheses. Using as an example the controversy over the use of null hypotheses and null models in species co-occurrence studies, I argue that null-model-based hypothesis testing fails to work as a proper analog to traditional statistical null-hypothesis testing as used in well-controlled experimental research, and that the random process hypothesis should not be privileged as a null hypothesis. Instead, the possible use of the null model resides in its role of providing a way to challenge scientists' commonsense judgments about how a seemingly unusual pattern could have come to be. Despite this possible use, null-model-based hypothesis testing still carries certain limitations, and it should not be regarded as an obligation for biologists who are interested in explaining patterns in nature to first conduct such a test before pursuing their own hypotheses.
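To fix ideas, here is a minimal Python sketch of null-model-based testing via randomization for species co-occurrence. The checkerboard-style C-score and the row-preserving shuffle are common choices in this literature, assumed here for illustration; they are not necessarily the specific procedures at issue in the paper.

```python
# Hedged sketch: randomization null model for a species-by-site
# presence-absence matrix, using a checkerboard-style co-occurrence statistic.
import numpy as np

rng = np.random.default_rng(42)

def c_score(m: np.ndarray) -> float:
    """Mean number of 'checkerboard units' over all species pairs:
    (r_i - s_ij) * (r_j - s_ij), where r_i is species i's site count
    and s_ij is the number of sites shared by species i and j."""
    r = m.sum(axis=1)
    s = m @ m.T
    n = m.shape[0]
    pairs = [(r[i] - s[i, j]) * (r[j] - s[i, j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

def shuffle_within_rows(m: np.ndarray) -> np.ndarray:
    """Null model preserving each species' prevalence (fixed row totals,
    sites treated as equiprobable)."""
    out = m.copy()
    for row in out:
        rng.shuffle(row)
    return out

obs = rng.integers(0, 2, size=(10, 20))  # toy presence-absence matrix
null = [c_score(shuffle_within_rows(obs)) for _ in range(999)]
p = (1 + sum(x >= c_score(obs) for x in null)) / (1 + len(null))
print(f"observed C-score {c_score(obs):.2f}, randomization p = {p:.3f}")
```

The article's point can be read against this template: the shuffle encodes substantive assumptions about what "chance alone" means, so a small p-value rejects one particular randomization scheme, not randomness as such.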
Our paper studies the anatomy of the discovery of the Higgs boson at the Large Hadron Collider and its influence on the broader model landscape of particle physics. We investigate the phases of this discovery, which led to a crucial reconfiguration of the model landscape of elementary particle physics and eventually to a confirmation of the standard model. A keyword search of preprints covering the electroweak symmetry breaking sector of particle physics, along with an examination of physicists' own understanding of the discovery as documented in semiannual conferences, has allowed us an empirical investigation of its model dynamics. From our analyses we draw two main philosophical lessons concerning the nature of scientific reasoning in a complex experimental and theoretical environment. For one, from a confirmation standpoint, some SM alternatives could be considered even more confirmed by the Higgs discovery than the SM. Nevertheless, the SM largely remains the commonly accepted account of EWSB. We present criteria for comparing degrees of confirmation and expose some limits of a purely logical approach to understanding the Higgs discovery as a victory for the SM. Second, we understand the persistence of SM alternatives in the face of disfavourable evidence by borrowing the Lakatosian concept of a research programme, where the core idea behind a group of models survives, while other aspects adapt to incoming data. In order to apply this framework to the model landscape of EWSB, we must introduce a new category of research programme, the model-group, and we test its viability using the example of composite Higgs models.
South Korean high school students are being taught Einstein's Special Theory of Relativity. In this article, I examine the portrayal of this theory in South Korean high school physics textbooks and discuss an alternative method used to solve the analyzed problems. This examination of how these South Korean textbooks present this theory has revealed two main flaws: First, the textbooks' contents present historically fallacious backgrounds regarding the origin of this theory because of a blind dependence on popular undergraduate textbooks, which ignore the revolutionary aspects of the theory in physics. And second, the current materials for teaching this theory are so simply enumerated and conceptually confused that students are not provided with good opportunities to develop critical capacities for evaluating scientific theories. Reviewing textbooks used in South Korea, I will, first, claim that the history of science contributes to understanding not merely the origins but also the two principles of this theory. Second, in addition to this claim, I argue that we should distinguish not only hypotheses from principles but also phenomena from theoretical consequences and evidence. Finally, I suggest an alternative way in which theory testing occurs in the process of evaluation among competing theories on the basis of data, not in the simple relation between a hypothesis and evidence.
In this paper, I address the issue of scientific modelling in contemporary linguistics, focusing on the generative tradition. In so doing, I identify two common varieties of linguistic idealisation, which I call determination and isolation respectively. I argue that these distinct types of idealisation can both be described within the remit of Weisberg's (2007: 639–659) minimalist idealisation strategy in the sciences. Following a line set by Blutner (2011: 27–35), I propose this minimalist idealisation analysis for a broad construal of the generative linguistic programme and thus cite examples from a wide range of linguistic frameworks including early generative syntax, Minimalism, the parallel architecture and optimality theory. Lastly, I claim that from a modelling perspective, the dynamic turn in syntax can be explained as a continuation, as opposed to a marked shift, of the generative modelling paradigm. Seen in this light, my proposal is an even broader construal of the generative tradition, along scientific modelling lines. Thus, I offer a lens through which to appreciate the scientific contribution of generative grammar, amid an increased resistance to some of its core theoretical posits, in terms of a brand of structural realism in the philosophy of science and specifically scientific modelling.
Theoretical astrophysics emerged as a significant research programme with the construction of a series of stellar models by A. S. Eddington. This paper examines the controversies surrounding those models as a way of understanding the development and justification of new theoretical technologies. In particular, it examines the challenges raised against Eddington by James Jeans, and explores how the two astronomers championed different visions of what it meant to do science. Jeans argued for a scientific method based on certainty and completeness, whereas Eddington called for a method that valued exploration and further investigation, even at the expense of secure foundations. The first generation of stellar models depended on the validity of Eddington's approach – the physics and many of the basic facts of stars were poorly understood and he justified his models through their utility for future research and their robustness under challenging use. What would become theoretical astrophysics depended heavily on this phenomenological outlook, which Jeans dismissed as not even science. This was a dispute about the practice of theory, and it would be this methodological debate that made theoretical astrophysics viable.
One persistent challenge in scientific practice is that the structure of the world can be unstable: changes in the broader context can alter which model of a phenomenon is preferred, all without any overt signal. Scientific discovery becomes much harder when we have a moving target, and the resulting incorrect understandings of relationships in the world can have significant real-world and practical consequences. In this paper, we argue that it is common (in certain sciences) to have changes of context that lead to changes in the relationships under study, but that standard normative accounts of scientific inquiry have assumed away this problem. At the same time, we show that inference and discovery methods can "protect" themselves in various ways against this possibility by using methods with the novel methodological virtue of "diligence." Unfortunately, this desirable virtue provably is incompatible with other desirable methodological virtues that are central to reliable inquiry. No scientific method can provide every virtue that we might want.
Precaution is a relevant and much-invoked value in environmental risk analysis, as witnessed by the ongoing lively discussion about the precautionary principle (PP). This article argues (i) against purely decision-theoretic explications of PP; (ii) that the construction, evaluation, and use of scientific models falls under the scope of PP; and (iii) that epistemic and decision-theoretic robustness are essential for precautionary policy making. These claims are elaborated and defended by means of case studies from climate science and conservation biology.
Using Sneed's metatheory, an attempt is made to reconstruct Hodgkin and Huxley's theory of the excitation of cell membranes. The structure of this theory is uncovered by defining set-theoretical predicates for the partial potential models, potential models, and models of the theory. The function of permeability is said to be the only theoretical function with respect to this theory. The main underlying assumptions of the theory are briefly outlined.
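For context, the membrane equation at the heart of the Hodgkin–Huxley theory, in its standard textbook form (not Sneed's set-theoretic reconstruction of it):

```latex
\[
  C_m \frac{dV}{dt} = I_{\mathrm{ext}}
    - \bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
    - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
    - \bar{g}_{L}\,(V - E_{L}),
\]
% with each gating variable $x \in \{m, h, n\}$ obeying
% $\dot{x} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x$.
```

The ionic terms encode the voltage-dependent membrane permeabilities (conductances), which is the function the reconstruction singles out as the theory's only theoretical function.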