Whereas computer simulations involve no direct physical interaction between the machine they are run on and the physical systems they are used to investigate, they are often used as experiments and yield data about these systems. It is commonly argued that they do so because they are implemented on physical machines. We claim that physicality is not necessary for their representational and predictive capacities and that the explanation of why computer simulations generate desired information about their target system is only to be found in the detailed analysis of their semantic levels. We provide such an analysis and we determine the actual consequences of physical implementation for simulations.
Epistemic accounts of scientific collaboration usually assume that, one way or another, two heads really are more than twice better than one. We show that this hypothesis is unduly strong. We present a deliberately crude model with unfavorable hypotheses. We show that, even then, when the priority rule is applied, large differences in successfulness can emerge from small differences in efficiency, with sometimes increasing marginal returns. We emphasize that success is sensitive to the structure of competing communities. Our results suggest that purely epistemic explanations of the efficiency of collaborations are less plausible but have much more powerful socioepistemic versions.
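A minimal sketch can make the amplification mechanism concrete. The toy race below is illustrative only: the function name, parameters, and specific numbers are assumptions, not the model described in the abstract. Under the priority rule, the whole credit goes to whichever group first completes all the steps of a problem, so a small per-step efficiency edge can translate into a disproportionate share of successes.

```python
import random

def race(p_a, p_b, n_groups_a, n_groups_b, n_steps, n_trials=5_000):
    """Toy priority-rule race: every round, each group completes its
    current step with its community's per-step probability; the first
    group(s) to finish n_steps take all the credit (ties split evenly)."""
    wins_a = 0.0
    for _ in range(n_trials):
        probs = [p_a] * n_groups_a + [p_b] * n_groups_b
        progress = [0] * len(probs)
        finished = []
        while not finished:
            for i, p in enumerate(probs):
                if random.random() < p:
                    progress[i] += 1
            finished = [i for i, s in enumerate(progress) if s >= n_steps]
        wins_a += sum(1 for i in finished if i < n_groups_a) / len(finished)
    return wins_a / n_trials

# A small relative edge in per-step efficiency (0.21 vs 0.20) typically
# yields a success share well above the 50% baseline.
print(race(p_a=0.21, p_b=0.20, n_groups_a=5, n_groups_b=5, n_steps=10))
```

Varying the number of steps and the sizes of the competing communities in such a sketch is enough to explore the sensitivity of success to community structure that the abstract mentions.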
Experiments (E), computer simulations (CS) and thought experiments (TE) are usually seen as playing different roles in science and as having different epistemologies. Accordingly, they are usually analyzed separately. We argue in this paper that these activities can contribute to answering the same questions by playing the same epistemic role when they are used to unfold the content of a well-described scenario. We emphasize that in such cases, these three activities can be described by means of the same conceptual framework, even if each of them, because they involve different types of processes, falls under these concepts in different ways. We further illustrate our claims by presenting a threefold case study describing how a TE, a CS and an E were indeed used in the same role at different periods to answer the same questions about the possibility of a physical Maxwellian demon. We also point to fluid dynamics as another field where these activities seem to be playing the same unfolding role. We analyze the importance of unfolding as a general task of science and highlight how our description in terms of epistemic functions articulates in a noncommittal way with the epistemology of these three activities and accounts for their similarities and the existence of hybrid forms of activities. We finally emphasize that picturing these activities as functionally substitutable does not imply that they are epistemologically substitutable.
We analyze the effects of the introduction of new mathematical tools on an old branch of physics by focusing on lattice fluids, which are cellular automata-based hydrodynamical models. We examine the nature of these discrete models, the type of novelty they bring about within scientific practice and the role they play in the field of fluid dynamics. We critically analyze Rohrlich's, Fox Keller's and Hughes' claims about CA-based models. We distinguish between different senses of the predicates "phenomenological" and "theoretical" for scientific models and argue that it is erroneous to conclude, as they do, that CA-based models are necessarily phenomenological in any sense of the term. We conversely claim that CA-based models of fluids, though at first sight blatantly misrepresenting fluids, are in fact conservative as far as the basic laws of statistical physics are concerned and not less theoretical than more traditional models in the field. Based on our case study, we propose a general discussion of the prospect of CA for modeling in physics. We finally emphasize that lattice fluids are not just exotic oddities but do bring about new advantages in the investigation of fluids' behavior.
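For readers unfamiliar with lattice fluids, here is a minimal sketch of an HPP-style lattice gas, the simplest classic CA fluid model; the grid size, density, and variable names are illustrative assumptions. Particles live on a square lattice with four velocity channels per cell; head-on collisions rotate pairs by 90 degrees, and particle number is exactly conserved, which is the kind of conservativeness with respect to the basic laws of statistical physics at stake above.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 64
# Four boolean occupancy channels per cell: particles moving E, W, N, S.
E, W, N, S = (rng.random((size, size)) < 0.2 for _ in range(4))

def step(E, W, N, S):
    # Collision: a head-on pair occupying a cell alone (E+W or N+S)
    # is rotated by 90 degrees; this conserves particle number.
    ew = E & W & ~N & ~S
    ns = N & S & ~E & ~W
    E, W = (E & ~ew) | ns, (W & ~ew) | ns
    N, S = (N & ~ns) | ew, (S & ~ns) | ew
    # Propagation: each particle moves one cell along its velocity
    # (periodic boundaries via np.roll).
    return (np.roll(E, 1, axis=1), np.roll(W, -1, axis=1),
            np.roll(N, -1, axis=0), np.roll(S, 1, axis=0))

before = int(E.sum() + W.sum() + N.sum() + S.sum())
for _ in range(100):
    E, W, N, S = step(E, W, N, S)
after = int(E.sum() + W.sum() + N.sum() + S.sum())
print(before, after)  # equal: mass is exactly conserved
```

Despite the blatant micro-level misrepresentation of a fluid, averaged quantities computed over such rules recover hydrodynamic behavior; the more realistic FHP model uses a hexagonal lattice to recover Navier-Stokes behavior in the appropriate limit.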
Although scientific models and simulations differ in numerous ways, they are similar insofar as they pose essentially philosophical problems about the nature of representation. This collection is designed to bring together some of the best work on the nature of representation being done by both established senior philosophers of science and younger researchers. Most of the pieces, while appealing to existing traditions of scientific representation, explore new types of questions, such as: how understanding can be developed within computational science; how the format of representations matters for their use, be it for the purpose of research or education; how the concepts of emergence and supervenience can be further analyzed by taking into account computational science; or how the emphasis upon tractability, a particularly important issue in computational science, sheds new light on the philosophical analysis of scientific reasoning.
According to Woodward’s causal model of explanation, explanatory information is relevant for manipulation purposes and indicates by means of invariant causal relations how to change the value of certain target explanandum variables by intervening on others. Therefore, the depth of an explanation is evaluated through the size of the domain of invariance of the generalization involved. In this article, I argue that Woodward’s account of explanatory relevance is still unsatisfactory and claim that the depth of an explanation should be explicated in terms of the size of the domain of circumstances which it designates as leaving the explanandum unchanged.
Special issue. With contributions by Anouk Barberousse, Sara Franceschelli and Cyrille Imbert, Robert Batterman, Roman Frigg and Julian Reiss, Axel Gelfert, Till Grüne-Yanoff, Paul Humphreys, James Mattingly and Walter Warwick, Matthew Parker, Wendy Parker, Dirk Schlimm, and Eric Winsberg.
In Science, Perception and Reality, Sellars distinguishes between the manifest image of man and the scientific image of man. The first is obtained from the way we become aware of ourselves as humans in the world. The second corresponds to what the various sciences lead us to postulate about how man is constituted. Van Fraassen, for his part, extends these concepts to the world...
In this paper, I criticize Bedau's definition of 'diachronically emergent properties' (DEPs), which says that a property is a DEP if it can only be predicted by a simulation and is nominally emergent. I argue at length that this definition is not complete because it fails to eliminate trivial cases. I discuss the features that an additional criterion should meet in order to complete the definition and I develop a notion, salience, which together with the simulation requirement can be used to characterize DEPs. In the second part of the paper, I sketch this notion. Basically, a property is salient when one can find an indicator, namely a descriptive function (DF), such that its fitting description shifts from one elementary mathematical object (EMO) to another when the property appears. Finally, I discuss the restrictions that must be placed on what can count as DFs and EMOs if the definition of salience is to work and be non-trivial. I conclude that salience can complete the definition of DEPs.
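The idea of a descriptive function whose best-fitting elementary mathematical object shifts can be illustrated with a toy sketch; the series, the window size, and the two candidate EMOs (linear vs. exponential) are assumptions made for the example, not taken from the paper.

```python
import numpy as np

# Toy descriptive function: linear growth, then exponential growth
# after t = 50 (the appearance of the property).
t = np.arange(100, dtype=float)
y = np.where(t < 50, 2.0 * t + 1.0, 101.0 * np.exp(0.05 * (t - 50)))

def sse(y, yhat):
    return float(np.sum((y - yhat) ** 2))

def best_emo(x, y):
    """Which elementary mathematical object fits this window better?"""
    a1, a0 = np.polyfit(x, y, 1)          # linear candidate
    b1, b0 = np.polyfit(x, np.log(y), 1)  # exponential candidate (log-linear)
    lin = sse(y, a1 * x + a0)
    exp = sse(y, np.exp(b0 + b1 * x))
    return "linear" if lin <= exp else "exponential"

window = 20
labels = [best_emo(t[i:i + window], y[i:i + window])
          for i in range(0, len(t) - window + 1, window)]
print(labels)  # the shift in best-fitting EMO flags the property's onset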
Whereas relevance in scientific explanations is usually discussed as if it were a single problem, several criteria of relevance will be distinguished in this paper. Emphasis is laid upon the notion of intra-scientific relevance, which is illustrated using the explanation of the law of areas as an example. Traditional accounts of explanation, such as the causal and unificationist accounts, are analyzed against these criteria of relevance. In particular, it will be shown that these accounts fail to indicate which explanations fulfill the condition of intra-scientific relevance. Finally, the significance of this latter criterion is emphasized, and the epistemic benefits of explanations that fulfill it are highlighted.
Cellular automata (CA)-based simulations are widely used in a great variety of domains, from statistical physics to social science. They allow for spectacular displays and numerical predictions. Are they for all that a revolutionary modeling tool, allowing for "direct simulation", or for the simulation of "the phenomenon itself"? Or are they merely models "of a phenomenological nature rather than of a fundamental one"? How do they compare to other modeling techniques? In order to answer these questions, we present a systematic exploration of CA's various uses.
Simulations often give the impression of being similar to the systems they represent, perhaps to the point of being analogues of them. In this paper, I first show that it is necessary to distinguish between different notions of time in order to analyse simulations correctly. This enables me to argue that, in order to serve their purpose as scientific representations correctly, simulations must not in general be similar from a temporal point of view to the systems they represent.
Scientific models need to be investigated if they are to provide valuable information about the systems they represent. Surprisingly, the epistemological question of what enables this investigation has hardly been addressed. Even authors who consider the inferential role of models as central, like Hughes or Bueno and Colyvan, content themselves with claiming that models contain mathematical resources that provide inferential power. We claim that these notions require further analysis and argue that mathematical formalisms contribute to this inferential role. We characterize formalisms, illustrate how they extend our mathematical resources, and highlight how distinct formalisms offer various inferential affordances.
Why are some models, like the harmonic oscillator, the Ising model, a few Hamiltonian equations in quantum mechanics, the Poisson equation, or the Lotka-Volterra equations, repeatedly used within and across scientific domains, whereas theories allow for many more modeling possibilities? Some historians and philosophers of science have already proposed plausible explanations. For example, Kuhn and Cartwright point to a tendency toward conservatism in science, and Humphreys emphasizes the importance of the tractability of what he calls “templates.” This paper investigates more systematically the reasons for this remarkable interdisciplinary recurrence. To this aim, the authors describe in more detail the phenomenon they focus on and review competing potential explanations. The authors disentangle the various assumptions underlying these explanations, flesh out the explanation based on sensitivity to computational constraints, and assess its relationships with the other analyzed explanations.
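To fix ideas, the Lotka-Volterra equations mentioned above are a paradigmatic example of such a recurrent template; in their standard textbook form (given here for illustration):

```latex
\begin{align}
  \frac{dx}{dt} &= \alpha x - \beta x y,\\
  \frac{dy}{dt} &= \delta x y - \gamma y.
\end{align}
```

The same pair of coupled equations is reused across population ecology, chemical kinetics, and models of economic competition simply by reinterpreting x, y, and the four parameters, which is precisely the kind of interdisciplinary recurrence at issue.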
For two centuries, collaborative research has become increasingly widespread. Various explanations of this trend have been proposed. Here, we offer a novel functional explanation of it. It differs from accounts like that of Wray by the precise socio-epistemic mechanism that grounds the beneficialness of collaboration. Boyer-Kassem and Imbert show how minor differences in the step-efficiency of collaborative groups can make them much more successful in particular configurations. We investigate this model further, derive robust social patterns concerning the general successfulness of collaborative groups, and argue that these patterns can be used to defend a general functional account.
The roles models play in science have long been recognised and sparked rich and varied philosophical debates. In recent years attention has also been paid to the computational techniques used in the sciences, and the question arose of what the implications of the use of computer simulations are for our understanding of scientific modelling, and of science more generally. This was the subject of the conference “Models and Simulations”, which took place at the IHPST in Paris in June 2006. Selected papers of that conference appeared in a special issue of this journal (Synthese 169(3) (2009)). After the conference there was a general feeling that there still was much ground to cover, and so we decided to organize a follow-up a year later. “Models and Simulations 2” took place in October 2007 at the Tilburg Center for Logic and Philosophy of Science (TiLPS) in the Netherlands. The conference was made possible by generous financial support from IHPST (Paris, CNRS), Universitat de Barcelona, TiLPS (Tilburg Center for Logic and Philosophy of Science) and the Evert Willem Beth Foundation. The papers in this special issue were presented at the conference and selected after a double-blind review process. We would like to thank the authors and referees of the papers for their work, and Vincent Hendricks and John Symons for their support of the project of another special issue on models and simulations.
Computer simulations are usually considered to be non-explanatory because, when a simulation reveals that a property is instantiated in a system, it does not enable the exact identification of what it is that brings this property out (relevance requirement). Conversely, analytical deductions are widely considered to yield explanations and understanding. In this paper, I emphasize that explanations should satisfy the relevance requirement and argue that the more they do so, the more they have explanatory value. Finally, I show that this emphasis on relevance has the unexpected consequence that simulations can sometimes be explanatory.
This paper shows that, under certain reasonable conditions, if the investigation of the behavior of a physical system is difficult, no scientific change can make it significantly easier. This impossibility result implies that complexity is then a necessary feature of models which truly represent the target system and of all models which are rich enough to catch its behavior, and therefore that it is an inevitable element of any possible science in which this behavior is accounted for. I finally argue that complexity can then be seen as representing an intrinsic feature of the system itself.
Deliberative and decisional groups play crucial roles in most aspects of social life. But it is not obvious how to organize these groups, and various socio-cognitive mechanisms can spoil debates and decisions. In this paper we focus on one such important mechanism: the misrepresentation of views, i.e. when agents express views that are aligned with those already expressed, and which differ from their private opinions. We introduce a model to analyze the extent to which this behavioral pattern can warp deliberations and distort the decisions that are finally taken. We identify types of situations in which misrepresentation can have major effects and investigate how to reduce these effects by adopting appropriate deliberative procedures. We discuss the beneficial effects of holding a sufficient number of rounds of expression of views; choosing an appropriate order of speech, typically a random one; rendering the deliberation dissenter-friendly; and having agents express fine-grained views. These applicable procedures help improve deliberations because they dampen conformist behavior, give epistemic minorities more opportunities to be heard, and reduce the number of cases in which an inadequate consensus or majority develops.
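A minimal sketch can show how misrepresentation interacts with the order of speech; the conformity threshold, the opinion profile, and the function below are illustrative assumptions, not the paper's model. Each agent expresses its private view unless the running tally of already-expressed views leads against it by a set margin, in which case it conforms; the decision is the majority of expressed views.

```python
import random

def deliberate(private, pressure=2, shuffle=False, rounds=1):
    """Sequential expression of views with conformist misrepresentation.

    An agent expresses its private opinion unless the tally of views
    already expressed leads against that opinion by >= pressure."""
    order = list(range(len(private)))
    expressed = {}
    for _ in range(rounds):
        if shuffle:
            random.shuffle(order)
        for i in order:
            tally = sum(1 if v else -1 for v in expressed.values())
            against = (tally <= -pressure) if private[i] else (tally >= pressure)
            expressed[i] = (not private[i]) if against else private[i]
    return sum(expressed.values()) > len(private) / 2

# Private majority is against the proposal (4 False vs 3 True), but in a
# fixed order where the True minority speaks first, a cascade flips the
# expressed majority; a random order of speech dampens the effect.
private = [True, True, True, False, False, False, False]
fixed = sum(deliberate(private) for _ in range(2000)) / 2000
rnd = sum(deliberate(private, shuffle=True) for _ in range(2000)) / 2000
print(f"inadequate 'True' decisions: fixed order {fixed:.0%}, random order {rnd:.0%}")
```

One can likewise raise the rounds parameter to explore the effect of holding several rounds of expression of views before the decision is taken.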
In Woodward's causal model of explanation, explanatory information is information that is relevant to manipulation and control and that affords changing the value of some target explanandum variable by intervening on some other. Accordingly, the depth of an explanation is evaluated through the size of the domain of invariance of the generalization involved. In this paper, I argue that Woodward's treatment of explanatory relevance in terms of invariant causal relations is still wanting and suggest evaluating the depth of an explanation through the size of the domain of circumstances that it designates as leaving the explanandum unchanged.
In this article, I analyze the explanatory value that computer simulations can have. One often encounters the claim that simulations make it possible to predict, reproduce, or imitate phenomena, but hardly to explain them. Simulations, it is said, also make it possible to study the behavior of a system through the brute force of computation, but do not provide genuine understanding of that system and its behavior. In any case, it seems that, rightly or wrongly, simulations raise specific problems regarding their explanatory value, which need to be disentangled and described precisely. In this article, I try to analyze these problems systematically, using existing theories of explanation as a guide. I first analyze the relationship of simulations to truth. I then examine to what extent simulations satisfy the requirements of deductivity and nomicity, which play a central role in Hempel's model of explanation. I study to what extent simulations are able to convey the relevant causal information expected from a good explanation. I continue by analyzing how the informational abundance and computational heaviness of simulations can seem problematic for the development of our explanatory knowledge and our understanding of phenomena. Finally, I analyze to what extent simulations play a unifying role, as is expected of good explanations. In the end, this study makes it possible to understand more precisely why simulations, even though they seem able to satisfy the conditions that good explanations must fulfill, seem specifically problematic with regard to explanatory activity. I suggest that the reasons are to be found, in particular, in the epistemology of explanatory activity, in the methodological expectations placed on good explanations, and in the specific use that is made of simulations for the study of difficult cases, in addition to the fact that simulations are an activity that is no longer on a human scale.
Computers have transformed science and help to extend the boundaries of human knowledge. However, does the validation and diffusion of results of computational inquiries and computer simulations call for a novel epistemological analysis? I discuss how the notion of novelty should be cashed out to investigate this issue meaningfully and argue that a consequentialist framework similar to the one used by Goldman to develop social epistemology can be helpful at this point. I highlight computational, mathematical, representational, and social stages on which the validity of simulation-based belief-generating processes hinges, and emphasize that their epistemic impact depends on the scientific practices that scientists adopt at these different stages. I further argue that epistemologists cannot ignore these partially novel issues and conclude that the epistemology of computational inquiries needs to go beyond that of models and scientific representations and has cognitive, social, and, in the present case, computational dimensions.