Chaos is often explained in terms of random behaviour, and having positive Kolmogorov–Sinai entropy (KSE) is taken to be indicative of randomness. Although seemingly plausible, the association of positive KSE with random behaviour needs justification, since the definition of the KSE makes no reference to any notion connected to randomness. A common way of justifying this use of the KSE is to draw parallels between the KSE and Shannon's information-theoretic entropy. However, as it stands this is no more than a heuristic point, because no rigorous connection between the KSE and Shannon's entropy has been established yet. This paper fills this gap by proving that the KSE of a Hamiltonian dynamical system is equivalent to a generalized version of Shannon's information-theoretic entropy under certain plausible assumptions.
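For orientation, here are the two quantities being compared, in their standard forms (the paper's generalized version of Shannon's entropy is not reproduced here):

```latex
% Shannon entropy of a discrete probability distribution p = (p_1, ..., p_n):
H(p) = -\sum_{i=1}^{n} p_i \log p_i
% Kolmogorov--Sinai entropy of a measure-preserving map T: first the entropy
% of a finite partition \alpha = \{a_1, ..., a_k\} of the phase space,
H(\alpha) = -\sum_{i=1}^{k} \mu(a_i) \log \mu(a_i),
% then the entropy rate relative to \alpha under the dynamics,
h(T, \alpha) = \lim_{n \to \infty} \frac{1}{n}\,
  H\!\left(\alpha \vee T^{-1}\alpha \vee \dots \vee T^{-(n-1)}\alpha\right),
% and finally the supremum over all finite partitions:
h_{\mathrm{KS}}(T) = \sup_{\alpha} h(T, \alpha).
```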
Scientific discourse is rife with passages that appear to be ordinary descriptions of systems of interest in a particular discipline. Equally, the pages of textbooks and journals are filled with discussions of the properties and the behavior of those systems. Students of mechanics investigate at length the dynamical properties of a system consisting of two or three spinning spheres with homogeneous mass distributions gravitationally interacting only with each other. Population biologists study the evolution of one species procreating at a constant rate in an isolated ecosystem. And when studying the exchange of goods, economists consider a situation in which there are only two goods, two perfectly rational agents, no restrictions on available information, no transaction costs, no money, and dealings are done immediately. Their surface structure notwithstanding, no competent scientist would mistake descriptions of such systems for descriptions of an actual system: we know very well that there are no such systems. These descriptions are descriptions of a model-system, and scientists use model-systems to represent parts or aspects of the world they are interested in. Following common practice, I refer to those parts or aspects as target-systems. What are we to make of this? Is discourse about such models merely a picturesque and ultimately dispensable façon de parler? This was the view of some early twentieth-century philosophers. Duhem (1906) famously guarded against confusing model building with scientific theorizing and argued that model building has no real place in science beyond a minor heuristic role. The aim of science was, instead, to construct theories, with theories understood as classificatory or representative structures systematically presented and formulated in precise symbolic language.
At first blush, the idea that fictions play a role in science seems to be off the mark. Realists and antirealists alike believe that science instructs us about how the world is (they part ways only over the question of what exactly science tells us about the world). Fiction not only seems to play no role in such an endeavour; it seems to detract from it. The aims of science and fiction seem to be diametrically opposed, and a view amalgamating the two rightly seems to be the cause of discomfort and concern.
The system has two states, ψ_in and ψ_out, corresponding to the marble being inside or outside the box. These states are eigenvectors of the operator B̂, measuring whether the marble is inside or outside the box. The formalism of quantum mechanics...
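A minimal rendering of the setup just described (the choice of eigenvalues ±1 is my illustrative convention; the snippet itself does not specify them):

```latex
% Eigenvalue equations for the "where is the marble?" observable
% (the eigenvalues +1/-1 are an illustrative convention):
\hat{B}\,\psi_{\mathrm{in}} = +1\,\psi_{\mathrm{in}}, \qquad
\hat{B}\,\psi_{\mathrm{out}} = -1\,\psi_{\mathrm{out}}
% A general state is a superposition of the two eigenstates,
\psi = a\,\psi_{\mathrm{in}} + b\,\psi_{\mathrm{out}}, \qquad |a|^2 + |b|^2 = 1,
% and the Born rule assigns probability |a|^2 (resp. |b|^2) to finding
% the marble inside (resp. outside) the box upon measurement.
```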
Most scientific models are not physical objects, and this raises important questions. What kind of entity are models? What is truth in a model? And how do we learn about models? In this paper I argue that models have important aspects in common with literary fiction and that theories of fiction can therefore be applied to these questions. In particular, I argue that the pretence theory as developed by Walton (1990) has the resources to answer these questions. I introduce this account, outline the answers it offers, and develop a general picture of scientific modelling based on it.
Models occupy a central role in the scientific endeavour. Among the many purposes they serve, representation is of great importance. Many models are representations of something else; they stand for, depict, or imitate a selected part of the external world (often referred to as the target system, parent system, original, or prototype). Well-known examples include the model of the solar system, the billiard ball model of a gas, the Bohr model of the atom, the Gaussian-chain model of a polymer, the MIT bag model of quark confinement, the Lorenz model of the atmosphere, the Lotka-Volterra model of the predator-prey interaction, or the hydraulic model of an economy, to mention just a few. All these models represent their target systems (or selected parts of them) in one way or another.
In its most common use, the term 'model' refers to a simplified and stylised version of the so-called target system, the part or aspect of the world that we are interested in. For instance, in order to determine the orbit of a planet moving around the sun we model the planet and the sun as perfect homogeneous spheres that gravitationally interact with each other but nothing else in the universe, and then apply Newtonian mechanics to this system, which reveals that the planet moves on an elliptical orbit. Views diverge about what sort of entity such a model is. Those focussing on the formal aspects of models regard them either as equations or set-theoretical structures, while those opposed to such an approach take them to be descriptions or abstract (yet non-mathematical) entities. A further question concerns the relation of models and theories. In some cases models can be derived from theory simply by specifying the relevant determinables in a theory's general equations. But many models cannot be obtained from theory in this straightforward way, and some even involve assumptions that contradict the fundamental theory. The relation of models to their respective target systems is equally complex and fraught with controversy. Two influential proposals take the relation between a model and its target to be isomorphism or similarity, respectively. This, however, has been criticised as too restrictive, as many models do not seem to fit this mould.
What does quantum field theory (QFT) tell us about the furniture of the world? Seventeen essays gathered in the four parts of Ontological Aspects of Quantum Field Theory address this question from different angles and with different objectives. Together they form a wide-ranging and up-to-date volume that makes a valuable contribution to an ongoing discussion and that, thanks to the comprehensive introduction by the editors, can be of interest to experts and novices alike.
The essays in the first part, Approaches to Ontology, explore different philosophical frameworks in which the ontology of QFT could fruitfully be examined. Despite their differences, they all agree that traditional ontologies, in particular substance-attribute ontology, are unsuitable for QFT. Peter Simons begins by pointing out why substance-attribute ontology, applied set theory, fact ontology, occurrent ontologies, and trope theory are inadequate ontologies for QFT and then puts forward his own suggestion: factored ontology. The main idea of this ontology is to posit basic features (so-called 'factors') and to view objects as suitable combinations of some of these factors. He presents an outline of a version of a factored ontology, called PACIS, which he and his collaborators have developed over the last fifteen years and which they have – in their view successfully – applied to different domains in the natural and the social sciences. Given this success, Simons is confident that this framework will also prove fruitful in the case of QFT. However, he does not give any further argument for this claim and does not attempt to formulate a concrete factored ontology of QFT. He merely puts forward his framework as a conceptual tool and leaves it to the philosopher of physics to work out an interpretation of QFT in its terms.
We experience time in different ways, and we construct different kinds of representation of time. What kinds of representation are there and how do they work? In particular, how do we integrate temporal features of the world into our understanding of the mechanisms underlying representations in the media of perception, memory, art, and narrative? Le Poidevin's well-written and carefully argued book is an exploration of these questions. Although this question is interesting in its own right, Le Poidevin pursues it as a means of exploring another pressing issue, namely the metaphysics of time. The central premise of the book is that we can learn a lot about time from ordinary representations of time, and accordingly the book is an exploration of what representations of time can tell us about the metaphysical structure of time itself. This viewpoint is justified by the adoption of a causal theory of representation, the claim that representations are causally linked to what they represent and that this is what determines both their content and their epistemic status. The central metaphysical concern of the book is the reality of the passage of time. Does time in reality pass, and can events therefore be located in the past, present, or future, or does time not pass and nothing in reality changes its position in time? In McTaggart's terms, this is the distinction between the A-theory and the B-theory of time.
... been studied extensively, little is known about how it is influenced by the osseous joint configuration. Based on lateral ... this study is to analyse the biomechanical influence of the ... clinical data on stability of the ankle joint. A two-dimensional ...
Let us begin with a characteristic example. Consider a gas that is confined to the left half of a box. Now we remove the barrier separating the two halves of the box. As a result, the gas quickly disperses, and it continues to do so until it homogeneously fills the entire box. This is illustrated in Figure 1.
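This familiar example lends itself to a quick numerical illustration. The sketch below (my own, not from the text) replaces the gas with non-interacting random walkers in a one-dimensional box; the fraction of walkers found in the left half relaxes towards 1/2, mimicking the gas spreading through the whole box.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, sigma = 10_000, 1_000, 0.05

x = rng.uniform(0.0, 0.5, size=N)        # start: all particles in the left half
for t in range(1, steps + 1):
    x += rng.normal(0.0, sigma, size=N)  # diffusive kick
    x = np.abs(x)                        # reflect at the wall x = 0
    x = 1.0 - np.abs(1.0 - x)            # reflect at the wall x = 1
    if t in (1, 10, 100, 1_000):
        print(f"step {t:5d}: fraction in left half = {(x < 0.5).mean():.3f}")
```

Running this shows the left-half fraction decaying from 1.0 towards 0.5 and then staying there, which is exactly the approach to equilibrium the example describes.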
Why do systems prepared in a non-equilibrium state approach, and eventually reach, equilibrium? An important contemporary version of the Boltzmannian approach to statistical mechanics answers this question by an appeal to the notion of typicality. The problem with this approach is that it comes in different versions, which are, however, not recognised as such, much less clearly distinguished, and we often find different arguments pursued side by side. The aim of this paper is to disentangle different versions of typicality-based explanations of thermodynamic behaviour and evaluate their respective success. My conclusion will be that the boldest version fails for technical reasons, while more prudent versions leave unanswered essential questions.
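For readers unfamiliar with the notion: in this literature, typicality is standardly explicated measure-theoretically, roughly as follows (a standard formulation, quoted for orientation):

```latex
% A property P is typical (relative to the measure \mu on the state
% space \Gamma) iff the set of states lacking P is vanishingly small:
\mu\big(\{x \in \Gamma : x \text{ lacks } P\}\big) = 0,
% or, in the weaker variant, \leq \varepsilon for some very small
% \varepsilon > 0.
```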
The time-honoured philosophical issue of how to resolve the mind/body problem has taken a more scientific turn of late. Instead of discussing soul, emotion, and personhood and their reduction to a physical form, we now ask ourselves how well-understood cognitive and social concepts fit into the growing and changing field of neuropsychology. One of the many projects that have come out of this new scientific endeavour is Zaidel's (2005) inquiry into the neuropsychological bases of art.
Entropy is ubiquitous in physics, and it plays important roles in numerous other disciplines ranging from logic and statistics to biology and economics. However, a closer look reveals a complicated picture: entropy is defined differently in different contexts, and even within the same domain different notions of entropy are at work. Some of these are defined in terms of probabilities, others are not. The aim of this chapter is to arrive at an understanding of some of the most important notions of entropy and to clarify the relations between them. In particular, we discuss the question of what kind of probabilities are involved whenever entropy is defined in terms of probabilities.
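For orientation, two of the central notions the chapter distinguishes, in their usual textbook forms (the Gibbs entropy is defined via a probability density, the Boltzmann entropy via phase-space volume):

```latex
% Gibbs entropy of a probability density \rho on the phase space \Gamma:
S_G[\rho] = -k_B \int_{\Gamma} \rho(x) \ln \rho(x)\, dx
% Boltzmann entropy of a macrostate M, where \Gamma_M \subseteq \Gamma is
% the set of microstates realising M and \mu is the phase-space volume:
S_B(M) = k_B \ln \mu(\Gamma_M)
```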
Classical statistical mechanics posits probabilities for various events to occur, and these probabilities seem to be objective chances. This does not seem to sit well with the fact that the theory’s time evolution is deterministic. We argue that the tension between the two is only apparent. We present a theory of Humean objective chance and show that chances thus understood are compatible with underlying determinism and provide an interpretation of the probabilities we find in Boltzmannian statistical mechanics.
Various scientific theories stand in a reductive relation to each other. In a recent article, we have argued that a generalized version of the Nagel-Schaffner model (GNS) is the right account of this relation. In this article, we present a Bayesian analysis of how GNS impacts on confirmation. We formalize the relation between the reducing and the reduced theory before and after the reduction using Bayesian networks, and thereby show that, post-reduction, the two theories are confirmatory of each other. We then ask when a purported reduction should be accepted on epistemic grounds. To do so, we compare the prior and posterior probabilities of the conjunction of both theories before and after the reduction and ask how well each is confirmed by the available evidence.
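The claim that the theories become confirmatory of each other post-reduction can be illustrated with a toy calculation (my own sketch; the numbers and the simple chain structure T_F → T_R → E are illustrative assumptions, not the authors' actual networks):

```python
# Illustrative toy calculation: before a reduction, evidence E bears only on
# the reduced theory T_R; once the reduction links the reducing theory T_F
# to T_R, E also confirms T_F. All numbers below are made up.

def posterior_TF(p_TF, p_TR_given_TF, p_TR_given_notTF,
                 p_E_given_TR, p_E_given_notTR):
    """P(T_F | E) in a chain T_F -> T_R -> E."""
    p_TR = p_TF * p_TR_given_TF + (1 - p_TF) * p_TR_given_notTF
    p_E = p_TR * p_E_given_TR + (1 - p_TR) * p_E_given_notTR
    # P(E | T_F), obtained by summing out T_R:
    p_E_given_TF = (p_TR_given_TF * p_E_given_TR
                    + (1 - p_TR_given_TF) * p_E_given_notTR)
    return p_TF * p_E_given_TF / p_E

# Pre-reduction: T_R is probabilistically independent of T_F,
# so E tells us nothing about T_F (posterior = prior = 0.5):
print(posterior_TF(0.5, 0.7, 0.7, 0.9, 0.2))   # -> 0.5
# Post-reduction: T_F makes T_R much more likely,
# so E now confirms T_F (posterior > 0.5):
print(posterior_TF(0.5, 0.9, 0.3, 0.9, 0.2))   # -> ~0.67
```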
The so-called ergodic hierarchy (EH) is a central part of ergodic theory. It is a hierarchy of properties that dynamical systems can possess. Its five levels are ergodicity, weak mixing, strong mixing, Kolmogorov, and Bernoulli. Although EH is a mathematical theory, its concepts have been widely used in the foundations of statistical physics, accounts of randomness, and discussions about the nature of chaos. We introduce EH and discuss its applications in these fields.
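For reference, the three lowest levels of EH admit compact measure-theoretic statements. For a measure-preserving system (X, μ, T) and all measurable sets A and B (standard definitions, quoted for orientation):

```latex
% Ergodicity: time averages of set correlations converge to the product
% of the measures,
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1}
  \mu\!\left(T^{-k}A \cap B\right) = \mu(A)\,\mu(B)
% Weak mixing: the correlations converge in time average absolutely,
\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1}
  \left|\,\mu\!\left(T^{-k}A \cap B\right) - \mu(A)\,\mu(B)\right| = 0
% Strong mixing: the correlations themselves converge,
\lim_{n \to \infty} \mu\!\left(T^{-n}A \cap B\right) = \mu(A)\,\mu(B)
% Each condition is strictly stronger than the one above it; the
% Kolmogorov and Bernoulli levels require further apparatus and are
% omitted here.
```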
Everything you always wanted to know about structural realism but were afraid to ask. Roman Frigg (Department of Philosophy, Logic and Scientific Method, London School of Economics and Political Science) and Ioannis Votsis (Philosophisches Institut, Heinrich-Heine-Universität Düsseldorf). European Journal for Philosophy of Science 1(2): 227–276. DOI: 10.1007/s13194-011-0025-7.
We reconsider the Nagelian theory of reduction and argue that, contrary to a widely held view, it is the right analysis of intertheoretic reduction, since the alleged difficulties of the theory either vanish upon closer inspection or turn out to be substantive philosophical questions rather than knock-down arguments.
Most scientific models are not physical objects, and this raises important questions. What sort of entity are models, what is truth in a model, and how do we learn about models? In this paper I argue that models share important aspects in common with literary fiction, and that therefore theories of fiction can be brought to bear on these questions. In particular, I argue that the pretence theory as developed by Walton (1990, Mimesis as Make-Believe: On the Foundations of the Representational Arts. Harvard University Press, Cambridge, MA) has the resources to answer these questions. I introduce this account, outline the answers that it offers, and develop a general picture of scientific modelling based on it.
On the face of it, 'deterministic chance' is an oxymoron: either an event is chancy or deterministic, but not both. Nevertheless, the world is rife with events that seem to be exactly that: chancy and deterministic at once. Simple gambling devices like coins and dice are cases in point. On the one hand they are governed by deterministic laws – the laws of classical mechanics – and hence, given the initial conditions of, say, a coin toss, it is determined whether it will land heads or tails. On the other hand, we commonly assign probabilities to the different outcomes of a coin toss, and doing so has proven successful in guiding our actions. The same dilemma also emerges in less mundane contexts. Classical statistical mechanics (which is still an important part of modern physics) assigns probabilities to the occurrence of certain events – for instance to the spreading of a gas that is originally confined to the left half of a container – but at the same time assumes that the relevant systems are deterministic. How can this apparent conflict be resolved?
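A quick numerical sketch of the phenomenon the paper starts from (my illustration, not from the text): a fully deterministic chaotic map can still exhibit stable, coin-toss-like frequencies.

```python
# The logistic map x -> 4x(1-x) is deterministic and chaotic; calling
# "heads" whenever x < 1/2 comes out heads about half the time for almost
# every initial condition, even though nothing in the dynamics is chancy.
import random

def heads_frequency(x0: float, n: int) -> float:
    """Iterate the logistic map from x0 and return the frequency of heads."""
    x, heads = x0, 0
    for _ in range(n):
        heads += x < 0.5          # "heads" iff the state lies in [0, 1/2)
        x = 4.0 * x * (1.0 - x)   # deterministic chaotic update
    return heads / n

random.seed(1)
for _ in range(3):
    x0 = random.random()          # the only "chancy" ingredient: the start
    print(f"x0 = {x0:.6f}  ->  heads frequency = "
          f"{heads_frequency(x0, 100_000):.3f}")
```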
An important contemporary version of Boltzmannian statistical mechanics explains the approach to equilibrium in terms of typicality. The problem with this approach is that it comes in different versions, which are, however, not recognized as such and not clearly distinguished. This article identifies three different versions of typicality-based explanations of thermodynamic-like behavior and evaluates their respective successes. The conclusion is that the first two are unsuccessful because they fail to take the system's dynamics into account. The third, however, is promising. I give a precise formulation of the proposal and present an argument in support of its central contention.
Special issue. With contributions by Anouk Barberousse, Sara Franceschelli and Cyrille Imbert, Robert Batterman, Roman Frigg and Julian Reiss, Axel Gelfert, Till Grüne-Yanoff, Paul Humphreys, James Mattingly and Walter Warwick, Matthew Parker, Wendy Parker, Dirk Schlimm, and Eric Winsberg.
Computer simulations are an exciting tool that plays important roles in many scientific disciplines. This has attracted the attention of a number of philosophers of science. The main tenor in this literature is that computer simulations not only constitute interesting and powerful new science, but that they also raise a host of new philosophical issues. The protagonists in this debate claim no less than that simulations call into question our philosophical understanding of scientific ontology, the epistemology and semantics of models and theories, and the relation between experimentation and theorising, and submit that simulations demand a fundamentally new philosophy of science in many respects. The aim of this paper is to critically evaluate these claims. Our conclusion will be sober. We argue that these claims are overblown and that simulations, far from demanding a new metaphysics, epistemology, semantics and methodology, raise few if any new philosophical problems. The philosophical problems that do come up in connection with simulations are not specific to simulations, and most of them are variants of problems that have been discussed in other contexts before.
In two recent papers Barry Loewer (2001, 2004) has suggested interpreting probabilities in statistical mechanics as chances in David Lewis's (1994) sense. I first give a precise formulation of this proposal, then raise two fundamental objections, and finally conclude that these can be overcome only at the price of interpreting these probabilities epistemically.
GRW theory postulates a stochastic mechanism assuring that every so often the wave function of a quantum system is 'hit', which leaves it in a localised state. How are we to interpret the probabilities built into this mechanism? GRW theory is a firmly realist proposal, and it is therefore clear that these probabilities are objective probabilities (i.e. chances). A discussion of the major theories of chance leads us to the conclusion that GRW probabilities can be understood only as either single-case propensities or Humean objective chances. Although single-case propensities have some intuitive appeal in the context of GRW theory, on balance it seems that Humean objective chances are preferable on conceptual grounds, because single-case propensities suffer from various well-known problems such as unlimited frequency tolerance and the lack of a rationalisation of the Principal Principle.
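For orientation, the hit mechanism in its standard textbook presentation (parameter values as usually quoted in the GRW literature):

```latex
% Each particle is subject to spontaneous "hits" at random times,
% distributed as a Poisson process with rate \lambda \sim 10^{-16}\,
% \mathrm{s}^{-1}. A hit centred at x multiplies the wave function by a
% Gaussian of width 1/\sqrt{\alpha} \sim 10^{-7}\,\mathrm{m}:
\psi \;\longrightarrow\; \frac{L_x \psi}{\lVert L_x \psi \rVert},
\qquad
L_x = \left(\frac{\alpha}{\pi}\right)^{3/4}
      \exp\!\left(-\frac{\alpha}{2}\,(\hat{q} - x)^2\right)
% The probability density for the hit to be centred at x is
% \lVert L_x \psi \rVert^2 -- this is where the chances enter.
```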
Various processes are often classified as both deterministic and random or chaotic. The main difficulty in analysing the randomness of such processes is the apparent tension between the notions of randomness and determinism: what type of randomness could exist in a deterministic process? Ergodic theory seems to offer a particularly promising theoretical tool for tackling this problem by positing a hierarchy, the so-called 'ergodic hierarchy' (EH), which is commonly assumed to provide a hierarchy of increasing degrees of randomness. However, that notion of randomness requires clarification. The mathematical definition of EH does not make explicit appeal to randomness; nor does the usual way of presenting EH involve a specification of the notion of randomness that is supposed to underlie the hierarchy. In this paper we argue that EH is best understood as a hierarchy of random behaviour if randomness is explicated in terms of unpredictability. We then show that, contrary to common wisdom, EH is useful in characterising the behaviour of Hamiltonian dynamical systems.
It is now part and parcel of the official philosophical wisdom that models are essential to the acquisition and organisation of scientific knowledge. It is also generally accepted that most models represent their target systems in one way or another. But what does it mean for a model to represent its target system? I begin by introducing three conundrums that a theory of scientific representation has to come to terms with and then address the question of whether the semantic view of theories, which is the currently most widely accepted account of theories and models, provides us with adequate answers to these questions. After having argued in some detail that it does not, I conclude by pointing out in what direction a tenable account of scientific representation might be sought.
Models are of central importance in many scientific contexts. The centrality of models such as the billiard ball model of a gas, the Bohr model of the atom, the MIT bag model of the nucleon, the Gaussian-chain model of a polymer, the Lorenz model of the atmosphere, the Lotka-Volterra model of predator-prey interaction, the double helix model of DNA, agent-based and evolutionary models in the social sciences, or general equilibrium models of markets in their respective domains are cases in point. Scientists spend a great deal of time building, testing, comparing and revising models, and much journal space is dedicated to introducing, applying and interpreting these valuable tools. In short, models are one of the principal instruments of modern science.

Philosophers are acknowledging the importance of models with increasing attention and are probing the assorted roles that models play in scientific practice. The result has been an incredible proliferation of model-types in the philosophical literature. Probing models, phenomenological models, computational models, developmental models, explanatory models, impoverished models, testing models, idealized models, theoretical models, scale models, heuristic models, caricature models, didactic models, fantasy models, toy models, imaginary models, mathematical models, substitute models, iconic models, formal models, analogue models and instrumental models are but some of the notions that are used to categorize models. While at first glance this abundance is overwhelming, it can quickly be brought under control by recognizing that these notions pertain to different problems that arise in connection with models. For example, models raise questions in semantics (what is the representational function that models perform?), ontology (what kind of things are models?), epistemology (how do we learn with models?), and, of course, in philosophy of science (how do models relate to theory? What are the implications of a model-based approach to science for the debates over scientific realism, reductionism, explanation and laws of nature?).
Models are of central importance in many scientific contexts. The roles the MIT bag model of the nucleon, the billiard ball model of a gas, the Bohr model of the atom, the Gaussian-chain model of a polymer, the Lorenz model of the atmosphere, the Lotka-Volterra model of predator-prey interaction, agent-based and evolutionary models of social interaction, or general equilibrium models of markets play in their respective domains are cases in point.