Robert Batterman examines a form of scientific reasoning called asymptotic reasoning, arguing that it has important consequences for our understanding of the scientific process as a whole. He maintains that asymptotic reasoning is essential for explaining what physicists call universal behavior. With clarity and rigor, he simplifies complex questions about universal behavior, demonstrating a profound understanding of the underlying structures that ground them. This book introduces a valuable new method that is certain to fill explanatory gaps across disciplines.
This article discusses minimal model explanations, which we argue are distinct from the various causal, mechanical, and difference-making strategies prominent in the philosophical literature. We contend that what accounts for the explanatory power of these models is not that they have certain features in common with real systems. Rather, the models are explanatory because of a story about why a whole class of systems will display the same large-scale behavior: the details that distinguish them are irrelevant. This story explains patterns across extremely diverse systems and shows how minimal models can be used to understand real systems.
This paper examines contemporary attempts to explicate the explanatory role of mathematics in the physical sciences. Most such approaches involve developing so-called mapping accounts of the relationships between the physical world and mathematical structures. The paper argues that the use of idealizations in physical theorizing poses serious difficulties for such mapping accounts. A new approach to the applicability of mathematics is proposed.
This paper examines the role of mathematical idealization in describing and explaining various features of the world. It examines two cases: first, briefly, the modeling of shock formation using the idealization of the continuum; second, and in more detail, the breaking of droplets from the points of view of both analytic fluid mechanics and molecular dynamical simulations at the nano-level. It argues that the continuum idealizations are explanatorily ineliminable and that a full understanding of certain physical phenomena cannot be obtained through completely detailed, non-idealized representations.
This paper aims to draw attention to an explanatory problem posed by the existence of multiply realized or universal behavior exhibited by certain physical systems. The problem is to explain how it is possible that systems radically distinct at lower scales can nevertheless exhibit identical or nearly identical behavior at upper scales. Theoretically, this is reflected in the fact that continuum theories such as fluid mechanics are spectacularly successful at predicting, describing, and explaining fluid behaviors despite the fact that they do not recognize the discrete nature of fluids. A standard attempt to reduce one theory to another is shown to fail to answer the appropriate question about autonomy.
This paper looks at emergence in physical theories and argues that an appropriate way to understand so-called “emergent protectorates” is via the explanatory apparatus of the renormalization group. It is argued that mathematical singularities play a crucial role in our understanding of at least some well-defined emergent features of the world.
This paper concerns what Jerry Fodor calls a 'metaphysical mystery': How can there be macroregularities that are realized by wildly heterogeneous lower-level mechanisms? But the answer to this question is not as mysterious as many, including Jaegwon Kim, Ned Block, and Jerry Fodor, might think. The multiple realizability of the properties of the special sciences such as psychology is best understood as a kind of universality, where 'universality' is used in the technical sense one finds in the physics literature. It is argued that the same explanatory strategy used by physicists to provide understanding of universal behavior in physics can be used to explain how special science properties can be heterogeneously multiply realized.
A traditional view of mathematical modeling holds, roughly, that the more details of the phenomenon being modeled that are represented in the model, the better the model is. This paper argues that this 'more details is better' approach is often misguided. One ought, in certain circumstances, to search for an exactly solvable minimal model—one which is, essentially, a caricature of the physics of the phenomenon in question.
This paper examines a fundamental problem in applied mathematics: How can one model the behavior of materials that display radically different, dominant behaviors at different length scales? Although we have good models for material behaviors at small and large scales, it is often hard to relate these scale-based models to one another. Macroscale models represent the integrated effects of very subtle factors that are practically invisible at the smallest, atomic, scales. For this reason it has been notoriously difficult to model realistic materials with a simple bottom-up-from-the-atoms strategy. The widespread failure of that strategy forced physicists interested in the overall macro-behavior of materials toward completely top-down modeling strategies familiar from traditional continuum mechanics. The problem of the "tyranny of scales" asks whether we can exploit our rather rich knowledge of intermediate microscale behaviors in a manner that would allow us to bridge between these two dominant methodologies. Macroscopic-scale behaviors often fall into large common classes, such as the class of isotropic elastic solids, characterized by two phenomenological parameters—so-called elastic coefficients. Can we employ knowledge of lower-scale behaviors to understand this universality—to determine the coefficients and to group the systems into classes exhibiting similar behavior?
In its broadest sense, "universality" is a technical term for something quite ordinary. It refers to the existence of patterns of behavior by physical systems that recur and repeat despite the fact that, in some sense, the situations in which these patterns recur are different. Rainbows, for example, always exhibit the same pattern of spacings and intensities of their bows despite the fact that the rain showers are different on each occasion. They are different because the shapes of the drops and their sizes can vary quite widely due to differences in temperature, wind direction, etc. There are different questions one might ask about such patterns. For instance, one might ask why the particular rainbow...
This paper addresses issues surrounding the concept of geometric phase or "anholonomy". Certain physical phenomena apparently require, for their explanation and understanding, reference to topological/geometric features of some abstract space of parameters. These issues are related to the question of how gauge structures are to be interpreted and whether or not the debate over their "reality" is really going to be fruitful.
This paper addresses a relatively common scientific (as opposed to philosophical) conception of intertheoretic reduction between physical theories. This is the sense of reduction in which one (typically newer and more refined) theory is said to reduce to another (typically older and coarser) theory in the limit as some small parameter tends to zero. Three examples of such reductions are discussed: first, the reduction of Special Relativity (SR) to Newtonian Mechanics (NM) as (v/c)² → 0; second, the reduction of wave optics to geometrical optics as λ → 0; and third, the reduction of Quantum Mechanics (QM) to Classical Mechanics (CM) as ℏ → 0. I argue for the following two claims. First, the case of SR reducing to NM is an instance of a genuine reductive relationship while the latter two cases are not. The reason for this concerns the nature of the limiting relationships between the theory pairs. In the SR/NM case, it is possible to consider SR as a regular perturbation of NM; whereas in the cases of wave and geometrical optics and QM/CM, the perturbation problem is singular. The second claim I wish to support is that as a result of the singular nature of the limits between these theory pairs, it is reasonable to maintain that third theories exist describing the asymptotic limiting domains. In the optics case, such a theory has been called catastrophe optics. In the QM/CM case, it is semiclassical mechanics. Aspects of both theories are discussed in some detail.
Mesoscale modeling is often considered merely as a practical strategy used when information on lower-scale details is lacking, or when there is a need to make models cognitively or computationally tractable. Without dismissing the importance of practical constraints for modeling choices, we argue that mesoscale models should not just be considered as abbreviations or placeholders for more “complete” models. Because many systems exhibit different behaviors at various spatial and temporal scales, bottom-up approaches are almost always doomed to fail. Mesoscale models capture aspects of multi-scale systems that cannot be parameterized by simple averaging of lower-scale details. To understand the behavior of multi-scale systems, it is essential to identify mesoscale parameters that “code for” lower-scale details in a way that relates them to phenomena intermediate between microscopic and macroscopic features. We illustrate this point using examples of modeling of multi-scale systems in materials science and biology, where identification of material parameters such as stiffness or strain is a central step. The examples illustrate important aspects of a so-called “middle-out” modeling strategy. Rather than attempting to model the system bottom-up, one starts at intermediate scales where systems exhibit behaviors distinct from those at the atomic and continuum scales. One then seeks to upscale and downscale to gain a more complete understanding of the multi-scale system. The cases highlight how parameterization of lower-scale details not only enables tractable modeling but is also central to understanding functional and organizational features of multi-scale systems.
I respond to Belot's argument and defend the view that sometimes `fundamental theories' are explanatorily inadequate and need to be supplemented with certain aspects of less fundamental `theories emeritus'.
This Handbook provides an overview of many of the topics that currently engage philosophers of physics. It surveys new issues and the problems that have become a focus of attention in recent years. It also provides up-to-date discussions of the still very important problems that dominated the field in the past.
Discussions of the foundations of Classical Equilibrium Statistical Mechanics (SM) typically focus on the problem of justifying the use of a certain probability measure (the microcanonical measure) to compute average values of certain functions. One would like to be able to explain why the equilibrium behavior of a wide variety of distinct systems (different sorts of molecules interacting with different potentials) can be described by the same averaging procedure. A standard approach is to appeal to ergodic theory to justify this choice of measure. A different approach, eschewing ergodicity, was initiated by A. I. Khinchin. Both explanatory programs have been subjected to severe criticisms. This paper argues that the Khinchin type program deserves further attention in light of relatively recent results in understanding the physics of universal behavior.
This paper considers definitions of classical dynamical chaos that focus primarily on notions of predictability and computability, sometimes called algorithmic complexity definitions of chaos. I argue that accounts of this type are seriously flawed. They focus on a likely consequence of chaos, namely, randomness in behavior, which gets characterized in terms of the unpredictability or uncomputability of final states given initial states. In doing so, however, they can overlook the definitive feature of dynamical chaos: the fact that the underlying motion generating the behavior exhibits extreme trajectory instability. I formulate a simple criterion of adequacy for any definition of chaos and show how such accounts fail to satisfy it.
I discuss recent work in ergodic theory and statistical mechanics, regarding the compatibility and origin of random and chaotic behavior in deterministic dynamical systems. A detailed critique of some quite radical proposals of the Prigogine school is given. I argue that their conclusion regarding the conceptual bankruptcy of the classical conceptions of an exact microstate and unique phase space trajectory is not completely justified. The analogy they want to draw with quantum mechanics is not sufficiently close to support their most radical conclusion.
This article attempts to address the problem of the applicability of mathematics in physics by considering the (narrower) question of what makes the so-called special functions of mathematical physics special. It surveys a number of answers to this question and argues that neither simple pragmatic answers nor purely mathematical classificatory schemes are sufficient. What is required is some connection between the world and the way investigators are forced to represent the world.
This paper considers the relationship between continuum hydrodynamics and discrete molecular dynamics in the context of explaining the behavior of breaking droplets. It is argued that the idealization of a fluid as a continuum is actually essential for a full explanation of the drop breaking phenomenon and that, therefore, the less "fundamental," emergent hydrodynamical theory plays an ineliminable role in our understanding.
Our aim is to discover whether the notion of algorithmic orbit-complexity can serve to define “chaos” in a dynamical system. We begin with a mostly expository discussion of algorithmic complexity and certain results of Brudno, Pesin, and Ruelle (BRP theorems) which relate the degree of exponential instability of a dynamical system to the average algorithmic complexity of its orbits. When one speaks of predicting the behavior of a dynamical system, one usually has in mind one or more variables in the phase space that are of particular interest. To say that the system is unpredictable is, roughly, to say that one cannot feasibly determine future values of these variables from an approximation of the initial conditions of the system. We introduce the notions of restricted exponential instability and conditional orbit-complexity, and announce a new and rather general result, similar in spirit to the BRP theorems, establishing average conditional orbit-complexity as a lower bound for the degree of restricted exponential instability in a dynamical system. The BRP theorems require the phase space to be compact and metrizable. We construct a noncompact kicked rotor dynamical system of physical interest, and show that the relationship between orbit-complexity and exponential instability fails to hold for this system. We conclude that orbit-complexity cannot serve as a general definition of “chaos.”
I discuss a broad critique of the classical approach to the foundations of statistical mechanics (SM) offered by N. S. Krylov. He claims that the classical approach is in principle incapable of providing the foundations for interpreting the "laws" of statistical physics. Most intriguing are his arguments against adopting a de facto attitude towards the problem of irreversibility. I argue that the best way to understand his critique is as setting the stage for a positive theory which treats SM as a theory in its own right, involving a completely different conception of a system's state. Whereas the orthodox approach treats SM as an extension of the classical or quantum theories (one which deals with large systems), Krylov advocates a major break with the traditional view of statistical physics.
This paper discusses a conception of physics as a collection of theories that, from a logical point of view, is inconsistent. It is argued that this logical conception of the relations between physical theories is too crude. Mathematical subtleties allow for a much more nuanced and sophisticated understanding of the relations between different physical theories.
I. Prigogine has proposed, and the writings of N. S. Krylov to some extent suggest, a novel and unorthodox solution to foundational problems in statistical mechanics. In particular, the view claims to offer new insight into two interconnected problems: understanding the role of probability in physics, and that of reconciling the irreversibility of physical processes with the temporal symmetry of dynamical theories. The approach in question advocates a conception of the state of a system which incorporates features of the quantum mechanical state concept in a context, classical statistical mechanics, where quantum considerations are generally considered to be irrelevant. I examine the plausibility of this new approach by offering an analysis of the various notions of state employed in modern physics. In the first chapter, I analyze the conceptual connections between dynamical laws and the nature of a system's state. I argue that laws and states are correlative. In constructing dynamical theories one does not start with a fixed or pre-determined state concept. Neither is one given the laws of the theory from which the conception of state is derived. Rather, we get the law/state structure as a "package." In light of this general analysis, I next examine the notion of state employed in the quantum theory. Here I consider a variety of conceptions of quantum states and assess their ability to answer the "paradoxes" of quantum theory. I pay particular attention to the role of probability and related restrictions on the realization of certain states. The new approach to statistical mechanics proposes to exploit similar restrictions on states in order to resolve the irreversibility problem. But is this unorthodox approach viable? In the final four chapters, I offer a detailed critique of this approach, examining the plausibility of the radical reworking of the state concept.
I argue that while some important progress can be made, certain old puzzles remain, and new and difficult ones arise--ones which raise serious doubts about the ultimate success of this particular approach. I conclude, however, by arguing that such radical proposals are not unmotivated; and that novel and unorthodox proposals concerning the foundations of statistical mechanics must be taken seriously.
Top-down causation is often taken to be a metaphysically suspicious type of causation that is found in a few complex systems, such as in human mind-body relations. However, as Ellis and others have shown, top-down causation is ubiquitous in physics as well as in biology. Top-down causation occurs whenever specific dynamic behaviors are realized or selected among a broader set of possible lower-level states. Thus understood, the occurrence of dynamic and structural patterns in physical and biological systems presents a problem for reductionist positions. We illustrate with examples of universality and functional equivalence classes how higher-level behaviors can be multiply realized by distinct lower-level systems or states. Multiple realizability in both contexts entails what Ellis calls “causal slack” between levels, or what others understand as relative explanatory autonomy. To clarify these notions further, we examine procedures for upscaling in multi-scale modeling. We argue that simple averaging strategies for upscaling only work for simplistic homogeneous systems, because of the scale-dependency of characteristic behaviors in multi-scale systems. We suggest that this interpretation has implications for what Ellis calls mechanical top-down causation, as it presents a stronger challenge to reductionism than typically assumed.