I provide an explicit formulation of empirical adequacy, the central concept of constructive empiricism, and point out a number of problems. Based on one of the inspirations for empirical adequacy, I generalize the notion of a theory to avoid implausible presumptions about the relation of theoretical concepts and observations, and generalize empirical adequacy with the help of approximation sets to allow for lack of knowledge, approximations, and successive gain of knowledge and precision. As a test case, I provide an application of these generalizations to a simple interference phenomenon.
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
This book addresses a topic that has received little or no attention in orthodox epistemology. Typical epistemological investigation focuses almost exclusively on knowledge, where knowing that something is the case importantly implies that what is believed is strictly true. This condition on knowledge is known as factivity and it is, to be sure, a bit of epistemological orthodoxy. So, if a belief is to qualify as knowledge according to the orthodox view it cannot be false. There is also an increasingly influential group of epistemologists who argue that one ought to act only on what one knows, because truth of belief is the surest way to guarantee that our actions work out as planned. They defend what is known as the knowledge norm for action. This view is typically justified in virtue of the idea that successful intentional action should stem from knowledge because knowledge is a factive propositional attitude and it is, as a result, success-prone with respect to intentional action. In other words, true beliefs that constitute knowledge are the rationally normative standard for successful acting. But there are clearly multitudes of cases where epistemic agents operate successfully and even rationally on the basis of beliefs that are false. That this is the case is not especially controversial. Sometimes false beliefs facilitate successful action. Of course, this can be because the agent is simply lucky. However, there is something particularly important about some false beliefs that relates to successful action but not in virtue of luck. While not strictly true, the beliefs in question are close to the truth or approximately true. This gives rise to the possibility that there are knowledge-like states that play roles very similar to knowledge but which are only quasi-factive. That is to say, such states imply approximate truth and do not imply strict truth. Moreover, often such beliefs facilitate successful action in virtue of their being approximately true, and this suggests a much more plausible norm for successful action that encompasses both cases of rational action guided by such quasi-factive states as well as cases of rational action guided by knowledge. The thesis of this book is that quasi-factive knowledge-like states are far more common than epistemologists have acknowledged, and the book introduces a theory of such states and how they give rise to a much more reasonable account of the norms for action. This involves some tricky issues concerning approximate truth, the rationality of believing not strictly true claims, the justification of approximately true beliefs, the nature of false but approximately true evidence, the norms of belief, knowledge and quasi-knowledge, etc.
Maps and mapping raise questions about models and modeling in science. This chapter archives map discourse in the founding generation of philosophers of science (e.g., Rudolf Carnap, Nelson Goodman, Thomas Kuhn, and Stephen Toulmin) and in the subsequent generation (e.g., Philip Kitcher, Helen Longino, and Bas van Fraassen). In focusing on these two original framing generations of philosophy of science, I intend to remove us from the heat of contemporary discussions of abstraction, representation, and practice of science and thereby see in a more distant and neutral light the many productive ways in which maps can stand in analytically for scientific theories and models. The chapter concludes by complementing the map analogy – i.e., a scientific theory is a map of the world – with a model analogy, viz., a scientific model is a vehicle for understanding.
I argue that molecules may not have structure in isolation. I support this by investigating how quantum models identify structure for isolated molecules. Specifically, I distinguish between two sets of models: those that identify structure in isolation and those that do not. The former identify structure because they presuppose structural information about the target system via the Born-Oppenheimer approximation. However, it is an idealisation to assume structure in isolation because there is no empirical evidence of this. In fact, whenever structure is empirically examined it is always partially determined by factors that are absent in isolation. Together with the growing empirical evidence that isolated molecules behave in non-classical ways, this shows that the quantum models that do not identify structure are more faithful representations of isolated molecules.
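As background for the role the Born-Oppenheimer approximation plays in this argument, the following is a standard textbook sketch (not taken from the paper) of the factorization it licenses. The point at issue is that the clamped-nuclei electronic problem takes a nuclear geometry R as an input, which is why models built on it can be said to presuppose, rather than derive, structural information.

```latex
% Standard clamped-nuclei (Born--Oppenheimer) factorization; background only.
% The molecular wavefunction splits into an electronic part, solved with the
% nuclei fixed at a chosen configuration R, and a nuclear part:
\[
  \Psi(\mathbf{r}, \mathbf{R}) \;\approx\;
  \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\, \chi_{\mathrm{nuc}}(\mathbf{R}),
  \qquad
  \hat{H}_{\mathrm{el}}(\mathbf{R})\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
  = E_{\mathrm{el}}(\mathbf{R})\, \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R}).
\]
% The fixed nuclear configuration R enters as a presupposition of the
% electronic calculation rather than as one of its outputs.
```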
This paper is a response to Baumann's comments on "Can Knowledge Really be Non-factive?" In this paper Baumann's suggestions for how those who deny the factivity of knowledge might deal with the argument from inconsistency and explosion are addressed.
In philosophical studies regarding mathematical models of dynamical systems, instability due to sensitive dependence on initial conditions, on the one side, and instability due to sensitive dependence on model structure, on the other, have by now been extensively discussed. Yet there is a third kind of instability, which by contrast has thus far been rather overlooked, that is also a challenge for model predictions about dynamical systems. This is the numerical instability due to the employment of numerical methods involving a discretization process, where discretization is required to solve the differential equations of dynamical systems on a computer. We argue that the criteria for numerical stability, as usually provided by numerical analysis textbooks, are insufficient, and, after mentioning the promising development of backward analysis, we discuss to what extent, in practice, numerical instability can be controlled or avoided.
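A minimal sketch (my illustration, not the authors' example) of the kind of numerical instability at issue: the forward Euler discretization of a perfectly stable linear differential equation diverges once the step size crosses its stability threshold, so the failure lies in the numerical method rather than in the modeled dynamics.

```python
# Forward Euler applied to dx/dt = -50*x, whose exact solution decays to 0.
# The method is stable only for step sizes h < 2/50 = 0.04; beyond that the
# discretized iteration oscillates and blows up even though the continuous
# system is stable. Values are illustrative.

def forward_euler(f, x0, h, steps):
    """Iterate x_{n+1} = x_n + h * f(x_n) and return the trajectory."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + h * f(x)
        trajectory.append(x)
    return trajectory

decay = lambda x: -50.0 * x  # exact solution: x(t) = x0 * exp(-50 t)

stable = forward_euler(decay, x0=1.0, h=0.01, steps=100)    # tracks the decay
unstable = forward_euler(decay, x0=1.0, h=0.05, steps=100)  # numerically unstable

print(f"h=0.01: final value {stable[-1]:.3e}")    # ~1e-30, close to the truth
print(f"h=0.05: final value {unstable[-1]:.3e}")  # astronomically large
```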
While the predominant focus of the philosophical literature on scientific modeling has been on single-scale models, most systems in nature exhibit complex multiscale behavior, requiring new modeling methods. This challenge of modeling phenomena across a vast range of spatial and temporal scales has been called the tyranny of scales problem. Drawing on research in the geosciences, I synthesize and analyze a number of strategies for taming this tyranny in the context of conceptual, physical, and mathematical modeling. This includes several strategies that can be deployed in physical modeling, even when strict dynamical scaling fails. In all cases, I argue that having an adequate conceptual model—given both the nature of the system and the particular purpose of the model—is essential. I draw a distinction between depiction and representation, and use this research in the geosciences to advance a number of debates in the philosophy of modeling.
I show that extant attempts to capture and generalize empirical adequacy in terms of partial structures fail. Indeed, the motivations for the generalizations in the partial structures approach are better met by the generalizations via approximation sets developed in “Generalizing Empirical Adequacy I”. Approximation sets also generalize partial structures.
This article traces the origins of Kenneth Wilson's conception of effective field theories (EFTs) in the 1960s. I argue that what really made the difference in Wilson's path to his first prototype of EFT are his long-standing pragmatic aspirations and methodological commitments. Wilson's primary interest was to work on mathematically interesting physical problems and he thought that progress could be made by treating them as if they could be analyzed in principle by a sufficiently powerful computer. The first point explains why he had no qualms about twisting the structure of field theories; the second why he divided the state-space of a toy model field theory into continuous slices by following a standard divide-and-conquer algorithmic strategy instead of working directly with a fully discretized and finite theory. I also show how Wilson's prototype bears the mark of these aspirations and commitments and clear up a few striking ironies along the way.
The notion of understanding occupies an increasingly prominent place in contemporary epistemology, philosophy of science, and moral theory. A central and ongoing debate about the nature of understanding is how it relates to the truth. In a series of influential contributions, Catherine Elgin has used a variety of familiar motivations for antirealism in philosophy of science to defend a non-factive theory of understanding. Key to her position are: (i) the fact that false theories can contribute to the upwards trajectory of scientific understanding, and (ii) the essential role of inaccurate idealisations in scientific research. Using Elgin's arguments as a foil, I show that a strictly factive theory of understanding has resources with which to offer a unified response to both the problem of idealisations and the role of false theories in the upwards trajectory of scientific understanding. Hence, strictly factive theories of understanding are viable notwithstanding these forceful criticisms.
Mauricio Suárez and Agnes Bolinska apply the tools of communication theory to scientific modeling in order to characterize the informational content of a scientific model. They argue that when represented as a communication channel, a model source conveys information about its target, and that such representations are therefore appropriate whenever modeling is employed for informational gain. They then extract two consequences. First, the introduction of idealizations is akin in informational terms to the introduction of noise in a signal; for in an idealization we introduce 'extraneous' elements into the model that have no correlate in the target. Second, abstraction in a model is informationally equivalent to equivocation in the signal; for in an abstraction we "neglect" in the model certain features that obtain in the target. They conclude that it becomes possible in principle to quantify idealization and abstraction in informative models, although precise absolute quantification will be difficult to achieve in practice.
This monograph offers a critical introduction to current theories of how scientific models represent their target systems. Representation is important because it allows scientists to study a model to discover features of reality. The authors provide a map of the conceptual landscape surrounding the issue of scientific representation, arguing that it consists of multiple intertwined problems. They provide an encyclopaedic overview of existing attempts to answer these questions, and they assess their strengths and weaknesses. The book also presents a comprehensive statement of their alternative proposal, the DEKI account of representation, which they have developed over the last few years. They show how the account works in the case of material as well as non-material models; how it accommodates the use of mathematics in scientific modelling; and how it sheds light on the relation between representation in science and art. The issue of representation has generated a sizeable literature, which has been growing fast in particular over the last decade. This makes it hard for novices to get a handle on the topic because so far there is no book-length introduction that would guide them through the discussion. Likewise, researchers may require a comprehensive review that they can refer to for critical evaluations. This book meets the needs of both groups.
Over the last decades, network-based approaches have become highly popular in diverse fields of biology, including neuroscience, ecology, molecular biology and genetics. While these approaches continue to grow very rapidly, some of their conceptual and methodological aspects still require a programmatic foundation. This challenge particularly concerns the question of whether a generalized account of explanatory, organisational and descriptive levels of networks can be applied universally across biological sciences. To this end, this highly interdisciplinary theme issue focuses on the definition, motivation and application of key concepts in biological network science, such as the explanatory power of distinctively network explanations, network levels, and network hierarchies.
A common strategy for simplifying complex systems involves partitioning them into subsystems whose behaviors are roughly independent of one another at shorter timescales. Dynamic causal models clarify how doing so reveals a system's nonequilibrium causal relationships. Here I use these models to elucidate the idealizations and abstractions involved in representing a system at a timescale. The models reveal that key features of causal representations—such as which variables are exogenous—may vary with the timescale at which a system is considered. This has implications for debates regarding which systems can be represented causally.
In this note, I apply Norton's (Philos Sci 79(2):207–232, 2012) distinction between idealizations and approximations to argue that the epistemic and inferential advantages often taken to accrue to minimal models (Batterman in Br J Philos Sci 53:21–38, 2002) could apply equally to approximations, including "infinite" ones for which there is no consistent model. This shows that the strategy of capturing essential features through minimality extends beyond models, even though the techniques for justifying this extended strategy remain similar. As an application I consider the justification and advantages of the approximation of an inertial reference frame in Norton's dome scenario (Philos Sci 75(5):786–798, 2008), thereby answering a question raised by Laraudogoitia (Synthese 190(14):2925–2941, 2013).
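For readers unfamiliar with the scenario mentioned at the end, Norton's dome is standardly presented (in suitable units; background only, not this paper's analysis) by an equation of motion that admits both the trivial rest solution and spontaneous-motion solutions:

```latex
% Standard presentation of Norton's dome: r is the radial distance of a mass
% from the apex, and the Newtonian equation of motion on the dome is
\[
  \frac{d^2 r}{dt^2} = \sqrt{r},
  \qquad
  r(t) =
  \begin{cases}
    0, & t \le T,\\[2pt]
    \tfrac{1}{144}\,(t - T)^4, & t \ge T,
  \end{cases}
\]
% for any time T. Both r(t) = 0 and the piecewise solution satisfy the same
% initial conditions r(0) = 0, r'(0) = 0, so the mass may rest at the apex
% forever or begin to slide at an arbitrary moment: determinism fails.
```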
The replicator dynamics and Moran process are the main deterministic and stochastic models of evolutionary game theory. The models are connected by a mean-field relationship—the former describes the expected behavior of the latter. However, there are conditions under which their predictions diverge. I demonstrate that the divergence between their predictions is a function of standard techniques used in their analysis and of differences in the idealizations involved in each. My analysis reveals problems for stochastic stability analysis in a broad class of games, demonstrates a novel domain of agreement between the dynamics, and indicates a broader moral for evolutionary modeling.
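A hedged sketch (my illustration, not the paper's analysis) of the mean-field relationship mentioned above: a Moran process for a two-strategy game simulated alongside an Euler-integrated replicator equation with the same payoffs. The payoff values and the rescaling of one birth-death event to 1/N of a generation are illustrative assumptions.

```python
import random

# Illustrative payoffs for a two-strategy (A/B) game with a stable mixed
# equilibrium at x* = 0.5; the numbers are assumptions for this sketch.
a, b, c, d = 1.0, 3.0, 2.0, 2.0   # payoffs: A vs A, A vs B, B vs A, B vs B
N = 1000                          # population size

def moran_step(i):
    """One birth-death event given i A-players: birth chosen proportional to
    fitness, death chosen uniformly at random."""
    fA = (a * (i - 1) + b * (N - i)) / (N - 1)
    fB = (c * i + d * (N - i - 1)) / (N - 1)
    birth_A = random.random() < i * fA / (i * fA + (N - i) * fB)
    death_A = random.random() < i / N
    return i + int(birth_A) - int(death_A)

def replicator_step(x, dt):
    """Euler step of the replicator equation dx/dt = x(1-x)(fA(x) - fB(x))."""
    fA, fB = a * x + b * (1 - x), c * x + d * (1 - x)
    return x + dt * x * (1 - x) * (fA - fB)

i, x = N // 5, 0.2                # start both with 20% A-players
for _ in range(20 * N):           # roughly 20 generations
    i = moran_step(i)
    x = replicator_step(x, dt=1.0 / N)

print(f"Moran frequency of A:      {i / N:.3f}")
print(f"Replicator frequency of A: {x:.3f}")
# With these payoffs both trajectories approach the mixed equilibrium near 0.5;
# for other games, or for small N, the stochastic and deterministic predictions
# can come apart, which is the kind of divergence the paper examines.
```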
In this essay, I provide an overview of the debate on infinite and essential idealizations in physics. I will first present two ostensible examples: phase transitions and the Aharonov–Bohm effect. Then, I will describe the literature on the topic as a debate between two positions: Essentialists claim that idealizations are essential or indispensable for scientific accounts of certain physical phenomena, while dispensabilists maintain that idealizations are dispensable from mature scientific theory. I will also identify some attempts at finding a middle ground between the essentialist and dispensabilist camps. Finally, I will raise questions for future research on essential and infinite idealizations via the notion of exploration.
In her new book Reconstructing Reality (henceforth RR), Margaret Morrison's main target is the kind of information about the world (or, more specifically, about physical and biological systems) one can extract from the 'reconstructive methods and practices of science' (p. 1). To address this, Morrison focuses on three central kinds of interrelated strategies for 'recasting nature' (p. 2) by using reconstructive methods and practices: (i) abstract mathematical explanations and understanding (Part 1 of the book), (ii) scientific models (Part 2), and (iii) computer simulations (Part 3). Hence, RR is the ambitious attempt to connect and analyse three major topics in current philosophy of science.
The main question addressed in this paper is whether some false sentences can constitute evidence for the truth of other propositions. In this paper it is argued that there are good reasons to suspect that at least some false propositions can constitute evidence for the truth of certain other contingent propositions. The paper also introduces a novel condition concerning propositions that constitute evidence that explains a ubiquitous evidential practice and it contains a defense of a particular condition concerning the possession of evidence. The core position adopted here then is that false propositions that are approximately true reports of measurements can constitute evidence for the truth of other propositions. So, it will be argued that evidence is only quasi-factive in this very specific sense.
one takes to be the most salient, any pair could be judged more similar to each other than to the third. Goodman uses this second problem to show that there can be no context-free similarity metric, either in the trivial case or in a scientifically ...
Partial explanations are everywhere. That is, explanations citing causes that explain some but not all of an effect are ubiquitous across science, and these in turn rely on the notion of degree of explanation. I argue that current accounts are seriously deficient. In particular, they do not incorporate adequately the way in which a cause's explanatory importance varies with choice of explanandum. Using influential recent contrastive theories, I develop quantitative definitions that remedy this lacuna, and relate it to existing measures of degree of causation. Among other things, this reveals the precise role here of chance, as well as bearing on the relation between causal explanation and causation itself.
It is proposed that we use the term “approximation” for inexact description of a target system and “idealization” for another system whose properties also provide an inexact description of the target system. Since systems generated by a limiting process can often have quite unexpected, even inconsistent properties, familiar limit systems used in statistical physics can fail to provide idealizations, but are merely approximations. A dominance argument suggests that the limiting idealizations of statistical physics should be demoted to approximations.
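A toy worked example (mine, not the paper's) of the phenomenon this abstract describes, namely that limit systems can have properties no member of the approximating sequence has:

```latex
% Each member of the sequence is continuous, yet the limit is not:
\[
  f_n(x) = \tanh(nx) \ \text{is continuous for every finite } n,
  \qquad
  \lim_{n\to\infty} f_n(x) =
  \begin{cases}
    -1, & x < 0,\\
    \phantom{-}0, & x = 0,\\
    \phantom{-}1, & x > 0.
  \end{cases}
\]
% The analogous move in statistical physics is the thermodynamic limit
% N -> infinity, where free energies can acquire non-analyticities (phase
% transitions) that are absent in every finite-N system; this is why the
% limit system can fail to serve as an idealization of the target.
```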
In this paper Timothy Williamson's argument that the knowledge norm of assertion is the best explanation of the unassertability of Moorean sentences is challenged and an alternative account of the norm of assertion is defended.
Recently a number of variously motivated epistemologists have argued that knowledge is closely tied to practical matters. On the one hand, radical pragmatic encroachment is the view that facts about whether an agent has knowledge depend on practical factors, and this is coupled to the view that there is an important connection between knowledge and action. On the other hand, one can argue only for the less radical thesis that there is an important connection between knowledge and practical reasoning. So, defenders of both of these views endorse the view that knowledge is the norm of practical reasoning. This thesis has recently come under heavy fire and a number of weaker proposals have been defended. In this paper counter-examples to the knowledge norm of reasoning will be presented and it will be argued that this view and a number of related but weaker views cannot be sustained in the face of these counter-examples. The paper concludes with a novel proposal concerning the norm of practical reasoning that is immune to the counter-examples introduced here.
This paper, which is based on recent empirical research at the University of Leeds, the University of Edinburgh, and the University of Bristol, presents two difficulties which arise when condensed matter physicists interact with molecular biologists: (1) the former use models which appear to be too coarse-grained, approximate and/or idealized to serve a useful scientific purpose to the latter; and (2) the latter have a rather narrower view of what counts as an experiment, particularly when it comes to computer simulations, than the former. It argues that these findings are related; that computer simulations are considered to be undeserving of experimental status, by molecular biologists, precisely because of the idealizations and approximations that they involve. The complexity of biological systems is a key factor. The paper concludes by critically examining whether the new research programme of 'systems biology' offers a genuine alternative to the modelling strategies used by physicists. It argues that it does not.
We study Δ2 reals x in terms of how they can be approximated symmetrically by a computable sequence of rationals. We deal with a natural notion of 'approximation representation' and study how these are related computationally for a fixed x. This is a continuation of earlier work; it aims at a classification of Δ2 reals based on approximation, and this classification turns out to be quite different from the existing ones (based on information content, etc.).
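For background (a standard fact, not the paper's own notion of 'approximation representation'): by the Shoenfield limit lemma, the Δ2 reals are exactly the reals with a computable rational approximating sequence.

```latex
% Standard characterization of the Delta_2 reals via the limit lemma:
\[
  x \ \text{is}\ \Delta_2
  \;\iff\;
  \text{there is a computable sequence } (q_n)_{n \in \mathbb{N}} \subseteq \mathbb{Q}
  \ \text{with}\ \lim_{n \to \infty} q_n = x .
\]
% The paper's classification concerns finer distinctions among such sequences
% for a fixed x, beyond the bare existence guaranteed by this equivalence.
```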
I show that the standard approach to modeling phenomena involving microscopic classical electrodynamics is mathematically inconsistent. I argue that there is no conceptually unproblematic and consistent theory covering the same phenomena to which this inconsistent theory can be thought of as an approximation; and I propose a set of conditions for the acceptability of inconsistent theories.
In this paper, a criticism of the traditional theories of approximation and idealization is given as a summary of previous works. After identifying the real purpose and measure of idealization in the practice of science, it is argued that the best way to characterize idealization is not to formulate a logical model – something analogous to Hempel's D-N model for explanation – but to study its different guises in the praxis of science. A case study of it is then made in thermostatistical physics. After a brief sketch of the theories for phase transitions and critical phenomena, I examine the various idealizations that go into the making of models at three different levels. The intended result is to induce a deeper appreciation of the complexity and fruitfulness of idealization in the praxis of model-building, not to give an abstract theory of it.
We propose a new account of vagueness and approximation in terms of the theory of granular partitions. We distinguish different kinds of crisp and non-crisp granular partitions and we describe the relations between them, concentrating especially on spatial examples. We describe the practice whereby subjects use regular grid-like reference partitions as a means for tempering the vagueness of their judgments, and we demonstrate how the theory of reference partitions can yield a natural account of this practice, which is referred to in the literature as 'approximation'.
We propose a view of vagueness as a semantic property of names and predicates. All entities are crisp, on this semantic view, but there are, for each vague name, multiple portions of reality that are equally good candidates for being its referent, and, for each vague predicate, multiple classes of objects that are equally good candidates for being its extension. We provide a new formulation of these ideas in terms of a theory of granular partitions. We show that this theory provides a general framework within which we can understand the relation between vague terms and concepts and the corresponding crisp portions of reality. We also sketch how it might be possible to formulate within this framework a theory of vagueness which dispenses with the notion of truth-value gaps and other artifacts of more familiar approaches. Central to our approach is the idea that judgments about reality involve in every case (1) a separation of reality into foreground and background of attention and (2) the feature of granularity. On this basis we attempt to show that even vague judgments made in naturally occurring contexts are not marked by truth-value indeterminacy. We distinguish, in addition to crisp granular partitions, also vague partitions, and reference partitions, and we explain the role of the latter in the context of judgments that involve vagueness. We conclude by showing how reference partitions provide an effective means by which judging subjects are able to temper the vagueness of their judgments by means of approximations.
While the use of so-called idealizations in science has been widely recognized for many years, the philosophical problems that arise from this use have received relatively little attention. Even a cursory reading of the philosophical literature devoted to these problems reveals that the following questions remain unanswered: In general, what, if any, are the distinguishing characteristics of idealizations? More specifically, do idealizations have any distinguishing syntactic or semantic characteristics? In addition to these questions there exist the following pragmatic questions, questions relating to the ways in which idealizations are used in science: How are idealizations used in explanations? Do these explanations have any peculiar characteristics—characteristics not shared by deductive-nomological explanations? If we assume that idealizations are false or "do not obtain," how is it that they can have any explanatory power? Further, there are questions of more general philosophic concern. How do the problems of idealizations relate to those of simplicity, for an idealization seems to be, in some sense, a type of simplification? How do the problems of ideal laws and theories relate to the general problems of scientific laws and theories?
There are two notions of abstraction that are often confused. The material view implies that the products of abstraction are not concrete. It is vulnerable to the criticism that abstracting introduces misrepresentations to the system, hence abstraction is indistinguishable from idealization. The omission view fares better against this criticism because it does not entail that abstract objects are non-physical and because it asserts that the way scientists abstract is different to the way they idealize. Moreover, the omission view better captures the way that abstraction is used in many parts of science. Disentangling the two notions is an important prerequisite for determining how to evaluate the use of abstraction in science.