The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on field work, which should go beyond the in-vitro analysis of UX to examine in-vivo problems emerging in the field. Our methodology is also comparative, as it chooses to steer away from the almost exclusive focus on ML to compare its challenges with those faced by more vintage algorithms. Finally, it is also philosophical, as we defend the relevance of the philosophical literature to define the epistemic desiderata of a good explanation. This study was conducted in collaboration with Etalab, a Task Force of the French Prime Minister in charge of Open Data & Open Government Policies, dealing in particular with the enforcement of the right to an explanation. In order to illustrate and refine our methodology before scaling up, we conduct a preliminary set of case studies on the main types of algorithms used by the French administration: computation, matching algorithms and ML. We study the merits and drawbacks of a recent approach to explanation, which we baptize input-output black box reasoning, or BBR for short. We begin by presenting a conceptual framework including the distinctions necessary to a study of pedagogical explainability. We proceed to algorithmic case studies, and draw model-specific and model-agnostic lessons and conjectures.
Some people believe that there is an “explanatory gap” between the facts of physics and certain other facts about the world—for example, facts about consciousness. The gap is presented as a challenge to any thoroughgoing naturalism or physicalism. We believe that advocates of the explanatory gap have some reasonable expectations that cannot be merely dismissed. We also believe that naturalistic thinkers have the resources to close the explanatory gap, but that they have not adequately explained how and why these resources work. In this paper we isolate the legitimate explanatory demands in the gap reasoning, as it is defended by Chalmers and Jackson. We then argue that these demands can be met. Our solution involves a novel proposal for understanding the relationship between theories, explanations, and scientific identities.
Scientists and philosophers routinely talk about phenomena, and the ways in which they relate to explanation, theory and practice in science. However, there are very few definitions of the term, which is often used synonymously with "data", "model" and, in older literature, "hypothesis". In this paper I will attempt to clarify how phenomena are recognized and categorized, and the role they play in scientific epistemology. I conclude that phenomena are not necessarily theory-based commitments, but that they are what explanations are called upon to account for, namely that which is not presently explained.
The traditional sciences have always had trouble with ambiguity. Through the imposition of “enabling constraints” -- making a set of assumptions and then declaring ceteris paribus -- science can bracket away ambiguity. These enabling constraints take the form of uncritically examined presuppositions or “uceps.” Second order science examines variations in values assumed for these uceps and looks at the resulting impacts on related scientific claims. After rendering explicit the role of uceps in scientific claims, the scientific method is used to question rarely challenged assertions. This article lays out initial foundations for second order science, its ontology, methodology, and implications.
It is mostly agreed that Popper's criterion of falsifiability fails to provide a useful demarcation between science and pseudo-science, because ad-hoc assumptions are always able to save any theory that conflicts with the empirical data, and a characterization of ad-hoc assumptions is lacking. Moreover, adding some testable predictions is not very difficult. It should be emphasized that the Duhem-Quine argument does not simply make the demarcation approximate, but makes it totally useless. Indeed, no philosophical criterion of demarcation is presently able to rule out even some of the most blatant cases of pseudo-science, not even approximately. This is in sharp contrast with our firm belief that some theories are clearly not scientific. Where does this belief come from? In this paper I argue that it is necessary and possible to recognize a notion of syntactic simplicity that can tell the difference between empirically equivalent scientific and non-scientific theories, with a precision adequate to many important practical purposes, and that fully agrees with the judgments generally held in the scientific community.
In this article, we tackle the phenomenon of what seems to be a misunderstanding between science education theory and philosophy of science, one which does not seem to have received any attention in the literature. While there seems to be a consensus within the realm of science education on limiting or altogether denying the explanatory role of scientific laws (particularly in contrast with “theories”), none of the canonical models of scientific explanation (covering law, statistical relevance, unification, mechanistic-causal, pragmatic) lends any support to this view of laws. We will reconstruct three different versions of this demotion of laws (i.e., laws are merely descriptive; laws are explanatory only of singular events, not of laws; laws are explanatory but only in a “superficial” way), propose possible grounds for them and illustrate why these perspectives pose a conceptual challenge as they contrast with epistemological approaches to the problem of explanation. We will also suggest the potential negative outcomes that would arise from science teachers adopting these approaches in the classroom when aiming to assist students in moving beyond mere description and towards explanation.
At its core this book is concerned with logic and computation with respect to the mathematical characterization of sentient biophysical structure and its behavior.

Three related theories are presented: The first of these provides an explanation of how sentient individuals come to be in the world. The second describes how these individuals operate. And the third proposes a method for reasoning about the behavior of individuals in groups.

These theories are based upon a new explanation of experience in nature, the construction of senses, and motile behavior. This new approach is developed from first principles to enable a rigorous and systematic explanation of the variety of associated intelligent behaviors.

Alongside this development is a further account that focuses upon the nature of our work. It discusses the existential aspects of scientific inquiry, its epistemology and logic. It seeks to clarify the nature of the mathematical characterization and computation of natural behaviors, dealing with questions in the foundations of logic. It explores methodological issues related to reduction and the refinement of ideas from intuition to formal logical structure.

In support of this inquiry we work toward the development of a calculus for biophysical construction and its dynamics. If successful, this mechanics mathematically characterizes sensory and motile behavior.

Upon this foundation we propose a model of apprehension and explore how its products are processed by the organism. Finally, we develop a probabilistic theory that enables us to reason about inaccessible factors in group behavior.

The mechanics we propose suggests the design and physical realization of a new model of computation; one in which structure and the concurrency of action are a first-order consideration.

We identify opportunities for experimental verification of the theory, and we suggest a proof of our results in practice by the identification of this mechanism, allowing the construction of machines that experience.
I highlight a metaphysical concern that stands in the way of more widespread adoption of causal modeling techniques such as causal Bayes nets. Researchers in some fields may resist adoption due to concerns that they don't 'really' understand what they are saying about a system when they apply such techniques. Students in these fields are repeatedly exhorted to be cautious about applying statistical techniques to their data without a clear understanding of the conditions required for those techniques to yield genuine insight into the data. They are acutely aware that anyone can chuck some data into a software package and get what looks like an answer, even though these tests may not be well-defined for the data on which they can apparently be run. This healthy skepticism extends to the uptake of causal modeling methods, and it points directly to the need for a metaphysical understanding of causation if those methods are to be used successfully. Without a clear understanding of what the methods are committing to, including what it means to say that there exists a causal relationship, researchers have limited ability to identify potentially bad output, or to independently verify results.
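The commitment at issue can be made concrete with a toy simulation (all structure and parameters below are hypothetical, chosen only for illustration): in a causal Bayes net, saying "B causes C" commits the modeler to a difference between observing B and intervening on B, and a user who misses that commitment cannot tell good output from bad.

```python
import random

random.seed(0)

def sample(n, do_b=None):
    """Toy structural causal model with a confounder: A -> B, A -> C, B -> C.

    If do_b is given, B is set by intervention (graph surgery), severing
    the A -> B edge. All variables and parameters are illustrative.
    """
    rows = []
    for _ in range(n):
        a = random.random() < 0.5
        b = do_b if do_b is not None else (random.random() < (0.8 if a else 0.2))
        c = random.random() < 0.2 + 0.4 * b + 0.3 * a
        rows.append((a, b, c))
    return rows

obs = sample(100_000)
# Observational: P(C | B=1) is inflated, because seeing B=1 is evidence for A.
p_obs = sum(c for a, b, c in obs if b) / sum(1 for a, b, c in obs if b)
# Interventional: P(C | do(B=1)) reflects only B's own causal contribution.
intv = sample(100_000, do_b=True)
p_do = sum(c for a, b, c in intv) / len(intv)
print(round(p_obs, 2), round(p_do, 2))  # roughly 0.84 vs 0.75
```

The gap between the two estimates is exactly what the causal reading of the arrow B -> C licenses a modeler to assert, and what a purely statistical reading of the same data cannot.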
A "patternist" approach to explanation seeks to formalize unificationism using notions from algorithmic information theory. Among other advantages, this account provides both a rigorous sense of how data can admit multiple explanations, and a rigorous sense of how some of those explanations can conjoin, while others compete.
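A crude, hands-on sketch of the idea, using off-the-shelf compression as a loose stand-in for algorithmic (Kolmogorov) complexity rather than the account's formal apparatus:

```python
import random
import zlib

def dl(s: str) -> int:
    """Upper bound on description length in bytes, via zlib compression."""
    return len(zlib.compress(s.encode()))

random.seed(1)
patterned = "01" * 500                                     # admits a short "explanation"
noise = "".join(random.choice("01") for _ in range(1000))  # admits none

# Data that fit a pattern have a much shorter description than noise.
print(dl(patterned) < dl(noise))  # True

# Two compatible patterns "conjoin": describing both together is cheaper
# than describing each separately.
a, b = "01" * 500, "ab" * 500
print(dl(a + b) < dl(a) + dl(b))
```

On this approximation, rival explanations of the same data correspond to different short descriptions of it, and explanations conjoin when a joint description beats the sum of the separate ones.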
According to a number of approaches in theoretical physics, spacetime does not exist fundamentally. Rather, spacetime exists by depending on another, more fundamental, non-spatiotemporal structure. A prevalent opinion in the literature is that this dependence should not be analyzed in terms of composition. We should not say, that is, that spacetime depends on an ontology of non-spatiotemporal entities in virtue of having them as parts. But is that really right? On the contrary, we argue that a mereological approach to dependent spacetime is not only viable, but promises to enhance our understanding of the physical situation.
We provide two programmatic frameworks for integrating philosophical research on understanding with complementary work in computer science, psychology, and neuroscience. First, philosophical theories of understanding have consequences for how agents should reason if they are to understand; these consequences can then be evaluated empirically by their concordance with findings in scientific studies of reasoning. Second, these studies use a multitude of explanations, and a philosophical theory of understanding is well suited to integrating these explanations in illuminating ways.
Jan G. Michel argues that we need a philosophy of scientific discovery. Before turning to the question of what such a philosophy might look like, he addresses two questions: Don’t we have a philosophy of scientific discovery yet? And do we need one at all? To answer the first question, he takes a closer look at history and finds that we have not had a systematic philosophy of scientific discovery worthy of the name for over 150 years. To answer the second question, Michel puts forward three arguments that show the importance of a philosophy of scientific discovery. Briefly, he arrives at the following answers: No, we don’t yet have a philosophy of scientific discovery, and yes, we definitely need one. To remedy this shortcoming, Michel analyzes the concept of discovery, leading him to the insight that scientific discoveries have an underlying structure with certain structural features. Some of these features may be important but not indispensable to scientific discovery processes; these include eureka moments, serendipities, joint discoveries, special science funding, and others. In addition, Michel identifies three indispensable structural features which he examines in detail and which he places in a picture with a certain dynamics according to which the process of making scientific discoveries can be seen as a path, leading us from finding and acceptance to knowledge.
In this paper I will defend the incapacity of the informational frameworks in thermal physics, mainly those that historically and conceptually derive from the work of Brillouin (1962) and Jaynes (1957a), to robustly explain the approach of certain gaseous systems to their state of thermal equilibrium from the dynamics of their molecular components. I will further argue that, since their various interpretative, conceptual and technical-formal resources (e.g. epistemic interpretations of probabilities and entropy measures, identification of thermal entropy as Shannon information, and so on) are shown to be somehow incoherent, inconsistent or inaccurate, these informational proposals need to 'epistemically parasitize' the manifold of theoretical resources of Boltzmann's and Gibbs' statistical mechanics, respectively, in order to properly account for the equilibration process of an ideal gas from its microscopic properties. Finally, our conclusion leads us to adopt a sort of constructive skepticism regarding the explanatory value of the main informationalist trends in statistical thermophysics.
Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
One of the most difficult problems in the foundations of physics is what gives rise to the arrow of time. Since the fundamental dynamical laws of physics are (essentially) symmetric in time, the explanation for time's arrow must come from elsewhere. A promising explanation introduces a special cosmological initial condition, now called the Past Hypothesis: the universe started in a low-entropy state. Unfortunately, in a universe where there are many copies of us (in the distant "past" or the distant "future"), the Past Hypothesis is not enough; we also need to postulate self-locating (de se) probabilities. However, I show that we can similarly use self-locating probabilities to strengthen its rival---the Fluctuation Hypothesis, leading to in-principle empirical underdetermination and radical epistemological skepticism. The underdetermination is robust in the sense that it is not resolved by the usual appeal to 'empirical coherence' or 'simplicity.' That is a serious problem for the vision of providing a completely scientific explanation of time's arrow.
What explains the outcomes of chance processes? We claim that their setups do. Chances, we think, mediate these explanations of outcome by setup but do not feature in them. Facts about chances do feature in explanations of a different kind: higher-order explanations, which explain how and why setups explain their outcomes. In this paper, we elucidate this 'mediator view' of chancy explanation and defend it from a series of objections. We then show how it changes the playing field in four metaphysical disputes concerning chance. First, it makes it more plausible that even low chances can have explanatory power. Second, it undercuts a circularity objection against reductionist theories of chance. Third, it redirects the debate about a prominent argument against epistemic theories of chance. Finally, it sheds light on potential chancy explanations of the Universe's origin.
Kinds that share historical properties are dubbed “historical kinds” or “etiological kinds,” and they have some distinctive features. I will try to characterize etiological kinds in general terms and briefly survey some previous philosophical discussions of these kinds. Then I will take a closer look at a few case studies involving different types of etiological kinds. Finally, I will try to understand the rationale for classifying on the basis of etiology, putting forward reasons for classifying phenomena on the basis of diachronic features, thereby making a provisional case for considering at least some etiological kinds to be natural kinds.
Scientists appeal to models when explaining phenomena. Such explanations are often dubbed model explanations, or model-based explanations (ME). But what are the precise conditions for ME? Are ME special explanations? In our paper, we first rebut two definitions of ME and specify a more promising one. Based on this analysis, we single out a related conception that is concerned with explanations that are induced from working with a model. We call them ‘model-induced explanations’ (MIE). Second, we study three paradigmatic cases of alleged ME. We argue that all of them are MIE, upon closer examination. Third, we argue that this undermines the emerging consensus that model explanations are special explanations that, e.g., challenge the factivity of explanation. Instead, it suggests that what is special about models in science is the epistemology behind how models induce explanations.
In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of typical outputs (i.e. models) of such a discipline. In this paper we will critically examine this claim. First, we identify the received view on models and their aims in molecular biology: models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in particular). These lie mainly in the creation of predictive models whose performance increases as data sets grow. Next, we identify a tradeoff between predictive and explanatory performance by comparing the features of mechanistic and predictive models. Finally, we show how this a priori analysis of machine learning and mechanistic research applies to actual biological practice. This will be done by analyzing the publications of a consortium—The Cancer Genome Atlas—which stands at the forefront in integrating data science and molecular biology. The result will be that biologists have to deal with the tradeoff between explaining and predicting that we have identified, and hence the explanatory force of the ‘new’ biology is substantially diminished compared to the ‘old’ biology. However, this aspect also emphasizes the existence of other research goals which make predictive force independent from explanation.
The Investigação Filosófica series, an initiative of the Núcleo de Ensino e Pesquisa em Filosofia of the Department of Philosophy at UFPel and of the Grupo de Pesquisa Investigação Filosófica of the Department of Philosophy at UNIFAP, published under the editorial imprint of NEPFil online and the Federal University of Pelotas Press with financial support from the John Templeton Foundation, has as its primary aim the publication of Portuguese translations of texts selected from various internationally recognized platforms, such as, for example, the Stanford Encyclopedia of Philosophy. The general aim of the series is to make available bibliographic materials relevant both for use as teaching material and for philosophical inquiry itself.
This article presents a challenge that those philosophers who deny the causal interpretation of explanations provided by population genetics might have to address. Indeed, some philosophers, known as statisticalists, claim that the concept of natural selection is statistical in character and cannot be construed in causal terms. On the contrary, other philosophers, known as causalists, argue against the statistical view and support the causal interpretation of natural selection. The problem I am concerned with here arises for the statisticalists because the debate on the nature of natural selection intersects the debate on whether mathematical explanations of empirical facts are genuine scientific explanations. I argue that if the explanations provided by population genetics are regarded by the statisticalists as non-causal explanations of that kind, then statisticalism risks being incompatible with a naturalist stance. The statisticalist faces a dilemma: either she maintains statisticalism but has to renounce naturalism; or she maintains naturalism but has to content herself with an account of the explanations provided by population genetics that she deems unsatisfactory. This challenge is relevant to the statisticalists because many of them see themselves as naturalists.
This chapter draws upon the archaeological and philosophical literature to offer an analysis and diagnosis of the popular ‘ancient aliens’ theory. First, we argue that ancient aliens theory is a form of conspiracy theory. Second, we argue that it differs from other familiar conspiracy theories because it does distinctive ideological work. Third, we argue that ancient aliens theory is a form of non-contextualized inquiry that sacrifices the very thing that makes archaeological research successful, and does so for the sake of popular accessibility. Rather than merely dismissing ancient aliens as ‘pseudoarchaeology’ on demarcationist grounds, we offer a more complicated account of how the theory works, and what ideological work it does.
Philosophers of physics have long debated whether the Past State of low entropy of our universe calls for explanation. What is meant by “calls for explanation”? In this article we analyze this notion, distinguishing between several possible meanings that may be attached to it. Taking the debate around the Past State as a case study, we show how our analysis of what “calling for explanation” might mean can contribute to clarifying the debate and perhaps to settling it, thus demonstrating the fruitfulness of this analysis. Applying our analysis, we show that two main opponents in this debate, Huw Price and Craig Callender, are, for the most part, talking past each other rather than disagreeing, as they employ different notions of “calling for explanation”. We then proceed to show how answering the different questions that arise out of the different meanings of “calling for explanation” can result in clarifying the problems at hand and thus, hopefully, to solving them.
Bradford Hill (1965) highlighted nine aspects of the complex evidential situation a medical researcher faces when determining whether a causal relation exists between a disease and various conditions associated with it. These aspects are widely cited in the literature on epidemiological inference as justifying an inference to a causal claim, but the epistemological basis of the Hill aspects is not understood. We offer an explanatory coherentist interpretation, explicated by Thagard's ECHO model of explanatory coherence. The ECHO model captures the complexity of epidemiological inference and provides a tractable model for inferring disease causation. We apply this model to three cases: the inference of a causal connection between the Zika virus and birth defects, the classic inference that smoking causes cancer, and John Snow’s inference about the cause of cholera.
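Thagard's ECHO settles a network of evidence and hypothesis units by iterated activation updates: explanatory links are excitatory, contradictions inhibitory, and a clamped "special" unit gives evidence priority. The following minimal sketch uses that standard connectionist update rule, but the units, link weights, and decay value are illustrative placeholders, not Thagard's published parameters:

```python
# Toy ECHO-style coherence network: hypothesis H1 explains evidence E,
# and H1 contradicts its rival H2. SEU is the clamped special unit.
EXCIT, INHIB, DECAY = 0.04, -0.06, 0.05

links = {("SEU", "E"): EXCIT,   # evidence priority (excitatory)
         ("H1", "E"): EXCIT,    # explanatory link (excitatory)
         ("H1", "H2"): INHIB}   # contradiction between rivals (inhibitory)

acts = {"SEU": 1.0, "E": 0.01, "H1": 0.01, "H2": 0.01}

def net_input(u):
    """Weighted sum of activation flowing into unit u over symmetric links."""
    total = 0.0
    for (a, b), w in links.items():
        if u == a:
            total += w * acts[b]
        elif u == b:
            total += w * acts[a]
    return total

for _ in range(300):  # iterate until the network settles
    new = {"SEU": 1.0}  # the special unit stays clamped at 1
    for u in ("E", "H1", "H2"):
        n, a = net_input(u), acts[u]
        a = a * (1 - DECAY) + (n * (1 - a) if n > 0 else n * (a + 1))
        new[u] = max(-1.0, min(1.0, a))
    acts = new

print(acts["H1"] > 0 > acts["H2"])  # True: the explanatory hypothesis wins
```

After settling, the hypothesis that explains the evidence ends up with positive activation and its rival with negative activation, which is the sense in which ECHO renders an inference to the most coherent causal claim tractable.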
The critics of rational choice theory (RCT) frequently build on the contrast between so-called thick and thin applications of RCT to argue that thin RCT lacks the potential to explain the choices of real-world agents. In this paper, I draw on often-cited RCT applications in several decision sciences to demonstrate that despite this prominent critique there are at least two different senses in which thin RCT can explain real-world agents’ choices. I then defend this thesis against the most influential objections put forward by the critics of RCT. In doing so, I explicate the implications of my thesis for the ongoing philosophical debate concerning the explanatory potential of RCT and the comparative merits of widely endorsed accounts of explanation.
There has been a growing trend to include non-causal models in accounts of scientific explanation. A worry addressed in this paper is that without a higher threshold for explanation there are no tools for distinguishing between models that provide genuine explanations and those that provide merely potential explanations. To remedy this, a condition is introduced that extends a veridicality requirement to models that are empirically underdetermined, highly-idealised, or otherwise non-causal. This condition is applied to models of electroweak symmetry breaking beyond the Standard Model.
In recent years there has been increasing interest in scientific understanding as an epistemic success term that is distinct from scientific knowledge (see, for example, De Regt, Leonelli and Eigner 2009). Although this literature is diverse, three dominant strands can be found that have rather deeper roots in the philosophy of science: understanding as unification (Friedman 1974; Kitcher 1981); understanding through mechanistic thinking as in certain types of causal modelling (Salmon 1998; Woodward 2003); and a kind of contextualist pluralist approach to understanding (De Regt and Dieks 2005; De Regt 2009; 2014), which is in some ways similar to the account offered by Nelson Goodman (1968, 1978) and Catherine Elgin (1997, 2004). Proponents of these views often see them as complementary or at least not contradictory. However, they have not yet been brought neatly together in a single account. This is what I propose to do. At the heart of my approach is the thought that we should treat the characteristic content of understanding as pictorial, in contrast to the characteristic content of knowledge, which is propositional. By virtue of the distinctive ways in which they present their content, epistemically efficacious pictures exemplify, unify, show mechanical (and other) causal relations, allow for multiple readings and facilitate the contextualisation of their content, while still having determinate content. In other words, features of pictorial content facilitate cognitive and evaluative procedures that are characteristic of understanding and have already been identified as such in the literature. That understanding should be treated as something like “getting the picture” is supported, albeit weakly, by multiple remarks, more or less well-developed, linking understanding to picturing that can also be found in the literature on scientific understanding, as we shall see.
We explore the prospects of a monist account of explanation for both non-causal explanations in science and pure mathematics. Our starting point is the counterfactual theory of explanation (CTE) for explanations in science, as advocated in the recent literature on explanation. We argue that, despite the obvious differences between mathematical and scientific explanation, the CTE can be extended to cover both non-causal explanations in science and mathematical explanations. In particular, a successful application of the CTE to mathematical explanations requires us to rely on counterpossibles. We conclude that the CTE is a promising candidate for a monist account of explanation in both science and mathematics.
The emerging consensus in the secondary literature on Duhem is that his notion of ‘good sense’ is a virtue of individual scientists that guides them in choosing between empirically equivalent rival theories (Ivanova 2010. “Pierre Duhem’s Good Sense as a Guide to Theory Choice.” Studies in History and Philosophy of Science Part A 41: 58–64; Fairweather 2011. “The Epistemic Value of Good Sense.” Studies in History and Philosophy of Science Part A 43: 139–146; Bhakthavatsalam. “Duhemian Good Sense and Agent Reliabilism.” Studies in History and Philosophy of Science Part A 64: 22–29). In this paper, I argue that good sense is irrelevant for theory choice within Duhem’s conception of scientific methodology. Theory choice, for Duhem, is either a pseudo-problem or addressed purely by empirical and formal desiderata, depending on how it is understood. I go on to provide a positive interpretation of good sense as a feature of scientific communities that undergo particular forms of education that allow scientists to abandon theory pursuit. I conclude by suggesting that this interpretation entails that virtue epistemological readings of Duhem are insufficient for understanding good sense; we must employ a social epistemological perspective.
Some proponents of mechanistic explanation downplay the significance of how-possibly explanations. We argue that developing accounts of mechanisms that could explain a phenomenon is an important aspect of scientific reasoning, one that involves imagination. Although appeals to imagination may seem to obscure the process of reasoning, we illustrate how, by examining diagrams, we can gain insight into the construction of mechanistic explanations.
In this paper, I offer an explication of the notion of local explanation. In the literature, local explanations are considered as metaphysically and methodologically satisfactory: local explanations reveal the contingency of science and provide a methodologically sound historiography of science. However, the lack of explication of the notion of local explanation makes these claims difficult to assess. The explication provided in this paper connects the degree of locality of an explanans to the degree of contingency of the explanandum. Moreover, the explication is shown to be compatible with the methodological need for a general consideration in the historiography of science. In this way, the explication satisfies the need to explicate an important notion, connects local explanations and contingency, and enables us to see how local explanations and general considerations can be connected. However, the explication also sheds critical light on many claims and expectations that are associated with local explanations and their satisfactoriness.
Scientific knowledge is the most solid and robust kind of knowledge that humans have because of its inherent self-correcting character. Nevertheless, anti-evolutionists, climate denialists, and anti-vaxxers, among others, question some of the best-established scientific findings, making claims unsupported by empirical evidence. A common aspect of these claims is reference to the uncertainties of science concerning evolution, climate change, vaccination, and so on. This is inaccurate: whereas the broad picture is clear, there will always exist uncertainties about the details of the respective phenomena. This book shows that uncertainty is an inherent feature of science that does not devalue it. In contrast, uncertainty advances science because it motivates further research. This is the first book on this topic that draws on philosophy of science to explain what uncertainty in science is and how it makes science advance. It contrasts evolution, climate change, and vaccination, where the uncertainties are exaggerated, with genetic testing and forensic science, where the uncertainties are usually overlooked. The goal is to discuss the scientific, psychological, and philosophical aspects of uncertainty in order to explain what it really is, what kinds of problems it actually poses, and why in the end it makes science advance. Contrary to public representations of scientific findings and conclusions that produce an intuitive but distorted view of science as certain, people need to understand and learn to live with uncertainty in science. This book is intended for anyone who wants to get a clear view of the nature of science.
For many old and new mechanists, Mechanism is both a metaphysical position and a thesis about scientific methodology. In this paper we discuss the relation between the metaphysics of mechanisms and the role of mechanical explanation in the practice of science, by presenting and comparing the key tenets of Old and New Mechanism. First, by focusing on the case of gravity, we show how the metaphysics of Old Mechanism constrained scientific explanation, and discuss Newton’s critique of Old Mechanism. Second, we examine the current mechanistic metaphysics, arguing that it is not warranted by the use of the concept of mechanism in scientific practice, and motivate a thin conception of mechanism (the truly minimal view), according to which mechanisms are causal pathways for a certain effect or phenomenon. Finally, we draw analogies between Newton’s critique of Old Mechanism and our thesis that the metaphysical commitments of New Mechanism are not necessary in order to illuminate scientific practice.
According to Interventionism, explanations cite invariant relations which hold among multiple variables. Interventionism incorrectly implies, however, that many common scientific explanations—which cite single‐variable boundary constraints—are not actually explanatory. So I propose a different account of explanation, similar in spirit to Interventionism, which gets those cases of scientific explanation right.
In the spirit of explanatory pluralism, this chapter argues that causal and noncausal explanations of a phenomenon are compatible, each being useful for bringing out different sorts of insights. After reviewing a model-based account of scientific explanation, which can accommodate causal and noncausal explanations alike, an important core conception of noncausal explanation is identified. This noncausal form of model-based explanation is illustrated using the example of how Earth scientists in a subfield known as aeolian geomorphology are explaining the formation of regularly spaced sand ripples. The chapter concludes that even when it comes to everyday "medium-sized dry goods" such as sand ripples, where there is a complete causal story to be told, one can find examples of noncausal scientific explanations.
Two approaches to understanding the idealizations that arise in the Aharonov–Bohm effect are presented. It is argued that a common topological approach, which takes the non-simply connected electron configuration space to be an essential element in the explanation and understanding of the effect, is flawed. An alternative approach is outlined. Consequently, it is shown that the existence and uniqueness of self-adjoint extensions of symmetric operators in quantum mechanics have important implications for philosophical issues. Also, the alleged indispensable explanatory role of said idealizations is examined via a minimal model explanatory scheme. Lastly, the idealizations involved in the AB effect are placed in a wider philosophical context via a short survey of part of the literature on infinite and essential idealizations.
Many important explanations in physics are based on ideas and assumptions about symmetries, but little has been said about the nature of such explanations. This chapter aims to fill this lacuna, arguing that various symmetry explanations can be naturally captured in the spirit of the counterfactual-dependence account of Woodward, liberalized from its causal trappings. From the perspective of this account, symmetries explain by providing modal information about an explanatory dependence, by showing how the explanandum would have been different, had the facts about an explanatory symmetry been different. Furthermore, the authors argue that such explanatory dependencies need not be causal.
In this paper, I argue that the newly developed network approach in neuroscience and biology provides a basis for formulating a unique type of realization, which I call topological realization. Some of its features, and its relation to one of the dominant paradigms of realization and explanation in the sciences, i.e. the mechanistic one, are already being discussed in the literature. But the detailed features of topological realization, its explanatory power and its relation to another prominent view of realization, namely the semantic one, have not yet been discussed. I argue that topological realization is distinct from the mechanistic and semantic ones because the realization base in this framework is based not on local realizers, regardless of the scale, but on global realizers. In the mechanistic approach, the realization base is always at the local level, in both ontic and epistemic accounts. The explanatory power of the realization relation in the mechanistic approach comes directly from the realization relation itself: either by showing how a model is mapped onto a mechanism, or by describing some ontic relations that are explanatory in themselves. Similarly, the semantic approach requires that concepts at different scales logically satisfy microphysical descriptions, which are at the local level. In the topological framework the realization base can be found at different scales, but whatever the scale, the realization base is global within that scale, and not local. Furthermore, topological realization enables us to answer the “why” questions, which, according to Polger (2010), makes it explanatory. The explanatoriness of topological realization stems from understanding the mathematical consequences of different topologies, not from the mere fact that a system realizes them.
Human nature has always been a foundational issue for philosophy. What does it mean to have a human nature? Is the concept the relic of a bygone age? What is the use of such a concept? What are the epistemic and ontological commitments people make when they use the concept? In What’s Left of Human Nature? Maria Kronfeldner offers a philosophical account of human nature that defends the concept against contemporary criticism. In particular, she takes on challenges related to social misuse of the concept that dehumanizes those regarded as lacking human nature (the dehumanization challenge); the conflict between Darwinian thinking and essentialist concepts of human nature (the Darwinian challenge); and the consensus that evolution, heredity, and ontogenetic development result from nurture and nature. After answering each of these challenges, Kronfeldner presents a revisionist account of human nature that minimizes dehumanization and does not fall back on outdated biological ideas. Her account is post-essentialist because it eliminates the concept of an essence of being human; pluralist in that it argues that there are different things in the world that correspond to three different post-essentialist concepts of human nature; and interactive because it understands nature and nurture as interacting at the developmental, epigenetic, and evolutionary levels. On the basis of this, she introduces a dialectical concept of an ever-changing and “looping” human nature. Finally, noting the essentially contested character of the concept and the ambiguity and redundancy of the terminology, she wonders if we should simply eliminate the term “human nature” altogether.
Recently, Luk argued that scientific knowledge both explains and predicts. Do these two functions of scientific knowledge have equal significance, or is one of the two functions more important than the other? This commentary explains why prediction may be mandatory but explanation may be merely desirable and optional.
Power is often taken to be a central concept in social and political thought that can contribute to the explanation of many different social phenomena. This article argues that in order to play this role, a general theory of power is required to identify a stable causal capacity, one that does not depend on idiosyncratic social conditions and can thus exert its characteristic influence in a wide range of cases. It considers three promising strategies for such a theory, which ground power in (1) the ability to use force, (2) access to resources, or (3) collective acceptance. It shows that these strategies fail to identify a stable causal capacity. The lack of an adequate general theory of power suggests that the concept lacks the necessary unity to play the broad explanatory role it is often accorded.
This article demonstrates that non-mechanistic, dynamical explanations are a viable approach to explanation in the special sciences. The claim that dynamical models can be explanatory without reference to mechanisms has previously been met with three lines of criticism from mechanists: the causal relevance concern, the genuine laws concern, and the charge of predictivism. I argue, however, that these mechanist criticisms fail to defeat non-mechanistic, dynamical explanation. Using the examples of Haken et al.’s model of bimanual coordination, and Thelen et al.’s dynamical field model of infant perseverative reaching, I show how each mechanist criticism fails once the standards of Woodward’s interventionist framework are applied to dynamical models. An even-handed application of Woodwardian interventionism reveals that dynamical models are capable of producing genuine explanations without appealing to underlying mechanistic details. 1 Introduction; 2 Interventionism and Mechanistic Explanation: 2.1 Causal relevance and ideal interventions, 2.2 Invariance, 2.3 Explanation; 3 Covering-Laws and Dynamical Explanation: 3.1 Dynamical models, 3.2 Covering-law explanation, 3.3 Prediction; 4 Causal Relevance: 4.1 The causal relevance concern, 4.2 Intervening on dynamical models, 4.3 Test case I: the Haken–Kelso–Bunz model, 4.4 Test case II: dynamical field model; 5 Genuine Laws: 5.1 The genuine laws concern, 5.2 Using invariance in place of laws, 5.3 Test case I: the Haken–Kelso–Bunz model, 5.4 Test case II: dynamical field model; 6 Prediction: 6.1 Predictivism, 6.2 Crude and invariant prediction; 7 Interventionist Criticism of the Haken–Kelso–Bunz Model; 8 Dynamical Explanation; 9 Conclusion.
The use of idealized scientific theories in explanations of empirical facts and regularities is problematic in two ways: they don’t satisfy the condition that the explanans is true, and they may fail to entail the explanandum. An attempt to deal with the latter problem was proposed by Hempel and Popper with their notion of approximate explanation. A more systematic perspective on idealized explanations was developed with the method of idealization and concretization by the Poznan school in the 1970s. If idealizational laws are treated as counterfactual conditionals, they can be true or truthlike, and the concretizations of such laws may increase their degree of truthlikeness. By replacing Hempel’s truth requirement with the condition that an explanatory theory is truthlike one can distinguish several important types of approximate, corrective, and contrastive explanations by idealized theories. The conclusions have important consequences for the debates about scientific realism and anti-realism.
Toy models are highly idealized and extremely simple models. Although they are omnipresent across scientific disciplines, toy models are a surprisingly under-appreciated subject in the philosophy of science. The main philosophical puzzle regarding toy models concerns what the epistemic goal of toy modelling is. One promising proposal for answering this question is the claim that the epistemic goal of toy models is to provide individual scientists with understanding. The aim of this article is to precisely articulate and to defend this claim. In particular, we will distinguish between autonomous and embedded toy models, and then argue that important examples of autonomous toy models are sometimes best interpreted to provide how-possibly understanding, while embedded toy models yield how-actually understanding, if certain conditions are satisfied. 1 Introduction; 2 Embedded and Autonomous Toy Models: 2.1 Embedded toy models, 2.2 Autonomous toy models, 2.3 Qualification; 3 A Theory of Understanding for Toy Models: 3.1 Preliminaries and requirements, 3.2 The refined simple view; 4 Two Kinds of Understanding with Toy Models: 4.1 Embedded toy models and how-actually understanding, 4.2 Against a how-actually interpretation of all autonomous toy models, 4.3 The how-possibly interpretation of some autonomous toy models; 5 Conclusion.
Explanations are very important to us in many contexts: in science, mathematics, philosophy, and also in everyday and juridical contexts. But what is an explanation? In the philosophical study of explanation, there is a long-standing, influential tradition that links explanation intimately to causation: we often explain by providing accurate information about the causes of the phenomenon to be explained. Such causal accounts have been the received view of the nature of explanation, particularly in philosophy of science, since the 1980s. However, philosophers have recently begun to break with this causal tradition by shifting their focus to kinds of explanation that do not turn on causal information. The increasing recognition of the importance of such non-causal explanations in the sciences and elsewhere raises pressing questions for philosophers of explanation. What is the nature of non-causal explanations – and which theory best captures it? How do non-causal explanations relate to causal ones? How are non-causal explanations in the sciences related to those in mathematics and metaphysics? This volume of new essays explores answers to these and other questions at the heart of contemporary philosophy of explanation. The essays address these questions from a variety of perspectives, including general accounts of non-causal and causal explanations, as well as a wide range of detailed case studies of non-causal explanations from the sciences, mathematics and metaphysics.
This paper examines explanations that turn on non-local geometrical facts about the space of possible configurations a system can occupy. I argue that it makes sense to contrast such explanations from ‘geometry of motion’ with causal explanations. I also explore how my analysis of these explanations cuts across the distinction between kinematics and dynamics.
The idea of ‘modal fields’ is inspired by regional and pluralistic ontologies, which were sketched and developed by Hegel, Husserl and especially Nicolai Hartmann. It suggests that the world is structured by spheres which are not reducible to each other, and that modal fields denote the scope of real possibilities inside the spheres. It is, for example, possible to distinguish between physical, biological, ecological, economic and technological possibilities/modal fields. It is also possible to define, for the purpose of scientific research, very specific modal fields. For example, we can ask “What are the physiological or social possibilities of ants?”, or “What are the social and psychological possibilities of fundamental religious sects?” It is possible to apply this ontological theory to philosophy of science in order to clarify the scope and limits of causal explanations and hermeneutic understanding especially in human and environmental sciences. In general, this ontological theory serves as a fruitful basis for the kind of scientific thinking which is open to counterfactuals and possibilities and which considers deterministic causal thinking too restricted for human and environmental sciences. On the other hand, this theory avoids the individualistic and anthropocentric presuppositions of hermeneutical understanding, connecting it to the real subjective and objective possibilities in a certain historical situation.
There is much to admire in this book. As a rigorous and systematic physics-oriented presentation of an austere empiricist fundamental metaphysics, it has no real rivals. The clarity with which the overall vision is presented will provide a valuable stalking-horse for those who would defend less austere approaches in the future. Esfeld and Deckert never shy away from the radical consequences of their approach, or try to disguise its revisionary nature. I also found several points of agreement with Esfeld and Deckert’s metaphysical outlook. In particular, I thought their form of structural realism sophisticated and plausible and their application of it to contemporary physics salutary. Fundamental metaphysics would be a more respectable discipline if all of its exponents felt the need to show how their preferred ontology plays out in the context of real physics. These points of agreement noted, I will concentrate in these comments on points where I disagree with Esfeld and Deckert. From my perspective, the metaphysical project of the book is subject to two serious objections: firstly, it remains insufficiently naturalistic and overly aprioristic, and secondly, it drains much of the explanatory power out of fundamental physics.