Some how-possibly explanations have epistemic value because they are epistemically possible; we cannot rule out their truth. One paradoxical implication of that proposal is that epistemic value may be obtained from mere ignorance: the less we know, the more is epistemically possible. This chapter examines a particular class of problematic epistemically possible how-possibly explanations, viz. *epistemically opaque* how-possibly explanations. These are how-possibly explanations justified by an epistemically opaque process. How could epistemically opaque how-possibly explanations have epistemic value if they result from a process about which we lack knowledge or understanding? This chapter proposes three different strategies to salvage epistemic value from epistemic opacity, namely salvaging value from 1) functional transparency, 2) modal operator interpretation, and 3) pursuitworthiness. It illustrates these strategies with cases from deep neural network modeling.
I present a problem for theories of explanation, concerning explanations involving disjunctive properties. The problem is particularly acute for the explanatory non-fundamentalist, according to whom non-fundamental scientific explanations are sometimes superior to fundamental physical explanations. I criticise solutions to the problem due to Woodward, Strevens and Sober, and Lewis, and then defend a solution inspired by an account of non-fundamental laws recently defended by Callender and Cohen.
Many scientists and philosophers of science think that beauty should play a role in theory selection. Physicists like Paul Dirac and Steven Weinberg explicitly claim that the ultimate explanations of the physical world must be beautiful. And philosophers of science like Peter Lipton say that we should expect the loveliest theory to also be the most likely. In this paper, I contend that these arguments from loveliness bear a striking similarity to Thomas Aquinas’ arguments from fittingness; both seem to presume that the most beautiful theory is also the most probable. To do this, I first explain the explanatory virtues that are commonly thought to be constitutive of a lovely theory. Second, I elucidate Aquinas’ arguments from fittingness and show how they work in light of his account of beauty. Lastly, I connect the two kinds of aesthetic arguments and show the extent to which they overlap.
Dasgupta poses a serious challenge to realism about natural properties. He argues that there is no acceptable explanation of why natural properties deserve the value realists assign to them, and that they consequently lack value. In response, this paper defines and defends an alternative non-explanatory account of normativity compatible with realism. Unlike Lewis and Sider, who believe it is sufficient to defend realism solely on realist terms, I engage with the challenge on unfriendly grounds by revealing a tu quoque. Dasgupta and anti-realists face a similar challenge to that directed against realism: one that not only undermines the objection to realism by legitimising non-explanatory normativity but leaves them facing a significant dilemma.
Many moral debunking arguments are driven by the idea that the correlation between our moral beliefs and the moral truths is a big coincidence, given a robustly realist conception of morality. One influential response is that the correlation is not a coincidence because there is a common explainer of our moral beliefs and the moral truths. For example, the reason that I believe that I should feed my child is that feeding my child helps them to survive, and natural selection instills in me beliefs and dispositions that help my children survive, since that is conducive to my genes continuing through the generations. Similarly, the reason that it's morally good to feed my child is that it helps them to survive, and survival is morally valuable. But if we look at some cases from scientific practice, and from everyday life, we can see, I argue, why this response fails. A correlation can be coincidental even if there is a common explainer. I give an account of the nature of coincidence that draws upon recent literature on scientific explanation and argue that the correlation between moral belief and moral truth is a coincidence, even given such common explainers. And I use this to defend a certain form of debunking argument.
This book belongs to a recent current in analytic philosophy concerned with the role of mathematics in science, and is intended as a contribution to the recent philosophical discussion of the explanatory value of mathematics in science and of its contribution to our understanding of nature. The book's main objective is to present a philosophical theory of how mathematics can contribute to the understanding of natural phenomena without playing an explanatory role with respect to them.
The recent literature on causality has seen the introduction of several distinctions within causality, which are thought to be important for understanding the widespread scientific practice of focusing causal explanations on a subset of the factors that are causally relevant for a phenomenon. Concepts used to draw such distinctions include, among others, stability, specificity, proportionality, or actual-difference making. In this contribution, I propose a new distinction that picks out an explanatorily salient class of causes in biological systems. Some select causes in complex biological systems, I argue, have the property of enabling coherent causal control of these systems. Examples of such control variables include hormones and other signaling molecules, e.g., TOR (target of rapamycin), morphogens or the products of homeotic selector genes in embryonic pattern formation. I propose an analysis of this notion based on concepts borrowed from causal graph theory.
In this paper, I offer an analysis of the radical disagreement over the adequacy of string theory. The prominence of string theory despite its notorious lack of empirical support is sometimes explained as a troubling case of science gone awry, driven largely by sociological mechanisms such as groupthink (e.g. Smolin 2006). Others, such as Dawid (2013), explain the controversy by positing a methodological revolution of sorts, according to which string theorists have quietly turned to nonempirical methods of theory assessment given the technological inability to directly test the theory. The appropriate response, according to Dawid, is to acknowledge this development and widen the canons of acceptable scientific methods. As I’ll argue, however, the current situation in fundamental physics does not require either of these responses. Rather, as I’ll suggest, much of the controversy stems from a failure to properly distinguish the “context of justification” from the “context of pursuit”. Both those who accuse string theorists of betraying the scientific method and those who advocate an enlarged conception of scientific methodology objectionably conflate epistemic justification with judgements of pursuit-worthiness. Once we get clear about this distinction and about the different norms governing the two contexts, the current situation in fundamental physics becomes much less puzzling. After defending this diagnosis of the controversy, I’ll show how the argument patterns that have been posited by Dawid as constituting an emergent methodological revolution in science are better off if reworked as arguments belonging to the context of pursuit.
Why do we value higher-level scientific explanations if, ultimately, the world is physical? An attractive answer is that physical explanations often cite facts that don’t make a difference to the event in question. I claim that to properly develop this view we need to commit to a type of deterministic chance. And in doing so, we see the theoretical utility of deterministic chance, giving us reason to accept a package of views including deterministic chance.
Theories of explanation need to account for a puzzling feature of our explanatory practices: the fact that we prefer explanations that are relatively abstract but only moderately so. Contra Franklin-Hall, I argue that the interventionist account of explanation provides a natural and elegant explanation of this fact. By striking the right balance between specificity and generality, moderately abstract explanations optimally subserve what interventionists regard as the goal of explanation, namely identifying possible interventions that would have changed the explanandum.
Several philosophers argue that truth cannot be science’s sole epistemic goal, for it would fail to do justice to several scientific practices that advance understanding. I challenge these arguments, but only after making a small concession: science’s sole epistemic goal is not truth as such; rather, its goal is finding true answers to relevant questions. Using examples from the natural and social sciences, I then show that scientific understanding’s epistemically valuable features are either true answers to relevant questions or a means thereof.
I give an account of what makes an event a coincidence. I start by critically discussing a couple of other approaches to the notion of coincidence, particularly that of Lando (2017), before developing my own view. The central idea of my view is that the correct understanding of coincidences is closely related to our understanding of the correct 'level' or 'grain' of explanation. Coincidences have a kind of explanatory deficiency; if they did not have this deficiency they would not be coincidences. This deficiency, I claim, is the same explanatory deficiency as when we give low-level explanations of special science phenomena. Such explanations are typically too specific and not robust enough. I claim that there is this same badness in purported explanations of coincidences. I cash out this idea by sketching an account of explanatory goodness, an account of what makes explanations better or worse, and using that to give a more precise account of coincidences.
In Economics Rules, Dani Rodrik (2015) argues that what makes economics powerful despite the limitations of each and every model is its diversity of models. Rodrik suggests that the diversity of models in economics improves its explanatory capacities, but he does not fully explain how. I offer a clearer picture of how models relate to explanations of particular economic facts or events, and suggest that the diversity of models is a means to better economic explanations.
Some explanations in social science, psychology and biology belong to a higher level than other explanations. And higher explanations possess the virtue of abstracting away from the details of lower explanations, many philosophers argue. As a result, these higher explanations are irreplaceable. And this suggests that there are genuine higher laws or patterns involving social, psychological and biological states. I show that this ‘abstractness argument’ is really an argument schema, not a single argument. This is because the argument uses the ‘is lower than’ relation, and this relation admits of different readings. I then suggest four rigorous definitions of the ‘is lower than’ relation, and show that the abstractness argument’s prospects are much brighter for some of these definitions than for others. To show this, I evaluate the so-called ‘disjunctive threat’ to the abstractness argument.
Do multi-level selection explanations of the evolution of social traits deepen the understanding provided by single-level explanations? Central to the former is a mathematical theorem, the multi-level Price decomposition. I build a framework through which to understand the explanatory role of such non-empirical decompositions in scientific practice. Applying this general framework to the present case places two tasks on the agenda. The first task is to distinguish the various ways of suppressing within-collective variation in fitness, and moreover to evaluate their biological interest. I distinguish four such ways: increasing retaliatory capacity, homogenising assortment, and collapsing either fitness structure or character distribution to a mean value. The second task is to discover whether the third term of the Price decomposition measures the effect of any of these hypothetical interventions. On this basis I argue that the multi-level Price decomposition has explanatory value primarily when the sharing-out of collective resources is 'subtractable'. Thus its value is more circumscribed than its champions Sober and Wilson (1998) suppose.
The relation between probabilistic and explanatory reasoning is a classical topic in philosophy of science. Most philosophical analyses are concerned with the compatibility of Inference to the Best Explanation with probabilistic, Bayesian inference, and the impact of explanatory considerations on the assignment of subjective probabilities. This paper reverses the question and asks how causal and explanatory considerations are affected by probabilistic information. We investigate how probabilistic information determines the explanatory value of a hypothesis, and in which sense folk explanatory practice can be said to be rational. Our study identifies three main factors in reasoning about an explanatory hypothesis: cognitive salience, rational acceptability and logical entailment. This corresponds well to the variety of philosophical accounts of explanation. Moreover, we show that these factors are highly sensitive to manipulations of probabilistic information. This finding suggests that probabilistic reasoning is a crucial part of explanatory inferences, and it motivates new avenues of research in the debate about Inference to the Best Explanation and probabilistic measures of explanatory power.
This dissertation argues that we have no good reason to accept any one theory of properties as correct. To show this, I present three possible bases for theory-choice in the properties debate: coherence, explanatory adequacy, and explanatory value. Then I argue that none of these bases resolve the underdetermination of our choice between theories of properties. First, I argue considerations about coherence cannot resolve the underdetermination, because no traditional theory of properties is obviously incoherent. Second, I argue considerations of explanatory adequacy cannot resolve the underdetermination, because every traditional theory of properties lacks the theoretical resources to adequately explain resemblance, causal powers, and predication. However, these inadequacies are easily remedied with theoretical modifications. But this results in an overabundance of modified, but adequate, theories of properties. Third, I argue explanatory virtues cannot resolve the underdetermination, because we have no reason to think explanatory virtues make theories of properties more likely to be true. I reject the common argument that explanatory virtues are truth-conducive in theories of properties because they are truth-conducive in scientific theories. Since none of the three bases for theory choice can resolve the underdetermination, I conclude that we have no good reason to accept any one theory of properties as correct. Finally, I consider the possibility of choosing one theory over the others on pragmatic grounds. But I argue that pragmatic grounds cannot resolve the underdetermination either. Instead, I suggest we accept the view I call 'instrumental pluralism,' which allows practitioners to use whatever theory of properties they find useful. Adviser: Jennifer McKitrick.
Recent literature in philosophy of science has addressed purported notions of explanatory virtues—‘explanatory power’, ‘unification’, and ‘coherence’. In each case, a probabilistic relation between a theory and data is said to measure the power of an explanation, or degree of unification, or degree of coherence. This essay argues that the measures do not capture cases that are paradigms of scientific explanation, that the available psychological evidence indicates that the measures do not capture judgements of explanatory power, and, finally, that the measures do not provide useful methods for selecting hypotheses. Contents: 1. Introduction; 2. Some Proposed Measures of Explanatory Virtues; 3. Descriptive Inadequacy; 3.1 Excellent but false explanations; 3.2 Causal explanation; 4. Psychological Inadequacy; 5. Finding the Truth; 6. Conclusion.
Objectivism about evidential support is the thesis that facts about the degree to which a body of evidence supports a hypothesis are objective rather than depending on subjective factors like one’s own language or epistemic values. Objectivism about evidential support is key to defending a synchronic, time-slice-centric conception of epistemic rationality, on which what you ought to believe at a time depends only on what evidence you have at that time, and not on how you were at previous times. Here, I defend a version of objectivism about evidential support on which facts about evidential support are grounded in facts about explanatoriness.
In his “EMU and Inference,” Mark Newman (European Journal for Philosophy of Science, 4:55–74, 2014) provides several interesting challenges to my explanatory model of understanding, EMU (2012, 15–37). I offer three replies to Newman’s paper. First, Newman incorrectly attributes to EMU an overly restrictive view about the role of abilities in understanding. Second, his main argument against EMU rests on this incorrect attribution, and would still face difficulties even if this attribution were correct. Third, contrary to his stated ambitions, his own inferential model of understanding does not have any distinctive advantages over EMU. These three points defend EMU against Newman’s objections.
This paper argues that in at least some cases, one proof of a given theorem is deeper than another by virtue of supplying a deeper explanation of the theorem — that is, a deeper account of why the theorem holds. There are cases of scientific depth that also involve a common abstract structure explaining a similarity between two otherwise unrelated phenomena, making their similarity no coincidence and purchasing depth by answering why questions that separate, dissimilar explanations of the two phenomena cannot correctly answer. The connections between explanation, depth, unification, power, and coincidence in mathematics and science are compared.
Although explanation is a central topic in the philosophy of science, there is an important issue concerning explanation that has not been discussed much, namely, why some phenomena need an explanation while others do not. In this paper we first explain why this is an important issue, and then discuss two accounts of the need for explanation that can be gathered from the literature. We argue that both accounts are inadequate. The main purpose of the paper is, however, to offer a normative account of the need for explanation. On this account, a demand for explanation is possible only against the background of a certain understanding of the world. It is the map we are using that provides us with the concepts and beliefs in terms of which we can ask for an explanation. And a phenomenon needs explanation only when it does not fit the map—the phenomenon’s not fitting the map is a good reason for us to look for an explanation of it. This account not only captures our pre-theoretical understanding of the need for explanation, but also is in accordance with our practice of demanding an explanation.
According to Principles of Sufficient Reason, every truth (in some relevant group) has an explanation. One of the most popular defenses of Principles of Sufficient Reason has been the presupposition of reason defense, which takes endorsement of the defended PSR to play a crucial role in our theory selection. According to recent presentations of this defense, our method of theory selection often depends on the assumption that, if a given proposition is true, then it has an explanation, and this will only be justified if we think this holds for all propositions in the relevant group. I argue that this argument fails even when restricted to contingent propositions, and even if we grant that there is no non-arbitrary way to divide true propositions that have explanations from those that lack them. Further, we can give an alternate explanation of what justifies our selecting theories on the basis of explanatory features: the crucial role is not played by an endorsement of a PSR, but rather by our belief that, prima facie, we should prefer theories that exemplify explanatory power to greater degrees than their rivals. This guides our theory selection in a manner similar to ontological parsimony and theoretical simplicity. Unlike a PSR, our belief about explanatory power gives us a prima facie guiding principle, which provides justification in the cases where we think we have it, and not in the cases where we think we don't.
If counterfactual dependence is sufficient for causation and if omissions can be causes, then all events have many more causes than common sense tends to recognize. This problem is standardly addressed by appeal to pragmatics. However, Carolina Sartorio has recently raised what I shall argue is a more interesting problem concerning omissions for counterfactual theories of causation—more interesting because it demands a more subtle pragmatic solution. I discuss the relationship between the idea that causes are proportional to their effects, the idea that causation is contrastive, and the question of the dimensions along which causal explanations should be evaluated with respect to one another.
The mechanistic and causal accounts of explanation are often conflated to yield a ‘causal-mechanical’ account. This paper prizes them apart and asks: if the mechanistic account is correct, how can causal explanations be explanatory? The answer to this question varies according to how causality itself is understood. It is argued that difference-making, mechanistic, dualist and inferentialist accounts of causality all struggle to yield explanatory causal explanations, but that an epistemic account of causality is more promising in this regard.
We discuss the probabilistic analysis of explanatory power and prove a representation theorem for posterior ratio measures recently advocated by Schupbach and Sprenger. We then prove a representation theorem for an alternative class of measures that rely on the notion of relative probability distance. We end up endorsing the latter, as relative distance measures share the properties of posterior ratio measures that are genuinely appealing, while overcoming a feature that we consider undesirable. They also yield a telling result concerning formal accounts of explanatory power versus inductive confirmation, thereby bridging our discussion to a so-called no-miracle argument.
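For orientation, the posterior ratio measure this abstract refers to can be stated compactly. The following is a minimal sketch: the formula is Schupbach and Sprenger's published measure of explanatory power; the function name and the toy probabilities are my own illustration, not part of the paper.

```python
def explanatory_power(p_h_given_e, p_h_given_not_e):
    """Schupbach-Sprenger posterior ratio measure of how well
    hypothesis h explains evidence e:

        E(e, h) = (P(h|e) - P(h|~e)) / (P(h|e) + P(h|~e))

    Ranges from -1 (e maximally disconfirms h) to +1 (e maximally
    confirms h); 0 when e is probabilistically irrelevant to h."""
    return (p_h_given_e - p_h_given_not_e) / (p_h_given_e + p_h_given_not_e)

# Purely illustrative numbers: h is far more probable given e than given ~e.
print(explanatory_power(0.9, 0.1))  # 0.8
print(explanatory_power(0.5, 0.5))  # 0.0
```

On this measure, explanatory power is positive exactly when the evidence raises the hypothesis's probability, which is the kind of structural property the representation theorems mentioned above take as their starting point.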
The aim of this paper is to analyse the role that the distinction between principle and constructive theories has in the question of the explanatory power of Special Relativity. We show how the distinction breaks down at the explanatory level. We assess Harvey Brown’s (2005) claim that, as a principle theory, Special Relativity lacks explanatory power, and criticize it as based, we argue, on an unrealistic picture of the kind of explanations provided by principle (and constructive) theories. Finally, we claim that the structural account of explanation (Hughes 1989b) captures the explanatory success of Special Relativity.
Recently, in attempting to account for explanatory reasoning in probabilistic terms, Bayesians have proposed several measures of the degree to which a hypothesis explains a given set of facts. These candidate measures of "explanatory power" are shown to have interesting normative interpretations and consequences. What has not yet been investigated, however, is whether any of these measures are also descriptive of people’s actual explanatory judgments. Here, I present my own experimental work investigating this question. I argue that one measure in particular is an accurate descriptor of explanatory judgments. Then, I discuss some interesting implications of this result for both the epistemology and the psychology of explanatory reasoning.
In the mid-1800s, there was much debate about the origin or 'exciting cause' of cholera. Despite much confusion surrounding the disease, the so-called miasma theory emerged as the prevalent account about cholera's cause. Going against this mainstream view, the British physician John Snow inferred several things about cholera's origin and pathology that no one else inferred. Without observing the Vibrio cholerae, however (data unavailable to Snow and his colleagues), there was no way of settling the question of what exactly was causing cholera and how, or if, it was passed on. The question then arises as to how Snow arrived at conclusions so systematically different from those of his opponents. In this paper, I want to look at Snow's reasoning in some detail, and show that there were certain principles, explanatory power in particular, that were epistemologically important to Snow in their own right. I will show that Snow himself takes explanatory power to be an epistemic property, and makes explicit links between explanatory power and confirmation. Systematically juxtaposing Snow's claims and his opponents', I will show that Snow was right to tout the explanatory power of his theory, and that his conclusions about the epistemic superiority of his theory over that of the miasmatists' were justified.
Paul Needham has claimed in several recent papers that Dalton’s chemical atomism was not explanatory. I respond to his criticism of Dalton by arguing that explanation admits of degrees and that under a view that allows for a spectrum of explanatory value, it is possible to see ample worth in Dalton’s atomistic explanations. Furthermore, I argue that even Duhem, who rejected atomism, acknowledged the explanatory worth of Dalton’s atomism.
The value of optimality modeling has long been a source of contention amongst population biologists. Here I present a view of the optimality approach as at once playing a crucial explanatory role and yet also depending on external sources of confirmation. Optimality models are not alone in facing this tension between their explanatory value and their dependence on other approaches; I suspect that the scenario is quite common in science. This investigation of the optimality approach thus serves as a case study, on the basis of which I suggest that there is a widely felt tension in science between explanatory independence and broad epistemic interdependence, and that this tension influences scientific methodology.
A common argument against explanatory reductionism is that higher‐level explanations are sometimes or always preferable because they are more general than reductive explanations. Here I challenge two basic assumptions that are needed for that argument to succeed. It cannot be assumed that higher‐level explanations are more general than their lower‐level alternatives or that higher‐level explanations are general in the right way to be explanatory. I suggest a novel form of pluralism regarding levels of explanation, according to which explanations at different levels are preferable in different circumstances because they offer different types of generality, which are appropriate in different circumstances of explanation.
This paper's purpose is to set forth the conditions of explanation in the domain of formal modelling of social action. Explanation is defined as an adequate account of the underlying factors bringing about a phenomenon. The modelling of a social phenomenon can claim explanatory value in this sense if the following two conditions are fulfilled. (1) The generative mechanisms involved translate the effects of real factors abstracted from their phenomenal context, not those of purely ideal ones. (2) The explanatory hypotheses, which account for the effects of explanatory factors, and the purely descriptive hypotheses, which introduce conceptual simplifications and summarise complex secondary mechanisms, are relatively independent from each other with regard to the phenomenon represented. This condition subjects the model to testing by alternatives through the development of purely descriptive hypotheses in the sense of explanatory or analytical realism.
Although prediction has been largely absent from discussions of explanation for the past 40 years, theories of explanation can gain much from a reintroduction. I review the history that divorced prediction from explanation, examine the proliferation of models of explanation that followed, and argue that accounts of explanation have been impoverished by the neglect of prediction. Instead of a revival of the symmetry thesis, I suggest that explanation should be understood as a cognitive tool that assists us in generating new predictions. This view of explanation and prediction clarifies what makes an explanation scientific and why inference to the best explanation makes sense in science. Received August 2009; revised September 2009.
In this chapter I will employ a well-known scientific research heuristic that studies how something works by focusing on circumstances in which it does not work. Rather than trying to describe what scientific understanding would ideally look like, I will try to learn something about it by observing mundane cases where understanding is partly illusory. My main thesis is that scientists are prone to the illusion of depth of understanding (IDU), and as a consequence they sometimes overestimate the detail, coherence, and depth of their understanding. I will analyze the notion of understanding and its relation to a sense of understanding. In order to make plausible the claim that these are often disconnected, I will describe an interesting series of psychological experiments by Frank Keil and his coauthors that suggests that ordinary people routinely overestimate the depth of their understanding. Then I will argue that we should take seriously the possibility that scientific cognition is also affected by IDU and spell out some possible causes of explanatory illusions in science. I will conclude this chapter by discussing how scientific explanatory practices could be improved and how the philosophy of science might be able to contribute to this process.
Explanatory inquiry characteristically begins with a certain puzzlement about the world. But why do certain situations elicit our puzzlement while others leave us, in some epistemically relevant sense, cold? Moreover, what exactly is involved in the move from a state of puzzlement to a state where one's puzzlement is satisfied? In this paper I try to answer both of these questions. I also suggest ways in which our account of scientific rationality might benefit from having a better sense of the kind of epistemic goal we are trying to realize, when we engage in our explanatory inquiries. Contents: Two Senses; The Need for Explanation; An Example; Proto-understanding; Conclusion.