A powerful argument for anti-reductionism turns on the premise that the biological, behavioral, and social sciences are, in the way that they explain their characteristic subject matters, in some sense autonomous from physics. The argument is formulated and strengthened in this paper, and then undermined by showing that a reductionist account of explanation is not only consistent with, but provides a compelling account of, explanatory autonomy. Two kinds of explanatory abstraction, objective and contextual, play important roles in the story.
Bayesian confirmation theory is the predominant approach to confirmation in late twentieth-century philosophy of science. It has many critics, but no rival theory can claim anything like the same following. The popularity of the Bayesian approach is due to its flexibility, its apparently effortless handling of various technical problems, the existence of various a priori arguments for its validity, and its injection of subjective and contextual elements into the process of confirmation in just the places where critics of earlier approaches had come to think that subjectivity and sensitivity to context were necessary.
Assumptions of stochastic independence are crucial to statistical models in science. But under what circumstances is it reasonable to suppose that two events are independent? When they are not causally or logically connected, so the usual story goes. But scientific models frequently treat causally dependent events as stochastically independent, raising the question whether there are kinds of causal connection that do not undermine stochastic independence. This paper provides one piece of an answer to this question, treating the simple case of two tossed coins with and without a midair collision.
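The flavor of the question can be conveyed with a toy simulation of my own (it is not the model analyzed in the paper): two coin outcomes that are causally connected, because one coin's spin feeds into the other's, yet whose joint frequencies still factorize, because each outcome depends only on the fine-grained parity of its spin.

```python
import random

def toss_pair(n_grain=1000):
    # Each coin's outcome is the parity of a fine-grained function of its
    # spin parameter, so the outcome flips rapidly as the spin varies.
    u = random.random()                # coin 1's spin
    w = random.random()                # coin 2's independent contribution
    c1 = int(n_grain * u) % 2
    # Toy "collision": coin 2's effective spin is causally influenced by
    # coin 1's spin (u appears in both outcomes).
    c2 = int(n_grain * (u + w)) % 2
    return c1, c2

def joint_stats(trials=200_000):
    hh = h1 = h2 = 0
    for _ in range(trials):
        c1, c2 = toss_pair()
        h1 += c1
        h2 += c2
        hh += c1 * c2
    return h1 / trials, h2 / trials, hh / trials

random.seed(0)
p1, p2, p12 = joint_stats()
print(f"P(H1)={p1:.3f}  P(H2)={p2:.3f}  P(H1&H2)={p12:.3f}  P(H1)P(H2)={p1 * p2:.3f}")
```

Despite the causal dependence of the second outcome on the first coin's spin, the joint frequency very nearly equals the product of the marginals.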
It is argued that the relation of instance confirmation has a role to play in scientific methodology that complements, rather than competes with, a modern account of inductive support such as Bayesian confirmation theory. When an instance confirms a hypothesis, it provides inductive support, but it also provides two things that other inductive supporters normally do not: first, a connection to “empirical data” that makes science epistemically special, and second, inductive support not only for the hypothesis as a whole, but for its parts. Further, when it is conceived in the right way, instance confirmation can duck the arguments most often thought to refute it. A causal account of instantiation, and thus of instance confirmation, is offered that looks to deliver on all of the foregoing promises.
Science is epistemically special, or so I will assume: it is better able to produce knowledge about the workings of the world than other knowledge-directed pursuits. Further, its superior epistemic powers are due to its being in some sense especially empirical: in particular, science puts great weight on a form of inductive reasoning that I call empirical confirmation. My aim in this paper is to investigate the nature of science’s “empiricism”, and to provide a preliminary explanation of the connection between empirical confirmation and epistemic efficacy. I will try to convince you that the place to find an account of empirical confirmation is the dusty, long-neglected instantialist account of scientific inference offered by mid-century logical empiricists. Some revision of instantialism will be required. As for what is advantageous in empirical confirmation, I propose that it is an unusual degree of independence from background belief.
Understanding without explanation? Impossible, or so I will argue – in the case of science, at least. More particularly, I defend in this paper a version of the following simple view concerning the connection between scientific explanation and understanding: scientific understanding is that state produced, and only produced, by grasping a correct explanation. The simple view, I will conclude, ought to be regarded as one part of a bigger picture, in which "understanding why", "understanding that", and "understanding with" are distinguished. But the central idea, that scientific understanding is a matter of having the right epistemic relation to an explanation or explanations, will remain untouched.
I aim to reconcile two apparently conflicting theses: (a) Everything that can be explained, can be explained in purely physical terms, that is, using the machinery of fundamental physics, and (b) some properties that play an explanatory role in the higher level sciences are irreducible in the strong sense that they are physically undefinable: their nature cannot be described using the vocabulary of physics. I investigate the contribution that physically undefinable properties typically make to explanations in the high-level sciences, and I show that when they are explanatorily relevant, it is in virtue of their extension (or something close) alone. They are irreducible because physics cannot capture their nature; this is no obstacle, however, to physics' more or less capturing their extension, which is all that it need do to duplicate their explanatory power. In the course of the argument, I sketch the outlines of an account of the explanation of physically contingent regularities, such as the regularities found in most branches of biological inquiry, at the center of which is an account of the nature of contingent, empirical bridge principles.
When new theoretical terms are introduced into scientific discourse, prevailing accounts imply, analytic or semantic truths come along with them, by way of either definitions or reference-fixing descriptions. But there appear to be few or no analytic truths in scientific theory, which suggests that the prevailing accounts are mistaken. This paper looks to research on the psychology of natural kind concepts to suggest a new account of the introduction of theoretical terms that avoids both definition and reference-fixing description. At the core of the account is a novel psychological process that I call introjection.
A theme of much work taking an “economic approach” to the study of science is the interaction between the norms of individual scientists and those of society at large. Though drawing from the same suite of formal methods, proponents of the economic approach offer what are in substantive terms profoundly different explanations of various aspects of the structure of science. The differences are illustrated by comparing Strevens's explanation of the scientific reward system (the “priority rule”) with Max Albert's explanation of the prevalence of “high methodological standards” in science. Some objections to the economic approach as a whole are then briefly considered.
This paper offers a metaphysics of physical probability in (or if you prefer, truth conditions for probabilistic claims about) deterministic systems, based on an approach to the explanation of probabilistic patterns in deterministic systems called the method of arbitrary functions. Much of the appeal of the method is its promise to provide an account of physical probability on which probability assignments have the ability to support counterfactuals about frequencies. It is argued that the eponymous arbitrary functions are of little philosophical use, but that facts about frequencies can be substituted for them without losing the ability to provide counterfactual support. The result is an account of probability in deterministic systems that has a “propensity-like” look and feel, yet which requires no supplement to the standard modern empiricist tool kit of particular matters of fact and principles of physical dynamics.
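The intuition behind the method of arbitrary functions can be sketched with a hypothetical wheel-of-fortune simulation (my illustration; the function names and numbers are invented, not drawn from the paper): because the outcome alternates rapidly as a function of initial spin speed, almost any smooth distribution over speeds yields red about half the time.

```python
import random

def lands_red(speed, grain=500):
    # A microconstant dynamics: as initial spin speed varies, the wheel's
    # final rest color alternates rapidly between red and black.
    return int(speed * grain) % 2 == 0

def red_frequency(sampler, trials=100_000):
    return sum(lands_red(sampler()) for _ in range(trials)) / trials

random.seed(1)
# Three quite different ("arbitrary") smooth initial-speed distributions.
samplers = {
    "uniform":    lambda: random.uniform(1.0, 2.0),
    "gaussian":   lambda: abs(random.gauss(3.0, 0.5)),
    "triangular": lambda: random.triangular(0.5, 4.0, 1.0),
}
results = {name: red_frequency(s) for name, s in samplers.items()}
for name, freq in results.items():
    print(f"{name:10s} -> frequency of red = {freq:.3f}")
```

The frequency of red comes out very close to one half under all three distributions; which smooth distribution governs the initial speeds is, in this sense, arbitrary.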
How to regard the weight we give to a proposition on the grounds of its being endorsed by an authority? I examine this question as it is raised within the epistemology of science, and I argue that “authority-based weight” should receive special handling, for the following reason. Our assessments of other scientists’ competence or authority are nearly always provisional, in the sense that, to save time and money, they are not made nearly as carefully as they could be; indeed, they are typically made on the basis of only a small portion of the available evidence. Consequently, we need to represent the authority-based elements of our epistemic attitudes in such a way as to allow the later revision of those elements, in case we decide in the light of new priorities that a more conscientious assessment is warranted. I look to the literature in confirmation theory, statistics, and economics for a semiformal model of this revision process, and make a particular proposal of my own. The discussion also casts some light on the question of why certain aspects of science’s epistemic state are not made public.
Elliott Sober argues that the statistical slogan “Absence of evidence is not evidence of absence” cannot be taken literally: it must be interpreted charitably as claiming that the absence of evidence is (typically) not very much evidence of absence. I offer an alternative interpretation, on which the slogan claims that absence of evidence is (typically) not objective evidence of absence. I sketch a definition of objective evidence, founded in the notion of an epistemically objective likelihood, and I show that in Sober’s paradigm case, the slogan can, on this understanding, be sustained.
Approaches to explanation -- Causal and explanatory relevance -- The kairetic account of difference-making -- The kairetic account of explanation -- Extending the kairetic account -- Event explanation and causal claims -- Regularity explanation -- Abstraction in regularity explanation -- Approaches to probabilistic explanation -- Kairetic explanation of frequencies -- Kairetic explanation of single outcomes -- Looking outward -- Looking inward.
The generalizations found in biology, psychology, sociology, and other high-level sciences are typically physically contingent. You might conclude that they play only a limited role in scientific investigation, on the grounds that physically contingent generalizations offer no or only feeble counterfactual support. But the link between contingency and counterfactual support is more complex than is commonly supposed. A certain class of physically contingent generalizations, comprising many, perhaps the vast majority, of those in the high-level sciences, provides strong counterfactual support of just the sort that appears to be scientifically important. This paper explains why.
Cases of overdetermination or preemption continue to play an important role in the debate about the proper interpretation of causal claims of the form "C was a cause of E". I argue that the best treatment of preemption cases is given by Mackie's venerable INUS account of causal claims. The Mackie account suffers, however, from problems of its own. Inspired by its ability to handle preemption, I propose a dramatic revision to the Mackie account – one that Mackie himself would certainly have rejected – to solve these difficulties. The result is, I contend, a very attractive account of singular causal claims.
Why do we represent the world around us using causal generalizations, rather than, say, purely statistical generalizations? Do causal representations contain useful additional information, or are they merely more efficient for inferential purposes? This paper considers the second kind of answer: it investigates some ways in which causal cognition might aid us not because of its expressive power, but because of its organizational power. Three styles of explanation are considered. The first, building on the work of Reichenbach in "The Direction of Time", points to causal representation as especially efficient for predictive purposes in a world containing certain pervasive patterns of conditional independence. The second, inspired by work of Woodward and others, finds causal representation to be an excellent vehicle for representing all-important relations of manipulability. The third, based in part on my own work, locates the importance of causal cognition in the special role it reserves for information about underlying mechanisms. All three varieties of explanation show promise, but particular emphasis is placed on the third.
A physical system has a chaotic dynamics, according to the dictionary, if its behavior depends sensitively on its initial conditions, that is, if systems of the same type starting out with very similar sets of initial conditions can end up in states that are, in some relevant sense, very different. But when science calls a system chaotic, it normally implies two additional claims: that the dynamics of the system is relatively simple, in the sense that it can be expressed in the form of a mathematical expression having relatively few variables, and that the geometry of the system’s possible trajectories has a certain aspect, often characterized by a strange attractor.
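Sensitive dependence on initial conditions can be illustrated with the logistic map, a standard textbook example of a chaotic dynamics (my illustration, not drawn from the text): a system that is "relatively simple" in exactly the sense just described, having a single variable.

```python
def logistic(x, r=4.0):
    # The logistic map: a one-variable dynamics that is chaotic at r = 4.
    return r * x * (1 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two initial conditions differing by one part in 200,000.
a = trajectory(0.200000, 50)
b = trajectory(0.200001, 50)
for t in (0, 10, 25, 50):
    print(f"step {t:2d}: |difference| = {abs(a[t] - b[t]):.6f}")
```

Within a few dozen iterations the initially negligible difference grows to the size of the system's whole range: the two trajectories become, in the relevant sense, very different.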
The three cardinal aims of science are prediction, control, and explanation; but the greatest of these is explanation. Also the most inscrutable: prediction aims at truth, and control at happiness, and insofar as we have some independent grasp of these notions, we can evaluate science’s strategies of prediction and control from the outside. Explanation, by contrast, aims at scientific understanding, a good intrinsic to science and therefore something that it seems we can only look to science itself to explicate.
The posthumous publication, in 1763, of Thomas Bayes’ “Essay Towards Solving a Problem in the Doctrine of Chances” inaugurated a revolution in the understanding of the confirmation of scientific hypotheses—two hundred years later. Such a long period of neglect, followed by such a sweeping revival, ensured that it was the inhabitants of the latter half of the twentieth century above all who determined what it was to take a “Bayesian approach” to scientific reasoning.
Robert Merton observed that better-known scientists tend to get more credit than less well-known scientists for the same achievements; he called this the Matthew effect. Scientists themselves, even those eminent researchers who enjoy its benefits, regard the effect as a pathology: it results, they believe, in a misallocation of credit. If so, why do scientists continue to bestow credit in the manner described by the effect? This paper advocates an explanation of the effect on which it turns out to allocate credit fairly after all, while at the same time making sense of scientists' opinions to the contrary.
To understand the behavior of a complex system, you must understand the interactions among its parts. Doing so is difficult for non-decomposable systems, in which the interactions strongly influence the short-term behavior of the parts. Science's principal tool for dealing with non-decomposable systems is a variety of probabilistic analysis that I call EPA. I show that EPA's power derives from an assumption that appears to be false of non-decomposable complex systems, in virtue of their very non-decomposability. Yet EPA is extremely successful. I aim to find an interpretation of EPA's assumption that is consistent with, indeed that explains, its success.
A recent critique of Strevens's Bayesian treatment of auxiliary hypotheses rests on a misinterpretation of Strevens's central claim about the negligibility of certain small probabilities. The present paper clarifies and proves a very general version of the claim. Sections: The project -- Clarifications -- The negligibility argument -- Generalization and proof.
Does the Bayesian theory of confirmation put real constraints on our inductive behavior? Or is it just a framework for systematizing whatever kind of inductive behavior we prefer? Colin Howson (Hume's Problem) has recently championed the second view. I argue that he is wrong, in that the Bayesian apparatus as it is usually deployed does constrain our judgments of inductive import, but also that he is right, in that the source of Bayesianism's inductive prescriptions is not the Bayesian machinery itself, but rather what David Lewis calls the “Principal Principle”.
The two major modern accounts of explanation are the causal and unification accounts. My aim in this paper is to provide a kind of unification of the causal and the unification accounts, by using the central technical apparatus of the unification account to solve a central problem faced by the causal account, namely, the problem of determining which parts of a causal network are explanatorily relevant to the occurrence of an explanandum. The end product of my investigation is a causal account of explanation that has many of the advantages of the unification account.
Science's priority rule rewards those who are first to make a discovery, at the expense of all other scientists working towards the same goal, no matter how close they may be to making the same discovery. I propose an explanation of the priority rule that, better than previous explanations, accounts for the distinctive features of the rule. My explanation treats the priority system, and more generally, any scheme of rewards for scientific endeavor, as a device for achieving an allocation of resources among different research programs that provides as much benefit as possible to society. I show that the priority system is especially well suited to finding an efficient allocation of resources in those situations, characteristic of scientific inquiry, in which any success in an endeavor subsequent to the first success brings little additional benefit to society.
This paper examines the standard Bayesian solution to the Quine–Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction. The standard solution, I argue, begs the question against those who claim that the problem has no solution. I then provide an alternative Bayesian solution that is not question-begging and that turns out to have some interesting and desirable properties not possessed by the standard solution. This solution opens the way to a satisfying treatment of a problem concerning ad hoc auxiliary hypotheses.
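For a sense of how Bayesian conditioning distributes blame after a failed prediction, here is a minimal worked example with invented numbers (it illustrates generic Bayesian updating, not the paper's own proposal):

```python
def posteriors(p_t, p_a, like):
    # p_t, p_a: independent prior probabilities of the theory T and the
    # auxiliary hypothesis A.  like[(t, a)] = P(not-e | T=t, A=a), the
    # likelihood of the failed prediction under each truth-value combination.
    joint = {(t, a): (p_t if t else 1 - p_t) * (p_a if a else 1 - p_a)
             for t in (True, False) for a in (True, False)}
    p_not_e = sum(joint[k] * like[k] for k in joint)
    post_t = sum(joint[k] * like[k] for k in joint if k[0]) / p_not_e
    post_a = sum(joint[k] * like[k] for k in joint if k[1]) / p_not_e
    return post_t, post_a

# Illustrative numbers (mine, not the paper's): e is entailed by T & A, so
# the failed prediction not-e has likelihood 0 there; elsewhere, a coin flip.
like = {(True, True): 0.0, (True, False): 0.5,
        (False, True): 0.5, (False, False): 0.5}
post_t, post_a = posteriors(p_t=0.9, p_a=0.6, like=like)
print(f"P(T | not-e) = {post_t:.3f}   P(A | not-e) = {post_a:.3f}")
```

With these numbers the auxiliary absorbs most of the blame: the theory's probability falls only from 0.9 to about 0.78, while the auxiliary's falls from 0.6 to about 0.13.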
It is widely held that the size of a probability makes no difference to the quality of a probabilistic explanation. I argue that explanatory practice in statistical physics belies this claim. The claim has gained currency only because of an impoverished conception of probabilistic processes and an unwarranted assumption that all probabilistic explanations have a single form.
Recent work on children’s inferences concerning biological and chemical categories has suggested that children (and perhaps adults) are essentialists—a view known as psychological essentialism. I distinguish three varieties of psychological essentialism and investigate the ways in which essentialism explains the inferences for which it is supposed to account. Essentialism succeeds in explaining the inferences, I argue, because it attributes to the child belief in causal laws connecting category membership and the possession of certain characteristic appearances and behavior. This suggests that the data will be equally well explained by a non-essentialist hypothesis that attributes belief in the appropriate causal laws to the child, but makes no claim as to whether or not the child represents essences. I provide several reasons to think that this non-essentialist hypothesis is in fact superior to any version of the essentialist hypothesis.
According to principles of probability coordination, such as Miller's Principle or Lewis's Principal Principle, you ought to set your subjective probability for an event equal to what you take to be the objective probability of the event. For example, you should expect events with a very high probability to occur and those with a very low probability not to occur. This paper examines the grounds of such principles. It is argued that any attempt to justify a principle of probability coordination encounters the same difficulties as attempts to justify induction. As a result, no justification can be found.
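What a coordination principle demands can be put in one line: your credence in an event should be your expectation of its objective chance. A minimal sketch (my illustration, with invented numbers):

```python
def coordinated_credence(chance_hypotheses):
    # chance_hypotheses maps each hypothesized objective chance of an event
    # to your credence that this is the true chance.  Coordination says:
    # conditional on the chance being x, set your credence in the event to x;
    # unconditionally, your credence is then the expected value of the chance.
    assert abs(sum(chance_hypotheses.values()) - 1.0) < 1e-9
    return sum(cred * x for x, cred in chance_hypotheses.items())

# Illustrative numbers: credence split over three hypotheses about a coin's bias.
cr = coordinated_credence({0.3: 0.2, 0.5: 0.5, 0.7: 0.3})
print(f"credence in heads = {cr:.2f}")
```

Here the expected chance, and hence the coordinated credence in heads, is 0.3(0.2) + 0.5(0.5) + 0.7(0.3) = 0.52.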
This paper justifies the inference of probabilities from symmetries. I supply some examples of important and correct inferences of this variety. Two explanations of such inferences -- an explanation based on the Principle of Indifference and a proposal due to Poincaré and Reichenbach -- are considered and rejected. I conclude with my own account, in which the inferences in question are shown to be warranted a posteriori, provided that they are based on symmetries in the mechanisms of chance setups.
David Lewis, Michael Thau, and Ned Hall have recently argued that the Principal Principle—an inferential rule underlying much of our reasoning about probability—is inadequate in certain respects, and that something called the ‘New Principle’ ought to take its place. This paper argues that the Principal Principle need not be discarded. On the contrary, Lewis et al. can get everything they need—including the New Principle—from the intuitions and inferential habits that inspire the Principal Principle itself, while avoiding the problems that originally caused them to abandon that principle.