The manuscript B of Aulus Gellius, containing N.A. 9-12 and 13.5, and now split at potuit/admonendi 12.10.3 between Cod. Bern. 404 and Cod. Lugd.-Bat. B.P.L. 1925, is dated by Hosius and Marshall to 1173 on the strength of the subscriptio to an astronomical work immediately preceding Gellius in Cod. Bern. 404. This work is the ‘Liber Atphargan'i [sic] in scientia astrorum et radicibus motuum caelestium’ translated by Johannes Hispalensis; the subscriptio, quoted in full by Hertz, indicates the date as follows: Expletus est die uicesimo quarto. V. mensis lunaris anni Arabum quingentesimi. XXVIIII. existente,.. die mensis Martii era. M.C.LXXIII.
I explore some of the ways that assumptions about the nature of substance shape metaphysical debates about the structure of Reality. Assumptions about the priority of substance play a role in an argument for monism, are embedded in certain pluralist metaphysical treatments of laws of nature, and are central to discussions of substantivalism and relationalism. I will then argue that we should reject such assumptions and collapse the categorical distinction between substance and property.
In this book, Michael Strevens aims to explain how simplicity can coexist with, indeed be caused by, the tangled interconnections between a complex system's ...
Scientific understanding, this paper argues, can be analyzed entirely in terms of a mental act of “grasping” and a notion of explanation. To understand why a phenomenon occurs is to grasp a correct explanation of the phenomenon. To understand a scientific theory is to be able to construct, or at least to grasp, a range of potential explanations in which that theory accounts for other phenomena. There is no route to scientific understanding, then, that does not go by way of scientific explanation.
Causation is at once familiar and mysterious. Neither common sense nor extensive philosophical debate has led us to anything like agreement on the correct analysis of the concept of causation, or an account of the metaphysical nature of the causal relation. Causation: A User's Guide cuts a clear path through this confusing but vital landscape. L. A. Paul and Ned Hall guide the reader through the most important philosophical treatments of causation, negotiating the terrain by taking a set of examples as landmarks. They clarify the central themes of the debate about causation, and cover questions about causation involving omissions or absences, preemption and other species of redundant causation, and the possibility that causation is not transitive. Along the way, Paul and Hall examine several contemporary proposals for analyzing the nature of causation and assess their merits and overall methodological cogency. The book is designed to be of value both to trained specialists and those coming to the problem of causation for the first time. It provides the reader with a broad and sophisticated view of the metaphysics of the causal relation.
What is going on under the hood in philosophical analysis, that familiar process that attempts to uncover the nature of such philosophically interesting kinds as knowledge, causation, and justice by the method of posit and counterexample? How, in particular, do intuitions tell us about philosophical reality? The standard, if unappealing, answer is that philosophical analysis is conceptual analysis—that what we learn about when we do philosophy is in the first instance facts about our own minds. Drawing on recent work on the psychology of concepts, this book proposes a new understanding of philosophical analysis, which I call inductive analysis. The thesis that philosophical analysis is inductive analysis explains how novel, substantive philosophical knowledge can be generated in the armchair. It also explains why attempts at philosophical analysis tend to fall short of providing a complete and uncontroversial definition, and supplies reasons not to lament this apparent shortcoming.
Science's priority rule rewards those who are first to make a discovery, at the expense of all other scientists working towards the same goal, no matter how close they may be to making the same discovery. I propose an explanation of the priority rule that, better than previous explanations, accounts for the distinctive features of the rule. My explanation treats the priority system, and, more generally, any scheme of rewards for scientific endeavor, as a device for achieving an allocation of resources among different research programs that provides as much benefit as possible to society. I show that the priority system is especially well suited to finding an efficient allocation of resources in those situations, characteristic of scientific inquiry, in which any success in an endeavor subsequent to the first success brings little additional benefit to society.
The two major modern accounts of explanation are the causal and unification accounts. My aim in this paper is to provide a kind of unification of the causal and the unification accounts, by using the central technical apparatus of the unification account to solve a central problem faced by the causal account, namely, the problem of determining which parts of a causal network are explanatorily relevant to the occurrence of an explanandum. The end product of my investigation is a causal account of explanation that has many of the advantages of the unification account.
A symposium on Michael Strevens' book "Tychomancy", concerning the psychological roots and historical significance of physical intuition about probability in physics, biology, and elsewhere.
According to principles of probability coordination, such as Miller's Principle or Lewis's Principal Principle, you ought to set your subjective probability for an event equal to what you take to be the objective probability of the event. For example, you should expect events with a very high probability to occur and those with a very low probability not to occur. This paper examines the grounds of such principles. It is argued that any attempt to justify a principle of probability coordination encounters the same difficulties as attempts to justify induction. As a result, no justification can be found.
How should we make choices when we know so little about our futures? L. A. Paul argues that we must view life decisions as choices to make discoveries about the nature of experience. Her account of transformative experience holds that part of the value of living authentically is to experience our lives and preferences in whatever ways they evolve.
This paper offers a metaphysics of physical probability in (or if you prefer, truth conditions for probabilistic claims about) deterministic systems based on an approach to the explanation of probabilistic patterns in deterministic systems called the method of arbitrary functions. Much of the appeal of the method is its promise to provide an account of physical probability on which probability assignments have the ability to support counterfactuals about frequencies. It is argued that the eponymous arbitrary functions are of little philosophical use, but that they can be substituted for facts about frequencies without losing the ability to provide counterfactual support. The result is an account of probability in deterministic systems that has a “propensity-like” look and feel, yet which requires no supplement to the standard modern empiricist tool kit of particular matters of fact and principles of physical dynamics.
Robert Batterman and others have argued that certain idealizing explanations have an asymptotic form: they account for a state of affairs or behavior by showing that it emerges “in the limit”. Asymptotic idealizations are interesting in many ways, but is there anything special about them as idealizations? To understand their role in science, must we augment our philosophical theories of idealization? This paper uses simple examples of asymptotic idealization in population genetics to argue for an affirmative answer and proposes a general schema for asymptotic idealization, drawing on insights from Batterman’s treatment and from John Norton’s subsequent critique.
Michael Strevens has produced an ambitious and comprehensive new account of scientific explanation. This review discusses its main themes, focusing on regularity explanation and a number of methodological concerns.
The concept of causation plays a central role in many philosophical theories, and yet no account of causation has gained widespread acceptance among those who have investigated its foundations. Theories based on laws, counterfactuals, physical processes, and probabilistic dependence and independence relations (the list is by no means exhaustive) have all received detailed treatment in recent years, and, while no account has been entirely successful, it is generally agreed that the concept has been greatly clarified by the attempts. In this magnificent book, Woodward aims to give a unified account of causation and causal explanation in terms of the notion of a manipulation (or intervention; the terms can be read interchangeably). Not only does he produce, in my view, the most illuminating and comprehensive account of causation on offer; his theory also opens a great many avenues for future work in the area, and has ramifications for many other areas of philosophy. Making Things Happen ought to be of interest not only to philosophers of causation and philosophers of science, but to any philosopher whose concerns involve assumptions about the nature of causation, laws, or explanation.
A theme of much work taking an “economic approach” to the study of science is the interaction between the norms of individual scientists and those of society at large. Though drawing from the same suite of formal methods, proponents of the economic approach offer what are in substantive terms profoundly different explanations of various aspects of the structure of science. The differences are illustrated by comparing Strevens's explanation of the scientific reward system (the “priority rule”) with Max Albert's explanation of the prevalence of “high methodological standards” in science. Some objections to the economic approach as a whole are then briefly considered.
Recent work on children’s inferences concerning biological and chemical categories has suggested that children (and perhaps adults) are essentialists—a view known as psychological essentialism. I distinguish three varieties of psychological essentialism and investigate the ways in which essentialism explains the inferences for which it is supposed to account. Essentialism succeeds in explaining the inferences, I argue, because it attributes to the child belief in causal laws connecting category membership and the possession of certain characteristic appearances and behavior. This suggests that the data will be equally well explained by a non-essentialist hypothesis that attributes belief in the appropriate causal laws to the child, but makes no claim as to whether or not the child represents essences. I provide several reasons to think that this non-essentialist hypothesis is in fact superior to any version of the essentialist hypothesis.
What do the words "ceteris paribus" add to a causal hypothesis, that is, to a generalization that is intended to articulate the consequences of a causal mechanism? One answer, which looks almost too good to be true, is that a ceteris paribus hedge restricts the scope of the hypothesis to those cases where nothing undermines, interferes with, or undoes the effect of the mechanism in question, even if the hypothesis's own formulator is otherwise unable to specify fully what might constitute such undermining or interference. I will propose a semantics for causal generalizations according to which ceteris paribus hedges deliver on this promise, because the truth conditions for a causal generalization depend in part on the—perhaps unknown—nature of the mechanism whose consequences it is intended to describe. It follows that the truth conditions for causal hypotheses are typically opaque to the very scientists who formulate and test them.
David Lewis, Michael Thau, and Ned Hall have recently argued that the Principal Principle—an inferential rule underlying much of our reasoning about probability—is inadequate in certain respects, and that something called the ‘New Principle’ ought to take its place. This paper argues that the Principal Principle need not be discarded. On the contrary, Lewis et al. can get everything they need—including the New Principle—from the intuitions and inferential habits that inspire the Principal Principle itself, while avoiding the problems that originally caused them to abandon that principle.
Bayesian confirmation theory is the predominant approach to confirmation in late twentieth-century philosophy of science. It has many critics, but no rival theory can claim anything like the same following. The popularity of the Bayesian approach is due to its flexibility, its apparently effortless handling of various technical problems, the existence of various a priori arguments for its validity, and its injection of subjective and contextual elements into the process of confirmation in just the places where critics of earlier approaches had come to think that subjectivity and sensitivity to context were necessary.
This paper examines the standard Bayesian solution to the Quine–Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction. The standard solution, I argue, begs the question against those who claim that the problem has no solution. I then provide an alternative Bayesian solution that is not question-begging and that turns out to have some interesting and desirable properties not possessed by the standard solution. This solution opens the way to a satisfying treatment of a problem concerning ad hoc auxiliary hypotheses.
Bayesian treatment of auxiliary hypotheses rests on a misinterpretation of Strevens's central claim about the negligibility of certain small probabilities. The present paper clarifies and proves a very general version of the claim.
This paper justifies the inference of probabilities from symmetries. I supply some examples of important and correct inferences of this variety. Two explanations of such inferences -- an explanation based on the Principle of Indifference and a proposal due to Poincaré and Reichenbach -- are considered and rejected. I conclude with my own account, in which the inferences in question are shown to be warranted a posteriori, provided that they are based on symmetries in the mechanisms of chance setups.
How can a model that stops short of representing the whole truth about the causal production of a phenomenon help us to understand the phenomenon? I answer this question from the perspective of what I call the simple view of understanding, on which to understand a phenomenon is to grasp a correct explanation of the phenomenon. Idealizations, I have argued in previous work, flag factors that are causally relevant but explanatorily irrelevant to the phenomena to be explained. Though useful to the would-be understander, such flagging is only a first step. Are there any further and more advanced ways that idealized models aid understanding? Yes, I propose: the manipulation of idealized models can provide considerable insight into the reasons that some causal factors are difference-makers and others are not, which helps the understander to grasp the nature of explanatory connections and so to better grasp the explanation itself.
Robert Merton observed that better-known scientists tend to get more credit than less well-known scientists for the same achievements; he called this the Matthew effect. Scientists themselves, even those eminent researchers who enjoy its benefits, regard the effect as a pathology: it results, they believe, in a misallocation of credit. If so, why do scientists continue to bestow credit in the manner described by the effect? This paper advocates an explanation of the effect on which it turns out to allocate credit fairly after all, while at the same time making sense of scientists' opinions to the contrary.
I defend a one category ontology: an ontology that denies that we need more than one fundamental category to support the ontological structure of the world. Categorical fundamentality is understood in terms of the metaphysically prior, as that in which everything else in the world consists. One category ontologies are deeply appealing, because their ontological simplicity gives them an unmatched elegance and spareness. I’m a fan of a one category ontology that collapses the distinction between particular and property, replacing it with a single fundamental category of intrinsic characters or qualities. We may describe the qualities as qualitative characters or as modes, perhaps on the model of Aristotelian qualitative (nonsubstantial) kinds, and I will use the term “properties” interchangeably with “qualities”. The qualities are repeatable and reasonably sparse, although, as I discuss in section 2.6, there are empirical reasons that may suggest, depending on one’s preferred fundamental physical theory, that they include irreducibly intensive qualities. There are no uninstantiated qualities. I also assume that the fundamental qualitative natures are intrinsic, although physics may ultimately suggest that some of them are extrinsic. On my view, matter, concrete objects, abstract objects, and perhaps even spacetime are constructed from mereological fusions of qualities, so the world is simply a vast mixture of qualities, including polyadic properties (i.e., relations). This means that everything there is, including concrete objects like persons or stars, is a quality, a qualitative fusion, or a portion of the extended qualitative fusion that is the world-whole. I call my view mereological bundle theory.