Arguably, Hume's greatest single contribution to contemporary philosophy of science has been the problem of induction (1739). Before attempting its statement, we need to spend a few words identifying the subject matter of this corner of epistemology. At a first pass, induction concerns ampliative inferences drawn on the basis of evidence (presumably, evidence acquired more or less directly from experience)—that is, inferences whose conclusions are not (validly) entailed by the premises. Philosophers have historically drawn further distinctions, often appropriating the term “induction” to mark them; since we will not be concerned with the philosophical issues for which these distinctions are relevant, we will use the word “inductive” in a catch-all sense synonymous with “ampliative”. But we will follow the usual practice of choosing, as our paradigm example of inductive inferences, inferences about the future based on evidence drawn from the past and present. A further refinement is more important. Opinion typically comes in degrees, and this fact makes a great deal of difference to how we understand inductive inferences. For while it is often harmless to talk about the conclusions that can be rationally believed on the basis of some…
The goal of this small book and accompanying DVD is to help you to have a better experience in your laboratory by getting you to step back and take a global look at what is involved in making progress in the laboratory.
Among the many philosophers who hold that causal facts are to be explained in terms of—or more ambitiously, shown to reduce to—facts about what happens, together with facts about the fundamental laws that govern what happens, the clear favorite is an approach that sees counterfactual dependence as the key to such explanation or reduction. The paradigm examples of causation, so advocates of this approach tell us, are examples in which events c and e—the cause and its effect—both occur, but: had c not occurred, e would not have occurred either. From this starting point ideas proliferate in a vast profusion. But the remarkable disparity among these ideas should not obscure their common foundation. Neither should the diversity of opinion about the prospects for a philosophical analysis of causation obscure their importance. For even those philosophers who see these prospects as dim—perhaps because they suffer post-Quinean queasiness at the thought of any analysis of any concept of interest—can often be heard to say such things as that causal relations among events are somehow “a matter of” the patterns of counterfactual dependence to be found in them. It was not always so. Thirty-odd years ago, so-called “regularity” analyses (so-called, presumably, because they traced back to Hume’s well-known analysis of causation as constant conjunction) ruled the day, with Mackie’s Cement of the Universe embodying a classic statement. But they fell on hard times, both because of internal problems—which we will review in due course—and because dramatic improvements in philosophical understanding of counterfactuals made possible the emergence of a serious and potent rival: a counterfactual analysis of causation resting on foundations firm enough to repel the kind of philosophical suspicion that had formerly warranted dismissal.
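The dependence condition at the heart of this approach can be stated compactly. Writing O(c) for the proposition that event c occurs, and using Lewis's box-arrow for the counterfactual conditional (the rendering below is the standard textbook statement, not a quotation from the text):

```latex
% e depends counterfactually on the distinct event c just in case:
\big( O(c) \mathbin{\Box\!\!\to} O(e) \big) \;\wedge\;
\big( \neg O(c) \mathbin{\Box\!\!\to} \neg O(e) \big)
% Since c and e both actually occur, the first conjunct holds trivially
% (given centering), leaving exactly the condition in the abstract:
% had c not occurred, e would not have occurred.
```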
Structural equations have become increasingly popular in recent years as tools for understanding causation. But standard structural equations approaches to causation face deep problems. The most philosophically interesting of these consists in their failure to incorporate a distinction between default states of an object or system, and deviations therefrom. Exploring this problem, and how to fix it, helps to illuminate the central role this distinction plays in our causal thinking.
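The basic machinery at issue can be sketched in a few lines. What follows is a minimal illustration of a structural-equations causal model and of testing counterfactual dependence by intervention; the function names and the two-thrower overdetermination example are my own illustrative assumptions, not the formalism of any particular paper.

```python
# A minimal structural-equations causal model: each variable is set by
# an equation over its parents; an intervention do(X = x) replaces X's
# equation with the constant x. (Illustrative sketch only.)

def evaluate(equations, interventions=None):
    """Solve an acyclic model by repeatedly applying equations
    until every variable has a settled value."""
    values = dict(interventions or {})
    while len(values) < len(equations):
        for var, eq in equations.items():
            if var in values:
                continue
            try:
                values[var] = eq(values)
            except KeyError:
                pass  # a parent is not yet settled; retry on the next pass
    return values

# Two rock-throwers, one bottle: a classic overdetermination case.
model = {
    "suzy_throws":  lambda v: 1,
    "billy_throws": lambda v: 1,
    "shatters":     lambda v: max(v["suzy_throws"], v["billy_throws"]),
}

actual = evaluate(model)
counterfactual = evaluate(model, {"suzy_throws": 0})

print(actual["shatters"])          # 1: the bottle shatters
print(counterfactual["shatters"])  # 1: still shatters under do(suzy_throws=0),
                                   # so simple counterfactual dependence fails
```

Cases like this are one motivation for refining the bare dependence test; the default/deviant distinction the abstract mentions is a further refinement layered on top of models of this shape.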
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
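The "standard, widely agreed upon answer" alluded to here is the Kolmogorov axiomatization. In its simplest, finitely additive form, a probability function P on an algebra of subsets of a set Ω satisfies:

```latex
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P(A \cup B) = P(A) + P(B) \quad \text{whenever } A \cap B = \varnothing .
```

Normalized area on the measurable regions of a table top satisfies all three conditions, which is exactly the essay's point: formal structure alone does not settle what probabilities are.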
Lewis's work on causation was governed by a familiar methodological approach: the aim was to come up with an account of causation that would recover, in as elegant a fashion as possible, all of our firm “pre‐theoretic” intuitions about hypothetical cases. That methodology faces an obvious challenge, in that it is not clear why anyone not interested in the semantics of the English word “cause” should care about its results. Better to take a different approach, one which treats our intuitions about cases merely as guides in the construction of a causal concept or concepts that will serve some useful theoretical purpose. I sketch one central such purpose, suggesting, first, that an account of causation that, like Lewis's, gives a central role to counterfactuals is well‐suited to fulfill it, and, second, that the most famous pre‐emption‐based counterexamples to a counterfactual account yield an important constraint on a successful account.
David Lewis's influential work on the epistemology and metaphysics of objective chance has convinced many philosophers of the central importance of the following two claims: First, it is a serious cost of reductionist positions about chance (such as that occupied by Lewis) that they are, apparently, forced to modify the Principal Principle--the central principle relating objective chance to rational subjective probability--in order to avoid contradiction. Second, it is a perhaps more serious cost of the rival non-reductionist position that, unlike reductionism, it can give no coherent explanation for why the Principal Principle should hold. I argue that both of these claims are fundamentally mistaken.
The Principal Principle states that one's subjective probability for a proposition should conform to one's beliefs about that proposition's objective chance of coming true. David Lewis has argued (i) that this principle provides the defining role for chance; (ii) that it conflicts with his reductionist thesis of Humean supervenience, and so must be replaced by an amended version that avoids the conflict; hence (iii) that nothing perfectly deserves the name ‘chance’, although something can come close enough by playing the role picked out by the amended principle. We show that in fact there must be ‘chances’ that perfectly play what Lewis takes to be the defining role. But this is not the happy conclusion it might seem, since these ‘chances’ behave too strangely to deserve the name. The lesson is simple: much more than the Principal Principle—more to the point, much more than the connection between chance and credence—informs our understanding of objective chance. 1 Introduction 2 Preliminaries 3 Undermining futures and the New Principle 4 The Old Principle rescued? 5 The New Bug 6 Conclusion.
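For reference, the principle at issue, in Lewis's original (1980) formulation: where C is any reasonable initial credence function, A any proposition, X the proposition that the objective chance at time t of A is x, and E any evidence admissible at t:

```latex
C(A \mid X \wedge E) = x .
```

The "amended version" the abstract refers to (the New Principle) modifies this formula precisely to escape the contradiction that undermining futures generate for reductionist chances.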
The textbook presentation of quantum mechanics, in a nutshell, is this. The physical state of any isolated system evolves deterministically in accordance with Schrödinger's equation until a "measurement" of some physical magnitude M (e.g. position, energy, spin) is made. Restricting attention to the case where the values of M are discrete, the system's pre-measurement state-vector f is a linear combination, or "superposition", of vectors f1, f2, … that individually represent states that…
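The abstract breaks off here; for reference, the standard textbook continuation runs through the Born rule. In the notation above (taking the cᵢ to be the expansion coefficients is my assumption; the formulas themselves are the usual textbook ones):

```latex
% Superposition of M-eigenstates, normalization, and the Born rule:
f = \sum_i c_i \, f_i, \qquad \sum_i |c_i|^2 = 1, \qquad
\Pr(M = m_i) = |c_i|^2 .
```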
The professor announces a surprise exam for the upcoming week; her clever student purports to demonstrate by reductio that she cannot possibly give such an exam. Diagnosing his puzzling argument reveals a deeper puzzle: Is the student justified in believing the announcement? It would seem so, particularly if the upcoming 'week' is long enough. On the other hand, a plausible principle states that if, at the outset, the student is justified in believing some proposition, then he is also justified in believing that he will continue to be justified in believing that proposition. It follows from this 'confidence' principle that the student is not justified in believing the announcement, regardless of the number of days in the week. I argue that the key to resolving this dilemma is to distinguish the confidence principle from a slightly weaker principle governing the student's justified degrees of belief. Representing these degrees of belief as probabilities, and taking 'justified belief' to mean 'justified degree of belief above a certain threshold', I show that we can uphold the weaker, probabilistic analog to the confidence principle, and maintain that, provided the 'week' is long enough, the student can justifiably believe the announcement. The resulting probabilistic analysis of the story leads to a new diagnosis of the logical flaw in the student's reasoning, and suggests, finally, that even those early stages of it which are logically impeccable exhibit another kind of flaw: circularity.
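A toy Bayesian rendering of the threshold idea can make the setup concrete. This is an illustrative sketch, not the paper's own model: the prior p, the uniform conditional distribution over days, and the threshold are all assumptions of mine. The student gives credence p to the announcement A ("an exam occurs this week"); given A, the exam day is uniform over the n days; given not-A, no exam occurs; each examless morning he conditionalizes.

```python
# Toy Bayesian sketch of the surprise-exam setup (illustrative
# assumptions: prior p, uniform exam day given A, threshold 0.95).

def credence_in_announcement(p, n, k):
    """Posterior P(A | no exam on the first k of n days),
    by Bayes' theorem with P(no exam yet | A) = (n - k) / n."""
    likelihood_A = (n - k) / n
    return p * likelihood_A / (p * likelihood_A + (1 - p))

p, n, threshold = 0.99, 100, 0.95
posteriors = [credence_in_announcement(p, n, k) for k in range(n)]

# Early in a long week the credence stays well above the threshold ...
print(round(posteriors[0], 4))    # 0.99   (outset)
print(round(posteriors[50], 4))   # 0.9802 (halfway through)
# ... and only collapses in the closing days, where the reductio bites.
print(round(posteriors[99], 4))   # 0.4975 (final morning)
```

On this toy model the student's credence in the announcement declines monotonically but, for large n, remains above any fixed threshold through most of the week, which is the shape of the result the abstract describes.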
Jonardon Ganeri, Paul Noordhof, and Murali Ramachandran (1996) have proposed a new counterfactual analysis of causation. We argue that this – the PCA-analysis – is incorrect. In section 1, we explain David Lewis’s first counterfactual analysis of causation, and a problem that led him to propose a second. In section 2 we explain the PCA-analysis, advertised as an improvement on Lewis’s later account. We then give counterexamples to the necessity (section 3) and sufficiency (section 4) of the PCA-analysis.