In this paper I argue that there is a preface paradox for intention. The preface paradox for intention shows that intentions do not obey an agglomeration norm, which would require one to intend the conjunction of whatever else one intends. But what norms do intentions obey? I will argue that intentions come in degrees. These partial intentions are governed by the norms of the probability calculus. First, I will give a dispositional theory of partial intention, on which degrees of intention are the degrees to which one possesses the dispositions characteristic of full intention. I will use this dispositional theory to defend probabilism about intention. Next, I will offer a more general argument for probabilism about intention. To do so, I will generalize recent decision-theoretic arguments for probabilism from the case of belief to the case of intention.
This paper develops a semantic solution to the puzzle of Free Choice permission. The paper begins with a battery of impossibility results showing that Free Choice is in tension with a variety of classical principles, including Disjunction Introduction and the Law of Excluded Middle. Most interestingly, Free Choice appears incompatible with a principle concerning the behavior of Free Choice under negation, Double Prohibition, which says that ‘Mary can’t have soup or salad’ implies ‘Mary can’t have soup’ and ‘Mary can’t have salad’. Alonso-Ovalle 2006 and others have appealed to Double Prohibition to motivate pragmatic accounts of Free Choice. Aher 2012, Aloni 2018, and others have developed semantic accounts of Free Choice that also explain Double Prohibition. This paper offers a new semantic analysis of Free Choice designed to handle the full range of impossibility results involved in Free Choice. The paper develops the hypothesis that Free Choice is a homogeneity effect. The claim ‘possibly A or B’ is defined only when A and B are homogeneous with respect to their modal status, either both possible or both impossible. Paired with a notion of entailment that is sensitive to definedness conditions, this theory validates Free Choice while retaining a wide variety of classical principles except for the transitivity of entailment. The homogeneity hypothesis is implemented in two different ways, homogeneous alternative semantics and homogeneous dynamic semantics, with interestingly different consequences.
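In schematic notation, the two principles at issue in the abstract above can be stated roughly as follows, writing $\Diamond$ for ‘can’/‘possibly’ (a sketch of the formulations usual in the literature, not the paper's own notation):

\[ \text{Free Choice:} \quad \Diamond(A \lor B) \models \Diamond A \land \Diamond B \]
\[ \text{Double Prohibition:} \quad \neg\Diamond(A \lor B) \models \neg\Diamond A \land \neg\Diamond B \]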
This paper explores the relationship between dynamic and truth conditional semantics for epistemic modals. It provides a generalization of a standard dynamic update semantics for modals. This new semantics derives a Kripke semantics for modals and a standard dynamic semantics for modals as special cases. The semantics allows for new characterizations of a variety of principles in modal logic, including the inconsistency of ‘p and might not p’. Finally, the semantics provides a construction procedure for transforming any truth conditional semantics for modals into a dynamic semantics for modals with similar properties.
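As background for the abstract above, the standard dynamic update semantics it generalizes is often given a Veltman-style test clause for ‘might’ along roughly the following lines (a sketch of the familiar clause, not the paper's own generalization):

\[ s[\Diamond p] = \begin{cases} s & \text{if } s[p] \neq \emptyset \\ \emptyset & \text{otherwise} \end{cases} \]

On this clause, $s[p][\Diamond\neg p] = \emptyset$ for every information state $s$, since after the update with $p$ no $\neg p$-worlds remain; this is one way the inconsistency of ‘p and might not p’ is characterized dynamically.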
Free Choice is the principle that possibly p or q implies and is implied by possibly p and possibly q. A variety of recent attempts to validate Free Choice rely on a nonclassical semantics for disjunction, where the meaning of p or q is not a set of possible worlds. This paper begins with a battery of impossibility results, showing that some kind of nonclassical semantics for disjunction is required in order to validate Free Choice. The paper then provides a positive account of Free Choice, by identifying a family of dynamic semantics for disjunction that can validate the inference. On all such theories, the meaning of p or q has two parts. First, p or q requires that our information is consistent with each of p and q. Second, p or q narrows down our information by eliminating some worlds. It turns out that this second component of or is well behaved: there is a strongest such meaning that p or q can express, consistent with validating Free Choice. The strongest such meaning is the classical one, on which p or q eliminates any world where both p and q are false. In this way, the classical meaning of disjunction turns out to be intimately related to the validity of Free Choice.
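One illustrative way to spell out the two components of disjunctive meaning described above, offered only as a sketch (the paper identifies a whole family of such entries): relative to an information state $s$, and for descriptive, non-modal disjuncts,

\[ s[A \lor B] \text{ is defined only if } s[A] \neq \emptyset \text{ and } s[B] \neq \emptyset; \qquad \text{when defined, } s[A \lor B] = s[A] \cup s[B]. \]

The definedness condition encodes the requirement that the information be consistent with each disjunct, and the second clause is the strongest, classical elimination component: it removes exactly the worlds where both disjuncts are false.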
What is it to believe something might be the case? We develop a puzzle that creates difficulties for standard answers to this question. We go on to propose our own solution, which integrates a Bayesian approach to belief with a dynamic semantics for epistemic modals. After showing how our account solves the puzzle, we explore a surprising consequence: virtually all of our beliefs about what might be the case provide counterexamples to the view that rational belief is closed under logical implication.
Many defend the thesis that when someone knows p, they couldn’t easily have been wrong about p. But the notion of easy possibility in play is relatively undertheorized. One structural idea in the literature, the principle of Counterfactual Closure (CC), connects easy possibility with counterfactuals: if it easily could have happened that p, and if p were the case, then q would be the case, it follows that it easily could have happened that q. We first argue that while CC is false, there is a true restriction of it to cases involving counterfactual dependence on a coin flip. The failure of CC falsifies a model where the easy possibilities are counterfactually similar to actuality. Next, we show that extant normality models, where the easy possibilities are the sufficiently normal ones, are incompatible with the restricted CC thesis involving coin flips. We then develop a new kind of normality theory that can accommodate the restricted version of CC. This new theory introduces a principle of Counterfactual Contamination, which says roughly that any world is fairly abnormal if at that world very abnormal events counterfactually depend on a coin flip. Finally, we explain why coin flips and other related events have a special status. A central take-home lesson is that the correct principle in the vicinity of Safety is importantly normality-theoretic rather than (as it is usually conceived) similarity-theoretic.
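Stated schematically, the Counterfactual Closure principle discussed above runs roughly as follows, writing $\Diamond_e$ for ‘it easily could have happened that’ and $\Box\!\!\rightarrow$ for the counterfactual conditional (a sketch for orientation, not the authors' own notation):

\[ \text{CC:} \quad \Diamond_e\, p,\;\; p \mathrel{\Box\!\!\rightarrow} q \;\models\; \Diamond_e\, q \]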
According to one tradition, uttering an indicative conditional involves performing a special sort of speech act: a conditional assertion. We introduce a formal framework that models this speech act. Using this framework, we show that any theory of conditional assertion validates several inferences in the logic of conditionals, including the False Antecedent inference. Next, we determine the space of truth-conditional semantics for conditionals consistent with conditional assertion. The truth value of any such conditional is settled whenever the antecedent is false, and whenever the antecedent is true and the consequent is false. Then, we consider the space of dynamic meanings consistent with the theory of conditional assertion. We develop a new family of dynamic conditional-assertion operators that combine a traditional test operator with an update operation.
Triviality results threaten plausible principles governing our credence in epistemic modal claims. This paper develops a new account of modal credence which avoids triviality. On the resulting theory, probabilities are assigned not to sets of worlds, but rather to sets of information state-world pairs. The theory avoids triviality by giving up the principle that rational credence is closed under conditionalization. A rational agent can become irrational by conditionalizing on new evidence. In place of conditionalization, the paper develops a new account of updating: conditionalization with normalization.
A mental state is luminous if, whenever an agent is in that state, they are in a position to know that they are. Following Timothy Williamson’s Knowledge and Its Limits, a wave of recent work has explored whether there are any non-trivial luminous mental states. A version of Williamson’s anti-luminosity argument appeals to a safety-theoretic principle connecting knowledge and confidence: if an agent knows p, then p is true in any nearby scenario where she has a similar level of confidence in p. However, the relevant notion of confidence is relatively underexplored. This paper develops a precise theory of confidence: an agent’s degree of confidence in p is the objective chance they will rely on p in practical reasoning. This theory of confidence is then used to critically evaluate the anti-luminosity argument, leading to the surprising conclusion that although there are strong reasons for thinking that luminosity does not obtain, they are quite different from those the existing literature has considered. In particular, we show that once the notion of confidence is properly understood, the failure of luminosity follows from the assumption that knowledge requires high confidence, and does not require any kind of safety principle as a premise.
Many have accepted that ordinary counterfactuals and might counterfactuals are duals. In this paper, I show that this thesis leads to paradoxical results when combined with a few different unorthodox yet increasingly popular theses, including the thesis that counterfactuals are strict conditionals. Given Duality and several other theses, we can quickly infer the validity of another paradoxical principle, ‘The Counterfactual Direct Argument’, which says that ‘A> ’ entails ‘A> ’. First, I provide a collapse theorem for the Counterfactual Direct Argument: it entails the logical equivalence of the subjunctive and material conditional, given a variety of assumptions. Second, I provide a semantics that validates the Counterfactual Direct Argument without collapse. This theory further develops extant dynamic accounts of conditionals. I give a new semantics for disjunction, on which ‘A or B’ is only true in a context when A and B are both unsettled. The resulting framework validates CDA while invalidating other commonly accepted principles concerning the conditional and disjunction.
We often claim to know what might be - or probably is - the case. Modal knowledge along these lines creates a puzzle for information-sensitive semantics for epistemic modals. This paper develops a solution. We start with the idea that knowledge requires safe belief: a belief amounts to knowledge only if it could not easily have been held falsely. We then develop an interpretation of the modal operator in safety ("could have") that allows it to non-trivially embed information-sensitive contents. The resulting theory avoids various paradoxes that arise from other accounts of modal knowledge. It also delivers plausible predictions about modal Gettier cases.
Many believe that intended harms are more difficult to justify than are harms that result as a foreseen side effect of one's conduct. We describe cases of harming in which the harm is not intended, yet the harmful act nevertheless runs afoul of the intuitive moral constraint that governs intended harms. We note that these cases provide new and improved counterexamples to the so-called Simple View, according to which intentionally phi-ing requires intending to phi. We then give a new theory of the moral relevance of intention. This theory yields the traditional constraint on intending harm as a special case, along with several stronger demands.
Formal models of appearance and reality have proved fruitful for investigating structural properties of perceptual knowledge. This paper applies the same approach to epistemic justification. Our central goal is to give a simple account of The Preface, in which justified belief fails to agglomerate. Following recent work by a number of authors, we understand knowledge in terms of normality. An agent knows p iff p is true throughout all relevant normal worlds. To model The Preface, we appeal to the normality of error. Sometimes, it is more normal for reality and appearance to diverge than to match. We show that this simple idea has dramatic consequences for the theory of knowledge and justification. Among other things, we argue that a proper treatment of The Preface requires a departure from the internalist idea that epistemic justification supervenes on the appearances and the widespread idea that one knows most when free from error.
In recent years, a number of theorists have claimed that beliefs about probability are transparent. To believe probably p is simply to have a high credence that p. In this paper, I prove a variety of triviality results for theses like the above. I show that such claims are inconsistent with the thesis that probabilistic modal sentences have propositions or sets of worlds as their meaning. Then I consider the extent to which a dynamic semantics for probabilistic modals can capture theses connecting belief, certainty, credence, and probability. I show that although a dynamic semantics for probabilistic modals does allow one to validate such theses, it can only do so at a cost. I prove that such theses can only be valid if probabilistic modals do not satisfy the axioms of the probability calculus.
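The transparency thesis targeted above can be put schematically as follows, with $B$ for belief, $Cr$ for credence, and $t$ a high threshold (a rough formalization for orientation, not the paper's official statement of the thesis):

\[ B(\text{probably } p) \;\leftrightarrow\; Cr(p) > t \]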
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures. The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
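The one-reason decision making described above can be illustrated with a minimal sketch of a Take The Best-style procedure: search cues in order of validity, stop at the first cue that discriminates between the two objects, and let that single cue decide. The function name, cues, and data below are hypothetical illustrations, not the authors' simulation code.

```python
# Minimal sketch of one-reason decision making ("Take The Best"-style).
# Cues are searched in order of validity; the first discriminating cue decides.
import random

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Return the object inferred to have the higher criterion value.

    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative), or None (unknown).
    cues_by_validity: cue names ordered from most to least valid.
    """
    for cue in cues_by_validity:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a is None or b is None or a == b:
            continue                            # cue does not discriminate; look up the next one
        return obj_a if a > b else obj_b        # one-reason decision: stop search and choose
    return random.choice([obj_a, obj_b])        # no cue discriminates: guess

# Hypothetical example: which city is larger?
cues = ["has_team", "is_capital", "has_university"]
city_a = {"has_team": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_team": 1, "is_capital": 1, "has_university": None}
print(take_the_best(city_a, city_b, cues) is city_b)  # decided by the "is_capital" cue
```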
Are science and religion compatible when it comes to understanding cosmology (the origin of the universe), biology (the origin of life and of the human species), ethics, and the human mind (minds, brains, souls, and free will)? Do science and religion occupy non-overlapping magisteria? Is Intelligent Design a scientific theory? How do the various faith traditions view the relationship between science and religion? What, if any, are the limits of scientific explanation? What are the most important open questions, problems, or challenges confronting the relationship between science and religion, and what are the prospects for progress? These and other questions are explored in Science and Religion: 5 Questions--a collection of thirty-three interviews based on 5 questions presented to some of the world's most influential and prominent philosophers, scientists, theologians, apologists, and atheists. Contributions by Simon Blackburn, Susan Blackmore, Sean Carroll, William Lane Craig, William Dembski, Daniel C. Dennett, George F.R. Ellis, Owen Flanagan, Owen Gingerich, Rebecca Newberger Goldstein, John F. Haught, Muzaffar Iqbal, Lawrence Krauss, Colin McGinn, Alister McGrath, Mary Midgley, Seyyed Hossein Nasr, Timothy O'Connor, Massimo Pigliucci, John Polkinghorne, James Randi, Alex Rosenberg, Michael Ruse, Robert John Russell, John Searle, Michael Shermer, Victor J. Stenger, Robert Thurman, Michael Tooley, Charles Townes, Peter van Inwagen, Keith Ward, Rabbi David Wolpe.
Abraham ibn Ezra the Spaniard was one of the foremost transmitters of Arabic science to the West. His astrological and astronomical works, written in Hebrew and later translated into Latin, were considered authoritative by many medieval Jewish and Christian scholars. Some of the works he translated from Arabic are no longer extant in their original form, and on occasion his treatises provide information about earlier sources that is otherwise poorly preserved, if at all. Ibn Ezra seems to be the earliest scholar to record one of the seven methods for setting up the astrological houses, and this method was subsequently used by Levi ben Gerson in southern France.
Continuing his exploration of the organization of complexity and the science of design, this new edition of Herbert Simon's classic work on artificial ...
Cap-and-trade systems for greenhouse gas emissions are an important part of the climate change policies of the EU, Japan, and New Zealand, among others, as well as China and Australia. However, concerns have been raised on a variety of ethical grounds about the use of markets to reduce emissions. For example, some people worry that emissions trading allows the wealthy to evade their responsibilities. Others are concerned that it puts a price on the natural environment. Concerns have also been raised about the distributional justice of emissions trading. Finally, some commentators have questioned the actual effectiveness of emissions trading in reducing emissions. This paper considers these three categories of objections – ethics, justice, and effectiveness – through the lens of moral philosophy and economics. It concludes that only the objections based on distributional justice can be sustained. This points to reform of the carbon market system, rather than its elimination.
[Correction Notice: An erratum for this article was reported in Vol 109 of Psychological Review. Due to circumstances that were beyond the control of the authors, the studies reported in "Models of Ecological Rationality: The Recognition Heuristic," by Daniel G. Goldstein and Gerd Gigerenzer overlap with studies reported in "The Recognition Heuristic: How Ignorance Makes Us Smart," by the same authors and with studies reported in "Inference From Ignorance: The Recognition Heuristic". In addition, Figure 3 in the Psychological Review article was originally published in the book chapter and should have carried a note saying that it was used by permission of Oxford University Press.] One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counter-intuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.
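The recognition heuristic itself can be stated as a one-line decision rule: if exactly one of two objects is recognized, infer that the recognized object has the higher criterion value; otherwise the heuristic does not apply. A minimal illustrative sketch (the function name and example data are hypothetical, not the authors' materials):

```python
# Minimal sketch of the recognition heuristic for paired comparison.
def recognition_heuristic(a, b, recognized):
    """Return the object inferred to score higher on the criterion, or None if the
    heuristic does not apply (both or neither object is recognized).

    recognized: set of recognized object names.
    """
    if (a in recognized) != (b in recognized):     # exactly one object is recognized
        return a if a in recognized else b
    return None                                    # fall back to other knowledge or guessing

# Hypothetical example: which city has the larger population?
print(recognition_heuristic("Munich", "Herne", {"Munich"}))  # -> "Munich"
```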
This volume collects some influential essays in which Simon Blackburn, one of our leading philosophers, explores one of the most profound and fertile of philosophical problems: the way in which our judgments relate to the world. This debate has centered on realism, or the view that what we say is validated by the way things stand in the world, and a variety of oppositions to it. Prominent among the latter are expressive and projective theories, but also a relaxed pluralism that discourages the view that there are substantial issues at stake. The figure of the "quasi-realist" dramatizes the difficulty of conducting these debates. Typically philosophers thinking of themselves as realists will believe that they alone can give a proper or literal account of some of our attachments--to truth, to facts, to the independent world, to knowledge and certainty. The quasi-realist challenge, developed by Blackburn in this volume, is that we can have those attachments without any metaphysic that deserves to be called realism, so that the metaphysical picture that goes with our practices is quite idle. The cases treated here include the theories of value and knowledge, modality, probability, causation, intentionality and rule-following, and explanation. A substantial new introduction has been added, drawing together some of the central themes. The essays articulate a fresh alternative to a primitive realist/anti-realist opposition, and their cumulative effect is to yield a new appreciation of the delicacy of the debate in these central areas.
In the last quarter of the nineteenth century, Ludwig Boltzmann explained how irreversible macroscopic laws, in particular the second law of thermodynamics, originate in the time-reversible laws of microscopic physics. Boltzmann’s analysis, the essence of which I shall review here, is basically correct. The most famous criticisms of Boltzmann’s later work on the subject have little merit. Most twentieth century innovations – such as the identification of the state of a physical system with a probability distribution on its phase space, of its thermodynamic entropy with the Gibbs entropy of that distribution, and the invocation of the notions of ergodicity and mixing for the justification of the foundations of statistical mechanics – are thoroughly misguided.
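For orientation, the two notions of entropy contrasted above are standardly written as follows (standard textbook definitions, not drawn from the paper itself): the Boltzmann entropy of a system whose microstate $X$ lies in the macrostate region $\Gamma_{M(X)}$ of phase space, and the Gibbs entropy of a probability distribution $\rho$ on phase space,

\[ S_B(X) = k_B \log\left|\Gamma_{M(X)}\right|, \qquad S_G(\rho) = -k_B \int \rho \log \rho \; d\Gamma . \]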
Bohmian mechanics, which is also called the de Broglie-Bohm theory, the pilot-wave model, and the causal interpretation of quantum mechanics, is a version of quantum theory discovered by Louis de Broglie in 1927 and rediscovered by David Bohm in 1952. It is the simplest example of what is often called a hidden variables interpretation of quantum mechanics. In Bohmian mechanics a system of particles is described in part by its wave function, evolving, as usual, according to Schrödinger's equation. However, the wave function provides only a partial description of the system. This description is completed by the specification of the actual positions of the particles. The latter evolve according to the guiding equation.
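In standard presentations of Bohmian mechanics (standard textbook equations, not quoted from the entry above), the two dynamical laws just mentioned take the following form for $N$ spinless particles with wave function $\psi$ and actual configuration $Q = (Q_1, \ldots, Q_N)$:

\[ i\hbar \frac{\partial \psi}{\partial t} = -\sum_{k=1}^{N} \frac{\hbar^2}{2 m_k} \nabla_k^2 \psi + V\psi, \qquad \frac{dQ_k}{dt} = \frac{\hbar}{m_k} \,\mathrm{Im}\, \frac{\nabla_k \psi}{\psi}\,(Q_1, \ldots, Q_N). \]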
Canguilhem is known to have regretted, with some pathos, that Life no longer serves as an orienting question in our scientific activity. He also frequently insisted on a kind of uniqueness of organisms and/or living bodies – their inherent normativity, their value-production and overall their inherent difference from mere machines. In addition, Canguilhem acknowledged a major debt to the German neurologist-theoretician Kurt Goldstein, author most famously of The Structure of the Organism in 1934; along with Merleau-Ponty, Canguilhem was the main figure who introduced the work of Goldstein and his ‘phenomenology of embodiment’ into France. In this paper I inquire if we should view Canguilhem and Goldstein as ‘biochauvinists’, that is, as thinkers who consider that there is something inherently unique about biological entities as such, and if so, of what sort.
Bohmian mechanics is a theory about point particles moving along trajectories. It has the property that in a world governed by Bohmian mechanics, observers see the same statistics for experimental results as predicted by quantum mechanics. Bohmian mechanics thus provides an explanation of quantum mechanics. Moreover, the Bohmian trajectories are defined in a non-conspiratorial way by a few simple laws.
That all pleasure is good and all pain bad in itself is an eternally true ethical principle. The common claim that some pleasure is not good, or some pain not bad, is mistaken. Strict particularism (ethical decisions must be made case by case; there are no sound universal normative principles) and relativism (all good and bad are relative to society) are among the ethical theories we may refute through an appeal to pleasure and pain. Daniel Dennett, Philippa Foot, R. M. Hare, Gilbert Harman, Immanuel Kant, J. L. Mackie, and Jean-Paul Sartre are among the many philosophers addressed.
Laurence Goldstein gives a straightforward and lively account of some of the central themes of Wittgenstein's writings on meaning, mind, and mathematics.
The most puzzling issue in the foundations of quantum mechanics is perhaps that of the status of the wave function of a system in a quantum universe. Is the wave function objective or subjective? Does it represent the physical state of the system or merely our information about the system? And if the former, does it provide a complete description of the system or only a partial description? We shall address these questions here mainly from a Bohmian perspective, and shall argue that part of the difficulty in ascertaining the status of the wave function in quantum mechanics arises from the fact that there are two different sorts of wave functions involved. The most fundamental wave function is that of the universe. From it, together with the configuration of the universe, one can define the wave function of a subsystem. We argue that the fundamental wave function, the wave function of the universe, has a law-like character.
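The construction mentioned at the end of the abstract above is, in standard Bohmian treatments, the conditional wave function (a sketch of the usual definition, not a quotation from the paper): if $\Psi$ is the wave function of the universe, $x$ the generic coordinates of the subsystem, and $Y$ the actual configuration of its environment, then the wave function of the subsystem is

\[ \psi(x) = \Psi(x, Y). \]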
Despite its extraordinary predictive successes, quantum mechanics has, since its inception some seventy years ago, been plagued by conceptual difficulties. The basic problem, plainly put, is this: It is not at all clear what quantum mechanics is about. What, in fact, does quantum mechanics describe?
Against Hume and Epicurus I argue that our selection of pleasure, pain and other objects as our ultimate ends is guided by reason. There are two parts to the explanation of our attraction to pleasure, our aversion to pain, and our consequent preference of pleasure to pain: 1. Pleasure presents us with reason to seek it, and pain presents us with reason to avoid it. 2. Being intelligent, human beings (and to a degree, many animals) are disposed to be guided by reason, and hence by what there is reason to choose, seek, and prefer, when they act.