Research ethics committees, while in many ways an excellent innovation, do have some drawbacks. This paper examines three of these. The first problem is that a committee's approval of specific projects in its own institution acquires intrinsic value. The second problem relates to the possible devolution of responsibility from the investigator to the committee: the committee approves, the investigator feels relieved of some responsibility, and things can be done to patients which neither the committee nor the investigator might countenance if they had sole responsibility. The third problem arises directly from the bureaucratic nature of the committee itself. One consequence of the resulting rigid guidelines is the insistence, by most committees, on the written consent of patients. Demanding this can, in some circumstances, mean giving the patient very disturbing information. The paper suggests that, in trials comparing two accepted therapies in patients with a fatal disease, committees dispense with the requirement of written consent. There is a commentary on this paper by Dr D J Weatherall of the Nuffield Department of Clinical Medicine, University of Oxford.
The main difficulty facing no-collapse theories of quantum mechanics in the Everettian tradition concerns the role of probability within a theory in which every possible outcome of a measurement actually occurs. The problem is two-fold: First, what do probability claims mean within such a theory? Second, what ensures that the probabilities attached to measurement outcomes match those of standard quantum mechanics? Deutsch has recently proposed a decision-theoretic solution to the second problem, according to which agents are rationally required to weight the outcomes of measurements according to the standard quantum-mechanical probability measure. I show that this argument admits counterexamples, and hence fails to establish the standard probability weighting as a rational requirement.
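The "standard quantum-mechanical probability measure" at issue here is the Born rule, which weights each measurement outcome by the squared modulus of its amplitude. A minimal toy sketch of that weighting, for orientation only (the amplitudes are invented and nothing here belongs to Deutsch's derivation or to the paper's argument):

```python
import math

# Toy qubit state a|0> + b|1>, with real amplitudes chosen so that
# |a|^2 + |b|^2 = 1 (i.e. the state is normalized).
a, b = 3 / 5, 4 / 5

# Born-rule weights: the squared moduli of the amplitudes.
weights = [abs(a) ** 2, abs(b) ** 2]

# For a normalized state the weights sum to 1, as a probability
# measure over outcomes must.
assert math.isclose(sum(weights), 1.0)
```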
Putnam and Laudan separately argue that the falsity of past scientific theories gives us reason to doubt the truth of current theories. Their arguments have been highly influential, and have generated a significant literature over the past couple of decades. Most of this literature attempts to defend scientific realism by attacking the historical evidence on which the premises of the relevant argument are based. However, I argue that both Putnam's and Laudan's arguments are fallacious, and hence attacking their premises is unnecessary. The paper concludes with a discussion of the further historical evidence that would be required if the pessimistic induction is to present a serious threat to scientific realism.
There is a recurring line of argument in the literature to the effect that Bohm's theory fails to solve the measurement problem. I show that this argument fails in all its variants. Hence Bohm's theory, whatever its drawbacks, at least succeeds in solving the measurement problem. I briefly discuss a similar argument that has been raised against the GRW theory.
All parties to the Sleeping Beauty debate agree that it shows that some cherished principle of rationality has to go. Thirders think that it is Conditionalization and Reflection that must be given up or modified; halfers think that it is the Principal Principle. I offer an analysis of the Sleeping Beauty puzzle that allows us to retain all three principles. In brief, I argue that Sleeping Beauty’s credence in the uncentered proposition that the coin came up heads should be 1/2, but her credence in the centered proposition that the coin came up heads and it is Monday should be 1/3. I trace the source of the earlier mistakes to an unquestioned assumption in the debate, namely that an uncentered proposition is just a special kind of centered proposition. I argue that the falsity of this assumption is the real lesson of the Sleeping Beauty case.
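The two credences the abstract assigns can be illustrated with a toy bookkeeping of the standard Sleeping Beauty setup (a sketch of the familiar puzzle, not the paper's own formalism):

```python
from fractions import Fraction

# Sleeping Beauty's possible awakenings as (coin result, day) pairs.
# Heads yields one awakening (Monday); tails yields two (Monday, Tuesday).
awakenings = [("heads", "Mon"), ("tails", "Mon"), ("tails", "Tue")]

# Uncentered credence that the coin came up heads: the toss is fair.
cr_heads_uncentered = Fraction(1, 2)

# Centered credence in "heads and it is Monday", counting each
# awakening as equally likely (the thirder-style count).
cr_heads_monday = Fraction(
    sum(1 for coin, day in awakenings if coin == "heads" and day == "Mon"),
    len(awakenings),
)
```

On this bookkeeping the uncentered credence is 1/2 while the centered credence is 1/3, which is exactly the combination the abstract defends.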
A few years ago, I argued that according to spontaneous collapse theories of quantum mechanics, arithmetic applies to macroscopic objects only as an approximation. Several authors have written articles defending spontaneous collapse theories against this charge, including Bassi and Ghirardi, Clifton and Monton, and now Frigg. The arguments of these authors are all different and all ingenious, but in the end I think that none of them succeeds, for reasons I elaborate here. I suggest a fourth line of response, based on an analogy with epistemic paradoxes, which I think is the best way to defend spontaneous collapse theories, and which leaves my main thesis intact.
The Sleeping Beauty paradox in epistemology and the many-worlds interpretation of quantum mechanics both raise problems concerning subjective probability assignments. Furthermore, there are striking parallels between the two cases; in both cases personal experience has a branching structure, and in both cases the agent loses herself among the branches. However, the treatment of probability is very different in the two cases, for no good reason that I can see. Suppose, then, that we adopt the same treatment of probability in each case. Then the dominant ‘thirder’ solution to the Sleeping Beauty paradox becomes incompatible with the tenability of the many-worlds interpretation.
In quantum mechanics it is usually assumed that mutually exclusive states of affairs must be represented by orthogonal vectors. Recent attempts to solve the measurement problem, most notably the GRW theory, require the relaxation of this assumption. It is shown that a consequence of relaxing this assumption is that arithmetic does not apply to ordinary macroscopic objects. It is argued that such a radical move is unwarranted given the current state of understanding of the foundations of quantum mechanics.
I examine recent arguments based on functionalism that claim to show that Bohm's theory fails to solve the measurement problem, or if it does so, it is only because it reduces to a form of the many-worlds theory. While these arguments reveal some interesting features of Bohm's theory, I contend that they do not undermine the distinctive Bohmian solution to the measurement problem.
This paper investigates the tenability of wavefunction realism, according to which the quantum mechanical wavefunction is not just a convenient predictive tool, but is a real entity figuring in physical explanations of our measurement results. An apparent difficulty with this position is that the wavefunction exists in a many-dimensional configuration space, whereas the world appears to us to be three-dimensional. I consider the arguments that have been given for and against the tenability of wavefunction realism, and note that both the proponents and the opponents assume that quantum mechanical configuration space is many-dimensional in exactly the same sense in which classical space is three-dimensional. I argue that this assumption is mistaken, and that configuration space can be taken as three-dimensional in a relevant sense. I conclude that wavefunction realism is far less problematic than it has been taken to be. Sections: Introduction; Non-separability; The instantaneous solution; The dynamical solution; Invariance; What is configuration space, anyway?; Conclusion.
It has long been recognized that a local hidden variable theory of quantum mechanics can in principle be constructed, provided one is willing to countenance pre-measurement correlations between the properties of measured systems and measuring devices. However, this ‘conspiratorial’ approach is typically dismissed out of hand. In this article I examine the justification for dismissing conspiracy theories of quantum mechanics. I consider the existing arguments against such theories, and find them to be less than conclusive. I suggest a more powerful argument against the leading strategy for constructing a conspiracy theory. Finally, I outline two alternative strategies for constructing conspiracy theories, both of which are immune to these arguments, but require one to either modify or reject the common cause principle. Sections: Introduction; The incompleteness of quantum mechanics; Hidden variables; Hidden mechanism conspiracy theories; Existing arguments against hidden mechanisms; A new argument against hidden mechanisms; Backwards-causal conspiracy theories; Acausal conspiracy theories; Conclusion.
A major problem facing no-collapse interpretations of quantum mechanics in the tradition of Everett is how to understand the probabilistic axiom of quantum mechanics (the Born rule) in the context of a deterministic theory in which every outcome of a measurement occurs. Deutsch claims to derive a decision-theoretic analogue of the Born rule from the non-probabilistic part of quantum mechanics and some non-probabilistic axioms of classical decision theory, and hence concludes that no probabilistic axiom is needed. I argue that Deutsch’s derivation begs the question.
There is an important sense in which an agent’s credences are universal: while they reflect an agent’s own judgments, those judgments apply equally to everyone’s bets. This point, while uncontentious, has been overlooked; people automatically assume that credences concern an agent’s own bets, perhaps just because of the name “subjective” that is typically applied to this account of belief. This oversight has had unfortunate consequences for recent epistemology, in particular concerning the Sleeping Beauty case and its myriad variants.
Everettian accounts of quantum mechanics entail that people branch; every possible result of a measurement actually occurs, and I have one successor for each result. Is there room for probability in such an account? The prima facie answer is no; there are no ontic chances here, and no ignorance about what will happen. But since any adequate quantum mechanical theory must make probabilistic predictions, much recent philosophical labor has gone into trying to construct an account of probability for branching selves. One popular strategy involves arguing that branching selves introduce a new kind of subjective uncertainty. I argue here that the variants of this strategy in the literature all fail, either because the uncertainty is spurious, or because it is in the wrong place to yield probabilistic predictions. I conclude that uncertainty cannot be the ground for probability in Everettian quantum mechanics.
The main problem with the many‐worlds theory is that it is not clear how the notion of probability should be understood in a theory in which every possible outcome of a measurement actually occurs. In this paper, I argue for the following theses concerning the many‐worlds theory: (1) If probability can be applied at all to measurement outcomes, it must function as a measure of an agent’s self‐location uncertainty. (2) Such probabilities typically violate reflection. (3) Many‐worlds branching does not have sufficient structure to admit self‐location probabilities. (4) Decision‐theoretic arguments do not solve this problem.
The Simulation Argument and the Doomsday Argument share certain structural similarities, and hence are often discussed together (Bostrom 2003, Aranyosi 2004, Richmond 2008, Bostrom and Kulczycki 2011). Both are cases where reflecting on one’s location among a set of possibilities yields a counter-intuitive conclusion: in one case, that the end of humankind is closer than you initially thought, and in the other, that it is more likely than you initially thought that you are living in a computer simulation. Indeed, the two arguments do share strong structural similarities. But there are also some disanalogies between the two arguments, and I argue that these disanalogies mean that the Simulation Argument succeeds and the Doomsday Argument fails.
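The Doomsday-style shift mentioned here can be sketched as a toy Bayesian update under a self-sampling assumption. All the numbers below are invented for illustration; none of them is drawn from the article:

```python
from fractions import Fraction

# Two hypotheses about the total number of humans who will ever live
# (in billions), held with equal prior credence.
n_short, n_long = 200, 200_000
prior = Fraction(1, 2)

# Self-sampling likelihood of observing any particular birth rank r:
# P(r | N) = 1/N, for r <= N. Your actual rank is assumed to be well
# below n_short, so both likelihoods apply.
lik_short = Fraction(1, n_short)
lik_long = Fraction(1, n_long)

# Posterior credence in the "short" (early doom) hypothesis.
post_short = (prior * lik_short) / (prior * lik_short + prior * lik_long)
```

The update pushes credence heavily toward the early-doom hypothesis (here, 1000/1001), which is the counter-intuitive shift the Doomsday Argument trades on.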
In 1994, Maudlin proposed an objection to retrocausal approaches to quantum mechanics in general, and to the transactional interpretation (TI) in particular, involving an absorber that changes location depending on the trajectory of the particle. Maudlin considered this objection fatal. However, the TI did not die; rather, a number of responses were developed, some attempting to accommodate Maudlin's example within the existing TI, and others modifying the TI. I argue that none of these responses is fully adequate. The reason, I submit, is that there are two aspects to Maudlin's objection; the more readily soluble aspect has received all the attention, but the more problematic aspect has gone unnoticed. I consider the prospects for developing a successful retrocausal quantum theory in light of this second aspect of the objection.
Spontaneous collapse theories of quantum mechanics require an interpretation if their claim to solve the measurement problem is to be vindicated. The most straightforward interpretation rule, the fuzzy link, generates a violation of common sense known as the counting anomaly. Recently, a consensus has developed that the mass density link provides an appropriate interpretation of spontaneous collapse theories that avoids the counting anomaly. In this paper, I argue that the mass density link violates common sense in just as striking a way as the fuzzy link, and hence should not be regarded as a problem-free alternative to the fuzzy link. Hence advocates of spontaneous collapse theories must accept some violation of common sense, although this is not necessarily fatal to their project.
I argued that anyone who adopts the Everettian approach to the foundations of quantum mechanics must also accept the (unpopular) ‘halfer’ solution to the Sleeping Beauty puzzle. Papineau and Durà-Vilà have responded with an argument that it is perfectly cogent both to be an Everettian and to accept the (popular) ‘thirder’ solution to Sleeping Beauty. Here I attempt to rebut their argument, and to clarify my original position.
This article addresses five research questions: What specific behaviors are described in the literature as ethical or unethical? What percentage of business people are believed to be guilty of unethical behavior? What specific unethical behaviors have been observed by bank employees? How serious are the behaviors? Are experiences and attitudes affected by demographics? Conclusions suggest that there are seventeen categories of behavior, and that they are heavily skewed toward internal behaviors. Younger employees have a higher level of ethical consciousness than older employees. The longer one works for a company, the more one may look to job security as a priority; this can lead to rationalizing or overlooking apparently unethical behaviors. More emphasis is needed on internal behaviors, with particular attention to the impact that external behaviors have on internal behaviors.
It is widely acknowledged that the link between quantum language and ordinary language must be "fuzzier" than the traditional eigenstate-eigenvalue link. In the context of spontaneous-collapse theories, Albert and Loewer (1996) argue that the form of this fuzzy link is a matter of convention, and can be freely chosen to minimize anomalies for those theories. I defend the position that the form of the link is empirical, and could be such as to render collapse theories idle. This means that defenders of spontaneous-collapse theories must gamble that the actual form of the link renders such theories tenable.