Kilimanjaro is a paradigmatic mountain, if any is. Consider atom Sparky, which is neither determinately part of Kilimanjaro nor determinately not part of it. Let Kilimanjaro(+) be the body of land constituted, in the way mountains are constituted by their constituent atoms, by the atoms that make up Kilimanjaro together with Sparky, and Kilimanjaro(–) the one constituted by those other than Sparky. On the one hand, there seems to be just one mountain in the vicinity of Kilimanjaro. On the other hand, both Kilimanjaro(+) and Kilimanjaro(–)—and indeed many other similar things—seem to have an equal claim to be a mountain: all of them exhibit the grounds for something being a mountain—like being an elevation of the earth’s surface rising abruptly and to a large height from the surrounding level, or what have you—and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many.
The problem of the many poses the task of explaining mereological indeterminacy of ordinary objects in a way that sustains our familiar practice of counting these objects. The aim of this essay is to develop a solution to the problem of the many that is based on an account of mereological indeterminacy as having its source in how ordinary objects are, independently of how we represent them. At the center of the account stands a quasi-hylomorphic ontology of ordinary objects as material objects with multiple individual forms.
Consider a cat on a mat. On the one hand, there seems to be just one cat, but on the other there seem to be many things with as good a claim as anything in the vicinity to being a cat. Hence, the problem of the many. In his ‘Many, but Almost One,’ David Lewis offered two solutions. According to the first, only one of the many is indeed a cat, although it is indeterminate exactly which one. According to the second, the many are all cats, but they are almost identical to each other, and hence they are almost one. For Lewis, the two solutions do not compete with each other but are mutually complementary, as each one can assist the other. This paper has two aims: to give some reasons against the first of these two solutions, but then to defend the second as a self-standing solution from Lewis’s considerations to the contrary.
I argue that the many worlds explanation of quantum computation is not licensed by, and in fact is conceptually inferior to, the many worlds interpretation of quantum mechanics from which it is derived. I argue that the many worlds explanation of quantum computation is incompatible with the recently developed cluster state model of quantum computation. Based on these considerations I conclude that we should reject the many worlds explanation of quantum computation.
Consider a cat on a mat. On the one hand, there seems to be just one cat, but on the other there seem to be many things with as good a claim to being a cat, and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many. In his ‘Many, but Almost One,’ David Lewis offered two solutions. According to the first, only one of the many is indeed a cat, although it is indeterminate exactly which one. According to the second, the many are all cats, but they are almost identical to each other, and hence they are almost one. For Lewis, the two solutions do not compete with each other but are mutually complementary, as each can assist the other. This paper has two aims: first to argue against the first of these two solutions, and then to defend the second as a self-standing solution from Lewis’s considerations to the contrary. In both parts I will assume the plausible but controversial view that vagueness is a kind of semantic indecision—a view of which Lewis himself is one of the main defenders.
Although the predominant view is that vagueness is due to our language being imprecise, the alternative idea that objects themselves do not have determinate borders has received an occasional hearing. But what has failed to be appreciated is how this idea can avoid a puzzle Peter Unger named “The Problem of the Many.”
Naive mereology studies ordinary, common-sense beliefs about part and whole. Some of the speculations in this article on naive mereology do not bear directly on Peter van Inwagen's "Material Beings". The other topics, (1) and (2), both do. (1) Here is an example of Peter Unger's "Problem of the Many". How can a table be a collection of atoms when many collections of atoms have equally strong claims to be that table? Van Inwagen invokes fuzzy sets to solve this problem. I claim that an alternative treatment of vagueness, supervaluations over many-valued valuations, provides a better solution. (2) The Special Composition Question asks how parts compose a whole. One who rejects van Inwagen's answer in terms of constituting a life need not provide some alternative answer. Even if all answers to the Special Question fail, there are a multitude of less general composition questions that are not so difficult.
Peter Unger's 'problem of the many' has elicited many responses over the past quarter of a century. Here I present a new problem of the many. This new problem, I claim, is resistant to the solutions currently on offer for Unger's problem.
A plausible desideratum for an account of the nature of objects at and across time is that it accommodate the phenomenon of vagueness without locating vagueness in the world. A series of arguments have attempted to show that while universalist perdurantism – which combines a perdurantist account of persistence with an unrestricted mereological account of composition – meets this desideratum, endurantist accounts do not. If endurantists reject unrestricted composition then they must hold that vagueness is ontological. But if they embrace unrestricted composition they are faced with the problem of the many, and cannot plausibly accommodate vagueness. This paper disambiguates two related sub-problems of the problem of the many, and argues that universalist perdurantism is not superior to universalist endurantism with respect to either of these.
Supervaluational treatments of vagueness are currently quite popular among those who regard vagueness as a thoroughly semantic phenomenon. Peter Unger's 'problem of the many' may be regarded as arising from the vagueness of our ordinary physical-object terms, so it is not surprising that supervaluational solutions to Unger's problem have been offered. I argue that supervaluations do not afford an adequate solution to the problem of the many. Moreover, the considerations I raise against the supervaluational solution tell also against the solution to the problem of the many which is suggested by adherents of the epistemic theory of vagueness.
It has been argued that St. Thomas Aquinas’s anthropological views fall prey to the problem of “Too Many Thinkers.” The worry, roughly, is that his views entail that I—a human person—am able to think, but that my soul—which is not a human person—is also able to think. Hence, too many thinkers: there are too many of us having my thoughts. In this paper, I show why this is not a problem for St. Thomas. Along the way, I also address Peter Unger’s argument for substance dualism.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud and, circumstances permitting, rain like a cloud, and it generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
In this paper I develop a novel response to the exclusion problem. I argue that the nature of the events in the causally complete physical domain raises the “problem of many causes”: there will typically be countless simultaneous low-level physical events in that domain that are causally sufficient for any given high-level physical event (like a window breaking or an arm raising). This shows that even reductive physicalists must admit that the version of the exclusion principle used to pose the exclusion problem against non-reductive physicalism is too strong. The burden is on proponents of the exclusion problem to provide a reason to think that any qualifications placed on the exclusion principle will solve the problem of many causes while ruling out causation by irreducible mental events.
In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, it exactly constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap in the distribution of responsibility that may be considered morally problematic. Whether a gap is morally problematic, we suggest, depends on the reasons why responsibility is distributed. This, in turn, depends, at least in part, on the sense of responsibility employed, a main distinction being that between backward-looking and forward-looking responsibility.
An attempt to resolve the controversy regarding the solution of the Sleeping Beauty Problem in the framework of the Many-Worlds Interpretation led to a new controversy regarding the Quantum Sleeping Beauty Problem. We apply the concept of a measure of existence of a world and reach the solution known as ‘thirder’ solution which differs from Peter Lewis’s ‘halfer’ assertion. We argue that this method provides a simple and powerful tool for analysing rational decision theory problems.
The so-called ‘conceptual problem of other minds’ has been articulated in a number of different ways. I discuss two, drawing out some constraints on an adequate account of the grasp of concepts of mental states. Distinguishing between behaviour-based and identity-based approaches to the problem, I argue that the former, exemplified by Brewer and Pickard, are incomplete as they presuppose, but do not provide an answer to, what I shall call the conceptual problem of other bodies. I end with some remarks on identity-based approaches, pointing out related problems for versions of this approach held by Cassam and Peacocke.
It is argued that, given certain reasonable premises, an infinite number of qualitatively identical but numerically distinct minds exist per functioning brain. The three main premises are (1) mental properties supervene on brain properties; (2) the universe is composed of particles with nonzero extension; and (3) each particle is composed of continuum many point-sized bits of particle-stuff, and these points of particle-stuff persist through time.
One winter’s Saturday Clarence wakes up. He realises he has left his umbrella at work. The office is locked, and he can’t get in. Being one of those people who punish themselves for their mistakes, he can’t bring himself to buy a replacement. He has an engagement six kilometres down the road and starts wondering whether it will rain. Normally, this would not be a problem, but his motor vehicle has broken down because he forgot to have it serviced. And of course, he blames himself for this mistake, so it is only natural that he can’t bring himself to hire a cab or take a bus. He really should hope that it rains and that he gets drenched on the way to his engagement, but he is only human after all, and a small part of him hopes that it is a sunny day.
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism.
The diversity of N-space theories and the unrestrained growth in the number of spaces within multiple-space models have incurred general skepticism about the new search-space variants within the search-space paradigm of psychology. I argue that any N-space theory is computationally equivalent to a single space model. Nevertheless, the N-space theories may explain the systematic behavior of human problem solving better than the original one search space theory by identifying relationships between the tasks that occur in problem solving. These tasks are independent of the particular process and may not be explicitly represented by the problem solver. N-space theorists seem to overlook their own reason for distinguishing N-space theories from single space models, namely the presupposition that these tasks must have a unified, underlying search space architecture. This assumption is ill-founded and may implement a procedural restraint that could impede psychological research.
I argue that the personhood of a fetus is analogous to the heap. If this is correct, then the moral status or intrinsic value of a fetus would be supervenient upon the fetus's biological development. Yet to compare its claim vis-a-vis its mother's, we need to consider not only their moral status, but also the type of claim they each have. Thus we have to give weight to the two factors or variables of the mother's moral status and her claim to some lesser good (assuming that this is not the kind of case in which the mother would suffer some great harm, such as death). And then we have to consider the fetus's lesser moral status and its claim to some greater good, namely, life. I argue that we do not know how to compare these two-variable claims. This also explains why the central cases of abortion have been so difficult to resolve. I suggest that the problem of animal rights has a similar structure.
We numerically solve the functional differential equations (FDEs) of 2-particle electrodynamics, using the full electrodynamic force obtained from the retarded Lienard–Wiechert potentials and the Lorentz force law. In contrast, the usual formulation uses only the Coulomb force (scalar potential), reducing the electrodynamic 2-body problem to a system of ordinary differential equations (ODEs). The ODE formulation is mathematically suspect since FDEs and ODEs are known to be incompatible; however, the Coulomb approximation to the full electrodynamic force has been believed to be adequate for physics. We can now test this long-standing belief by comparing the FDE solution with the ODE solution, in the historically interesting case of the classical hydrogen atom. The solutions differ. A key qualitative difference is that the full force involves a ‘delay’ torque. Our existing code is inadequate to calculate the detailed interaction of the delay torque with radiative damping. However, a symbolic calculation provides conditions under which the delay torque approximately balances (3rd order) radiative damping. Thus, further investigations are required, and it was prematurely concluded that radiative damping makes the classical hydrogen atom unstable. Solutions of FDEs naturally exhibit an infinite spectrum of discrete frequencies. The conclusion is that (a) the Coulomb force is not a valid approximation to the full electrodynamic force, so that (b) the n-body interaction needs to be reformulated in various current contexts such as molecular dynamics.
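The FDE-versus-ODE contrast described in this abstract can be illustrated with a toy delay equation. The following sketch is purely illustrative and is not the paper's code: the delay value, step size, and constant history below are assumptions chosen only to show that introducing retardation changes the qualitative behaviour of an otherwise identical equation.

```python
# Toy illustration: fixed-step Euler integration of the delay equation
# dx/dt = -x(t - tau), compared with its "instantaneous" ODE counterpart
# dx/dt = -x(t) (the tau = 0 case). A history buffer supplies the
# retarded value x(t - tau), which an ODE solver has no access to.

def integrate(tau, dt=0.01, t_end=10.0):
    n_delay = int(round(tau / dt))       # number of steps spanned by the delay
    history = [1.0] * (n_delay + 1)      # assumed constant history: x(t) = 1 for t <= 0
    xs = list(history)
    for _ in range(int(round(t_end / dt))):
        x_delayed = xs[-1 - n_delay]     # x(t - tau); equals x(t) when tau = 0
        xs.append(xs[-1] + dt * (-x_delayed))
    return xs

ode = integrate(tau=0.0)   # reduces to dx/dt = -x(t): monotone positive decay
fde = integrate(tau=1.5)   # retarded force: the solution overshoots zero and oscillates

print(min(ode), min(fde))
```

With the delay switched on, the solution crosses zero and oscillates, while the ODE version decays monotonically: the same right-hand side, evaluated at a retarded time, produces qualitatively different dynamics, which is the point the abstract makes about the Coulomb (ODE) approximation.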
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
According to Principles of Sufficient Reason, every truth (in some relevant group) has an explanation. One of the most popular defenses of Principles of Sufficient Reason has been the presupposition of reason defense, which takes endorsement of the defended PSR to play a crucial role in our theory selection. According to recent presentations of this defense, our method of theory selection often depends on the assumption that, if a given proposition is true, then it has an explanation, and this will only be justified if we think this holds for all propositions in the relevant group. I argue that this argument fails even when restricted to contingent propositions, and even if we grant that there is no non-arbitrary way to divide true propositions that have explanations from those that lack them. Further, we can give an alternate explanation of what justifies our selecting theories on the basis of explanatory features: the crucial role is not played by an endorsement of a PSR, but rather by our belief that, prima facie, we should prefer theories that exemplify explanatory power to greater degrees than their rivals. This guides our theory selection in a manner similar to ontological parsimony and theoretical simplicity. Unlike a PSR, our belief about explanatory power gives us a prima facie guiding principle, which provides justification in the cases where we think we have it, and not in the cases where we think we don't.
In this paper I argue, contra Fraser MacBride, that conceptual analysis, and in particular the distinction between numerical and qualitative identity, cannot solve the Problem of Universals, whether understood as the One over Many or as the Many over One. I show why the solutions needed to solve either version of the problem must be in terms of truthmakers, and that the distinction between numerical and qualitative identity is not sufficient to solve them.
In this paper, I hope to solve a problem that’s as old as the hills: the problem of contingency for religious belief. Paradigmatic examples of this argument begin with a counterfactual premise: had we been born at a different time or in a different place, we easily could have held different beliefs on religious topics. Ultimately, and perhaps by additional steps, we're meant to reach the skeptical conclusion that very many of our religious beliefs do not amount to knowledge. I survey some historical examples of this argument, and I try to fill the gap between the counterfactual premise and the skeptical conclusion as forcefully as possible. I consider the following possibilities: there are no additional steps in the argument; or there are and they concern the alleged safety condition on knowledge, or the alleged non-accidentality condition on knowledge, or the unclarity produced by disagreement. On every possibility, the argument from the counterfactual premise to the conclusion of widespread skepticism is invalid. It seems, then, that there is no serious problem of contingency for religious belief.
We show how an epistemology informed by cognitive science promises to shed light on an ancient problem in the philosophy of mathematics: the problem of exactness. The problem of exactness arises because geometrical knowledge is thought to concern perfect geometrical forms, whereas the embodiment of such forms in the natural world may be imperfect. There thus arises an apparent mismatch between mathematical concepts and physical reality. We propose that the problem can be solved by emphasizing the ways in which the brain can transform and organize its perceptual intake. It is not necessary for a geometrical form to be perfectly instantiated in order for perception of such a form to be the basis of a geometrical concept.
The article introduces a special issue of the journal Metaphysica on vagueness and ontology. The conventional view has it that all vagueness is semantic or representational. Russell, Dummett, Evans and Lewis, inter alia, have argued that the notion of “ontic” or “metaphysical” vagueness is not even intelligible. In recent years, a growing minority of philosophers have tried to make sense of the notion and have spelled it out in various ways. The article gives an overview and relates the idea of ontic vagueness to the unquestioned phenomenon of fuzzy spatiotemporal boundaries and to the associated “problem of the many”. It briefly discusses the question of whether ontic vagueness can be spelled out in terms of “vague identity”, emphasizes the often neglected role of the difference between sortal and non-sortal ontologies and suggests a deflationary answer to the ill-conceived question of whether the “ultimate source” of vagueness lies either in language or in the world.
Unger has recently argued that if you are the only thinking and experiencing subject in your chair, then you are not a material object. This leads Unger to endorse a version of Substance Dualism according to which we are immaterial souls. This paper argues that this is an overreaction. We argue that the specifically Dualist elements of Unger’s view play no role in his response to the problem; only the view’s structure is required, and that is available to Unger’s opponents. We outline one such non-Dualist view, suggest how to resolve the dispute, respond to some objections, and argue that ours is but one of many views that survive Unger’s challenge. All these views are incompatible with microphysicalism. So Unger’s discussion does contain an insight: if you are the only conscious subject in your chair, then microphysicalism is false. Unger’s mistake was to infer Substance Dualism from this; for microphysicalism is not the only alternative to Dualism.
Two recurrent arguments levelled against the view that enduring objects survive change are examined within the framework of the B-theory of time: the argument from Leibniz's Law and the argument from Instantiation of Incompatible Properties. Both arguments are shown to be question-begging and hence unsuccessful.
Shoemaker maintains that when a functionalist theory of mind is combined with his belief about individuating properties and the well-known cerebrum-transplant thought experiment, the resulting position will be a version of the psychological approach to personal identity that can avoid The Problem of Too Many Thinkers. I maintain that the costs of his solution—that the human animal is incapable of thought—are too high. Shoemaker also has not provided an argument against there existing a merely conscious being that is not essentially self-conscious but is spatially coincident with a person who is essentially self-conscious. Both the person and the merely sentient being will be transplanted when the cerebrum is. And another thought experiment will make it impossible for Shoemaker to identify the person and the merely conscious being.
Lockean accounts of personal identity face a problem of too many thinkers arising from their denial that we are identical to our animals and the assumption that our animals can think. Sydney Shoemaker has responded to this problem by arguing that it is a consequence of functionalism that only things with psychological persistence conditions can have mental properties, and thus that animals cannot think. I discuss Shoemaker’s argument and demonstrate two ways in which it fails. Functionalism does not rid the Lockean of the problem of too many thinkers.
The case is discussed for the doctrine of hell as posing a unique problem of evil for adherents to the Abrahamic religions who endorse traditional theism. The problem is particularly acute for those who accept retributivist formulations of the doctrine of hell according to which hell is everlasting punishment for failing to satisfy some requirement. Alternatives to retributivism are discussed, including the unique difficulties that each one faces.
Peter Hare and Edward Madden's collaborative book Evil and the Concept of God (1968) has become a staple in the literature about the problem of evil and remains frequently cited by supporters and critics alike. The major concepts of the work arose out of earlier papers in which they first began to formulate their arguments about the problem of evil. Their article "Evil and Unlimited Power" embodies many of their arguments against quasi-theist attempts to resolve the problem of evil. Assembled from these and other papers, their compendium frames a thorough synthesis of the long history of debate regarding the problem of evil, and contributes their own exhaustive, point-by-point attack on modern defenders of three main ...
The fundamental constants that are involved in the laws of physics which describe our universe are finely tuned for life, in the sense that if some of the constants had slightly different values life could not exist. Some people hold that this provides evidence for the existence of God. I will present a probabilistic version of this fine-tuning argument which is stronger than all other versions in the literature. Nevertheless, I will show that one can have reasonable opinions such that the fine-tuning argument doesn't lead to an increase in one's probability for the existence of God. Sections: The fine-tuning argument; Objective versus subjective probability; Observational selection effects; The problem of old evidence; Against the fine-tuning argument; Many universes.
The quantitative problem of old evidence is the problem of how to measure the degree to which e confirms h for agent A at time t when A regards e as justified at t. Existing attempts to solve this problem have applied the e-difference approach, which compares A's probability for h at t with what probability A would assign h if A did not regard e as justified at t. The quantitative problem has been widely regarded as unsolvable primarily on the grounds that the e-difference approach suffers from intractable problems. Various philosophers have proposed that 'Bayesianism' should be rejected as a research strategy in confirmation theory in part because of the unsolvability of this problem. I develop a version of the e-difference approach which overcomes these problems and possesses various advantages (but also certain limitations). I develop an alternative 'theistic' approach which handles many cases that my development of the e-difference approach does not handle. I conclude with an assessment of the significance of the quantitative problem for Bayesianism and argue that this problem is misunderstood in so far as it is regarded as unsolvable, and in so far as it is regarded as a problem only for Bayesians.
Wars are large-scale conflicts between organized groups of belligerents, which involve suffering, devastation, and brutality unlike almost anything else in human experience. Whatever one’s other beliefs about morality, all should agree that the horrors of war are all but unconscionable, and that warfare can be justified only if we have some compelling account of what is worth fighting for, which can justify contributing, as individuals and as groups, to this calamitous endeavour. Although this question should obviously be central to both philosophical and political discussion about war, it is at the forefront of neither. In recent years, philosophical discussion of warfare has bloomed, but the debate has focused on whom we may kill, on the assumption that our aims are justified. Political debate, meanwhile, is more concerned with matters of prudence, international law, and public justification, than with reassessing what is worth fighting for. For wars of intervention to halt or prevent massive humanitarian crises, this gap is not so troubling. When warfare is the only means to prevent the mass killing or enslavement of the innocent, the purposes of military force are clear enough (though undoubtedly many other problems remain). The problem is more pressing, however, for the justification of national defence. Although common-sense morality and international law view national defence as the paradigm case of justified warfare, grounding this consensus is surprisingly difficult. We typically believe that any state is justified in using lethal force to protect its territory against any form of uninvited military incursion by any other state. And yet we lack a good argument to explain why this should be so.
In this chapter, I explain why one familiar and otherwise plausible approach to the justification of killing in war cannot adequately ground common-sense views of permissible national defence. Reductionists believe that justified warfare reduces to an aggregation of acts that are justified under ordinary principles of interpersonal morality. The standard form of reductionism focuses on the principles governing killing in ordinary life, specifically those that justify intentional killing in self- and other-defence, and unintended but foreseen (for short, collateral) killing as a lesser evil. Justified warfare, on this view, is no more than the coextension of multiple acts justified under these two principles. Reductionism is the default philosophical approach to thinking through the ethics of killing in war. It makes perfect sense to ask what principles govern permissible killing in general, before applying them to the particular context of war. If it cannot deliver a plausible set of conclusions about when national defence is permitted, then we must either revise our beliefs about which conclusions count as plausible, or else face the significant challenge of developing a different theoretical model for justifying warfare—an exceptionalist model, which views war as an exception to the regular moral landscape, governed by principles that apply to nothing else but war. We must show, in other words, that there is something worth fighting for in wars of national defence, which is not engaged when we use force in any other context. The chapter proceeds as follows. Section 2.2 sets out the argument against reductionism. Section 2.3 considers and rebuts one common response to the argument, which has often been thought sufficient grounds to disregard its conclusion. Section 2.4 then asks whether a modified reductionism would survive unscathed by the argument. Finally, section 2.5 sets out some desiderata on a plausible exceptionalist alternative.
Section 2.6 concludes.
Many distinct, controversial issues are to be found within the labyrinthine twists and turns of the problem of evil. For philosophers of the seventeenth and early eighteenth centuries, evil presented a challenge to the consistency and rationality of the world-picture disclosed by the new way of ideas. In dealing with this challenge, however, philosophers were also concerned with their positions in the theological debates about original sin, free will, and justification that were the legacy of the Protestant Reformation to European intellectual life. Emerging from a conference on the problem of evil in the early modern period held at the University of Toronto in 1999, the papers in this collection represent some of the best original work being done today on the theodicies of such early modern philosophers as Leibniz, Suarez, Spinoza, Malebranche, and Pierre Bayle.