Kilimanjaro is a paradigmatic mountain, if any is. Consider atom Sparky, which is neither determinately part of Kilimanjaro nor determinately not part of it. Let Kilimanjaro(+) be the body of land constituted, in the way mountains are constituted by their constituent atoms, by the atoms that make up Kilimanjaro together with Sparky, and Kilimanjaro(–) the one constituted by those other than Sparky. On the one hand, there seems to be just one mountain in the vicinity of Kilimanjaro. On the other hand, both Kilimanjaro(+) and Kilimanjaro(–)—and indeed many other similar things—seem to have an equal claim to be a mountain: all of them exhibit the grounds for something being a mountain—like being an elevation of the earth’s surface rising abruptly and to a large height from the surrounding level, or what have you—and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many.
The problem of the many poses the task of explaining mereological indeterminacy of ordinary objects in a way that sustains our familiar practice of counting these objects. The aim of this essay is to develop a solution to the problem of the many that is based on an account of mereological indeterminacy as having its source in how ordinary objects are, independently of how we represent them. At the center of the account stands a quasi-hylomorphic ontology of ordinary objects as material objects with multiple individual forms.
I argue that the many worlds explanation of quantum computation is not licensed by, and in fact is conceptually inferior to, the many worlds interpretation of quantum mechanics from which it is derived. I argue that the many worlds explanation of quantum computation is incompatible with the recently developed cluster state model of quantum computation. Based on these considerations I conclude that we should reject the many worlds explanation of quantum computation.
Consider a cat on a mat. On the one hand, there seems to be just one cat, but on the other there seem to be many things with as good a claim to being a cat, and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many. In his ‘Many, but Almost One,’ David Lewis offered two solutions. According to the first, only one of the many is indeed a cat, although it is indeterminate exactly which one. According to the second, the many are all cats, but they are almost identical to each other, and hence they are almost one. For Lewis, the two solutions do not compete with each other but are mutually complementary, as each can assist the other. This paper has two aims: first to argue against the first of these two solutions, and then to defend the second as a self-standing solution from Lewis’s considerations to the contrary. In both parts I will assume the plausible but controversial view that vagueness is a kind of semantic indecision—a view of which Lewis himself is one of the main defenders.
Although the predominant view is that vagueness is due to our language being imprecise, the alternative idea that objects themselves do not have determinate borders has received an occasional hearing. But what has failed to be appreciated is how this idea can avoid a puzzle Peter Unger named “The Problem of the Many.”
Naive mereology studies ordinary, common-sense beliefs about part and whole. Some of the speculations in this article on naive mereology do not bear directly on Peter van Inwagen's "Material Beings". The other topics, (1) and (2), both do. (1) Here is an example of Peter Unger's "Problem of the Many". How can a table be a collection of atoms when many collections of atoms have equally strong claims to be that table? Van Inwagen invokes fuzzy sets to solve this problem. I claim that an alternative treatment of vagueness, supervaluations over many-value valuations, provides a better solution. (2) The Special Composition Question asks how parts compose a whole. One who rejects van Inwagen's answer in terms of constituting a life need not provide some alternative answer. Even if all answers to the Special Question fail, there are a multitude of less general composition questions that are not so difficult.
A plausible desideratum for an account of the nature of objects at, and across, time is that it accommodate the phenomenon of vagueness without locating vagueness in the world. A series of arguments have attempted to show that while universalist perdurantism—which combines a perdurantist account of persistence with an unrestricted mereological account of composition—meets this desideratum, endurantist accounts do not. If endurantists reject unrestricted composition then they must hold that vagueness is ontological. But if they embrace unrestricted composition they are faced with the problem of the many, and cannot plausibly accommodate vagueness. This paper disambiguates two related sub-problems of the problem of the many, and argues that universalist perdurantism is not superior to universalist endurantism with respect to either of these.
Supervaluational treatments of vagueness are currently quite popular among those who regard vagueness as a thoroughly semantic phenomenon. Peter Unger's 'problem of the many' may be regarded as arising from the vagueness of our ordinary physical-object terms, so it is not surprising that supervaluational solutions to Unger's problem have been offered. I argue that supervaluations do not afford an adequate solution to the problem of the many. Moreover, the considerations I raise against the supervaluational solution tell also against the solution to the problem of the many which is suggested by adherents of the epistemic theory of vagueness.
It has been argued that St. Thomas Aquinas’s anthropological views fall prey to the problem of “Too Many Thinkers.” The worry, roughly, is that his views entail that I—a human person—am able to think, but that my soul—which is not a human person—is also able to think. Hence, too many thinkers: there are too many of us having my thoughts. In this paper, I show why this is not a problem for St. Thomas. Along the way, I also address Peter Unger’s argument for substance dualism.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and, circumstances permitting, rain like a cloud, and it generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
In this paper I develop a novel response to the exclusion problem. I argue that the nature of the events in the causally complete physical domain raises the “problem of many causes”: there will typically be countless simultaneous low-level physical events in that domain that are causally sufficient for any given high-level physical event (like a window breaking or an arm raising). This shows that even reductive physicalists must admit that the version of the exclusion principle used to pose the exclusion problem against non-reductive physicalism is too strong. The burden is on proponents of the exclusion problem to provide a reason to think that any qualifications placed on the exclusion principle will solve the problem of many causes while ruling out causation by irreducible mental events.
In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, it exactly constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap in the distribution of responsibility that may be considered morally problematic. Whether a gap is morally problematic, we suggest, depends on the reasons why responsibility is distributed. This, in turn, depends, at least in part, on the sense of responsibility employed, a main distinction being that between backward-looking and forward-looking responsibility.
A sort of 'modal problem of the many' applies to reference to Harry Potter and Sherlock Holmes. An indefinite number of possible beings completely satisfy the stories. Which one of them is Harry? No principled answer seems possible. This led Kripke to deny that names of fictional characters denote possible people. I argue that a supervaluationist theory of the truth of claims about fictional characters solves Kripke's problem.
The so-called ‘conceptual problem of other minds’ has been articulated in a number of different ways. I discuss two, drawing out some constraints on an adequate account of the grasp of concepts of mental states. Distinguishing between behaviour-based and identity-based approaches to the problem, I argue that the former, exemplified by Brewer and Pickard, are incomplete as they presuppose, but do not provide an answer to, what I shall call the conceptual problem of other bodies. I end with some remarks on identity-based approaches, pointing out related problems for versions of this approach held by Cassam and Peacocke.
It is argued that, given certain reasonable premises, an infinite number of qualitatively identical but numerically distinct minds exist per functioning brain. The three main premises are (1) mental properties supervene on brain properties; (2) the universe is composed of particles with nonzero extension; and (3) each particle is composed of continuum many point-sized bits of particle-stuff, and these points of particle-stuff persist through time.
One winter’s Saturday Clarence wakes up. He realises he has left his umbrella at work. The office is locked, and he can’t get in. Being one of those people who punish themselves for their mistakes, he can’t bring himself to buy a replacement. He has an engagement six kilometres down the road and starts wondering whether it will rain. Normally, this would not be a problem, but his motor vehicle has broken down because he forgot to have it serviced. And of course, he blames himself for this mistake, so it is only natural that he can’t bring himself to hire a cab or take a bus. He really should hope that it rains and that he gets drenched on the way to his engagement, but he is only human after all, and a small part of him hopes that it is a sunny day.
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism.
The diverse number of N-space theories and the unrestrained growth of the number of spaces within the multiple space models has incurred general skepticism about the new search space variants within the search space paradigm of psychology. I argue that any N-space theory is computationally equivalent to a single space model. Nevertheless, the N-space theories may explain the systematic behavior of human problem solving better than the original one search space theory by identifying relationships between the tasks that occur in problem solving. These tasks are independent of the particular process and may not be explicitly represented by the problem solver. N-space theorists seem to overlook their own reason for distinguishing N-space theories from single space models, namely the presupposition that these tasks must have a unified, underlying search space architecture. This assumption is ill-founded and may implement a procedural restraint that could impede psychological research.
I argue that the personhood of a fetus is analogous to the heap. If this is correct, then the moral status or intrinsic value of a fetus would be supervenient upon the fetus's biological development. Yet to compare its claim vis-a-vis its mother's, we need to consider not only their moral status, but also the type of claim they each have. Thus we have to give weight to the two factors or variables of the mother's moral status and her claim to some lesser good (assuming that this is not the kind of case in which the mother would suffer some great harm, such as death). And then we have to consider the fetus's lesser moral status and its claim to some greater good, namely, life. I argue that we do not know how to compare these two-variable claims. This also explains why the central cases of abortion have been so difficult to resolve. I suggest that the problem of animal rights has a similar structure.
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
According to Principles of Sufficient Reason, every truth (in some relevant group) has an explanation. One of the most popular defenses of Principles of Sufficient Reason has been the presupposition of reason defense, which takes endorsement of the defended PSR to play a crucial role in our theory selection. According to recent presentations of this defense, our method of theory selection often depends on the assumption that, if a given proposition is true, then it has an explanation, and this will only be justified if we think this holds for all propositions in the relevant group. I argue that this argument fails even when restricted to contingent propositions, and even if we grant that there is no non-arbitrary way to divide true propositions that have explanations from those that lack them. Further, we can give an alternate explanation of what justifies our selecting theories on the basis of explanatory features: the crucial role is not played by an endorsement of a PSR, but rather by our belief that, prima facie, we should prefer theories that exemplify explanatory power to greater degrees than their rivals. This guides our theory selection in a manner similar to ontological parsimony and theoretical simplicity. Unlike a PSR, our belief about explanatory power gives us a prima facie guiding principle, which provides justification in the cases where we think we have it, and not in the cases where we think we don't.
We show how an epistemology informed by cognitive science promises to shed light on an ancient problem in the philosophy of mathematics: the problem of exactness. The problem of exactness arises because geometrical knowledge is thought to concern perfect geometrical forms, whereas the embodiment of such forms in the natural world may be imperfect. There thus arises an apparent mismatch between mathematical concepts and physical reality. We propose that the problem can be solved by emphasizing the ways in which the brain can transform and organize its perceptual intake. It is not necessary for a geometrical form to be perfectly instantiated in order for perception of such a form to be the basis of a geometrical concept.
Unger has recently argued that if you are the only thinking and experiencing subject in your chair, then you are not a material object. This leads Unger to endorse a version of Substance Dualism according to which we are immaterial souls. This paper argues that this is an overreaction. We argue that the specifically Dualist elements of Unger’s view play no role in his response to the problem; only the view’s structure is required, and that is available to Unger’s opponents. We outline one such non-Dualist view, suggest how to resolve the dispute, respond to some objections, and argue that ours is but one of many views that survive Unger’s challenge. All these views are incompatible with microphysicalism. So Unger’s discussion does contain an insight: if you are the only conscious subject in your chair, then microphysicalism is false. Unger’s mistake was to infer Substance Dualism from this; for microphysicalism is not the only alternative to Dualism.
Two recurrent arguments levelled against the view that enduring objects survive change are examined within the framework of the B-theory of time: the argument from Leibniz's Law and the argument from Instantiation of Incompatible Properties. Both arguments are shown to be question-begging and hence unsuccessful.
Shoemaker maintains that when a functionalist theory of mind is combined with his belief about individuating properties and the well-known cerebrum-transplant thought experiment, the resulting position will be a version of the psychological approach to personal identity that can avoid The Problem of Too Many Thinkers. I maintain that the costs of his solution—that the human animal is incapable of thought—are too high. Shoemaker also has not provided an argument against there existing a merely conscious being that is not essentially self-conscious but is spatially coincident with a person who is essentially self-conscious. Both the person and the merely sentient being will be transplanted when the cerebrum is. And another thought experiment will make it impossible for Shoemaker to identify the person and the merely conscious being.
The case is discussed for the doctrine of hell as posing a unique problem of evil for adherents to the Abrahamic religions who endorse traditional theism. The problem is particularly acute for those who accept retributivist formulations of the doctrine of hell according to which hell is everlasting punishment for failing to satisfy some requirement. Alternatives to retributivism are discussed, including the unique difficulties that each one faces.
Peter Hare and Edward Madden's collaborative book Evil and the Concept of God (1968) has become a staple in literature about the problem of evil and remains frequently cited by supporters and critics alike. The major concepts of the work arose out of earlier papers in which they first began to formulate their arguments about the problem of evil. Their article "Evil and Unlimited Power" embodies many of their arguments against quasi-theist attempts to resolve the problem of evil. Assembled from these and other papers, their compendium frames a thorough synthesis of the long history of debate regarding the problem of evil, and contributes their own exhaustive, point-by-point attack on modern defenders of three main ..
The fundamental constants that are involved in the laws of physics which describe our universe are finely tuned for life, in the sense that if some of the constants had slightly different values life could not exist. Some people hold that this provides evidence for the existence of God. I will present a probabilistic version of this fine-tuning argument which is stronger than all other versions in the literature. Nevertheless, I will show that one can have reasonable opinions such that the fine-tuning argument doesn't lead to an increase in one's probability for the existence of God. Topics covered: the fine-tuning argument; objective versus subjective probability; observational selection effects; the problem of old evidence; against the fine-tuning argument; many universes.
The quantitative problem of old evidence is the problem of how to measure the degree to which e confirms h for agent A at time t when A regards e as justified at t. Existing attempts to solve this problem have applied the e-difference approach, which compares A's probability for h at t with what probability A would assign h if A did not regard e as justified at t. The quantitative problem has been widely regarded as unsolvable primarily on the grounds that the e-difference approach suffers from intractable problems. Various philosophers have proposed that 'Bayesianism' should be rejected as a research strategy in confirmation theory in part because of the unsolvability of this problem. I develop a version of the e-difference approach which overcomes these problems and possesses various advantages (but also certain limitations). I develop an alternative 'theistic' approach which handles many cases that my development of the e-difference approach does not handle. I conclude with an assessment of the significance of the quantitative problem for Bayesianism and argue that this problem is misunderstood in so far as it is regarded as unsolvable, and in so far as it is regarded as a problem only for Bayesians.
According to David Chalmers, the hard problem of consciousness consists of explaining how and why qualitative experience arises from physical states. Moreover, Chalmers argues that materialist and reductive explanations of mentality are incapable of addressing the hard problem. In this chapter, I suggest that Chalmers’ hard problem can be usefully distinguished into a ‘how question’ and ‘why question,’ and I argue that evolutionary biology has the resources to address the question of why qualitative experience arises from brain states. From this perspective, I discuss the different kinds of evolutionary explanations (e.g., adaptationist, exaptationist, spandrel) that can explain the origins of the qualitative aspects of various conscious states. This argument is intended to clarify which parts of Chalmers’ hard problem are amenable to scientific analysis.
According to orthodox quantum mechanics, state vectors change in two incompatible ways: "deterministically" in accordance with Schroedinger's time-dependent equation, and probabilistically if and only if a measurement is made. It is argued here that the problem of measurement arises because the precise mutually exclusive conditions for these two types of transitions to occur are not specified within orthodox quantum mechanics. Fundamentally, this is due to an inevitable ambiguity in the notion of "measurement" itself. Hence, if the problem of measurement is to be resolved, a new, fully objective version of quantum mechanics needs to be developed which does not incorporate the notion of measurement in its basic postulates at all.
The problem of easy knowledge has been defined in the contemporary epistemological literature as a problem that arises in two distinct ways. The purpose of this essay is to show that these supposedly different ways of generating the same problem in fact give rise to two distinct problems, which require distinct answers. One of them concerns the (unacceptable) easy acquisition of first-order knowledge, and the other the (unacceptable) easy acquisition of second-order knowledge. In addition, I present how infinitism—the epistemic theory according to which the reasons that justify a belief must be infinite in number and non-repeating—can deal with each of these problems.
This paper examines Husserl’s fascination with the issues raised by Hume’s critique of the philosophy of the ego and the continuity of consciousness. The path taken here follows a continental and phenomenological approach. Husserl’s 1905 lecture course on the temporalization of immanent time-consciousness is a phenomenological-eidetic examination of how the continuity of consciousness and the consciousness of continuity are possible. It was by way of Husserl’s reading of Hume’s discussion of “flux” or “flow” that his discourse on temporal phenomena led to the classification of a point-like now as a “fiction” and opened up a horizonal approach to the present that Hume’s introspective analyses presuppose but that escaped the limitations of the language that was available to him. In order to demonstrate the radicality of Husserl’s temporal investigations and his inspiration in the work of Hume, I show how his phenomenological discourse on the living temporal flow of consciousness resolves the latter’s concern about the problem of continuity by re-thinking how, in the absence of an abiding impression of Self, experience is continuous throughout the flux of its running off impressions.
It has been recently argued by a number of metaphysicians—Trenton Merricks and Eric Olson among them—that any variety of dualism that claims that human persons have souls as proper parts (rather than simply being identical to souls) will face a too-many-thinkers problem. In this paper, I examine whether this objection applies to the views of Aquinas, who famously claims that human persons are soul-body composites. I go on to argue that a straightforward reading of Aquinas’s texts might lead us to believe that he falls prey to Merricks and Olson’s objection, but that a more heterodox interpretation reveals a way to avoid this problem.
In 1955, Goodman set out to 'dissolve' the problem of induction, that is, to argue that the old problem of induction is a mere pseudoproblem not worthy of serious philosophical attention. I will argue that, under naturalistic views of the reflective equilibrium method, it cannot provide a basis for a dissolution of the problem of induction. This is because naturalized reflective equilibrium is -- in a way to be explained -- itself an inductive method, and thus renders Goodman's dissolution viciously circular. This paper, then, examines how the old problem of induction crept back in while nobody was looking.
It is argued that the main problem with "the problem of the direction of time" is to figure out what the problem is or is supposed to be. Towards this end, an attempt is made to disentangle and to classify some of the many issues which have been discussed under the label of 'the direction of time'. Secondly, some technical apparatus is introduced in the hope of producing a sharper formulation of the issues than they have received in the philosophical literature. Finally, some tentative suggestions about the central issues are offered. In particular, it is suggested that entropy and irreversibility are much less crucial to the central issues than most philosophers would have us believe. This suggestion is not made because of any firm conviction of its correctness but rather because it helps to focus the discussion on some basic but long neglected assumptions which underlie traditional approaches.
Watkins proposes a neo-Popperian solution to the pragmatic problem of induction. He asserts that evidence can be used non-inductively to prefer the principle that corroboration is more successful over all human history than that, say, counter-corroboration is more successful either over this same period or in the future. Watkins's argument for rejecting the first counter-corroborationist alternative is beside the point, however, as whatever is the best strategy over all human history is irrelevant to the pragmatic problem of induction, since we are not required to act in the past; and his argument for rejecting the second presupposes induction.
In his recent book, A Frightening Love: Recasting the Problem of Evil, Andrew Gleeson challenges a certain conception of justification assumed in mainstream analytic philosophy and argues that analytic philosophy is ill-suited to deal with the most pressing, existential, form of the problem of evil. In this article I examine some aspects of that challenge.
Molyneux’s question, whether the newly sighted might immediately recognize tactilely familiar shapes by sight alone, has produced an array of answers over three centuries of debate and discussion. I propose the first pluralist response: many different answers, both yes and no, are individually sufficient as an answer to the question as a whole. I argue that this is possible if we take the question to be a cluster concept of sub-problems. This response opposes traditional answers that isolate specific perceptual features as uniquely applicable to Molyneux’s question and grant viability to only one reply. Answering Molyneux’s question as a cluster concept may also serve as a methodology for resolving other philosophical problems.
I show that the recursive structure of Leibniz's Law requires agents to perform infinitely many operations to psychologically identify the referents of phenomenal and physical concepts, even though the referents of ordinary concepts (e.g. Hesperus and Phosphorus) can be identified in a finite number of steps. The resulting problem resembles the hard problem of consciousness in the fact that it appears (and indeed is) unsolvable by anyone for whom it arises, and in the fact that it invites dualist and eliminativist responses. Moreover, if this is the hard problem then we can predict that regardless of the strength of the argument for physicalism, and regardless of physicalism's truth, an ineliminable dissatisfaction is bound to accompany any physicalist theory of consciousness. Accordingly, I suggest that this is the hard problem of consciousness, and therefore that the hard problem arises from a recursively degenerate application of Leibniz's Law.
Many epistemologists accept some version of the following foundationalist epistemic principle: if one has an experience as if p then one has prima facie justification that p. I argue that this principle faces a challenge that it inherits from classical foundationalism: the problem of the speckled hen. The crux of the problem is that some properties are presented in experience at a level of determinacy that outstrips our recognitional capacities. I argue for an amendment to the principle that adds to its antecedent the requirement that the subject have a recognitional capacity with respect to the given property.
Many environmental problems are longitudinal collective action problems. They arise from the cumulative unintended effects of a vast amount of seemingly insignificant decisions and actions by individuals who are unknown to each other and distant from each other. Such problems are likely to be effectively addressed only by an enormous number of individuals each making a nearly insignificant contribution to resolving them. However, when a person’s making such a contribution appears to require sacrifice or costs, the problem of inconsequentialism arises: given that a person’s contribution, although needed (albeit not necessary), is nearly inconsequential to addressing the problem and may require some cost from the standpoint of the person’s own life, why should the person make the effort, particularly when it is uncertain (or even unlikely) whether others will do so? In this article I argue that justifications for making the effort to respond to longitudinal collective action environmental problems are, on the whole, particularly well supported by virtue-oriented normative theories, on which character traits are evaluated as virtues and vices consequentially or teleologically and actions are evaluated in terms of virtues and vices. If ethical theories are to be assessed on their theoretical and practical adequacy, and if providing a compelling response to the problem of inconsequentialism is an instance of such adequacy, then this is a reason for preferring virtue-oriented ethical theory over non-virtue-oriented ethical theories, such as Kantian, act utilitarian, and global utilitarian theories.
A crucial question for libertarians about free will and moral responsibility concerns how their accounts secure more control than compatibilism. This problem is particularly exasperating for event-causal libertarianism, as it seems that the only difference between these accounts and compatibilism is that the former require indeterminism. But how can indeterminism, a mere negative condition, enhance control? This worry has led many to conclude that the only viable form of libertarianism is agent-causal libertarianism. In this paper I show that this conclusion is premature. I explain how event-causal libertarianism secures more control than compatibilism by offering a novel argument for incompatibilism. Part of the reason my solution has gone unnoticed is that it is often mistakenly assumed that an agent's control is wholly exhausted by the agent's powers and abilities. I argue, however, that control is constituted not just by what we have the ability to do, but also by what we have the opportunity to do. And it is by furnishing agents with new opportunities that event-causal libertarianism secures enhanced control. In order to defend this claim, I provide an analysis of opportunities and construct a novel incompatibilist argument to show that the opportunity to do otherwise is incompatible with determinism.
On the “Russellian” solution to the Gettier problem, every Gettier case involves the implicit or explicit use of a false premise on the part of the subject. We distinguish between two senses of “justification”: “legitimation” and “justification proper.” The former does not require true premises, but the latter does. We then argue that in Gettier cases the subject possesses “legitimation” but not “justification proper,” and we respond to many attempted counterexamples, including several variants of the Nogot scenario, a case involving induction, and the case of the sight-seer and the barn. Finally, we show that, given our analysis, any challenge to a belief's justification on the grounds that it might be “Gettierized” only requires an argument that one's premises are themselves likely to be true, moving backwards along the object-level regress. Hence, a move to externalism is neither useful nor necessary in response to the Gettier problem.
The argument of Kant's Second Analogy provides only for causal connections between successive appearances, but, as Kant himself immediately notes, in many cases cause and effect are simultaneous. This essay examines Kant's solution to the resulting problem of simultaneous causation. I argue that there are, in fact, at least two distinct problems falling together under the rubric 'simultaneous causation', both reflecting significant features of paradigmatic causal-explanatory scenarios within Newtonian mechanics - a problem about the 'persisting simultaneity' of a continuous or sustaining cause with its effect, and a problem about the 'instantaneous simultaneity' of what Kant calls the causality of a cause with the onset of its effect. An exploration of the ingenious conceptual resources which Kant brings to bear on these problems turns out to yield interesting and important insights regarding his philosophy of mathematics as well.
Many philosophers argue that Bayesian epistemology cannot help us with the traditional Humean problem of induction. I argue that this view is partially but not wholly correct. It is true that Bayesianism does not solve Hume's problem in the way that the classical and logical theories of probability aimed to do. However, I argue that in one important respect, Hume's sceptical challenge cannot simply be transposed to a probabilistic context, where beliefs come in degrees rather than being a yes/no matter.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to the choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
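The measure sensitivity the abstract describes can be made concrete with a toy calculation (my illustration, not from the paper): two standard relevance measures, the difference measure d(H, E) = P(H|E) − P(H) and the log-ratio measure r(H, E) = log(P(H|E)/P(H)), can rank the same pair of cases in opposite orders, so an argument that tacitly assumes one measure may fail under the other.

```python
import math

def difference(prior, posterior):
    """Difference measure d(H, E) = P(H|E) - P(H)."""
    return posterior - prior

def log_ratio(prior, posterior):
    """Log-ratio measure r(H, E) = log(P(H|E) / P(H))."""
    return math.log(posterior / prior)

# Case A: a moderately probable hypothesis, boosted strongly in absolute terms.
d_a, r_a = difference(0.5, 0.9), log_ratio(0.5, 0.9)
# Case B: an improbable hypothesis, boosted strongly in relative terms.
d_b, r_b = difference(0.01, 0.1), log_ratio(0.01, 0.1)

# The two measures disagree about which case exhibits more confirmation.
assert d_a > d_b  # difference measure: Case A is more confirmed
assert r_b > r_a  # log-ratio measure: Case B is more confirmed
```

The specific priors and posteriors are invented for illustration; any pair in which one hypothesis gains more in absolute probability while the other gains more proportionally will produce the same reversal.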
This paper addresses the issue of rule-following in the context of the problem of the criterion. It presents a line of reasoning which concludes that we do not know what rule we follow, but which develops independently of the problem of extrapolation that plays a major role in many recent discussions of rule-following. The basis of the argument is the normativity of rules, but the problem is also distinct from the issue of the gap between facts and values in axiology. The paper further points out that the epistemic problem of not knowing what rule we follow leads to the outright denial of rule-following.
Thus Spoke Zarathustra expresses a revolt against the quest for “afterworlds.” Nietzsche is seen transferring rationality to the body, welcoming the many in a kingdom of the un-unified multiple, with a burst of enthusiasm at the figure of recurrence. At first, he values an acceptance of suffering through reconciliation with time, and puts the onus on the divine to refute the dismembering of the oneness of meaning and the unity of the soul's quest for joy in eternity. Then, confronting Christianity, he sees its refusal to sacrifice anyone, at the cost of making all sick with a unique healer, and rejects it as incompatible with his ideal of plenitude. In the absence of an ontology of the person, the affirmation of the individual and his value, opposed to the antagonistic affirmation of the many put in front of the one God and destroyed by him, ends up dislocating the reality of the self. The Nietzschean option resisted any leveling down—this is its merit—yet the mystery of the Trinity needs to be brought into the reflection to respect Nietzsche's own terms in defining the final problem which is also the one option: Dionysus or the Crucified?
This paper discusses certain conceptual tensions in a set of archeological texts from the Warring States period, the Guodian corpus. One of the central themes of the Guodian corpus is the disanalogy between spontaneous, natural familial relationships and artificial political relationships. This is problematic because, like many early Chinese texts, the Guodian corpus holds that political relationships must come to be characterized by unselfconsciousness and spontaneity if social order is to prevail. This tension will be compared to my earlier work on the “paradox of wu-wei (effortless action),” and the Guodian corpus' “solution” to the problem of teaching spontaneity—drawing upon the transformative power of music—will be placed within the landscape of early “Confucian” and “Daoist” theories concerning human nature and self-cultivation.
Many modern commentators on inertial phenomena hold (or just assume) that there is no "problem of inertia", on the grounds that either (a) no explanation is needed for such phenomena or (b) the explanation is already at hand. My purpose here is to comment on both views, defending the thesis that the problem is real and still unsolved.
This article begins with a criticism of MacIntyre and Gorovitz's account of medical error. Their theory implies that error, at least sometimes, is a necessary consequence of the inductive character of medical inquiry. The counterintuitive consequences of this account suggest that the issues surrounding induction may not be the most fertile area for developing a coherent interpretation of medical error. Given these shortcomings, I develop a new theory which assumes that the best philosophical soil for constructing a theory of medical error is the problem of universals. I then explain the problem and how the medical universal functions within arguments concerning diagnosis and treatment. A Wittgensteinian solution to the problem is presented which emphasizes the "borderline" character of some, if not many, medical judgments. Next, an argument is offered to establish that questions of medical error are not purely professional questions. This is because the theories used to make such professional judgments permit borderline cases even under the best circumstances. Finally, while there are a number of legitimate responses to this situation, I recommend one which involves legislative action.
I argue in the paper that the problem of freedom has been misconstrued. There is no one problem of freedom but many problems concerning individual agents' responsiveness to principles and reasons. The problem of free will results from attempts to incorporate the notion of freedom, which belongs to the order of guiding action, into a determinist framework of explanation. My view could be seen as compatibilist because it denies the existence of a fundamental conflict between freedom and determinism. However, since libertarian accounts of local indeterminism are pointless on my view, it cannot be easily placed within the compatibilism/libertarianism distinction. Instead of entering the hopelessly unproductive metaphysical debates about freedom and determinism, I propose to turn attention to the domain of ethics. Problems of freedom are questions about the deliberative processes that terminate in action and about the reasons and principles on which they are based. To say that an action is free is not to claim that it is independent of causal determination; rather, it is to say that it has been decided upon.
The relatively recent addition of women's voices to the study of moral development has led to the postulation of two separate moral contexts defined by gender, each with its own dominating concerns, guiding principles, forms of reasoning, and hypothetical end point. While many developmental theorists agree that mature moral reasoning entails some sort of integration of these two perspectives, the exact nature of that reconciliation is a matter of considerable speculation and debate. This paper begins with the premise that the mark of a moral developmental model's philosophical adequacy is its handling of the problem of moral relativism. It examines the strengths and weaknesses of the justice and caring approaches in regulating the contextual relativism inherent in genderized moralities. And it concludes by proposing that only by reframing the gender question in broader, teleological terms than present theories have attempted can the problem be resolved.
Machine generated contents note: Introduction to the One. The Concept of One: From Philosophy to Politics -Artemy Magun. Part I. Metaphysics of the One and the Multiple. 1. More than One -Jean-Luc Nancy 2. Condivision, or Towards a Non-communitarian Concatenation of Singularities -Gerald Raunig 3. Unity and Solitude -Artemy Magun 4. The Fragility of the One -Maria Calvacante 5. The One: Construction or Event? For a Politics of Becoming -Boyan Mancher. Part II. 20th-Century Thinkers of Unity and Multiplicity. 6. Truth and Infinity in Badiou and Heidegger -Alexey Chernyakov 7. Complicated Presence: The Unity of Being in Parmenides and Heidegger -Jussi Bachman 8. The Universal, the General, the Multiple in the Perspective of a Political Utopia: Deleuze and Badiou on the Event -Keti Chukhrov 9. Humanity, Unity and the One -Nina Power. Part III. Unity and Multiplicity in Nature. 10. Elemental Nature as the Ultimate Common Ground of the World Community -Susanna Lindberg 11. Vegetative Democracy, or the Post-metaphysics of Plants -Michael Marder. Part IV. Unity in Action: Forms of Political Consolidation in the Case of Contemporary Russia. 12. Collectivity in Post-revolutionary Russia -Igor Tchubarov 13. Street University: Production of Collective Time and Public Space -Pavel Arsenyev 14. Fighting Together: the Problem of Solidarity -Carine Clément. Part V. E Pluribus Unum: Res Publica and Community. 15. How Does One Constitute the One? Theology of the Icon, Theory of Non-representative Art and of Non-representative Politics -Oleg Kharkhodin 16. Drawing Lots in Politics: Unity and Totality -Yves Sintomer.
Since Darwin it is widely accepted that natural selection (NS) is the most important mechanism to explain how biological organisms—in their amazing variety—evolve and, therefore, also how the complexity of certain natural systems can increase over time, creating ever new functions or functional structures/relationships. Nevertheless, the way in which NS is conceived within Darwinian Theory already requires an open, wide enough, functional domain where selective forces may act. And, as the present paper will try to show, this becomes even more evident if one looks into the problem of origins. If there was a time when NS was not operating (as it is quite reasonable to assume), where did that initial functional diversity, necessary to trigger off the process, come from? Self-organization processes may be part of the answer, as many authors have claimed in recent years, but surely not the complete one. We will argue here that a special type of self-maintaining organization, arising from the interplay among a set of different endogenously produced constraints (pre-enzymatic catalysts and primitive compartments included), is required for the appearance of functional diversity in the first place. Starting from that point, NS can progressively lead to new (and, at times, also more complex) organizations that, in turn, provide wider functional variety to be selected for, enlarging in this way the range of action and consequences of the mechanism of NS, in a kind of mutually enhancing effect.
In this article I address the Problem of Universals by answering questions about what facts a solution to the Problem of Universals should explain and how the explanation should go. I argue that a solution to the Problem of Universals explains the facts the Problem of Universals is about by giving the truthmakers (as opposed to the conceptual content and the ontological commitments) of the sentences stating those facts. I argue that the sentences stating the relevant facts are those like 'a has the property F', that is, sentences stating that a particular has a certain property. Finally I show how answering these questions in this way transforms the Problem of Universals, traditionally conceived as the One over Many, that is, the problem of explaining how different particulars can have the same properties, into the Many over One, that is, the problem of explaining how the same particular can have different properties. The Problem of Universals is the problem of the Many over One.
This paper offers an appraisal of Phillip Pettit's approach to the problem how a merely finite set of examples can serve to represent a determinate rule, given that indefinitely many rules can be extrapolated from any such set. I argue that Pettit's so-called ethocentric theory of rule-following fails to deliver the solution to this problem he sets out to provide. More constructively, I consider what further provisions are needed in order to advance Pettit's general approach to the problem. I conclude that what is needed is an account that, whilst it affirms the view that agents' responses are constitutively involved in the exemplification of rules, does not allow such responses the pride of place they have in Pettit's theory.
In his popular film An Inconvenient Truth (Guggenheim 2006), Al Gore identifies anthropogenic climate change as the most menacing threat to the future of life on Earth, and he describes that threat specifically as a moral problem: an uninhabitable planetary environment would be an immoral outcome of human behavior. That outcome must be avoided, which means, he argues, that a low-carbon trajectory for future human development must be charted without delay. His call-to-action then advocates, among many other things, fast-tracking clean energy technologies and galvanizing the necessary international political will to get climate change under control. Gore is quite correct to identify climate change as an ..
This paper offers an appraisal of Phillip Pettit's approach to the problem how a finite set of examples can serve to represent a determinate rule, given that indefinitely many rules can be extrapolated from any such set. Negatively, I argue that Pettit's so-called ethocentric theory of rule-following fails to deliver the solution to this problem that he sets out to provide. More constructively, I consider what further provisions are needed in order to advance Pettit's distinctive general approach to the problem. I conclude that what is needed is a 'no-priority' account of rule-exemplification: that is, an account that (a) affirms the constitutive role of agents' responses in the exemplification of rules but (b) denies the explanatory priority given to such responses in Pettit's theory.
The problem I have in mind is the problem of the possible justification of subjecting one's will to that of another, and of the normative standing of demands to do so. The account of authority that I offered, many years ago, under the title of the service conception of authority, addressed this issue, and assumed that all other problems regarding authority are subsumed under it. Many found the account implausible. It is thin, relying on very few ideas. It may well appear to be too thin, and to depart too far from many of the ideas that have gained currency in the history of reflection on authority. The present article modifies some aspects of the account, and defends it against some criticisms made against it.
The problem of the morality of abortion is one of the most complex and controversial in the entire field of applied ethics. It may therefore appear rather surprising that the most popular proposed “solutions” to it are extremely simple and straightforward, based on clear-cut universal rules which typically either condemn abortion severely in virtually every case or else deem it to be morally quite unproblematic, and hence permissible whenever the mother wishes. This polarised situation in the theoretical debate, however, is in clear contrast with the abortion law in many countries (including Britain), where abortions are treated very differently according to the stage of pregnancy at which they are carried out, so that early abortions are permitted relatively easily, whereas very late abortions are sanctioned only in exceptional cases. It seems likely, moreover, that in thus taking account of the time of an abortion, the law genuinely reflects the weight of public opinion - there may be no overall consensus on the underlying moral issues, but it does appear to be part of “commonsense” morality to accept that, whatever the ultimate rights and wrongs of abortion in general may be, at any rate abortion early in pregnancy is morally greatly preferable to late abortion. Let us call this “the developmental view”, since it holds that the moral gravity of abortion increases with the degree of development of the fetus.
Franciscus Suarez de additione Unitatis ad Ens et prioritate Unitatis respectu Multitudinis (Francisco Suárez on the Addition of the One to Being and the Priority of the One over the Many; translation by L. Novák). Suárez's solution to the problem of the conceptual addition of the One to being follows, in the first place, the Aristotelian-Averroistic tradition mediated by Aquinas. According to this tradition, the One adds to being only a negative determination. Suárez claims that the One does not signify any positive perfection either really or conceptually distinct from being as such. Suárez's own solution to the problem is presented in a critical discussion with many different conceptions, but he pays most attention to the theory of certain, mainly Franciscan, authors who hold that the One adds to being a positive perfection which is only conceptually distinct from being as such. The main argument for this thesis is based on the assumption that indivision is to be taken as a double negation, by which an affirmation is expressed. This concept of indivision was, according to Suárez, also defended by Aquinas, who holds that the negation expressed by the One negates the division of one being from another. Suárez rejects this solution and proposes his own conception, according to which the One does not negate the division of one being from another, but rather an intrinsic and essential division of a being in itself, which is a real and positive division. On the basis of this theory, Suárez further rejects Aquinas's (and the Thomists') conception of a conceptual priority of the One over the Many, which was put forth as an answer to the old Aristotelian problem of a privative opposition between the One and the Many. Suárez defends the real priority of indivision over division, as well as a real and conceptual priority of the One over the Many. This conception seems to be well consonant with his doctrine of a merely negative addition of the One to being.
Many philosophers, in different areas, are tempted by what variously goes under the name of Contextualism, Speaker Relativism, or Indexical Relativism. (I'll just use Indexical Relativism in this paper.) Thinking of certain problematic expressions as deriving their content from elements of the context of use solves some problems. But it faces some problems of its own, and in this paper I'm interested in one in particular, namely, the problem of disagreement. Two alternative theories, tempting for just the same kinds of expressions as Indexical Relativism is meant to handle, promise to solve the problem of disagreement. I'll argue that they do not live up to their promise. At the end of the paper, I'll ask what exactly disagreement amounts to, and I'll canvass some purported solutions.
Writers on presupposition, and on the “projection problem” of determining the presuppositions of compound sentences from their component clauses, traditionally assign presuppositions to each clause in isolation. I argue that many presuppositional elements are anaphoric to previous discourse or contextual elements. In compound sentences, these can be other clauses of the sentence. We thus need a theory of presuppositional anaphora, analogous to the corresponding pronominal theory.
• It would be a moral disgrace for God (if he existed) to allow the many evils in the world, in the same way it would be for a parent to allow a nursery to be infested with criminals who abused the children.
• There is a contradiction in asserting all three of the propositions: God is perfectly good; God is perfectly powerful; evil exists (since if God wanted to remove the evils and could, he would).
• The religious believer has no hope of getting away with excuses that evil is not as bad as it seems, or that it is all a result of free will, and so on.
Piper avoids mentioning the best solution so far put forward to the problem of evil. It is Leibniz's theory that God does not create a better world because there isn't one — that is, that (contrary to appearances) if one part of the world were improved, the ramifications would result in it being worse elsewhere, and worse overall. It is a “bump in the carpet” theory: push evil down here, and it pops up over there. Leibniz put it by saying this is the “Best of All Possible Worlds”. That phrase was a public relations disaster for his theory, suggesting as it does that everything is perfectly fine as it is. He does not mean that, but only that designing worlds is a lot harder than it looks, and determining the amount of evil in the best one is no easy matter. Though humour is hardly appropriate to the subject matter, the point of Leibniz's idea is contained in the old joke, “An optimist is someone who thinks this is the best of all possible worlds, and a pessimist thinks..
Many philosophers hold some version of the doctrine of "basic knowledge". According to this doctrine, it's possible for S to know that p, even if S doesn't know the source of her knowledge that p to be reliable or trustworthy. Stewart Cohen has recently argued that this doctrine confronts the problem of easy knowledge. I defend basic knowledge against this criticism by providing a contextualist solution to the problem of easy knowledge.
My target in this paper is a view that has sometimes been called the 'Linguistic Doctrine of Necessary Truth' (L-DONT) and sometimes 'Conventionalism about Necessity'. It is the view that necessity is grounded in the meanings of our expressions—meanings which are sometimes identified with the conventions governing those expressions—and that our knowledge of that necessity is based on our knowledge of those meanings or conventions. In its simplest form the view states that a truth, if it is necessary, is necessary (and knowably necessary) because it is analytic. It is widely recognized that this simple version of the view faces a prima facie problem with the existence of the necessary a posteriori. Assuming that all analytic truths are a priori, if there are necessary a posteriori truths then there are necessary synthetic truths—contradicting the view's claim that all necessary truths are analytic. Contemporary L-DONTers have things to say about the problem, but in this paper I want to suggest that there is a different, more serious, problem which arises from the phenomenon of indexicality, which L-DONTers have not taken account of. Though there are many versions of the problem, a simple one is this. Consider Kaplan's celebrated sentence.
Intelligent design—the idea that a designing intelligence plays a substantive and empirically significant role in the natural world—no longer sits easily in our intellectual environment. Science rejects it for invoking an unnecessary teleology. Philosophy rejects it for committing an argument from ignorance. And theology rejects it for, as Edward Oakes contends, making the task of theodicy impossible.1 I want in this lecture to address all these concerns but especially the last. For many thinkers, particularly religious believers, intelligent design exacerbates the problem of natural evil—intelligent design makes natural evil not an accident of natural history or a price exacted by evolution or a necessary consequence of creation's freedom but an outcome fully intended by a sadistic designer. Or, as Robert Russell put it to me on the PBS program Uncommon Knowledge, “The notion of intelligent design is incoherent because it's either a natural cause, in which case you don't go anywhere, or it's a divine cause, in which case you don't have the biblical God.”2 The biblical God, presumably, would not design the rabies virus, the bubonic plague bacterium, or the mosquito.
Computationalism provides a framework for understanding how a mathematically describable physical world could give rise to conscious observations without the need for dualism. A criterion is proposed for the implementation of computations by physical systems, which has been a problem for computationalism. Together with an independence criterion for implementations, this would allow, in principle, prediction of probabilities for various observations based on counting implementations. Applied to quantum mechanics, this results in a Many Computations Interpretation (MCI), which is an explicit form of the Everett-style Many Worlds Interpretation (MWI). Derivation of the Born Rule emerges as the central problem for most realist interpretations of quantum mechanics. If the Born Rule is derived based on computationalism and the wavefunction, it would provide strong support for the MWI; but if the Born Rule is shown not to follow from these to an experimentally falsified extent, it would indicate the necessity for either new physics or (more radically) new philosophy of mind.
The so-called “species problem” has plagued evolutionary biology since before Darwin's publication of the aptly titled Origin of Species. Many biologists think the problem is just a matter of semantics; others complain that it will not be solved until we have more empirical data. Yet, we don't seem to be able to escape discussing it and teaching seminars about it. In this paper, I briefly examine the main themes of the biological and philosophical literatures on the species problem, focusing on identifying common threads as well as relevant differences. I then argue two fundamental points. First, the species problem is not primarily an empirical one, but it is rather fraught with philosophical questions that require—but cannot be settled by—empirical evidence. Second, the (dis-)solution lies in explicitly adopting Wittgenstein's idea of “family resemblance” or cluster concepts, and in considering species as an example of such concepts. This solution has several attractive features, including bringing together apparently diverging themes of discussion among biologists and philosophers. The current proposal is conceptually independent of (though not incompatible with) the pluralist approach to the species problem advocated by Mishler, Donoghue, Kitcher and Dupré, which implies that distinct aspects of the species question need to be emphasized depending on the goals of the researcher. From the biological literature, the concept of species that most closely matches the philosophical discussion presented here is Templeton's cohesion idea.
Abstract: According to Laurence BonJour, the problem of induction can be solved by recognizing the a priori necessity that inductive conclusions constitute the best explanations of inductive premises. I defend an interpretation of the key probability claims BonJour makes about inductive premises and show that they are not susceptible to many of the objections that have been lodged against them. I then argue that these purportedly necessary probability claims nevertheless remain deeply problematic and that, as a result, BonJour's proposal fails to provide a satisfactory resolution of the problem of induction.
With the success of cognitive science's interdisciplinary approach to studying the mind, many theorists have taken up the strategy of appealing to science to address long-standing disputes about metaphysics and the mind. In a recent case in point, philosophers and psychologists, including Robert Kane, Daniel C. Dennett, and Daniel M. Wegner, are exploring how science can be brought to bear on the debate about the problem of free will. I attempt to clarify the current debate by considering how empirical research can be useful. I argue that empirical findings don't apply to one basic dimension of the problem, namely the dispute between compatibilism and incompatibilism. However, I show that empirical research can provide constraints in connection with another fundamental dimension, namely the dispute between libertarianism, which claims that indeterminacy is, in certain contexts, sufficient for freedom, and hard determinism and compatibilism, which deny this. I argue that the source of the most powerful constraint is psychological research into the accuracy of introspection.
Many philosophical problems are rooted in everyday thought, and experimental philosophy uses social scientific techniques to study the psychological underpinnings of such problems. In the case of free will, research suggests that people in a diverse range of cultures reject determinism, but people give conflicting responses on whether determinism would undermine moral responsibility. When presented with abstract questions, people tend to maintain that determinism would undermine responsibility, but when presented with concrete cases of wrongdoing, people tend to say that determinism is consistent with moral responsibility. It remains unclear why people reject determinism and what drives people's conflicted attitudes about responsibility. Experimental philosophy aims to address these issues and thereby illuminate the philosophical problem of free will.
In this paper, the author argues that a second-person experience is an experience one has when one has conscious awareness of another consciously aware person. The author shows that there are some things we know in second-person experiences which are either difficult or impossible to put in propositional form at all, but which stories can capture for us. An account of a second-person experience is what we typically find in narratives. The author argues that the second-person point of view has a special place in some areas of philosophy, including philosophy of religion. It matters to many issues in philosophy of religion, perhaps especially to the problem of evil, that God is taken to be a person who has personal relations with created persons. The paper proceeds to show that philosophical attention to the second-person account in the biblical narrative of the book of Job illuminates the book in a new way and sheds light on the problem of evil.
Stoljar's book has three parts. In the first part, he discusses the "problem of experience": though we have experiences, it isn't clear that the experiential fits into the actual world, given that the actual world is fundamentally non-experiential. Stoljar focuses on what he views as one facet of the problem of experience, the "logical problem", which consists of three jointly inconsistent claims: (T1) there are experiential truths; (T2) if there are experiential truths, every experiential truth is entailed by some non-experiential truth; and (T3) if there are experiential truths, not every experiential truth is entailed by some non-experiential truth. The logical problem is a problem, according to Stoljar, because each of T1–T3 is prima facie plausible. In the second part, Stoljar sets out his solution to the logical problem, the "epistemic view", and defends it against various objections. According to the epistemic view, (i) we're ignorant of a special type of empirical experience-relevant non-experiential truth; (ii) were we to come to understand truths of this type, we would see that the modal arguments against physicalism (i.e. the zombie and knowledge arguments) fail; and (iii) given (i) and (ii), we should reject T3 in order to resolve the logical problem. In the third part Stoljar argues that alternative solutions to the logical problem either fail or collapse into the epistemic view. While this is certainly the most careful and extended defense of the epistemic view to date (a view, by the way, in various forms, with which many seem to find sympathy), the epistemic view as Stoljar develops it faces a formidable problem. The central problem…
The aggregation of consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. The literature on judgment aggregation refers to this problem as the discursive dilemma. In this paper, we argue that many groups want not only to reach a factually right conclusion, but also to correctly evaluate the reasons for that conclusion. In other words, we address the problem of tracking the true situation instead of merely selecting the right outcome. We set up a probabilistic model analogous to Bovens and Rabinowicz (2006) and compare several aggregation procedures by means of theoretical results, numerical simulations and practical considerations. Among them are the premise-based, the situation-based and the distance-based procedures. Our findings confirm the conjecture in Hartmann, Pigozzi and Sprenger (2008) that the premise-based procedure is a crude, but reliable and sometimes even optimal, form of judgment aggregation.
In this essay I offer a theory of the outward directedness of intentional states, namely, an account of what makes intentional states directed at their respective intentional objects. The theory is meant to be complementary to the canonical interactivist account of mental content in that the latter emphasizes the predicative, intensional, and internal aspects of representation, whereas here I shall focus on its denotative, extensional, and external aspects. Thus, the aim is to establish that the two projects are not only consistent but mutually supportive. Further, it is hoped that supplementing the interactivist conception of representation with a theory of intentional directedness along such lines will increase its overall appeal to critical readers. Based on the core idea that the directedness of a representation is a function of the manner in which it is constructed within, and contributes to the ongoing unfolding of, a dynamical interactive loop connecting information to focused action, the theory is subsequently extended to cover many problem domains familiarly associated with representation and reference.
Case methods of reasoning are persuasive, but we need to address problems of bias in order to use them to reach morally justifiable conclusions. A bias is an unwarranted inclination or a special perspective that disposes us to mistaken or one-sided judgments. The potential for bias arises at each stage of a case method of reasoning, including the describing, framing, selecting, and comparing of cases and paradigms. A problem of bias occurs because, to identify the relevant features for such purposes, we must use general views about what is relevant; but some of our general views are biased, both in the sense of being unwarranted inclinations and in the sense that they are one of many viable perspectives. This reliance upon general views to determine relevancy creates additional difficulties for defenders who maintain that case methods of moral reasoning are not only useful, but more basic, reliable or prior to other forms of moral reasoning. If we cannot identify the case's relevant features and issues independently of our general views or biases, we need further explanation about why a case method or casuistry should be viewed as prior to or more basic or reliable than other forms of moral reasoning. Problems of bias also arise for other methods of reasoning. In medical science, case reviews are regarded as an unreliable way to form generalizations, and methods such as clinical trials are used to address bias.
Abstract: Robert Stern's Understanding Moral Obligation is a remarkable achievement, representing an original reading of Kant's contribution to modern moral philosophy and the legacy he bequeathed to his later-eighteenth- and early-nineteenth-century successors in the German tradition. On Stern's interpretation, it was not the threat to autonomy posed by value realism, but the threat to autonomy posed by the obligatory nature of morality that led Kant to develop his critical moral theory grounded in the concept of the self-legislating moral agent. Accordingly, Stern contends that Kant was a moral realist of sorts, holding certain substantive views that are best characterized as realist commitments about value. In this paper, I raise two central objections to Stern's reading of Kant. The first objection concerns what Stern identifies as Kant's solution to the problem of moral obligation. Whereas Stern sees the distinction between the infinite will and the finite will as resolving the problem of moral obligation, I argue that this distinction merely explains why moral obligations necessarily take the form of imperatives for us imperfect human beings, but does not solve the deeper problem concerning the obligatory nature of morality: why we should take moral norms to be supremely authoritative laws that override all other norms based on our non-moral interests. The second objection addresses Stern's claim that Kantian autonomy is compatible with value realism. Although this is an idea with which many contemporary readers will be sympathetic, I suggest that the textual evidence actually weighs in favor of constructivism.
In 2003, Ruth Faden and eighteen other colleagues argued that a "problem of unequal biological access" is likely to arise in access to therapies resulting from human embryonic stem cell research. They showed that unless deliberate steps are taken in the United States to ensure that the human embryonic stem cell lines available to researchers mirror the genetic diversity of the general population, white Americans will likely receive the benefits of these therapies to the relative exclusion of minority ethnic groups. Over the past five years the problem of unequal biological access has not received much attention from politicians, bioethicists and even many researchers in the United States, in spite of the widely held belief in the country that there is an obligation to prevent and correct ethnic disparities in access to medical care. The purpose of this paper is to increase awareness of the problem of unequal biological access and of the need to do more than is currently being done to ensure that ethnic disparities in access to human embryonic stem cell-based therapies do not arise. Specifically, this paper explains why the problem of unequal biological access will likely arise in the United States in such a way that white Americans will disproportionately receive most of the benefits of the therapies resulting from human embryonic stem cell research. It also argues for why there is an obligation to prevent these ethnic disparities in access from happening and outlines four steps that need to be taken towards meeting this obligation.
In the 1960s, Lars Bergström and Hector-Neri Castañeda noticed a problem with alternative acts and consequentialism. The source of the problem is that some performable acts are versions of other performable acts, and the versions need not have the same consequences as the originals. Therefore, if all performable acts are among the agent's alternatives, act consequentialism yields deontic paradoxes. A standard response is to restrict the application of act consequentialism to certain relevant alternative sets. Many proposals are based on some variation of maximalism, that is, the view that act consequentialism should only be applied to maximally specific acts. In this paper, I argue that maximalism cannot yield the right prescriptions in some cases where one can either (i) form at once the intention to do an immediate act and form at a later time the intention to do a succeeding act or (ii) form at once the intention to do both acts, and where the consequences of (i) and (ii) differ in value. Maximalism also violates normative invariance, that is, the condition that if an act is performable in a situation, then the normative status of the act does not depend on what acts are performed in the situation. Instead of maximalism, I propose that the relevant alternatives should be the exhaustive combinations of acts the agent can jointly perform without performing any other act in the situation. In this way, one avoids the problem of act versions without violating normative invariance. Another advantage is that one can adequately differentiate between possibilities like (i) and (ii).
An action-oriented theory of embodied memory is favorable for many reasons, but it will not provide a quick yet clean solution to the grounding problem in the way Glenberg (1997) envisages. Although structural mapping via analogical representations may be an adequate mechanism of cognitive representation, it will not suffice to explain representation as such.
I shall address a familiar, yet persistent, problem confronted by welfare-based moral theories. Welfare is often based on suspect attitudes. Many people's pleasure, happiness, or preference satisfaction, for example, are based on racist, sexist, envious, meddlesome, or malicious attitudes. Is welfare derived from such sources relevant to the determination of what is morally permissible? Almost everyone has at least some "suspect" attitudes, so to ignore welfare based on suspect attitudes is to ignore things that people actually care about. To take such welfare at face value, however, seems to give it too much of a role in determining what is permissible. The welfare that a sadist gains from torturing others, it seems, does not have the same status as the welfare that victims lose. This problem has already been discussed by a number of authors. Typically, however, authors take one of two extreme positions: they hold that all welfare should be taken at face value, or they hold that "suspect" welfare should be completely ignored. My contribution here is the following: First, I introduce the notion of unauthorized (suspect) welfare, of which welfare from meddlesome preferences, offensive tastes, expensive tastes, etc. are special cases. Second, I formulate four conditions of adequacy, applicable to any welfare-based theory, for dealing with unauthorized welfare. These conditions require that unauthorized welfare be "discounted" (play a restricted role) but not be completely ignored. Thus, I shall be exploring a position intermediate between taking "unauthorized" welfare at face value and simply ignoring it. Moreover, the four conditions jointly determine exactly how existing welfare-based theories need to be revised so as to be appropriately sensitive to unauthorized welfare. The problem of suspect welfare is best known for the problems it raises for utilitarianism, but the problem arises for all welfare-based theories.
Welfare is here understood as subjective well-being. Pleasure, happiness, and preference satisfaction are each conceptions of welfare, but liberty, health, wealth, and skills are not. Welfare-based moral theories base permissibility at least partly on the extent to which welfare is promoted. Utilitarianism is the most well-known example of a welfare-based theory. Welfare egalitarianism (which requires that welfare be distributed as equally as possible) and maximin welfarism (which requires that the minimum welfare be maximized) are two other examples. The paper, it should be emphasized, is very exploratory. My goal is to stake out some unexplored terrain, and boldly to erect a theoretical foundation. That foundation will no doubt be weak at a number of points, and perhaps even completely unstable. But I hope that it will at least provide the basis for future construction.
This paper focuses on the question of whether divine passibility is metaphysically possible, using the work of Hartshorne, Creel, Shields, Taliaferro and Sarot. Passibilism is seen to be difficult to assert because of the problem of radical particularity: the problem of how God might feel in exactitude the experiences of many diverse creatures that are radically particular, while also feeling the different experiences of other, equally radically particular beings. I conclude that the question of passibility is an unresolved problem and should be addressed from the perspective of specific attributes of God such as omniscience and omnipresence.
There has been much discussion of late on what exactly the Problem of Universals is and is not. Of course, answers to these questions and many more like them depend on what is supposed to be explained by a solution to the Problem of Universals. In this paper, I seek to establish two claims: first, that when the facts (explananda) to be explained and the kind of explanation needed are elucidated, it will be shown that the Problem of Universals is a real metaphysical problem, not a pseudo-problem; secondly, that the facts whose explanation posed the problem in the Problem of Universals still provide reason to think realism regarding universals is true, even if God exists.