Many have thought that there is a problem with causal commerce between immaterial souls and material bodies. In Physicalism or Something Near Enough, Jaegwon Kim attempts to spell out that problem. Rather than merely posing a question or raising a mystery for defenders of substance dualism to answer or address, he offers a compelling argument for the conclusion that immaterial souls cannot causally interact with material bodies. We offer a reconstruction of that argument that hinges on two premises: Kim’s Dictum and the Nowhere Man principle. Kim’s Dictum says that causation requires a spatial relation. Nowhere Man says that souls can’t be in space. By our lights, both premises can be called into question. We’ll begin our evaluation of the argument by pointing out some consequences of Kim’s Dictum. For some, these will be costs. We will then present two defeaters for Kim’s Dictum and a critical analysis of Kim’s case for Nowhere Man. The upshot is that Kim’s argument against substance dualism fails.
There is no doubt that spatial relations aid us in pairing up causes and effects. But when we consider the possibility of qualitatively indiscernible things, it might seem that spatial relations are more than a mere aid – they might seem positively required. The belief that spatial relations are required for causal relations is behind an important objection to Cartesian Dualism, the pairing problem. I argue that the Cartesian can answer this objection by appeal to the possibility of primitive causal relations, a possibility I defend. This topic is of importance beyond the philosophy of mind; the possibility that causal relations might sometimes hold brutely is of general metaphysical importance. I close with a discussion of what Cartesians should say about embodiment, and how that bears on issues of mental causation.
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
The present study tested the existence of a cognitive schema that guides people's evaluations of the likelihood that observed problem-solving processes will succeed. The hypothesised schema consisted of attributes that were found to distinguish between retrospective case reports of successful and unsuccessful real world problem solving (Lipshitz & Bar Ilan, 1996). Participants were asked to evaluate the likelihood of success of identical cases of problem solving that differed in the presence or absence of diagnosis, the selection of appropriate or inappropriate solutions, and the pairing of diagnosis with appropriate or inappropriate solutions. Consistent with the proposition, diagnosis affected perceived likelihood of success, albeit only when solution quality was held constant, and appropriate diagnosis with a compatible solution produced higher perceived likelihood of success than appropriate diagnosis with incompatible solutions. In addition, results showed that solution quality played a significant role, and that compatibility with a six-phase rational model of problem solving played no role in judging likelihood of success.
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
The existence of evil and suffering in our world seems to pose a serious challenge to belief in the existence of a perfect God. If God were all-knowing, it seems that God would know about all of the horrible things that happen in our world. If God were all-powerful, God would be able to do something about all of the evil and suffering. Furthermore, if God were morally perfect, then surely God would want to do something about it. And yet we find that our world is filled with countless instances of evil and suffering. These facts about evil and suffering seem to conflict with the orthodox theist claim that there exists a perfectly good God. The challenge posed by this apparent conflict has come to be known as the problem of evil.
A philosophical standard in the debates concerning material constitution is the case of a statue and a lump of clay, Lumpl and Goliath respectively. According to the story, Lumpl and Goliath are coincident throughout their respective careers. Monists hold that they are identical; pluralists that they are distinct. This paper is concerned with a particular objection to pluralism, the Grounding Problem. The objection is roughly that the pluralist faces a legitimate explanatory demand to explain various differences she alleges between Lumpl and Goliath, but that the pluralist’s theory lacks the resources to give any such explanation. In this paper, I explore the question of whether there really is any problem of this sort. I argue (i) that explanatory demands that are clearly legitimate are easy for the pluralist to meet; (ii) that even in cases of explanatory demands whose legitimacy is questionable the pluralist has some overlooked resources; and (iii) that there is some reason for optimism about the pluralist’s prospects for meeting every legitimate explanatory demand. In short, no clearly adequate statement of a Grounding Problem is extant, and there is some reason to believe that the pluralist can overcome any Grounding Problem that we haven’t thought of yet.
Judith Jarvis Thomson has recently proposed a new argument for the thesis that killing the one in the Trolley Problem is not permissible. Her argument relies on the introduction of a new scenario in which the bystander may also sacrifice herself to save the five. Thomson argues that those not willing to sacrifice themselves if they could may not kill the one to save the five. Bryce Huebner and Marc Hauser have recently put Thomson’s argument to the empirical test by asking people what they should do in the new trilemma case in which they may also sacrifice themselves. They found that the majority judge that they should either kill the one or sacrifice themselves; Huebner and Hauser argue that those numbers speak against Thomson’s argument. But Thomson’s argument was about the dialectical effect of the new trilemma on the traditional dilemma, rather than about the trilemma itself. Here I present the results of a study in which I asked subjects first what they should do in the trilemma and then what they should do in the traditional Trolley Problem. I found that, if asked first about the trilemma, subjects then have the intuition that killing the one in the traditional Bystander at the Switch is not permissible – exactly what Thomson’s argument had predicted.
Expressivists, such as Blackburn, analyse sentences such as 'S thinks that it ought to be the case that p' as 'S hoorays that p'. A problem is that the former sentence can be negated in three different ways, but the latter in only two. The distinction between refusing to accept a moral judgement and accepting its negation therefore cannot be accounted for. This is shown to undermine Blackburn's solution to the Frege-Geach problem.
In “Why the generality problem is everybody’s problem,” Michael Bishop argues that every theory of justification needs a solution to the generality problem. He contends that a solution is needed in order for any theory to be used in giving an acceptable account of the justificatory status of beliefs in certain examples. In response, first I will describe the generality problem that is specific to process reliabilism and two other sorts of problems that are essentially the same. Then I will argue that the examples that Bishop presents pose no such problem for some theories. I will illustrate the exempt theories by describing how an evidentialist view can account for the justification in the examples without having any similar problem. It will be clear that other views about justification are likewise unaffected by anything like the generality problem.
Necessity holds that, if a proposition A supports another proposition B, then A must support B. John Greco contends that one can resolve Hume's Problem of Induction only if she rejects Necessity in favor of reliabilism. If Greco's contention is correct, we would have good reason to reject Necessity and endorse reliabilism about inferential justification. Unfortunately, Greco's contention is mistaken. I argue that there is a plausible reply to Hume's Problem that both endorses Necessity and is at least as good as Greco's alternative. Hence, Greco provides a good reason for neither rejecting Necessity nor endorsing inferential reliabilism.
Scepticism is sometimes expressed about whether there is any interesting problem of other minds. In this paper I set out a version of the conceptual problem of other minds which turns on the way in which mental occurrences are presented to the subject and situate it in relation to debates about our knowledge of other people's mental lives. The result is a distinctive problem in the philosophy of mind concerning our relation to other people.
In 1955, Goodman set out to 'dissolve' the problem of induction, that is, to argue that the old problem of induction is a mere pseudoproblem not worthy of serious philosophical attention. I will argue that, under naturalistic views of the reflective equilibrium method, it cannot provide a basis for a dissolution of the problem of induction. This is because naturalized reflective equilibrium is -- in a way to be explained -- itself an inductive method, and thus renders Goodman's dissolution viciously circular. This paper, then, examines how the old problem of induction crept back in while nobody was looking.
A difficulty is exposed in Allan Gibbard's solution to the embedding/Frege-Geach problem, namely that the difference between refusing to accept a normative judgement and accepting its negation is ignored. This is shown to undermine the whole solution.
It is sometimes held that rules of inference determine the meaning of the logical constants: the meaning of, say, conjunction is fully determined by either its introduction or its elimination rules, or both; similarly for the other connectives. In a recent paper, Panu Raatikainen (2008) argues that this view - call it logical inferentialism - is undermined by some "very little known" considerations by Carnap (1943) to the effect that "in a definite sense, it is not true that the standard rules of inference" themselves suffice to "determine the meanings of [the] logical constants" (p. 2). In a nutshell, Carnap showed that the rules allow for non-normal interpretations of negation and disjunction. Raatikainen concludes that "no ordinary formalization of logic ... is sufficient to `fully formalize' all the essential properties of the logical constants" (ibid.). We suggest that this is a mistake. Pace Raatikainen, intuitionists like Dummett and Prawitz need not worry about Carnap's problem. And although bilateral solutions for classical inferentialists - as proposed by Timothy Smiley and Ian Rumfitt - seem inadequate, it is not excluded that classical inferentialists may be in a position to address the problem too.
The Gettier problem has stymied epistemologists. But whether or not this problem is resolvable, we still must face an important question: Why does the Gettier problem arise in the first place? So far, philosophers have seen it as either a problem peculiar to the concept of knowledge, or else an instance of a general problem about conceptual analysis. But I would like to steer a middle course. I argue that the Gettier problem arises because knowledge is a thick concept, and a Gettier-like problem is just what we should expect from attempts at analyzing a thick concept. Section 2 is devoted to establishing the controversial claim that knowledge is thick, and, in Sect. 3, I show that there is a general problem for analyzing thick concepts of which the Gettier problem is a special instance. I do not take a stand on whether the Gettier problem, or its general counterpart, is resolvable. My primary aim is to bring these problems into better focus.
The so-called ‘conceptual problem of other minds’ has been articulated in a number of different ways. I discuss two, drawing out some constraints on an adequate account of the grasp of concepts of mental states. Distinguishing between behaviour-based and identity-based approaches to the problem, I argue that the former, exemplified by Brewer and Pickard, are incomplete as they presuppose, but do not provide an answer to, what I shall call the conceptual problem of other bodies. I end with some remarks on identity-based approaches, pointing out related problems for versions of this approach held by Cassam and Peacocke.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
Examining the moral sense theories of Francis Hutcheson, David Hume, and Adam Smith from the perspective of the is-ought problem, this essay shows that the moral sense or moral sentiments in those theories alone cannot identify appropriate morals. According to one interpretation, Hume's or Smith's theory is just a description of human nature. In this case, it does not answer the question of how we ought to live. According to another interpretation, it has some normative implications. In this case, it draws normative claims from human nature. Either way, the sentiments of anger, resentment, vengeance, superiority, sympathy, and benevolence show that drawing norms from human nature is sometimes morally problematic. The changeability of the moral sense and moral sentiments in Hume's and Smith's theories supports this idea. Hutcheson's theory is morally more appropriate because it bases morality on disinterested benevolence. Yet disinterested benevolence is not enough for morality. There are no sentiments the presence of which alone makes any action moral.
A key consideration in favour of animalism—the thesis that persons like you and me are identical to the animals we walk around with—is that it avoids a too many thinkers problem that arises for non-animalist positions. The problem is that it seems that any person-constituting animal would itself be able to think, but if wherever there is a thinking person there is a thinking animal distinct from it then there are at least two thinkers wherever there is a thinking person. Most find this result unacceptable, and some think it provides an excellent reason for accepting animalism. It has been argued, however, that animalists face an analogous problem of too many thinkers, the so-called corpse problem, as they must accept both 1) that we are distinct from our bodies, as our bodies can and we cannot persist through death as corpses and 2) that our bodies can think. I argue that the best reasons animalists have for accepting the two claims that generate the distinctness part of the problem double up as reasons to reject the claim that our bodies can think, and vice versa. I argue further that Lockeans cannot similarly get around their problem of too many thinkers.
Kilimanjaro is a paradigmatic mountain, if any is. Consider atom Sparky, which is neither determinately part of Kilimanjaro nor determinately not part of it. Let Kilimanjaro(+) be the body of land constituted, in the way mountains are constituted by their constituent atoms, by the atoms that make up Kilimanjaro together with Sparky, and Kilimanjaro(–) the one constituted by those other than Sparky. On the one hand, there seems to be just one mountain in the vicinity of Kilimanjaro. On the other hand, both Kilimanjaro(+) and Kilimanjaro(–)—and indeed many other similar things—seem to have an equal claim to be a mountain: all of them exhibit the grounds for something being a mountain—like being an elevation of the earth’s surface rising abruptly and to a large height from the surrounding level, or what have you—and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many.
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism.
According to David Chalmers, the hard problem of consciousness consists of explaining how and why qualitative experience arises from physical states. Moreover, Chalmers argues that materialist and reductive explanations of mentality are incapable of addressing the hard problem. In this chapter, I suggest that Chalmers’ hard problem can be usefully distinguished into a ‘how question’ and ‘why question,’ and I argue that evolutionary biology has the resources to address the question of why qualitative experience arises from brain states. From this perspective, I discuss the different kinds of evolutionary explanations (e.g., adaptationist, exaptationist, spandrel) that can explain the origins of the qualitative aspects of various conscious states. This argument is intended to clarify which parts of Chalmers’ hard problem are amenable to scientific analysis.
Watkins proposes a neo-Popperian solution to the pragmatic problem of induction. He asserts that evidence can be used non-inductively to prefer the principle that corroboration is more successful over all human history than that, say, counter-corroboration is more successful either over this same period or in the future. Watkins's argument for rejecting the first, counter-corroborationist, alternative is beside the point, however, as whatever is the best strategy over all human history is irrelevant to the pragmatic problem of induction, since we are not required to act in the past; and his argument for rejecting the second presupposes induction.
According to orthodox quantum mechanics, state vectors change in two incompatible ways: "deterministically" in accordance with Schroedinger's time-dependent equation, and probabilistically if and only if a measurement is made. It is argued here that the problem of measurement arises because the precise mutually exclusive conditions for these two types of transitions to occur are not specified within orthodox quantum mechanics. Fundamentally, this is due to an inevitable ambiguity in the notion of "measurement" itself. Hence, if the problem of measurement is to be resolved, a new, fully objective version of quantum mechanics needs to be developed which does not incorporate the notion of measurement in its basic postulates at all.
It is shown that Fodor's interpretation of the frame problem is the central indication that his version of the Modularity Thesis is incompatible with computationalism. Since computationalism is far more plausible than this thesis, the latter should be rejected.
Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgments of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to any informative specification of the moral attitudes. This may appear to be a lethal objection to non-cognitivism, but a similar recipe challenges attempts by non-cognitivism’s competitors to specify the conditions underwriting the contrast between genuine and merely apparent moral disagreement. Because of its generality, this specification problem requires a systematic response, which, we argue, is most easily available for the non-cognitivist. Building on premisses congenial to the non-cognitivist tradition, we make the following claims: (1) In paradigmatic cases, wrongness-judgements constitute a certain complex but functionally unified state, and paradigmatic wrongness-judgments form a functional kind, preserved by homeostatic mechanisms. (2) Because of the practical function of such judgements, we should expect judges’ intuitive understanding of agreement and disagreement to be accommodating, treating states departing from the paradigm in various ways as wrongness-judgements. (3) This explains the intuitive judgements required by the counterexample-generating recipe, and more generally why various kinds of amoralists are seen as making genuine wrongness-judgements.
The paper argues that dualism can explain mental causation and solve the exclusion problem. If dualism is combined with the assumption that the psychophysical laws have a special status, it follows that some physical events counterfactually depend on, and are therefore caused by, mental events. Proponents of this account of mental causation can solve the exclusion problem in either of two ways: they can deny that it follows that the physical effect of a mental event is overdetermined by its mental and physical causes, or they can accept that the physical effect is overdetermined but claim that this is unproblematic because the case is sufficiently dissimilar to prototypical cases of overdetermination.
We show how an epistemology informed by cognitive science promises to shed light on an ancient problem in the philosophy of mathematics: the problem of exactness. The problem of exactness arises because geometrical knowledge is thought to concern perfect geometrical forms, whereas the embodiment of such forms in the natural world may be imperfect. There thus arises an apparent mismatch between mathematical concepts and physical reality. We propose that the problem can be solved by emphasizing the ways in which the brain can transform and organize its perceptual intake. It is not necessary for a geometrical form to be perfectly instantiated in order for perception of such a form to be the basis of a geometrical concept.
The traditional Bayesian qualitative account of evidential support (TB) takes assertions of the form ‘E evidentially supports H’ to affirm the existence of a two-place relation of evidential support between E and H. The analysans given for this relation is C(H, E) =def Pr(H|E) > Pr(H). Now it is well known that when a hypothesis H entails evidence E, not only is it the case that C(H, E), but it is also the case that C(H&X, E) for any arbitrary X. There is a widespread feeling that this is a problematic result for TB. Indeed, there are a number of cases in which many feel it is false to assert ‘E evidentially supports H&X’, despite H entailing E. This is known, by those who share that feeling, as the ‘tacking problem’ for Bayesian confirmation theory. After outlining a generalization of the problem, I argue that the Bayesian response has so far been unsatisfactory. I then argue the following: (i) There exists, either instead of, or in addition to, a two-place relation of confirmation, a three-place, ‘contrastive’ relation of confirmation, holding between an item of evidence E and two competing hypotheses H1 and H2. (ii) The correct analysans of the relation is a particular probabilistic inequality, abbreviated C(H1, H2, E). (iii) Those who take the putative counterexamples to TB discussed to be genuine counterexamples are interpreting the relevant utterances as implicitly contrastive, contrasting the relevant hypothesis H1 with a particular competitor H2. (iv) The probabilistic structure of these cases is such that ∼C(H1, H2, E). This solves my generalization of the tacking problem. I then conclude with some thoughts about the relationship between the traditional Bayesian account of evidential support and my proposed account of the three-place relation of confirmation.
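The entailment consequence described above — that if H entails E, then E confirms the conjunction H&X for any arbitrary X under the analysans C(H, E) =def Pr(H|E) > Pr(H) — can be checked numerically. The following sketch uses a toy joint distribution over three binary variables; the numbers are illustrative assumptions, not drawn from the abstract:

```python
# Numerical sketch of the tacking problem: when H entails E, traditional
# Bayesian support C(H, E) = [Pr(H|E) > Pr(H)] carries over to H & X for
# an arbitrary, intuitively irrelevant X.
# Toy joint distribution over worlds (h, e, x); illustrative values only.
joint = {
    (1, 1, 1): 0.10, (1, 1, 0): 0.10,
    (1, 0, 1): 0.00, (1, 0, 0): 0.00,   # zero mass on H & not-E: H entails E
    (0, 1, 1): 0.15, (0, 1, 0): 0.15,
    (0, 0, 1): 0.25, (0, 0, 0): 0.25,
}

def pr(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(w))

def confirms(hyp):
    """Traditional Bayesian support: Pr(hyp | E) > Pr(hyp)."""
    p_e = pr(lambda w: w[1] == 1)
    return pr(lambda w: hyp(w) and w[1] == 1) / p_e > pr(hyp)

H = lambda w: w[0] == 1
H_and_X = lambda w: w[0] == 1 and w[2] == 1   # X "tacked on" to H

print(confirms(H))        # True: E supports H (Pr(H|E) = 0.4 > Pr(H) = 0.2)
print(confirms(H_and_X))  # True: E also supports H & X (0.2 > 0.1)
```

On the contrastive proposal sketched in the abstract, the intuition that E fails to support H&X would instead be captured by a three-place comparison of H&X against a competitor hypothesis, rather than by the two-place relation computed here.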
The article argues that theorists who try to justify 'ought'-claims, i.e., who try to show that a standard of behavior has normative authority, will run into a regress problem. The problem is similar in structure to the familiar regress in the justification of belief. The point of the paper is not skeptical. Rather, the aim is to help theorists better understand the challenges associated with formulating a theory of normative authority.
Philosophers have worried that research on animal mind-reading faces a “logical problem”: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence for those states (e.g. line-of-gaze). The most impressive attempt to confront this problem has been mounted recently by Robert Lurz (2009, 2011). However, Lurz’s approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not appear to agree on criteria for representation. As such, future debate on this question should either abandon the representational idiom or confront differences in underlying semantics.
If the history of the Gettier Problem has taught us anything, it is to be skeptical regarding purported solutions. Nevertheless, in “Manifest Failure: The Gettier Problem Solved” (2011), that is precisely what John Turri offers us. For nearly fifty years, epistemologists have been chasing a solution for the Gettier Problem but with little to no success. If Turri is right, if he has actually solved the Gettier Problem, then he has done something that is absolutely groundbreaking and really quite remarkable. Regrettably, however, while Turri’s account is both intuitive and elegant—improving upon many seminal projects within contemporary epistemology—I argue in this paper that any success against Gettier counterexamples it affords is merely fleeting. Straightforwardly, this is done in two sections. In §1, I briefly sketch Turri’s proposed solution to the Gettier Problem. Then, in §2, I level a counterexample against it. Unfortunately for Turri and his solution, in this paper we will see history repeat itself.
The swamping problem is the problem of explaining why reliabilist knowledge (reliable true belief) has greater value than mere true belief. Swamping problem advocates see the lack of a solution to the swamping problem (i.e., the lack of a value-difference between reliabilist knowledge and mere true belief) as grounds for rejecting reliabilism. My aims here are (i) to specify clear requirements for a solution to the swamping problem that are as congenial to reliabilism's critics as possible, (ii) to clear away various existing reliabilist solutions on the basis of these requirements, and (iii) to present a reliabilist solution that succeeds in meeting all of them. To meet all the requirements, my solution develops a more nuanced understanding of the epistemic end than is currently discussed, and with it a novel way of individuating beliefs. I close with a brief discussion of the question whether reliabilism's critics might impose further demands which reliabilism cannot possibly meet.
The problem of the many poses the task of explaining mereological indeterminacy of ordinary objects in a way that sustains our familiar practice of counting these objects. The aim of this essay is to develop a solution to the problem of the many that is based on an account of mereological indeterminacy as having its source in how ordinary objects are, independently of how we represent them. At the center of the account stands a quasi-hylomorphic ontology of ordinary objects as material objects with multiple individual forms.
In his recent book, A Frightening Love: Recasting the Problem of Evil, Andrew Gleeson challenges a certain conception of justification assumed in mainstream analytic philosophy and argues that analytic philosophy is ill-suited to deal with the most pressing, existential, form of the problem of evil. In this article I examine some aspects of that challenge.
We argue that C. Darwin and more recently W. Hennig worked at times under the simplifying assumption of an eternal biosphere. So motivated, we explicitly consider the consequences which follow mathematically from this assumption, and the infinite graphs it leads to. This assumption admits certain clusters of organisms which have some ideal theoretical properties of species, shedding some light on the species problem. We prove a dualization of a law of T.A. Knight and C. Darwin, and sketch a decomposition result involving the internodons of D. Kornet, J. Metz and H. Schellinx. A further goal of this paper is to respond to B. Sturmfels’ question, “Can biology lead to new theorems?”.
On the one hand, Pavel Tichý has shown in his Transparent Intensional Logic (TIL) that the best way of explicating the meaning of the expressions of a natural language consists in identifying meanings with abstract procedures. TIL explicates objective abstract procedures as so-called constructions. Constructions that do not contain free variables and are in a well-defined sense ‘normalized’ are called concepts in TIL. On the other hand, Kolmogorov in (Mathematische Zeitschrift 35: 58–65, 1932) formulated a theory of problems, using NL expressions. He explicitly avoids presenting a definition of problems. In the present paper an attempt at such a definition (explication)—independent of but in harmony with Medvedev’s explication—is given, together with the claim that every concept defines a problem. The paper treats just mathematical concepts, and so mathematical problems, and tries to show that this view makes it possible to take into account some links between conceptual systems and the ways to replace a noneffective formulation of a problem by an effective one. To show this in concreto, a well-known idea of Kleene’s from his (Introduction to Metamathematics. D. van Nostrand, New York, 1952) is exemplified and explained in terms of conceptual systems so that a threatening inconsistency is avoided.
I argue that the personhood of a fetus is analogous to the paradox of the heap. If this is correct, then the moral status or intrinsic value of a fetus would be supervenient upon the fetus's biological development. Yet to compare its claim vis-à-vis its mother's, we need to consider not only their moral status, but also the type of claim they each have. Thus we have to give weight to the two factors or variables of the mother's moral status and her claim to some lesser good (assuming that this is not the kind of case in which the mother would suffer some great harm, such as death). And then we have to consider the fetus's lesser moral status and its claim to some greater good, namely, life. I argue that we do not know how to compare these two-variable claims. This also explains why the central cases of abortion have been so difficult to resolve. I suggest that the problem of animal rights has a similar structure.
It is argued that the so-called minimal statistical interpretation of quantum mechanics does not completely resolve the measurement problem in that this view is unable to show that quantum mechanics can dispense with classical physics when it comes to a treatment of the measuring interaction. It is suggested that the view that quantum mechanics applies to individual systems should not be too hastily abandoned, in that this view gives perhaps the best hope of leading to a version of quantum mechanics which does provide a complete solution to the measurement problem.
Cultural-nationalist and democratic theory both seek to legitimize political power via collective self-rule: their principle of legitimacy refers right back to the very persons over whom political power is exercised. But such self-referential theories are incapable of jointly solving the distinct problems of legitimacy and boundaries, which they necessarily combine, once it is assumed that the self-ruling collectivity must be a pre-political, in-principle bounded, ground of legitimacy. Cultural nationalism claims that political power is legitimate insofar as it expresses the nation’s pre-political culture, but it cannot fix cultural-national boundaries pre-politically. Hence the collapse into ethnic nationalism. Traditional democratic theory claims that political power is legitimized pre-politically, but cannot itself legitimize the boundaries of the people. Hence the collapse into cultural nationalism. Only once we recognize that the demos is in principle unbounded, and abandon the quest for a pre-political ground of legitimacy, can democratic theory fully avoid this collapse of demos into nation into ethnos. But such a theory departs radically from traditional theory.
In this paper I begin to develop an account of the acquaintance that each of us has with our own conscious states and processes. The account is a speculative proposal about human mental architecture and specifically about the nature of the concepts via which we think in first personish ways about our qualia. In a certain sense my account is neutral between physicalist and dualist accounts of consciousness. As will be clear, a dualist could adopt the account I will offer while maintaining that qualia themselves are non-physical properties. In this case the non-physical nature of qualia may play no role in accounting for the features of acquaintance. But although the account could be used by a dualist, its existence provides support for physicalism.
On a widely shared assumption, our mental states supervene on our microphysical properties – that is, microphysical supervenience is true. When this thesis is combined with the apparent truism that human persons have proper parts, a grave difficulty arises: what prevents some of these proper parts from being themselves thinkers as well? How can I know that I am a human person and not a smaller thinker enclosed in a human person? Most solutions to this puzzle make radical, if not absurd, claims. Recently, however, Michael Burke and Howard Robinson proposed conservative solutions that, according to them, do not have such undesired consequences. This paper argues that the conservative solutions tacitly assume at least one of the radical ones, and therefore they provide no alternative to the extreme solutions.
The case is discussed for the doctrine of hell as posing a unique problem of evil for adherents to the Abrahamic religions who endorse traditional theism. The problem is particularly acute for those who accept retributivist formulations of the doctrine of hell according to which hell is everlasting punishment for failing to satisfy some requirement. Alternatives to retributivism are discussed, including the unique difficulties that each one faces.
For many of the authors in this volume, this is the second attempt to explore what McCarthy and Hayes (1969) first called the “Frame Problem”. Since the first compendium (Pylyshyn, 1987), nicely summarized here by Ronald Loui, there have been several conferences and books on the topic. Their goals range from providing a clarification of the problem by breaking it down into subproblems (and sometimes declaring the hard subproblems to not be the _real_ Frame Problem), to providing formal “solutions” to certain aspects of the problem. But more often the message has been that the problem is not solvable except in a piecemeal way in special circumstances by some sort of heuristic approximations. It has sometimes also been said that solving the Frame Problem is not only an unachievable goal, but it is also an unnecessary one, since _humans_ do not solve it either; we simply get along as best we can and deal with the problem of planning in ways that are, to use Dennett’s phrase, “good enough for government work”.
Taking their motivation from the perceived failure of the reductive physicalist project concerning consciousness, panpsychists ascribe subjectivity to fundamental material entities in order to account for macro-consciousness. But there exists an unresolved tension within the mainstream panpsychist position, the seriousness of which has yet to be appreciated. I capture this tension as a dilemma, and offer advice to panpsychists on how to resolve it. The dilemma is as follows: Panpsychists take the micro-material realm to feature phenomenal properties, plus micro-subjects to whom these properties belong. However, it is impossible to explain the generation of a macro-subject (like one of us) in terms of the assembly of micro-subjects, for, as I show, subjects cannot combine. Therefore the panpsychist explanatory project is derailed by the insistence that the world’s ultimate material constituents are subjects of experience. The panpsychist faces a choice of giving up her explanatory ambitions, or of giving up the claim that the ultimates are subjects. I argue that the latter option is preferable, leading to neutral monism, on which phenomenal qualities are irreducible but subjects are reducible. So panpsychists should be neutral monists.
Two recurrent arguments levelled against the view that enduring objects survive change are examined within the framework of the B-theory of time: the argument from Leibniz's Law and the argument from Instantiation of Incompatible Properties. Both arguments are shown to be question-begging and hence unsuccessful.
The partner choice approach to understanding the evolution of cooperation builds on approaches that focus on partner control by considering processes that occur prior to pair or group formation. Proponents of the partner choice approach rightly note that competition to be chosen as a partner can help solve the puzzle of cooperation. I aim to build on the partner choice approach by considering the role of signalling in partner choice. Partnership formation often requires reliable information. Signalling is thus important in the context of partner choice. However, the issue of signal reliability has been understudied in the partner choice literature. The issue deserves attention because – despite what proponents of the partner choice approach sometimes claim – that approach does face a cheater problem, which we might call the problem of false advertising in biological markets. Both theoretical and empirical work is needed to address this problem. I will draw on signalling theory to provide a theoretical framework within which to organise the scattered discussions of the false advertising problem extant in the partner choice literature. I will end by discussing some empirical work on cooperation, partner choice, and punishment among humans.
I argue that the many worlds explanation of quantum computation is not licensed by, and in fact is conceptually inferior to, the many worlds interpretation of quantum mechanics from which it is derived. I argue that the many worlds explanation of quantum computation is incompatible with the recently developed cluster state model of quantum computation. Based on these considerations I conclude that we should reject the many worlds explanation of quantum computation.
I borrow an idea from the fiction of C. S. Lewis that future outcomes may affect the value of past events, defend this idea via the concept of a 'temporal whole' and show its promise as a partial theodicy and its resonance with Christian theism and a robust personalism.
In this paper I criticize the non-consequentialist Weighted Lottery (WL) solution to the choice between saving a smaller or a larger group of people. WL aims to avoid what nonconsequentialists see as consequentialism’s unfair aggregation by giving equal consideration to each individual’s claim to be rescued. In so doing, I argue, WL runs into another common objection to consequentialism: it is excessively demanding. WL links the right action with the outcome of a fairly weighted lottery, which means that an agent can only act rightly if s/he has actually run the lottery. In many actual cases, this involves epistemic demands that can be almost impossible to meet. I argue that plausible moral principles cannot make such extreme epistemic demands.
The problem of easy knowledge has been defined in the contemporary epistemological literature as a problem that arises in two distinct ways. The purpose of this essay is to show that these supposedly different ways of generating the same problem in fact give rise to two distinct problems, which require distinct answers. One is related to the (unacceptable) easy acquisition of first-order knowledge, and the other to the (unacceptable) easy acquisition of second-order knowledge. In addition, I present how infinitism, the epistemic theory according to which the reasons that justify a belief must be infinite in number and non-repeating, can deal with each of these problems.
The Clarke-Collins correspondence was widely read and frequently printed during the 18th century. Its central topic is the question whether matter can think. Samuel Clarke defends the immateriality of the human soul against Anthony Collins’ materialism. Clarke argues that consciousness must belong to an indivisible entity, and matter is divisible. Collins contends that consciousness could belong to a composite subject by emerging from material qualities that belong to its parts. While many early modern thinkers assumed that this is not possible, this correspondence offers an unusually detailed discussion of this issue. Clarke rejects emergentism because real qualities of a composite must be homogeneous with the qualities of the parts. This rejection is based on considerations about the nature of causation. In addition, the disagreement derives in part from a disagreement between Clarke and Collins about the limits of our knowledge.
To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that such methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance, and a double-aspect theory of information.
Moral disagreement is widely held to pose a threat for metaethical realism and objectivity. In this paper I attempt to understand how it is that moral disagreement is supposed to present a problem for metaethical realism. I do this by going through several distinct (though often related) arguments from disagreement, carefully distinguishing between them, and critically evaluating their merits. My conclusions are rather skeptical: Some of the arguments I discuss fail rather clearly. Others supply a challenge to realism, but not one we have any reason to believe realism cannot address successfully. Others beg the question against the moral realist, and yet others raise serious objections to realism, but ones that—when carefully stated—can be seen not to be essentially related to moral disagreement. Arguments based on moral disagreement itself have almost no weight, I conclude, against moral realism.
The problem of free will arises because of the conflict between two inconsistent impulses, the experience of freedom and the conviction of determinism. Perhaps we can resolve these by examining neurobiological correlates of the experience of freedom. If free will is not to be an illusion, it must have a corresponding neurobiological reality. An explanation of this issue leads us to an account of rationality and the self, as well as how consciousness can move bodies at all. I explore two hypotheses. On the first, freedom is a complete illusion. On the second, it is not an illusion, and there is a corresponding indeterminism at the neurobiological level. This can only occur if there is in fact a quantum mechanical element in the fundamental neurobiology of consciousness.
Intuitions based on the first-person perspective can easily mislead us about what is and is not conceivable. This point is usually made in support of familiar reductionist positions on the mind-body problem, but I believe it can be detached from that approach. It seems to me that the powerful appearance of contingency in the relation between the functioning of the physical organism and the conscious mind -- an appearance that depends directly or indirectly on the first-person perspective -- must be an illusion. But the denial of this contingency should not take the form of a reductionist account of consciousness of the usual type, whereby the logical gap between the mental and the physical is closed by conceptual analysis -- in effect, by analyzing the mental in terms of the physical (however elaborately this is done -- and I count functionalism as such a theory, along with the topic-neutral causal role analyses of mental concepts from which it descends).
I was led to this clarificatory job initially by some puzzlement from a philosopher's standpoint about just why free will questions should come up particularly in connection with the genome project, as opposed to the many other scientific research programs that presuppose determinism. The philosophic concept of determinism involves explanation of all events, including human action, by prior causal factors--so that whether or not human behavior has a genetic basis, it ultimately gets traced back to _something_ true of the world before our birth. The philosophic problem of free will and determinism arises because this seems to undercut moral responsibility: How can we reasonably be held responsible for something whose causes we couldn't control?
In the 1960s, Peter Geach and John Searle independently posed an important objection to the wide class of 'noncognitivist' metaethical views that had at that time been dominant and widely defended for a quarter of a century. The problems raised by that objection have come to be known in the literature as the Frege-Geach Problem, because of Geach's attribution of the objection to Frege's distinction between content and assertoric force, and the problem has since occupied a great deal of the attention both of defenders of broadly noncognitivist views, and of their critics. In this article I explain Geach and Searle's historical objections, and put the subsequent discussion into dialectical context, paying some attention to the developments along the way and how they have enhanced our overall understanding of the problem. The article covers a lot of territory, so we will only be able to see the highlights, along the way. For further reading, see the Works Cited.
The basic form of the exclusion problem is by now very, very familiar. Start with the claim that the physical realm is causally complete: every physical thing that happens has a sufficient physical cause. Add in the claim that the mental and the physical are distinct. Toss in some claims about overdetermination, give it a stir, and voilà—suddenly it looks as though the mental never causes anything, at least nothing physical. As it is often put, the physical does all the work, and there is nothing left for the mental to do.
* Argument from authoritative self-knowledge ("privileged access" to one's own mental states) 1. We have "privileged access" to our own mental states in the sense that we have authority on what mental states we are in. 2. Through introspection, we are aware of our mental states but not aware of them as physical states of any sort or as functional states. 3. Therefore, our mental states cannot be physical states.
The problem I have in mind is the problem of the possible justification of subjecting one's will to that of another, and of the normative standing of demands to do so. The account of authority that I offered, many years ago, under the title of the service conception of authority, addressed this issue, and assumed that all other problems regarding authority are subsumed under it. Many found the account implausible. It is thin, relying on very few ideas. It may well appear to be too thin, and to depart too far from many of the ideas that have gained currency in the history of reflection on authority. The present article modifies some aspects of the account, and defends it against some criticisms that have been made of it.
This paper is a response to the 26 commentaries on my paper "Facing Up to the Problem of Consciousness". First, I respond to deflationary critiques, including those that argue that there is no "hard" problem of consciousness or that it can be accommodated within a materialist framework. Second, I respond to nonreductive critiques, including those that argue that the problems of consciousness are harder than I have suggested, or that my framework for addressing them is flawed. Third, I address positive proposals for addressing the problem of consciousness, including those based in neuroscience and cognitive science, phenomenology, physics, and fundamental psychophysical theories. Reply to: Baars, Bilodeau, Churchland, Clark, Clarke, Crick & Koch, Dennett, Hameroff & Penrose, Hardcastle, Hodgson, Hut & Shepard, Libet, Lowe, MacLennan, McGinn, Mills, O'Hara & Scutt, Price, Robinson, Rosenberg, Seager, Shear, Stapp, Varela, Velmans.
The contextualist epistemological theories proposed by David Lewis and others offer a view of knowledge which awards a central role to the contexts of knowledge attributions. Such contexts are held to determine how strong an epistemic position must be in order to count as knowledge. Lewis has suggested that contextualism so construed can be used both to ward off the skeptic and to solve the Gettier problem. A person knows P, he says, just in case her evidence eliminates every possibility that not-P, where the domain of `every' is determined by the context. Lewis provides a list of rules that can tell us, for a given context, which not-P possibilities must be eliminated and which can properly be ignored. But his account entails, counterintuitively, that knowledge can truly be attributed even to a person in a Gettier situation provided only that the attributor is ignorant of the fact that the person is gettiered. It has been criticized on those grounds by S. Cohen. In this paper I shall argue that most other forms of contextualism suffer the same fate as Lewis's. The allies of contextualism haven't yet shown us whether contextualism can succeed in maintaining a notion of ordinary knowledge while resisting the absurdity that knowledge can be a matter of sheer good luck. At the end of the paper I shall suggest a possible solution to the problem by showing how Cohen's line of criticism leads to a modified conception of what sort of justification a belief must have to count as knowledge in ordinary contexts.
As I type these words, cognitive systems in my brain engage in visual and auditory information processing. This processing is accompanied by subjective states of consciousness, such as the auditory experience of hearing the tap-tap-tap of the keyboard and the visual experience of seeing the letters appear on the screen. How does the brain's activity generate such experiences? Why should it be accompanied by conscious experience in the first place? This is the hard problem of consciousness.
The most important scientific discovery of the present era will come when someone -- or some group -- discovers the answer to the following question: How exactly do neurobiological processes in the brain cause consciousness? This is the most important question facing us in the biological sciences, yet it is frequently evaded, and frequently misunderstood when not evaded. In order to clear the way for an understanding of this problem, I am going to begin to answer four questions: 1. What is consciousness? 2. What is the relation of consciousness to the brain? 3. What are some of the features that an empirical theory of consciousness should try to explain? 4. What are some common mistakes to avoid?
This is a paper about the problem of realism in meta-ethics (and, I hope, also in other areas, but that hope is so far pretty speculative). But it is not about the problem of whether realism is true. It is about the problem of what realism is. More specifically, it is about the question of what divides meta-ethical realists from irrealists. I start with a potted history of the Good Old Days.
The strategy of divide and conquer is usually an excellent one, but it all depends on how you do the carving. Chalmers' attempt to sort the "easy" problems of consciousness from the "really hard" problem is not, I think, a useful contribution to research, but a major misdirector of attention, an illusion-generator. How could this be? Let me describe two somewhat similar strategic proposals, and compare them to Chalmers' recommendation.
Recently some philosophers interested in consciousness have begun to turn their attention to the question of what evolutionary advantages, if any, being conscious might confer on an organism. The issue has been pressed in recent discussions involving David Chalmers, Todd Moody, Owen Flanagan and Thomas Polger, Daniel Dennett, and others. The purpose of this essay is to consider some of the problems that face anyone who wants to give an evolutionary explanation of consciousness. We begin by framing the problem in the context of some current debates.
At the very heart of the mind-body problem is the question of the nature of consciousness. It is consciousness, and in particular _phenomenal_ consciousness, that makes the mind-body relation so deeply perplexing. Many philosophers hold that no definition of phenomenal consciousness is possible: any such putative definition would automatically use the concept of phenomenal consciousness and thus render the definition circular. The usual view is that the concept of phenomenal consciousness is one that must be explained by means of specific examples and associated comments.
Polygamy is a hotly contested practice and open to widespread misunderstandings. This practice is defined as a relationship between either one husband and multiple wives or one wife and multiple husbands. Today, 'polygamy' almost exclusively takes the form of one husband with multiple wives. In this article, my focus will centre on limited defences of polygamy offered recently by Cheshire Calhoun and Martha Nussbaum. I will argue that these defences are unconvincing. The problem with polygamy is primarily that it is a structurally inegalitarian practice in both theory and fact. Polygamy should be opposed for this reason.
One of the most influential thinkers of the 20th century, Sir Karl Popper here examines the problems connected with human freedom, creativity, rationality and the relationship between human beings and their actions. In this illuminating series of papers, Popper suggests a theory of mind-body interaction that relates to evolutionary emergence, human language and what he calls "the three worlds." René Descartes first posited the existence of two worlds--the world of physical bodies and the world of mental states. Popper argues for the existence of "world 3" which comprises the products of our human minds. He examines the interaction between mental states--hopes, needs, plans, ideologies or hypotheses--and the physical states of our brain. Popper forcefully argues against the materialism forwarded by many philosophers which denies the existence of mental states. Instead, he demonstrates that the problem of the interaction between mental and physical states remains unresolved. Knowledge and the Body-Mind Problem is based on Popper's never-before published lectures at Emory University in 1969. Popper has extensively revised the lectures but has retained their accessible format. He has also incorporated some of the discussions which followed the lectures, providing an engaging exchange between the philosopher and his audience.
A lot of people believe that distinct objects can occupy precisely the same place for the entire time during which they exist. Such people have to provide an answer to the `grounding problem' – they have to explain how such things, alike in so many ways, nonetheless manage to fall under different sortals, or have different modal properties. I argue in detail that they cannot say that there is anything in virtue of which spatio-temporally coincident things have those properties. However, I also argue that this may not be as bad as it looks, and that there is a way to make sense of the claim that such properties are primitive.
Popper famously claimed that he had solved the problem of induction, but few agree. This paper explains what Popper's solution was, and defends it. The problem is posed by Hume's argument that any evidence-transcending belief is unreasonable because (1) induction is invalid and (2) it is only reasonable to believe what you can justify. Popper avoids Hume's shocking conclusion by rejecting (2), while accepting (1). The most common objection is that Popper must smuggle in induction somewhere. But this objection smuggles in precisely the justificationist assumption (2) that Popper, as here understood, rejects. Footnote: Invited address at the Karl Popper 2002 Centenary Conference, Vienna, 3–7 July 2002.
Stewart Cohen argues that several epistemological theories fall victim to the problem of easy knowledge: they allow us to know far too easily that certain sceptical hypotheses are false and that how things seem is a reliable indicator of how they are. This problem is a result of the theories' interaction with an epistemic closure principle. Cohen suggests that the theories should be modified. I argue that attempts to solve the problem should focus on closure instead; a new and plausible epistemic closure principle can solve the problem of easy knowledge. My solution offers a uniform and more successful response to different versions of the problem of easy knowledge.
This paper discusses Wittgenstein's take on the problem of other minds. In opposition to certain widespread views that I collect under the heading of the “No Problem Interpretation,” I argue that Wittgenstein does address some problem of other minds. However, Wittgenstein's problem is not the traditional epistemological problem of other minds; rather, it is more reminiscent of the issue of intersubjectivity as it emerges in the writings of phenomenologists such as Husserl, Merleau-Ponty, and Heidegger. This is one sense in which Wittgenstein's perspective on other minds might be called “phenomenological.” Yet there is another sense as well, in that Wittgenstein's positive views on this issue resemble the views defended by phenomenologists. The key to a proper philosophical grasp of intersubjectivity, on both views, lies in rethinking the mind. If we conceive of minds as essentially embodied we can understand how intersubjectivity is possible.
This collection of new essays puts the debates on the mind-body problem into historical context. The discussions range from Aristotle, Aquinas and Descartes to the origins of the concepts of qualia and intentionality.
The tripartite account of propositional, fallibilist knowledge that p as justified true belief can become adequate only if it can solve the Gettier Problem. However, the latter can be solved only if the problem of a successful coordination of the resources (at least truth and justification) necessary and sufficient to deliver propositional, fallibilist knowledge that p can be solved. In this paper, the coordination problem is proved to be insolvable by showing that it is equivalent to the 'coordinated attack' problem, which is demonstrably insolvable in epistemic logic. It follows that the tripartite account is not merely inadequate as it stands, as proved by Gettier-type counterexamples, but demonstrably irreparable in principle, so that efforts to improve it can never succeed.
With regard to the problem of world poverty, libertarian theories of corrective justice emphasize negative duties and the idea of responsibility whereas utilitarian theories of help concentrate on positive duties based on the capacity of the helper. Thomas Pogge has developed a revised model of compensation that entails positive obligations that are generated by negative duties. He intends to show that the affluent are violating their negative duties to ensure that their conduct will not harm others: They are contributing to and profiting from an unjust global order. But the claim that negative duty generated positive obligations are more acceptable than positive duties is contestable. I examine whether Henry Shue’s model that is integrating negative duties and positive duties is more convincing concerning the foundation of positive duties to protect others. I defend the idea that there are positive duties of justice. This approach can integrate an allocation of positive duties via responsibility and maintain the advantage of an independent foundation of positive duties.
This paper gives an account of Colin McGinn's essay "Can We Solve the Mind-Body Problem?". McGinn's answer to his own essay title is that the problem is forever beyond us due to the particular nature of our cognitive abilities. The present author offers a number of criticisms of the arguments which support this conclusion.
The problem of evil can be captured by the following four statements, which taken together are inconsistent: (1) God made the world; (2) God is a perfect being; (3) a perfect being would not create a world containing evil; (4) the world contains evil. Traditional attempts to grapple with this problem typically center on rejecting (3). Thus Descartes, following Augustine, rejects (3), arguing that evil is the result of man’s exercise of his free will. However, given Descartes’ plausible claim that God could have created man in such a way that through exercising his free will man comes only to virtuous actions, it is not clear how the problem is solved. Descartes also repeats the Augustinian orthodoxy that though the world contains evil it does not contain it as a positive existence; evil has no real being but is simply the reflection of the inherent lack of full being in merely finite individuals. Again, that this is a solution is open to serious doubt.
Elaborating on the notions that humans possess different modalities of decision-making and that these are often influenced by moral considerations, we conducted an experimental investigation of the Trolley Problem. We presented the participants with two standard scenarios (‘lever’ and ‘stranger’) either in the usual or in reversed order. We observe that responses to the lever scenario, which result from (moral) reasoning, are affected by our manipulation, whereas responses to the stranger scenario, triggered by moral emotions, are unaffected. Furthermore, when asked to express general moral opinions on the themes of the Trolley Problem, about half of the participants reveal some inconsistency with the responses they had previously given.
Expressivists have a problem with negation. The problem is that they have not, to date, been able to explain why ‘murdering is wrong’ and ‘murdering is not wrong’ are inconsistent sentences. In this paper, I explain the nature of the problem, and why the best efforts of Gibbard, Dreier, and Horgan and Timmons don’t solve it. Then I show how to diagnose where the problem comes from, and consequently how it is possible for expressivists to solve it. Expressivists should accept this solution, I argue, because it is demonstrably the only way of avoiding the problem, and because it generalizes. Once we see how to solve the negation problem, I show, it becomes easy to state a constructive, compositional expressivist semantics for a purely normative language with the expressive power of propositional logic, in which we can for the first time give explanatory, formally adequate expressivist accounts of logical inconsistency, logical entailment, and logical validity. As a corollary, I give what I take to be the first real expressivist explanation of why Geach’s original moral modus ponens argument is genuinely logically valid. This proves that the problem with expressivism cannot be that it can’t account for the logical properties of complex normative sentences. But it does not show that the same solution can work for a language with both normative and descriptive predicates, let alone that expressivists are able to deal with more complex linguistic constructions like tense, modals, or even quantifiers. In the final section, I show what kind of constraints the solution offered here would place expressivists under, in answering these further questions.
The problem of explaining the mind persists today essentially unchanged since the time of Plato and Aristotle. For the ancients, of course, it was not a question of the relation of mind to brain, though the question was fundamentally the same nonetheless. For Plato, the mind was conceived as distinct from the body and was posited in order to explain knowledge which transcends that available to the senses. For his successor, Aristotle, the mind was conceived as intimately related to the body, as form is related to substance. On this conception, the mind is an abstract property or condition of the body itself.
Epistemic luck has been the focus of much discussion recently. Perhaps the most general knowledge-precluding type is veritic luck, where a belief is true but might easily have been false. Veritic luck has two sources, and so eliminating it requires two distinct conditions for a theory of knowledge. I argue that, when one sets out those conditions properly, a solution to the generality problem for reliabilism emerges.