Many have thought that there is a problem with causal commerce between immaterial souls and material bodies. In Physicalism or Something Near Enough, Jaegwon Kim attempts to spell out that problem. Rather than merely posing a question or raising a mystery for defenders of substance dualism to answer or address, he offers a compelling argument for the conclusion that immaterial souls cannot causally interact with material bodies. We offer a reconstruction of that argument that hinges on two premises: Kim’s Dictum and the Nowhere Man principle. Kim’s Dictum says that causation requires a spatial relation. Nowhere Man says that souls can’t be in space. By our lights, both premises can be called into question. We’ll begin our evaluation of the argument by pointing out some consequences of Kim’s Dictum. For some, these will be costs. We will then present two defeaters for Kim’s Dictum and a critical analysis of Kim’s case for Nowhere Man. The upshot is that Kim’s argument against substance dualism fails.
There is no doubt that spatial relations aid us in pairing up causes and effects. But when we consider the possibility of qualitatively indiscernible things, it might seem that spatial relations are more than a mere aid – they might seem positively required. The belief that spatial relations are required for causal relations is behind an important objection to Cartesian Dualism, the pairing problem. I argue that the Cartesian can answer this objection by appeal to the possibility of primitive causal relations, a possibility I defend. This topic is of importance beyond the philosophy of mind; the possibility that causal relations might sometimes hold brutely is of general metaphysical importance. I close with a discussion of what Cartesians should say about embodiment, and how that bears on issues of mental causation.
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
The “demarcation problem,” the issue of how to separate science from pseudoscience, has been around since fall 1919—at least according to Karl Popper’s (1957) recollection of when he first started thinking about it. In Popper’s mind, the demarcation problem was intimately linked with one of the most vexing issues in philosophy of science, David Hume’s problem of induction (Vickers 2010) and, in particular, Hume’s contention that induction cannot be logically justified by appealing to the fact that “it works,” as that in itself is an inductive argument, thereby potentially plunging the philosopher straight into the abyss of a viciously circular argument.
Ever since Socrates, philosophers have been in the business of asking questions of the type “What is X?” The point has not always been to actually find out what X is, but rather to explore how we think about X, to bring up to the surface wrong ways of thinking about it, and hopefully in the process to achieve an increasingly better understanding of the matter at hand. In the early part of the twentieth century one of the most ambitious philosophers of science, Karl Popper, asked that very question in the specific case in which X = science. Popper termed this the “demarcation problem,” the quest for what distinguishes science from nonscience and pseudoscience (and, presumably, also the latter two from each other).
The present study tested the existence of a cognitive schema that guides people's evaluations of the likelihood that observed problem-solving processes will succeed. The hypothesised schema consisted of attributes that were found to distinguish between retrospective case reports of successful and unsuccessful real world problem solving (Lipshitz & Bar Ilan, 1996). Participants were asked to evaluate the likelihood of success of identical cases of problem solving that differed in the presence or absence of diagnosis, the selection of appropriate or inappropriate solutions, and the pairing of diagnosis with appropriate or inappropriate solutions. Consistent with the proposition, diagnosis affected perceived likelihood of success, albeit only when solution quality was held constant, and appropriate diagnosis with a compatible solution produced higher perceived likelihood of success than appropriate diagnosis with incompatible solutions. In addition, results showed that solution quality played a significant role, and that compatibility with a six-phase rational model of problem solving played no role in judging likelihood of success.
The forensic two-trace problem is a perplexing inference problem introduced by Evett (J Forensic Sci Soc 27:375–381, 1987). Different possible ways of wording the competing pair of propositions (i.e., one proposition advanced by the prosecution and one proposition advanced by the defence) led to different quantifications of the value of the evidence (Meester and Sjerps in Biometrics 59:727–732, 2003). Here, we re-examine this scenario with the aim of clarifying the interrelationships that exist between the different solutions, and in this way, produce a global vision of the problem. We propose to investigate the different expressions for evaluating the value of the evidence by using a graphical approach, i.e. Bayesian networks, to model the rationale behind each of the proposed solutions and the assumptions made on the unknown parameters in this problem.
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
In this paper, I argue that there is a kind of evil, namely, the unequal distribution of natural endowments, or natural inequality, which presents theists with a new evidential (not logical or incompatibility) problem of evil. The problem of natural inequality is a new evidential problem of evil not only because, to the best of my knowledge, it has not yet been discussed in the literature, but also because available theodicies, such as the free will defense and the soul-making defense, are not adequate responses in the face of this particular evil, or so I argue.
The existence of evil and suffering in our world seems to pose a serious challenge to belief in the existence of a perfect God. If God were all-knowing, it seems that God would know about all of the horrible things that happen in our world. If God were all-powerful, God would be able to do something about all of the evil and suffering. Furthermore, if God were morally perfect, then surely God would want to do something about it. And yet we find that our world is filled with countless instances of evil and suffering. These facts about evil and suffering seem to conflict with the orthodox theist claim that there exists a perfectly good God. The challenge posed by this apparent conflict has come to be known as the problem of evil.
A philosophical standard in the debates concerning material constitution is the case of a statue and a lump of clay, Lumpl and Goliath respectively. According to the story, Lumpl and Goliath are coincident throughout their respective careers. Monists hold that they are identical; pluralists that they are distinct. This paper is concerned with a particular objection to pluralism, the Grounding Problem. The objection is roughly that the pluralist faces a legitimate explanatory demand to explain various differences she alleges between Lumpl and Goliath, but that the pluralist’s theory lacks the resources to give any such explanation. In this paper, I explore the question of whether there really is any problem of this sort. I argue (i) that explanatory demands that are clearly legitimate are easy for the pluralist to meet; (ii) that even in cases of explanatory demands whose legitimacy is questionable the pluralist has some overlooked resources; and (iii) that there is some reason for optimism about the pluralist’s prospects for meeting every legitimate explanatory demand. In short, no clearly adequate statement of a Grounding Problem is extant, and there is some reason to believe that the pluralist can overcome any Grounding Problem that we haven’t thought of yet.
This is about a proposed solution to the exclusion problem, one I've defended elsewhere (for example, in "The Properties of Mental Causation"). Details aside, it's just the identity theory: mental properties face no threat of exclusion from, or preemption by, physical properties, because every mental property is a physical property. Here I elaborate on this solution and defend it from some objections. One of my goals is to place it in the context of a more general ontology of properties, in particular, a trope ontology.
Elaborating on the notions that humans possess different modalities of decision-making and that these are often influenced by moral considerations, we conducted an experimental investigation of the Trolley Problem. We presented the participants with two standard scenarios (‘lever’ and ‘stranger’) either in the usual or in reversed order. We observe that responses to the lever scenario, which result from (moral) reasoning, are affected by our manipulation; whereas responses to the stranger scenario, triggered by moral emotions, are unaffected. Furthermore, when asked to express general moral opinions on the themes of the Trolley Problem, about half of the participants reveal some inconsistency with the responses they had previously given.
Expressivists, such as Blackburn, analyse sentences such as 'S thinks that it ought to be the case that p' as 'S hoorays that p'. A problem is that the former sentence can be negated in three different ways, but the latter in only two. The distinction between refusing to accept a moral judgement and accepting its negation therefore cannot be accounted for. This is shown to undermine Blackburn's solution to the Frege-Geach problem.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
Scepticism is sometimes expressed about whether there is any interesting problem of other minds. In this paper I set out a version of the conceptual problem of other minds which turns on the way in which mental occurrences are presented to the subject and situate it in relation to debates about our knowledge of other people's mental lives. The result is a distinctive problem in the philosophy of mind concerning our relation to other people.
According to John Mackie, moral talk is representational (the realists got that bit right) but its metaphysical presuppositions are wildly implausible (the non-cognitivists got that bit right). This is the basis of Mackie’s now famous error theory: that moral judgments are cognitively meaningful but systematically false. Of course, Mackie went on to recommend various substantive moral judgments, and, in the light of his error theory, that has seemed odd to a lot of folk. Richard Joyce has argued that Mackie’s approach can be vindicated by a fictionalist account of moral discourse. And Mark Kalderon has argued that moral fictionalism is attractive quite independently of Mackie’s error-theory. Kalderon argues that the Frege–Geach problem shows that we need moral propositions, but that a fictionalist can and should embrace propositional content together with a non-cognitivist account of acceptance of a moral proposition. Indeed, it is clear that any fictionalist is going to have to postulate more than one kind of acceptance attitude. We argue that this double-approach to acceptance generates a new problem – a descendent of Frege–Geach – which we call the acceptance–transfer problem. Although we develop the problem in the context of Kalderon’s version of non-cognitivist fictionalism, we show that it is not the non-cognitivist aspect of Kalderon’s account that generates the problem. A closely related problem surfaces for the more typical variants of fictionalism according to which accepting a moral proposition is believing some closely related non-moral proposition. Fictionalists of both stripes thus have an attitude problem.
The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation.
I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations. My proposal has been criticized recently by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus' concerns.
In “Why the generality problem is everybody’s problem,” Michael Bishop argues that every theory of justification needs a solution to the generality problem. He contends that a solution is needed in order for any theory to be used in giving an acceptable account of the justificatory status of beliefs in certain examples. In response, first I will describe the generality problem that is specific to process reliabilism and two other sorts of problems that are essentially the same. Then I will argue that the examples that Bishop presents pose no such problem for some theories. I will illustrate the exempt theories by describing how an evidentialist view can account for the justification in the examples without having any similar problem. It will be clear that other views about justification are likewise unaffected by anything like the generality problem.
Necessity holds that, if a proposition A supports another proposition B, then A must support B. John Greco contends that one can resolve Hume's Problem of Induction only if she rejects Necessity in favor of reliabilism. If Greco's contention is correct, we would have good reason to reject Necessity and endorse reliabilism about inferential justification. Unfortunately, Greco's contention is mistaken. I argue that there is a plausible reply to Hume's Problem that both endorses Necessity and is at least as good as Greco's alternative. Hence, Greco provides a good reason for neither rejecting Necessity nor endorsing inferential reliabilism.
In 1955, Goodman set out to 'dissolve' the problem of induction, that is, to argue that the old problem of induction is a mere pseudoproblem not worthy of serious philosophical attention. I will argue that, under naturalistic views of the reflective equilibrium method, it cannot provide a basis for a dissolution of the problem of induction. This is because naturalized reflective equilibrium is -- in a way to be explained -- itself an inductive method, and thus renders Goodman's dissolution viciously circular. This paper, then, examines how the old problem of induction crept back in while nobody was looking.
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
The Gettier problem has stymied epistemologists. But, whether or not this problem is resolvable, we still must face an important question: Why does the Gettier problem arise in the first place? So far, philosophers have seen it as either a problem peculiar to the concept of knowledge, or else an instance of a general problem about conceptual analysis. But I would like to steer a middle course. I argue that the Gettier problem arises because knowledge is a thick concept, and a Gettier-like problem is just what we should expect from attempts at analyzing a thick concept. Section 2 is devoted to establishing the controversial claim that knowledge is thick, and, in Section 3, I show that there is a general problem for analyzing thick concepts of which the Gettier problem is a special instance. I do not take a stand on whether the Gettier problem, or its general counterpart, is resolvable. My primary aim is to bring these problems into better focus.
A difficulty is exposed in Allan Gibbard's solution to the embedding/Frege-Geach problem, namely that the difference between refusing to accept a normative judgement and accepting its negation is ignored. This is shown to undermine the whole solution.
Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing, persistently recalcitrant, problem often called the frame problem; it is sometimes called the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity.
It is sometimes held that rules of inference determine the meaning of the logical constants: the meaning of, say, conjunction is fully determined by either its introduction or its elimination rules, or both; similarly for the other connectives. In a recent paper, Panu Raatikainen (2008) argues that this view - call it logical inferentialism - is undermined by some "very little known" considerations by Carnap (1943) to the effect that "in a definite sense, it is not true that the standard rules of inference" themselves suffice to "determine the meanings of [the] logical constants" (p. 2). In a nutshell, Carnap showed that the rules allow for non-normal interpretations of negation and disjunction. Raatikainen concludes that "no ordinary formalization of logic ... is sufficient to `fully formalize' all the essential properties of the logical constants" (ibid.). We suggest that this is a mistake. Pace Raatikainen, intuitionists like Dummett and Prawitz need not worry about Carnap's problem. And although bilateral solutions for classical inferentialists - as proposed by Timothy Smiley and Ian Rumfitt - seem inadequate, it is not excluded that classical inferentialists may be in a position to address the problem too.
Philosophers have worried that research on animal mind-reading faces a “logical problem”: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence for those states (e.g. line-of-gaze). The most impressive attempt to confront this problem has been mounted recently by Robert Lurz (2009, 2011). However, Lurz’ approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not appear to agree on criteria for representation. As such, future debate on this question should either abandon the representational idiom or confront differences in underlying semantics.
(June 2013) “The mind-body problem in cognitive neuroscience”, Philosophia Scientiae 17/2, Gabriel Vacariu and Mihai Vacariu (eds.): 1. William Bechtel (Philosophy, Center for Chronobiology, and Interdisciplinary Program in Cognitive Science, University of California, San Diego) “The endogenously active brain: the need for an alternative cognitive architecture” 2. Edmund T. Rolls (Oxford Centre for Computational Neuroscience, Oxford, UK) “On the relation between the mind and the brain: a neuroscience perspective” 3. Cees van Leeuwen (University of Leuven, Belgium; Riken Brain Science Institute, Japan) “Brain and mind” 4. Kari Theurer (Trinity College) and John Bickle (Philosophy, Mississippi State University) “What’s old is new again: Kemeny-Oppenheim reduction at work in current molecular neuroscience” 5. Bernard Andrieu (Staps Université de Lorraine) “Sentir son cerveau? Les dispositifs neuro-expérientiels en 1er personne” 6. Corey Maley and Gualtiero Piccinini (Philosophy, University of Missouri – St. Louis) “Get the latest upgrade: Functionalism 6.3.1” 7. Paula Droege (Philosophy, Pennsylvania State University) “Memory and consciousness” 8. Gabriel Vacariu and Mihai Vacariu (Philosophy, University of Bucharest) “Troubles with cognitive neuroscience”.
Graeme Forbes (2011) raises some problems for two-dimensional semantic theories. The problems concern nested environments: linguistic environments where sentences are nested under both modal and epistemic operators. Closely related problems involving nested environments have been raised by Scott Soames (2005) and Josh Dever (2007). Soames (forthcoming) goes so far as to say that nested environments pose the “chief technical problem” for strong two-dimensionalism. We might call the problem of handling nested environments within two-dimensional semantics the nesting problem. We first lay out the basic principles of two-dimensional semantics and a simple treatment of necessity and apriority operators, and spell out how Forbes' puzzle arises within this framework. We then show how a generalized version of the puzzle arises independently of two-dimensional semantics. We go on to spell out a two-dimensional treatment of attitude verbs and spell out a two-dimensional treatment of the apriority operator that fits the two-dimensional treatment of attitude verbs and show how these handle Forbes' puzzles.
It is shown that Fodor's interpretation of the frame problem is the central indication that his version of the Modularity Thesis is incompatible with computationalism. Since computationalism is far more plausible than this thesis, the latter should be rejected.
The paper critically examines an objection to epistemic contextualism recently developed by Elke Brendel and Peter Baumann, according to which it is impossible for the contextualist to know consistently that his theory is true. I first present an outline of contextualism and its reaction to scepticism. Then the necessary and sufficient conditions for the knowability problem to arise are explored. Finally, it will be argued that contextualism does not fulfil these minimal conditions. It will be shown that the contrary view is based on a misunderstanding of what contextualists are claiming.
In a formal theory of induction, inductive inferences are licensed by universal schemas. In a material theory of induction, inductive inferences are licensed by facts. With this change in the conception of the nature of induction, I argue that the celebrated “problem of induction” can no longer be set up and is thereby dissolved. Attempts to recreate the problem in the material theory of induction fail. They require relations of inductive support to conform to an unsustainable, hierarchical empiricism.
Examining the moral sense theories of Francis Hutcheson, David Hume, and Adam Smith from the perspective of the is-ought problem, this essay shows that the moral sense or moral sentiments in those theories alone cannot identify appropriate morals. According to one interpretation, Hume's or Smith's theory is just a description of human nature. In this case, it does not answer the question of how we ought to live. According to another interpretation, it has some normative implications. In this case, it draws normative claims from human nature. Either way, the sentiments of anger, resentment, vengeance, superiority, sympathy, and benevolence show that drawing norms from human nature is sometimes morally problematic. The changeability of the moral sense and moral sentiments in Hume's and Smith's theories supports this idea. Hutcheson's theory is morally more appropriate because it bases morality on disinterested benevolence. Yet disinterested benevolence is not enough for morality. There are no sentiments the presence of which alone makes any action moral.
According to orthodox quantum mechanics, state vectors change in two incompatible ways: "deterministically" in accordance with Schroedinger's time-dependent equation, and probabilistically if and only if a measurement is made. It is argued here that the problem of measurement arises because the precise mutually exclusive conditions for these two types of transitions to occur are not specified within orthodox quantum mechanics. Fundamentally, this is due to an inevitable ambiguity in the notion of "measurement" itself. Hence, if the problem of measurement is to be resolved, a new, fully objective version of quantum mechanics needs to be developed which does not incorporate the notion of measurement in its basic postulates at all.
In this paper, it is argued that there are (at least) two different kinds of ‘epistemic normativity’ in epistemology, which can be scrutinized and revealed by comparison with some naturalistic studies of ethics. The first kind of epistemic normativity can be naturalized, but the other cannot. The doctrines of Quine’s naturalized epistemology are first introduced; then Kim’s critique of Quine’s proposal is examined. It is argued that Quine’s naturalized epistemology is able to save some room for the concept of epistemic normativity and therefore his doctrine can be protected against Kim’s critique. But it is only the first kind of epistemic normativity that can be naturalized in epistemology. With the assistance of Goldman’s fake barn case, it is shown that the concept of epistemic normativity involved in the concept of knowing cannot be fully naturalized. The Gettier problem indicates that Quine gets the idea only partially right concerning whether epistemology can (and should) be naturalized.
The so-called ‘conceptual problem of other minds’ has been articulated in a number of different ways. I discuss two, drawing out some constraints on an adequate account of the grasp of concepts of mental states. Distinguishing between behaviour-based and identity-based approaches to the problem, I argue that the former, exemplified by Brewer and Pickard, are incomplete as they presuppose, but do not provide an answer to, what I shall call the conceptual problem of other bodies. I end with some remarks on identity-based approaches, pointing out related problems for versions of this approach held by Cassam and Peacocke.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what (...) is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body. (shrink)
A key consideration in favour of animalism—the thesis that persons like you and me are identical to the animals we walk around with—is that it avoids a too many thinkers problem that arises for non-animalist positions. The problem is that it seems that any person-constituting animal would itself be able to think, but if wherever there is a thinking person there is a thinking animal distinct from it then there are at least two thinkers wherever there is (...) a thinking person. Most find this result unacceptable, and some think it provides an excellent reason for accepting animalism. It has been argued, however, that animalists face an analogous problem of too many thinkers, the so-called corpse problem, as they must accept both 1) that we are distinct from our bodies, as our bodies can and we cannot persist through death as corpses and 2) that our bodies can think. I argue that the best reasons animalists have for accepting the two claims that generate the distinctness part of the problem double up as reasons to reject the claim that our bodies can think, and vice versa. I argue further that Lockeans cannot similarly get around their problem of too many thinkers. (shrink)
Moral contextualism is the view that claims like ‘A ought to X’ are implicitly relative to some (contextually variable) standard. This leads to a problem: what are fundamental moral claims like ‘You ought to maximize happiness’ relative to? If the claim is relative to a utilitarian standard, then its truth conditions are trivial: ‘Relative to utilitarianism, you ought to maximize happiness’. But it certainly doesn’t seem trivial that you ought to maximize happiness (utilitarianism is a highly controversial position). Some (...) people believe this problem is a reason to prefer a realist or error theoretic semantics of morals. I argue two things: first, that plausible versions of all these theories are afflicted by the problem equally, and second, that any solution available to the realist and error theorist is also available to the contextualist. So the problem of triviality does not favour noncontextualist views of moral language. (shrink)
The No-Miracles Argument (NMA) is often used to support scientific realism. We can formulate this argument as an inference to the best explanation, but so formulated it has been accused of circularity; the realist may attempt to meet this accusation of circularity by appealing to reliabilism, an externalist epistemology. In this paper I argue that this retreat fails. Reliabilism suffers from a potentially devastating difficulty known as the Generality Problem, and attempts to solve this problem require adopting both epistemic and metaphysical assumptions regarding local scientific theories. Although the externalist can happily adopt the (...) former, if he adopts the latter then the Generality Problem arises again, but now at the level of scientific methodology. Answering this new version of the Generality Problem is impossible for the scientific realist without making the important further assumption that there exists the possibility of a unique rule of inference. Doing this, however, would make the NMA viciously premise-circular. (shrink)
In “Practical Knowledge of Language”, C.-h. Tsai criticizes the arguments in “Swimming and Speaking Spanish” (this issue, pp. 331–341), on the grounds that its account of knowledge of language as knowledge-how is mistaken. In its place, he proposes an alternative account in terms of Russell’s concept “knowledge-by-acquaintance”. In this paper, I show that this account succeeds neither in displacing the account in Swimming and Speaking Spanish nor in addressing Tsai’s main concern: solving the “delivery problem”.
Kilimanjaro is a paradigmatic mountain, if any is. Consider atom Sparky, which is neither determinately part of Kilimanjaro nor determinately not part of it. Let Kilimanjaro(+) be the body of land constituted, in the way mountains are constituted by their constituent atoms, by the atoms that make up Kilimanjaro together with Sparky, and Kilimanjaro(–) the one constituted by those other than Sparky. On the one hand, there seems to be just one mountain in the vicinity of Kilimanjaro. On the other (...) hand, both Kilimanjaro(+) and Kilimanjaro(–)—and indeed many other similar things—seem to have an equal claim to be a mountain: all of them exhibit the grounds for something being a mountain—like being an elevation of the earth’s surface rising abruptly and to a large height from the surrounding level, or what have you—and there seems to be nothing in the vicinity with a better claim. Hence, the problem of the many. (shrink)
The paper argues that dualism can explain mental causation and solve the exclusion problem. If dualism is combined with the assumption that the psychophysical laws have a special status, it follows that some physical events counterfactually depend on, and are therefore caused by, mental events. Proponents of this account of mental causation can solve the exclusion problem in either of two ways: they can deny that it follows that the physical effect of a mental event is overdetermined by (...) its mental and physical causes, or they can accept that the physical effect is overdetermined but claim that this is unproblematic because the case is sufficiently dissimilar to prototypical cases of overdetermination. (shrink)
Miracles and the problem of evil are two prominent areas of research within philosophy of religion. On occasion these areas converge, with God’s goodness being brought into question by the claim that either there is a lack of miracles, or there are immoral miracles. In this paper I shall highlight a second manner in which miracles and the problem of evil relate. Namely, I shall give reason as to why what is considered to be miraculous may be dependent (...) upon a particular response to the problem of natural evil. To establish this claim, I shall focus upon Aquinas’s definition of a miracle and a particular free-will defence, the Luciferous defence. (shrink)
The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different problems in (...) the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s intentionality notion and Harnad’s Symbol Grounding Problem. (shrink)
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of (...) epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism. (shrink)